dataset (string, 45 values) | id (string, length 17–64) | messages (list, length 2) |
---|---|---|
science.ncbi_ner | science.ncbi_ner.589 | [
{
"content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Disease\"].\n\nPlease return the output as a JSON object of the format: {\"Disease\": [\"COVID-19\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nExpression of the murine Duchenne muscular dystrophy gene in muscle and brain. Complementary DNA clones were isolated that represent the 5 terminal 2. 5 kilobases of the murine Duchenne muscular dystrophy (Dmd) messenger RNA (mRNA). Mouse Dmd mRNA was detectable in skeletal and cardiac muscle and at a level approximately 90 percent lower in brain. Dmd mRNA is also present, but at much lower than normal levels, in both the muscle and brain of three different strains of dystrophic mdx mice. The identification of Dmd mRNA in brain raises the possibility of a relation between human Duchenne muscular dystrophy (DMD) gene expression and the mental retardation found in some DMD males. These results also provide evidence that the mdx mutations are allelic variants of mouse Dmd gene mutations.",
"role": "user"
},
{
"content": "{\"Disease\": [\"mental retardation\", \"Duchenne muscular dystrophy\", \"DMD\", \"dystrophic\"]}",
"role": "assistant"
}
] |
science.multixscience_multidoc_summarization | science.multixscience_multidoc_summarization.19872 | [
{
"content": "You are provided with the abstract of a scientific paper and the abstracts of the papers it references. Your task is to write a Related Work section for the main paper. This section should concisely summarize and integrate the key points from the cited abstracts, reflecting how they relate to the main paper's contributions and context. In your summary, use numerical citation markers like [1], [2], etc., to refer to the corresponding papers.\n\nMain Abstract:\nSoftware-defined networking (SDN) allows operators to control the behavior of a network by programatically managing the forwarding rules installed on switches. However, as is common in distributed systems, it can be difficult to ensure that certain consistency properties are preserved during periods of reconfiguration. The widely-accepted notion of PER-PACKET CONSISTENCY requires every packet to be forwarded using the new configuration or the old configuration, but not a mixture of the two. If switches can be updated in some (partial) order which guarantees that per-packet consistency is preserved, we call this order a CONSISTENT ORDER UPDATE. In particular, switches that are incomparable in this order can be updated in parallel. We call a consistent order update OPTIMAL if it allows maximal parallelism. This paper presents a polynomial-time algorithm for finding an optimal consistent order update. This contrasts with other recent results in the literature, which show that for other classes of properties (e.g., loop-freedom and waypoint enforcement), the optimal update problem is NP-complete.\n\nCited Abstract(s):\n[1]: \n\n[2]: We study complexity and algorithms for network updates in the setting of Software Defined Networks. Our focus lies on consistent updates for the case of updating forwarding rules in a loop free manner and the migration of flows without congestion. In both cases, we study how the power of two affects the respective problem setting. For loop freedom, we show that scheduling consistent updates for two destinations is NP-hard for a sublinear number of rounds. We also consider the dynamic case, and show that this problem is NP-hard as well via a reduction from Feedback Arc Set. While the power of two increases the complexity for loop freedom, the converse is true when allowing to split flows twice. For the NP-hard problem of consistently migrating unsplittable flows to new routes while respecting waypointing and service chains, we prove that two-splittability allows the problem to be tractable again.\n\n[3]: Datacenter networks (DCNs) are constantly evolving due to various updates such as switch upgrades and VM migrations. Each update must be carefully planned and executed in order to avoid disrupting many of the mission-critical, interactive applications hosted in DCNs. The key challenge arises from the inherent difficulty in synchronizing the changes to many devices, which may result in unforeseen transient link load spikes or even congestions. We present one primitive, zUpdate, to perform congestion-free network updates under asynchronous switch and traffic matrix changes. We formulate the update problem using a network model and apply our model to a variety of representative update scenarios in DCNs. We develop novel techniques to handle several practical challenges in realizing zUpdate as well as implement the zUpdate prototype on OpenFlow switches and deploy it on a testbed that resembles real DCN topology. 
Our results, from both real-world experiments and large-scale trace-driven simulations, show that zUpdate can effectively perform congestion-free updates in production DCNs.\n\n[4]: We present Dionysus, a system for fast, consistent network updates in software-defined networks. Dionysus encodes as a graph the consistency-related dependencies among updates at individual switches, and it then dynamically schedules these updates based on runtime differences in the update speeds of different switches. This dynamic scheduling is the key to its speed; prior update methods are slow because they pre-determine a schedule, which does not adapt to runtime conditions. Testbed experiments and data-driven simulations show that Dionysus improves the median update speed by 53--88 in both wide area and data center networks compared to prior methods.",
"role": "user"
},
{
"content": "There are various approaches for producing a sequence of switch updates guaranteed to respect certain path-based consistency properties (e.g., properties representable using temporal logic, etc.). For example, use counter-example guided search and incremental LTL model checking, FLIP [2] uses integer linear programming, and CCG [1] uses custom reachability-based graph algorithms. Other works such as Dionysus [4] , zUpdate [3] , and , seek to perform updates with respect to quantitative properties.",
"role": "assistant"
}
] |
science.ddi_ner | science.ddi_ner.320 | [
{
"content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"UnapprovedDrug\", \"Brand\", \"DrugGroup\", \"Drug\"].\n\nPlease return the output as a JSON object of the format: {\"Brand\": [\"Aspirin\", ...], \"DrugGroup\": [\"Salicylate\", ...], \"Drug\": [\"Phenobarbital\", ...], \"UnapprovedDrug\": [\"Heroin\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nCaution should be exercised when administering ETOPOPHOS with drugs that are known to inhibit phosphatase activities (e.g., levamisole hydrochloride). High-dose cyclosporin A resulting in concentrations above 2000 ng/mL administered with oral etoposide has led to an 80% increase in etoposide exposure with a 38% decrease in total body clearance of etoposide compared to etoposide alone.",
"role": "user"
},
{
"content": "{\"Brand\": [\"ETOPOPHOS\"], \"Drug\": [\"cyclosporin A\", \"levamisole hydrochloride\", \"etoposide\"], \"DrugGroup\": [], \"UnapprovedDrug\": []}",
"role": "assistant"
}
] |
science.healthver_entailment | science.healthver_entailment.4057 | [
{
"content": "You will be shown a claim about public health and the abstract of a biomedical research paper. Each sentence from the abstract will be on a separate line. Your task is to return a JSON object with two fields:\n\n- \"verdict\": The fact-checking verdict. If the information in the abstract supports the claim, write \"SUPPORT\". If the abstract contradicts the claim, write \"CONTRADICT\". If the abstract does not provide enough information to arrive at a verdict, write \"NEI\" (for \"not enough information\").\n- \"evidence\": An array of sentences providing evidence for the verdict. Please copy all relevant sentences verbatim from the abstract. If the verdict was \"NEI\", then return an empty array.\n\nFor instance, if the model were given the claim \"wearing masks can prevent the spread of COVID\", the output might be:\n{\n \"verdict\": \"SUPPORT\",\n \"evidence\": [\"Our findings indicate that mass mask-wearing reduces the transmission rate for COVID-19.\"]\n}\n\nClaim: a runaway immune system reaction could be one mechanism behind severe COVID-19 cases.\n\nAbstract:\nSARS-CoV-2 is the coronavirus agent of the COVID-19 pandemic causing high mortalities.\nIn contrast, the widely spread human coronaviruses OC43, HKU1, 229E, and NL63 tend to cause only mild symptoms.\nThe present study shows, by in silico analysis, that these common human viruses are expected to induce immune memory against SARS-CoV-2 by sharing protein fragments (antigen epitopes) for presentation to the immune system by MHC class I. A list of such epitopes is provided.\nThe number of these epitopes and the prevalence of the common coronaviruses suggest that a large part of the world population has some degree of specific immunity against SARS-CoV-2 already, even without having been infected by that virus.\nFor inducing protection, booster vaccinations enhancing existing immunity are less demanding than primary vaccinations against new antigens.\nTherefore, for the discussion on vaccination strategies against COVID-19, the available immune memory against related viruses should be part of the consideration.",
"role": "user"
},
{
"content": "{\"verdict\": \"NEI\", \"evidence\": []}",
"role": "assistant"
}
] |
science.mslr2022_ms2_multidoc_summarization | science.mslr2022_ms2_multidoc_summarization.10369 | [
{
"content": "You will be presented with a collection of medical randomized controlled trial (RCT) reports addressing a shared clinical question -- for instance, the efficacy of a medical intervention on a disease or condition. Given the titles and abstracts of these reports, your task is to generate the conclusion of the systematic review article synthesizing the key findings. Your conclusion should be a few sentences long. In your response, include only the conclusion and no other text. The RCT reports are below.\n\n\n\nTitle 1:\nAttempting tobacco cessation--an oral physician's perspective.\nAbstract 1:\nINTRODUCTION Tobacco use is a global health care problem . Repetitive exposure to nicotine produces neuroadaptation result ing in nicotine dependence . Smoking is associated with a range of diseases , causing high levels of morbidity and mortality and is one of the leading causes of preventable deaths , with more than 4.6 million smokers worldwide dying each year from smoking related illnesses . Stopping smoking has major health benefits . Quitting at any age provides both short and long term benefits . MATERIAL S AND METHODS 45 patients attending the outpatient department at the Oxford Dental College , Bangalore , were r and omly allocated to three groups of interventions namely placebo , counseling and nicotine replacement therapy ( NRT ) . Initially each one was assessed for carbon monoxide levels using a breath analyser ( pico smokerlyser bedfont UK ) . They were followed up for six months and the carbon monoxide levels were again assessed using the same instrument . The paired t test was used to compare the results before and after the intervention . RESULTS The scores before the initiation of intervention and after treatment were compared and all three interventions were found to be statistically significant after six months . It was noticed that patients with very low or low dependence followed by high dependence had good response in the placebo group ( 68 % and 47.6 % respectively ) , in the counseling group maximum response was seen in the medium followed by the very low group ( 61 % and 59 % respectively ) , and maximum response was seen in very high followed by the very low group with NRT ( 78.7 % and 60.5 % respectively ) . CONCLUSION The inference that can be drawn from the present study is that non-invasive , non pharmacological methods like placebo and counseling are effective in low to medium groups , and NRT is effective with higher nicotine dependence\n\n\nTitle 2:\nComparison of a high and a low intensity smoking cessation intervention in a dentistry setting in Sweden – a randomized trial\nAbstract 2:\nBackground Tobacco is still the number one life style risk factor for ill health and premature death and also one of the major contributors to oral problems and diseases . Dentistry may be a potential setting for several aspects of clinical public health interventions and there is a growing interest in several countries to develop tobacco cessation support in dentistry setting . The aim of the present study was to assess the relative effectiveness of a high intensity intervention compared with a low intensity intervention for smoking cessation support in a dental clinic setting . Methods 300 smokers attending dental or general health care were r and omly assigned to two arms and referred to the local dental clinic for smoking cessation support . 
One arm received support with low intensity treatment ( LIT ) , whereas the other group was assigned to high intensity treatment ( HIT ) support . The main outcome measures included self-reported point prevalence and continuous abstinence ( ≥ 183 days ) at the 12-month follow-up . Results Follow-up question naires were returned from 86 % of the participants . People in the HIT-arm were twice as likely to report continuous abstinence compared with the LIT-arm ( 18 % vs. 9 % , p = 0.02 ) . There was a difference ( not significant ) between the arms in point prevalence abstinence in favour of the HIT- protocol ( 23 % vs. 16 % ) . However , point prevalence cessation rates in the LIT-arm reporting additional support were relatively high ( 23 % ) compared with available data assessing abstinence in smokers trying to quit without professional support . ConclusionS creening for willingness to quit smoking within the health care system and offering smoking cessation support within dentistry may be an effective model for smoking cessation support in Sweden . The LIT approach is less expensive and time consuming and may be appropriate as a first treatment option , but should be integrated with other forms of available support in the community . The more extensive and expensive HIT- protocol should be offered to those who are unable to quit with the LIT approach in combination with other support . Trial Registration Trial registration number :\n\n\nTitle 3:\nHypnotherapy is more effective than nicotine replacement therapy for smoking cessation: results of a randomized controlled trial.\nAbstract 3:\nBACKGROUND The efficacy of pharmacotherapy for smoking cessation is well documented . However , due to relapse rates and side effects , hypnotherapy is gaining attention as an alternative treatment option . The aim of this one-center r and omized study was to compare the efficacy of hypnotherapy alone , as well as hypnotherapy with nicotine replacement therapy ( NRT ) , to conventional NRT in patients hospitalized with a cardiac or pulmonary illness . METHODS We evaluated self-reported and biochemically verified 7-day prevalence smoking abstinence rates at 12 and 26 weeks post-hospitalization . Patients ( n=164 ) were r and omized into one of three counseling-based treatment groups : NRT for 30 days ( NRT ; n=41 ) , a 90-min hypnotherapy session ( H ; n=39 ) , and NRT with hypnotherapy ( HNRT ; n=37 ) . Treatment groups were compared to a \" self-quit \" group of 35 patients who refused intervention . RESULTS Hypnotherapy patients were more likely than NRT patients to be nonsmokers at 12 weeks ( 43.9 % vs. 28.2 % ; p=0.14 ) and 26 weeks after hospitalization ( 36.6 % vs. 18.0 % ; p=0.06 ) . Smoking abstinence rates in the HNRT group were similar to the H group . There was no difference in smoking abstinence rates at 26 weeks between \" self quit \" and participants in any of the treatment groups . In multivariable regression analysis adjusting for diagnosis and demographic characteristics , H and HNRT were over three times more likely than NRT participants to abstain at 26-weeks post-discharge ( RR=3.6 ; p=0.03 and RR=3.2 ; p=0.04 , respectively ) . CONCLUSION Hypnotherapy is more effective than NRT in improving smoking abstinence in patients hospitalized for a smoking-related illness , and could be an asset to post-discharge smoking cessation programs",
"role": "user"
},
{
"content": "Conclusions There is considerable scientific evidence describing the programs for smoking cessation used in dentistry , which support their effectiveness .",
"role": "assistant"
}
] |
science.multixscience_multidoc_summarization | science.multixscience_multidoc_summarization.7478 | [
{
"content": "You are provided with the abstract of a scientific paper and the abstracts of the papers it references. Your task is to write a Related Work section for the main paper. This section should concisely summarize and integrate the key points from the cited abstracts, reflecting how they relate to the main paper's contributions and context. In your summary, use numerical citation markers like [1], [2], etc., to refer to the corresponding papers.\n\nMain Abstract:\nFor many applications with limited computation, communication, storage and energy resources, there is an imperative need of computer vision methods that could select an informative subset of the input video for efficient processing at or near real time. In the literature, there are two relevant groups of approaches: generating a trailer for a video or fast-forwarding while watching processing the video. The first group is supported by video summarization techniques, which require processing of the entire video to select an important subset for showing to users. In the second group, current fast-forwarding methods depend on either manual control or automatic adaptation of playback speed, which often do not present an accurate representation and may still require processing of every frame. In this paper, we introduce FastForwardNet (FFNet), a reinforcement learning agent that gets inspiration from video summarization and does fast-forwarding differently. It is an online framework that automatically fast-forwards a video and presents a representative subset of frames to users on the fly. It does not require processing the entire video, but just the portion that is selected by the fast-forward agent, which makes the process very computationally efficient. The online nature of our proposed method also enables the users to begin fast-forwarding at any point of the video. Experiments on two real-world datasets demonstrate that our method can provide better representation of the input video with much less processing requirement.\n\nCited Abstract(s):\n[1]: With the widespread availability of video cameras, we are facing an ever-growing enormous collection of unedited and unstructured video data. Due to lack of an automatic way to generate summaries from this large collection of consumer videos, they can be tedious and time consuming to index or search. In this work, we propose online video highlighting, a principled way of generating short video summarizing the most important and interesting contents of an unedited and unstructured video, costly both time-wise and financially for manual processing. Specifically, our method learns a dictionary from given video using group sparse coding, and updates atoms in the dictionary on-the-fly. A summary video is then generated by combining segments that cannot be sparsely reconstructed using the learned dictionary. The online fashion of our proposed method enables it to process arbitrarily long videos and start generating summaries before seeing the end of the video. Moreover, the processing time required by our proposed method is close to the original video length, achieving quasi real-time summarization speed. 
Theoretical analysis, together with experimental results on more than 12 hours of surveillance and YouTube videos are provided, demonstrating the effectiveness of online video highlighting.\n\n[2]: Techniques of video summarization have attracted significant research interests in the past decade due to the rapid progress in video recording, computation, and communication technologies. However, most of the existing methods analyze the video in an off-line manner, which greatly reduces the flexibility of the system. On-line summarization, which can progressively process video during video recording, is then proposed for a wide range of applications. In this paper, an on-line summarization method using Gaussian mixture model is proposed. As shown in the experiments, the proposed method outperforms other on-line methods in both summarization quality and computational efficiency. It can generate summarization with a shorter latency and much lower computation resource requirements.\n\n[3]: We consider the problem of subset selection in the online setting, where data arrive incrementally. Instead of storing and running subset selection on the entire dataset, we propose an incremental subset selection framework that, at each time instant, uses the previously selected set of representatives and the new batch of data in order to update the set of representatives. We cast the problem as an integer binary optimization minimizing the encoding cost of the data via representatives regularized by the number of selected items. As the proposed optimization is, in general, NP-hard and non-convex, we study a greedy approach based on unconstrained submodular optimization and also propose an efficient convex relaxation. We show that, under appropriate conditions, the solution of our proposed convex algorithm achieves the global optimal solution of the non-convex problem. Our results also address the conventional problem of subset selection in the offline setting, as a special case. By extensive experiments on the problem of video summarization, we demonstrate that our proposed online subset selection algorithms perform well on real data, capturing diverse representative events in videos, while they obtain objective function values close to the offline setting.",
"role": "user"
},
{
"content": "Most relevant to our approach is the work of online video summarization, which compiles the most salient and informative portion of a video by automatically scanning through the video stream, in an online fashion, to remove repetitive and uninteresting content. Various strategies have been studied, including Gaussian mixture model [2] , online dictionary learning [1] , and submodular optimization [3] . Our approach significantly differs from these methods in that it only processes a subset of frames instead of the entire video. To the best of our knowledge, this is the first work to address video fast-forwarding in generating an informative summary from a video.",
"role": "assistant"
}
] |
science.chemdner_ner | science.chemdner_ner.1161 | [
{
"content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Chemical\"].\n\nPlease return the output as a JSON object of the format: {\"Chemical\": [\"Arsenic\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nTestosterone metabolites mediate its effects on myocardial damage induced by ischemia/reperfusion in male Wistar rats. The role of testosterone in cardiovascular (CV) homeostasis is in controversy, and the exact effects of testosterone on the cardiovascular system remain poorly understood. Testosterone is metabolized by aromatase into 17β-estradiol and by 5α-reductase into dihydrotestosterone (DHT). Thus, identification of these metabolites in the heart may help to explain the controversy regarding the cardiovascular effects of testosterone. We analyzed the expression patterns of these testosterone-metabolizing enzymes and assessed the effect of its enzymatic activity inhibition on ischemia (40 min)/reperfusion (4h, I/R) via the left anterior descendent coronary artery in intact and gonadectomized male rats. Myocardial damage was measured as percentage of infarcted area vs. area at risk. Aromatase and 5α-reductase protein expression was found in the left ventricle of intact and orchidectomized rats. Exogenous testosterone had no effect on I/R induced myocardial damage in intact male rats, meanwhile exogenous testosterone protects against I/R injury in orchidectomized rats. However, enzymatic inhibition of aromatase increased myocardial damage in the presence of testosterone, while enzymatic inhibition of 5α-reductase significantly decreased the level of myocardial damage. Our results also showed that sub-chronic inhibition of 5α-reductase resulted in myocardial protection in both groups. Furthermore, in orchidectomized and intact male rats IV treatment with DHT induces a significant increase in the myocardial damage induced by I/R. Thus, the effect of testosterone on cardiovascular pathophysiology could be related, at least in part to changes in the balance of testosterone 5α-reduction and aromatization.",
"role": "user"
},
{
"content": "{\"Chemical\": [\"Testosterone\", \"17\\u03b2-estradiol\", \"DHT\", \"testosterone\", \"dihydrotestosterone\"]}",
"role": "assistant"
}
] |
science.data_reco_mcq_sc | science.data_reco_mcq_sc.9464 | [
{
"content": "Given a short query with keyphrases about a research idea in 2016 in the domain of artificial intelligence, recommend the relevant datasets to validate the research idea from the options provided. A short description of the datasets is also provided to help you.\n\nQuery: Unsupervised representation learning is important in learning generalizable models. In this paper, we hypothesize\n\nKeyphrases: unsupervised representation learning\n\nDataset description:\nCC100: This corpus comprises of monolingual data for 100+ languages and also includes data for romanized languages. This was constructed using the urls and paragraph indices provided by the CC-Net repository by processing January-December 2018 Commoncrawl snapshots.\nVoxPopuli: VoxPopuli is a large-scale multilingual corpus providing 100K hours of unlabelled speech data in 23 languages. It is the largest open data to date for unsupervised representation learning as well as semi-supervised learning.\nCASIA-FASD: CASIA-FASD is a small face anti-spoofing dataset containing 50 subjects.\nImageNet: The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\n\nOptions: A: CC100 B: VoxPopuli C: CASIA-FASD D: ImageNet",
"role": "user"
},
{
"content": "ImageNet",
"role": "assistant"
}
] |
science.scicite_classification | science.scicite_classification.495 | [
{
"content": "You will be presented with a citation sentence from a research paper. Classify the intent behind this citation by choosing from one of the following categories:\n\n- Method: Cites the paper for its methodology or procedure.\n- Background: Cites the paper to provide background information.\n- Result: Cites the paper for its findings or results.\n\nYour answer should be a single word from the following list of options: [\"Method\", \"Background\", \"Result\"]. Do not include any other text in your response.\n\nCitation sentence:\n\nPreterminal and terminal Schwann cells serve as a growth substrate that guides the nodal sprouts to the original endplates of denervated muscle fibers (Son and Thompson, 1995b; Koirala et al., 2000).",
"role": "user"
},
{
"content": "Background",
"role": "assistant"
}
] |
science.covidfact_entailment | science.covidfact_entailment.466 | [
{
"content": "You will be shown a claim related to the COVID-19 pandemic, and the abstract of a biomedical research paper. Each sentence from the abstract will be on a separate line. Your task is to return a JSON object with two fields:\n\n- \"verdict\": The fact-checking verdict. If the information in the abstract supports the claim, write \"SUPPORT\". If the abstract contradicts the claim, write \"CONTRADICT\".\n- \"evidence\": An array of sentences providing evidence for the verdict. Please copy all relevant sentences verbatim from the abstract.\n\nFor instance, if the model were given the claim \"the COVID vaccine is safe for healthy adults\", the output might be:\n{\n \"verdict\": \"SUPPORT\",\n \"evidence\": [\"Our findings indicate that the risks of severe COVID vaccine side effects are low for healthy adults.\"]\n}\n\nClaim: Functional myeloid-derived suppressor cells expand in blood but infect airways of covid-19 patients and predict disease severity\n\nAbstract:\nThe immunopathology of COVID-19 remains enigmatic, exhibiting immunodysregulation and T cell lymphopenia.\nMonocytic myeloid-derived suppressor cells (M-MDSC) are T cell suppressors that expand in inflammatory conditions, but their role in acute respiratory infections remains unclear.\nWe studied blood and airways of COVID-19 patients across disease severity at multiple timepoints.\nM-MDSC frequencies were elevated in blood but not in nasopharyngeal or endotracheal aspirates of COVID-19 patients compared to controls.\nM-MDSCs isolated from COVID-19 patients suppressed T cell proliferation and IFNg production partly via an arginase-1 (Arg-1) dependent mechanism.\nFurthermore, patients showed increased Arg-1 and IL-6 plasma levels.\nCOVID-19 patients had fewer T cells, and displayed downregulated expression of the CD3ζ chain.\nOrdinal regression showed that early M-MDSC frequency predicted subsequent disease severity.\nIn conclusion, M-MDSCs expand in blood of COVID-19 patients, suppress T cells and strongly associate with disease severity, suggesting a role for M-MDSCs in the dysregulated COVID-19 immune response.",
"role": "user"
},
{
"content": "{\"verdict\": \"CONTRADICT\", \"evidence\": [\"In conclusion, M-MDSCs expand in blood of COVID-19 patients, suppress T cells and strongly associate with disease severity, suggesting a role for M-MDSCs in the dysregulated COVID-19 immune response.\"]}",
"role": "assistant"
}
] |
science.healthver_entailment | science.healthver_entailment.3466 | [
{
"content": "You will be shown a claim about public health and the abstract of a biomedical research paper. Each sentence from the abstract will be on a separate line. Your task is to return a JSON object with two fields:\n\n- \"verdict\": The fact-checking verdict. If the information in the abstract supports the claim, write \"SUPPORT\". If the abstract contradicts the claim, write \"CONTRADICT\". If the abstract does not provide enough information to arrive at a verdict, write \"NEI\" (for \"not enough information\").\n- \"evidence\": An array of sentences providing evidence for the verdict. Please copy all relevant sentences verbatim from the abstract. If the verdict was \"NEI\", then return an empty array.\n\nFor instance, if the model were given the claim \"wearing masks can prevent the spread of COVID\", the output might be:\n{\n \"verdict\": \"SUPPORT\",\n \"evidence\": [\"Our findings indicate that mass mask-wearing reduces the transmission rate for COVID-19.\"]\n}\n\nClaim: There is no research showing that using or consuming any natural or herbal products, including essential oils, eucalyptus oil or witch hazel, will prevent infection with the 2019 novel coronavirus.\n\nAbstract:\nCOVID-19 is an infectious disease characterized by several important systemic problems such as severe pneumonia.\nOne of mechanisms responsible of these systemic problems is the release of pro-inflammatory cytokines, such as interleukin (IL)-1beta and IL-6(Conti et al., 2020).\nThe binding of virus to TRL (Toll Like Receptor) can determine the release of pro-IL-1beta with successive activation and production of active mature IL-1beta, responsible of lung inflammation, fever and fibrosis(Conti et al., 2020).\nThe interstitial pneumonia is linked to an over-production of IL-6.\nBased on this principle, several researchers started the use of an anti-arthritis drug, tocilizumab, for its anti-IL-6 action.",
"role": "user"
},
{
"content": "{\"verdict\": \"NEI\", \"evidence\": []}",
"role": "assistant"
}
] |
science.qasper_extractive_qa | science.qasper_extractive_qa.250.1 | [
{
"content": "You will be shown sections from a scientific research paper, together with a question about the paper. This is an extractive question-answering task, where you must find and extract relevant text spans directly from the paper to answer the question. Your response should strictly be a `json` object with two fields:\n\n\"answer\": An array of strings extracted directly from the paper which, collectively, answer the question.\n\"evidence\": An array of strings. Each should be an excerpt from the paper, in which one or more of the extracted answers can be found.\n\nFor example, for the question \"What baselines did the authors compare against?\", a sample response might be:\n{\n \"answer\": [\"BERT\", \"RoBERTa\"],\n \"evidence\": [\"In our experiments, we compare the performance of our model against BERT and RoBERTa.\"]\n}\n\nDo not include any text in your response other than the json.\n\nIf the question is unanswerable given the provided excerpts, respond with the single word \"null\".\n\nPaper: Bias in Semantic and Discourse Interpretation\n\nIn this paper, we show how game-theoretic work on conversation combined with a theory of discourse structure provides a framework for studying interpretive bias. Interpretive bias is an essential feature of learning and understanding but also something that can be used to pervert or subvert the truth. The framework we develop here provides tools for understanding and analyzing the range of interpretive biases and the factors that contribute to them.\n\nIntroduction\nBias is generally considered to be a negative term: a biased story is seen as one that perverts or subverts the truth by offering a partial or incomplete perspective on the facts. But bias is in fact essential to understanding: one cannot interpret a set of facts—something humans are disposed to try to do even in the presence of data that is nothing but noise [38]—without relying on a bias or hypothesis to guide that interpretation. Suppose someone presents you with the sequence INLINEFORM0 and tells you to guess the next number. To make an educated guess, you must understand this sequence as instantiating a particular pattern; otherwise, every possible continuation of the sequence will be equally probable for you. Formulating a hypothesis about what pattern is at work will allow you to predict how the sequence will play out, putting you in a position to make a reasonable guess as to what comes after 3. Formulating the hypothesis that this sequence is structured by the Fibonacci function (even if you don't know its name), for example, will lead you to guess that the next number is 5; formulating the hypothesis that the sequence is structured by the successor function but that every odd successor is repeated once will lead you to guess that it is 3. Detecting a certain pattern allows you to determine what we will call a history: a set of given entities or eventualities and a set of relations linking those entities together. The sequence of numbers INLINEFORM1 and the set of relation instances that the Fibonacci sequence entails as holding between them is one example of a history. Bias, then, is the set of features, constraints, and assumptions that lead an interpreter to select one history—one way of stitching together a set of observed data—over another.\nBias is also operative in linguistic interpretation. An interpreter's bias surfaces, for example, when the interpreter connects bits of information content together to resolve ambiguities. Consider: . Julie isn't coming. 
The meeting has been cancelled.\nWhile these clauses are not explicitly connected, an interpreter will typically have antecedent biases that lead her to interpret eventualities described by the two clauses as figuring in one of two histories: one in which the eventuality described by the first clause caused the second, or one in which the second caused the first. Any time that structural connections are left implicit by speakers—and this is much if not most of the time in text— interpreters will be left to infer these connections and thereby potentially create their own history or version of events.\nEvery model of data, every history over that data, comes with a bias that allows us to use observed facts to make predictions; bias even determines what kind of predictions the model is meant to make. Bayesian inference, which underlies many powerful models of inference and machine learning, likewise relies on bias in several ways: the estimate of a state given evidence depends upon a prior probability distribution over states, on assumptions about what parameters are probabilistically independent, and on assumptions about the kind of conditional probability distribution that each parameter abides by (e.g., normal distribution, noisy-or, bimodal). Each of these generates a (potentially different) history.\n\nEpistemic ME games\nThe second ingredient needed to understand interpretive bias is the connection between on the one hand the purpose and assumption behind telling a story and on the other the particular way in which that story is told. A history puts the entities to be understood into a structure that serves certain purposes or conversational goals BIBREF18 . Sometimes the history attempts to get at the “truth”, the true causal and taxonomic structure of a set of events. But a history may also serve other purposes—e.g., to persuade, or to dupe an audience. Over the past five years, BIBREF4 , BIBREF19 , BIBREF20 , BIBREF21 have developed an account of conversational purposes or goals and how they guide strategic reasoning in a framework called Message Exchange (ME) Games. ME games provide a general and formally precise framework for not only the analysis of conversational purposes and conversational strategies, but also for the typology of dialogue games from BIBREF22 and finally for the analysis of strategies for achieving what we would intuitively call “unbiased interpretation”, as we shall see in the next section. In fact in ME Games, conversational goals are analyzed as properties, and hence sets, of conversations; these are the conversations that “go well” for the player. ME games bring together the linguistic analysis of SDRT with a game theoretic approach to strategic reasoning; in an ME game, players alternate making sequences of discourse moves such as those described in SDRT, and a player wins if the conversation constructed belongs to her winning condition, which is a subset of the set of all possible conversational plays. ME games are designed to analyze the interaction between conversational structure, purposes and assumptions, in the absence of assumptions about cooperativity or other cognitive hypotheses, which can cause problems of interpretability in other frameworks BIBREF23 . ME games also assume a Jury that sets the winning conditions and thus evaluates whether the conversational moves made by players or conversationalists are successful or not. 
The Jury can be one or both of the players themselves or some exogenous body.\nTo define an ME game, we first fix a finite set of players INLINEFORM0 and let INLINEFORM1 range over INLINEFORM2 . For simplicity, we consider here the case where there are only two players, that is INLINEFORM3 , but the notions can be easily lifted to the case where there are more than two players. Here, Player INLINEFORM4 will denote the opponent of Player INLINEFORM5 . We need a vocabulary INLINEFORM6 of moves or actions; these are the discourse moves as defined by the language of SDRT. The intuitive idea behind an ME game is that a conversation proceeds in turns where in each turn one of the players `speaks' or plays a string of elements from INLINEFORM7 . In addition, in the case of conversations, it is essential to keep track of “who says what”. To model this, each player INLINEFORM8 was assigned a copy INLINEFORM9 of the vocabulary INLINEFORM10 which is simply given as INLINEFORM11 . As BIBREF4 argues, a conversation may proceed indefinitely, and so conversations correspond to plays of ME games, typically denoted as INLINEFORM12 , which are the union of finite or infinite sequences in INLINEFORM13 , denoted as INLINEFORM14 and INLINEFORM15 respectively. The set of all possible conversations is thus INLINEFORM16 and is denoted as INLINEFORM17 . [ME game BIBREF4 ] A Message Exchange game (ME game), INLINEFORM18 , is a tuple INLINEFORM19 where INLINEFORM20 is a Jury. Due to the ambiguities in language, discourse moves in SDRT are underspecified formulas that may yield more than one fully specified discourse structure or histories for the conversation; a resulting play in an ME game thus forms one or more histories or complete discourse structures for the entire conversation.\nTo make ME games into a truly realistic model of conversation requires taking account of the limited information available to conversational participants. BIBREF0 imported the notion of a type space from epistemic game theory BIBREF24 to take account of this. The type of a player INLINEFORM0 or the Jury is an abstract object that is used to code-up anything and everything about INLINEFORM1 or the Jury, including her behavior, the way she strategizes, her personal biases, etc. BIBREF24 . Let INLINEFORM2 denote the set of strategies for Player INLINEFORM3 in an ME game; let INLINEFORM4 ; and let INLINEFORM5 be the set of strategies of INLINEFORM6 given play INLINEFORM7 . [Harsanyi type space BIBREF24 ] A Harsanyi type space for INLINEFORM8 is a tuple INLINEFORM9 such that INLINEFORM10 and INLINEFORM11 , for each INLINEFORM12 , are non-empty (at-most countable) sets called the Jury-types and INLINEFORM13 -types respectively and INLINEFORM14 and INLINEFORM15 are the beliefs of Player INLINEFORM16 and the Jury respectively at play INLINEFORM17 . BIBREF0 defines the beliefs of the players and Jury using the following functions. [Belief function] For every play INLINEFORM18 the (first order) belief INLINEFORM19 of player INLINEFORM20 at INLINEFORM21 is a pair of measurable functions INLINEFORM22 where INLINEFORM23 is the belief function and INLINEFORM24 is the interpretation function defined as: INLINEFORM25 INLINEFORM26 \nwhere INLINEFORM0 is the set of probability distributions over the corresponding set. 
Similarly the (first order) belief INLINEFORM1 of the Jury is a pair of measurable functions INLINEFORM2 where the belief function INLINEFORM3 and the interpretation function INLINEFORM4 are defined as: INLINEFORM5 INLINEFORM6 \n Composing INLINEFORM0 and INLINEFORM1 together over their respective outputs reveals a correspondence between interpretations of plays and types for a fixed Jury type INLINEFORM2 : every history yields a distribution over types for the players and every tuple of types for the players and the Jury fixes a distribution over histories. We'll call this the types/history correspondence.\nAn epistemic ME game is an ME game with a Harsanyi type space and a type/history correspondence as we've defined it. By adding types to an ME game, we provide the beginnings of a game theoretic model of interpretive bias that we believe is completely new. Our definition of bias is now: [Interpretive Bias] An interpretive bias in an epistemic ME game is the probability distribution over types given by the belief function of the conversationalists or players, or the Jury. Note that in an ME game there are typically several interpretive biases at work: each player has her own bias, as does the Jury.\nOutside of language, statisticians study bias; and sample bias is currently an important topic. To do so, they exploit statistical models with a set of parameters and random variables, which play the role of our types in interpretive bias. But for us, the interpretive process is already well underway once the model, with its constraints, features and explanatory hypotheses, is posited; at least a partial history, or set of histories, has already been created.\nThe ME model in BIBREF0 not only makes histories dependent on biases but also conditionally updates an agent's bias, the probability distribution, given the interpretation of the conversation or more generally a course of events as it has so far unfolded and crucially as the agent has so far interpreted it. This means that certain biases are reinforced as a history develops, and in turn strengthen the probability of histories generated by such biases in virtue of the types/histories correspondence. We now turn to an analysis of SECREF3 discussed in BIBREF4 , BIBREF0 where arguably this happens.\n\nQuestion: Which interpretative biases are analyzed in this paper?",
"role": "user"
},
{
"content": "{\"answer\": [\"in an ME game there are typically several interpretive biases at work: each player has her own bias, as does the Jury\"], \"evidence\": [\"An epistemic ME game is an ME game with a Harsanyi type space and a type/history correspondence as we\\u0027ve defined it. By adding types to an ME game, we provide the beginnings of a game theoretic model of interpretive bias that we believe is completely new. Our definition of bias is now: [Interpretive Bias] An interpretive bias in an epistemic ME game is the probability distribution over types given by the belief function of the conversationalists or players, or the Jury. Note that in an ME game there are typically several interpretive biases at work: each player has her own bias, as does the Jury.\"]}",
"role": "assistant"
}
] |
science.bioasq_yesno_qa | science.bioasq_yesno_qa.1196 | [
{
"content": "You have to answer a biomedical question in binary format, i.e., only yes and no are acceptable answers. A list of paragraphs is provided as context to help you answer the question.\n\nQuestion: Is there a dependence between chromatin organization and dorsoventral gene expression in Drosophila?\n\nContext: Independence of chromatin conformation and gene regulation during Drosophila dorsoventral patterning.\nHere, using the dorsoventral patterning of the Drosophila melanogaster embryo as a model system, we provide evidence for the independence of chromatin organization and dorsoventral gene expression. We define tissue-specific enhancers and link them to expression patterns using single-cell RNA-seq. Surprisingly, despite tissue-specific chromatin states and gene expression, chromatin organization is largely maintained across tissues. Our results indicate that tissue-specific chromatin conformation is not necessary for tissue-specific gene expression but rather acts as a scaffold facilitating gene expression when enhancers become active.",
"role": "user"
},
{
"content": "no",
"role": "assistant"
}
] |
science.scicite_classification | science.scicite_classification.3310 | [
{
"content": "You will be presented with a citation sentence from a research paper. Classify the intent behind this citation by choosing from one of the following categories:\n\n- Method: Cites the paper for its methodology or procedure.\n- Background: Cites the paper to provide background information.\n- Result: Cites the paper for its findings or results.\n\nYour answer should be a single word from the following list of options: [\"Method\", \"Background\", \"Result\"]. Do not include any other text in your response.\n\nCitation sentence:\n\nThe receptor for advanced glycation end products—RAGE— is a multi-ligand molecule belonging to the immunoglobulin superfamily (Neeper et al. 1992; Schmidt et al. 1992, 1994; Thornalley 1998; Baynes 2003).",
"role": "user"
},
{
"content": "Background",
"role": "assistant"
}
] |
science.covid_deepset_qa | science.covid_deepset_qa.917 | [
{
"content": "You will be shown a paragraph from a biomedical research paper, followed by a question about the paper. Please extract text from the paragraph to answer the question. Do not include any text in your response other than the answer.\n\nParagraph: This review focuses on exploring various host resilience mechanisms that could be exploited for treatment of severe acute respiratory syndrome coronavirus, Middle East respiratory syndrome coronavirus and other respiratory viruses that cause acute lung injury and acute respiratory distress syndrome. Text: The 21st century was heralded with the emergence of two novel coronaviruses (CoV) that have unusually high pathogenicity and mortality [1] [2] [3] [4] [5] . Severe acute respiratory syndrome coronavirus (SARS-Cov) was first identified in 2003 [6] [7] [8] [9] . While there was initially great concern about SARS-CoV, once no new cases emerged, funding and research decreased. However, a decade later Middle East respiratory syndrome coronavirus (MERS-CoV), also known as HCoV-EMC, emerged initially in Saudi Arabia [3, 10] . SARS-CoV infected about 8000 people, and resulted in the deaths of approximately 10% of those infected [11] . While MERS-CoV is not as widespread as SARS-CoV, it appears to have an even higher mortality rate, with 35-50% of diagnosed infections resulting in death [3, [12] [13] . These deadly betacoronavirus viruses existed in animal reservoirs [4] [5] 9, [14] [15] . Recently, other CoVs have been detected in animal populations raising the possibility that we will see a repeat of these types of outbreaks in the near future [11, [16] [17] [18] [19] [20] . Both these zoonotic viruses cause a much more severe disease than what is typically seen for CoVs, making them a global health concern. Both SARS-CoV and MERS-CoV result in severe lung pathology. Many infected patients have acute lung injury (ALI), a condition that is diagnosed based on the presence of pulmonary edema and respiratory failure without a cardiac cause. In some patients there is a progression to the more severe form of ALI, acute respiratory distress syndrome (ARDS) [21] [22] [23] . In order to survive a given infection, a successful host must not only be able to clear the pathogen, but tolerate damage caused by the pathogen itself and also by the host's immune response [24] [25] [26] . We refer to resilience as the ability of a host to tolerate the effects of pathogens and the immune response to pathogens. A resilient host is able to return to a state of health after responding to an infection [24, [27] [28] . Most currently available treatment options for infectious diseases are antimicrobials, For reprint orders, please contact: reprints@futuremedicine.com REviEW Jamieson future science group and thus target the pathogen itself. Given the damage that pathogens can cause this focus on rapid pathogen clearance is understandable. However, an equally important medical intervention is to increase the ability of the host to tolerate the direct and indirect effects of the pathogen, and this is an area that is just beginning to be explored [29] . Damage to the lung epithelium by respiratory pathogens is a common cause of decreased resilience [30] [31] [32] . This review explores some of the probable host resilience pathways to viral infections, with a particular focus on the emerging coronaviruses. We will also examine factors that make some patients disease tolerant and other patients less tolerant to the viral infection. 
These factors can serve as a guide to new potential therapies for improved patient care. Both SARS-CoV and MERS-CoV are typified by a rapid progression to ARDS, however, there are some distinct differences in the infectivity and pathogenicity. The two viruses have different receptors leading to different cellular tropism, and SARS-CoV is more ubiquitous in the cell type and species it can infect. SARS-CoV uses the ACE2 receptor to gain entry to cells, while MERS-CoV uses the ectopeptidase DPP4 [33] [34] [35] [36] . Unlike SARS-CoV infection, which causes primarily a severe respiratory syndrome, MERS-CoV infection can also lead to kidney failure [37, 38] . SARS-CoV also spreads more rapidly between hosts, while MERS-CoV has been more easily contained, but it is unclear if this is due to the affected patient populations and regions [3] [4] 39 ]. Since MERS-CoV is a very recently discovered virus, [40, 41] more research has been done on SARS-CoV. However, given the similarities it is hoped that some of these findings can also be applied to MERS-CoV, and other potential emerging zoonotic coronaviruses. Both viral infections elicit a very strong inflammatory response, and are also able to circumvent the immune response. There appears to be several ways that these viruses evade and otherwise redirect the immune response [1, [42] [43] [44] [45] . The pathways that lead to the induction of the antiviral type I interferon (IFN) response are common targets of many viruses, and coronaviruses are no exception. SARS-CoV and MERS-CoV are contained in double membrane vesicles (DMVs), that prevents sensing of its genome [1, 46] . As with most coronaviruses several viral proteins suppress the type I IFN response, and other aspects of innate antiviral immunity [47] . These alterations of the type I IFN response appear to play a role in immunopathology in more than one way. In patients with high initial viral titers there is a poor prognosis [39, 48] . This indicates that reduction of the antiviral response may lead to direct viral-induced pathology. There is also evidence that the delayed type I IFN response can lead to misregulation of the immune response that can cause immunopathology. In a mouse model of SARS-CoV infection, the type I IFN response is delayed [49] . The delay of this potent antiviral response leads to decreased viral clearance, at the same time there is an increase in inflammatory cells of the immune system that cause excessive immunopathology [49] . In this case, the delayed antiviral response not only causes immunopathology, it also fails to properly control the viral replication. While more research is needed, it appears that MERS has a similar effect on the innate immune response [5, 50] . The current treatment and prevention options for SARS-CoV and MERS-CoV are limited. So far there are no licensed vaccines for SAR-CoV or MERS-CoV, although several strategies have been tried in animal models [51, 52] . There are also no antiviral strategies that are clearly effective in controlled trials. During outbreaks several antiviral strategies were empirically tried, but these uncontrolled studies gave mixed results [5, 39] . The main antivirals used were ribavirin, lopinavir and ritonavir [38, 53] . These were often used in combination with IFN therapy [54] . However, retrospective analysis of these data has not led to clear conclusions of the efficacy of these treatment options. 
Research in this area is still ongoing and it is hoped that we will soon have effective strategies to treat novel CoV [3,36,38,40, [55] [56] [57] [58] [59] [60] [61] [62] [63] [64] . The lack of effective antivirals makes it necessary to examine other potential treatments for SARS-CoV and MERS-CoV. Even if there were effective strategies to decrease viral burden, for these viruses, the potential for new emerging zoonotic CoVs presents additional complications. Vaccines cannot be produced in time to stop the spread of an emerging virus. In addition, as was demonstrated during SARS-CoV and MERS-CoV outbreaks, there is always a challenge during a crisis situation to know which Host resilience to emerging coronaviruses REviEW future science group www.futuremedicine.com antiviral will work on a given virus. One method of addressing this is to develop broad-spectrum antivirals that target conserved features of a given class of virus [65] . However, given the fast mutation rates of viruses there are several challenges to this strategy. Another method is to increase the ability of a given patient to tolerate the disease, i.e., target host resilience mechanisms. So far this has largely been in the form of supportive care, which relies on mechanical ventilation and oxygenation [29, 39, 66] . Since SARS-CoV and MERS-CoV were discovered relatively recently there is a lack of both patient and experimental data. However, many other viruses cause ALI and ARDS, including influenza A virus (IAV). By looking at data from other high pathology viruses we can extrapolate various pathways that could be targeted during infection with these emerging CoVs. This can add to our understanding of disease resilience mechanisms that we have learned from direct studies of SARS-CoV and MERS-CoV. Increased understanding of host resilience mechanisms can lead to future host-based therapies that could increase patient survival [29] . One common theme that emerges in many respiratory viruses including SARS-CoV and MERS-CoV is that much of the pathology is due to an excessive inflammatory response. A study from Josset et al. examines the cell host response to both MERS-CoV and SARS-CoV, and discovered that MERS-CoV dysregulates the host transcriptome to a much greater extent than SARS-CoV [67] . It demonstrates that glucocorticoids may be a potential way of altering the changes in the host transcriptome at late time points after infection. If host gene responses are maintained this may increase disease resilience. Given the severe disease that manifested during the SARS-CoV outbreak, many different treatment options were empirically tried on human patients. One immunomodulatory treatment that was tried during the SARS-CoV outbreak was systemic corticosteroids. This was tried with and without the use of type I IFNs and other therapies that could directly target the virus [68] . Retrospective analysis revealed that, when given at the correct time and to the appropriate patients, corticosteroid use could decrease mortality and also length of hospital stays [68] . In addition, there is some evidence that simultaneous treatment with IFNs could increase the potential benefits [69] . Although these treatments are not without complications, and there has been a lack of a randomized controlled trial [5, 39] . Corticosteroids are broadly immunosuppressive and have many physiological effects [5, 39] . Several recent studies have suggested that other compounds could be useful in increasing host resilience to viral lung infections. 
A recent paper demonstrates that topoisomerase I can protect against inflammation-induced death from a variety of viral infections including IAV [70] . Blockade of C5a complement signaling has also been suggested as a possible option in decreasing inflammation during IAV infection [71] . Other immunomodulators include celecoxib, mesalazine and eritoran [72, 73] . Another class of drugs that have been suggested are statins. They act to stabilize the activation of aspects of the innate immune response and prevent excessive inflammation [74] . However, decreasing immunopathology by immunomodulation is problematic because it can lead to increased pathogen burden, and thus increase virus-induced pathology [75, 76] . Another potential treatment option is increasing tissue repair pathways to increase host resilience to disease. This has been shown by bioinformatics [77] , as well as in several animal models [30-31,78-79]. These therapies have been shown in cell culture model systems or animal models to be effective, but have not been demonstrated in human patients. The correct timing of the treatments is essential. Early intervention has been shown to be the most effective in some cases, but other therapies work better when given slightly later during the course of the infection. As the onset of symptoms varies slightly from patient to patient the need for precise timing will be a challenge. Examination of potential treatment options for SARS-CoV and MERS-CoV should include consideration of host resilience [29] . In addition to the viral effects, and the pathology caused by the immune response, there are various comorbidities associated with SARS-CoV and MERS-CoV that lead to adverse outcomes. Interestingly, these additional risk factors that lead to a more severe disease are different between the two viruses. It is unclear if these differences are due to distinct populations affected by the viruses, because of properties of the virus themselves, or both. Understanding these factors could be a key to increasing host resilience to the infections. MERS-CoV patients had increased morbidity and mortality if they were obese, immunocompromised, diabetic or had cardiac disease [4, 12] . Risk factors for SARS-CoV patients included an older age and male [39] . Immune factors that increased mortality for SARS-CoV were a higher neutrophil count and low T-cell counts [5, 39, 77] . One factor that increased disease for patients infected with SARS-CoV and MERS-CoV was infection with other viruses or bacteria [5, 39] . This is similar to what is seen with many other respiratory infections. A recent study looking at malaria infections in animal models and human patients demonstrated that resilient hosts can be predicted [28] . Clinical studies have started to correlate specific biomarkers with disease outcomes in ARDS patients [80] . By understanding risk factors for disease severity we can perhaps predict if a host may be nonresilient and tailor the treatment options appropriately. A clear advantage of targeting host resilience pathways is that these therapies can be used to treat a variety of different infections. In addition, there is no need to develop a vaccine or understand the antiviral susceptibility of a new virus. Toward this end, understanding why some patients or patient populations have increased susceptibility is of paramount importance. In addition, a need for good model systems to study responses to these new emerging coronaviruses is essential.
Research into both these subjects will lead us toward improved treatment of emerging viruses that cause ALI, such as SARS-CoV and MERS-CoV. The author has no relevant affiliations or financial involvement with any organization or entity with a financial interest in or financial conflict with the subject matter or materials discussed in the manuscript. This includes employment, consultancies, honoraria, stock ownership or options, expert testimony, grants or patents received or pending, or royalties. No writing assistance was utilized in the production of this manuscript. • Severe acute respiratory syndrome coronavirus and Middle East respiratory syndrome coronavirus are zoonotic coronaviruses that cause acute lung injury and acute respiratory distress syndrome. • Antivirals have limited effects on the course of the infection with these coronaviruses. • There is currently no vaccine for either severe acute respiratory syndrome coronavirus or Middle East respiratory syndrome coronavirus. • Host resilience is the ability of a host to tolerate the effects of an infection and return to a state of health.\n\nQuestion: What role does initial viral titer play in the prognosis of SARS-CoV and MERS-CoV?",
"role": "user"
},
{
"content": "In patients with high initial viral titers there is a poor prognosis",
"role": "assistant"
}
] |
science.chia_ner | science.chia_ner.1399 | [
{
"content": "You will be shown inclusion or exclusion criteria for a clinical trial. Given these criteria, your task is to extract all unique entities of the following types: [\"Person\", \"Condition\", \"Value\", \"ReferencePoint\", \"Temporal\", \"Qualifier\", \"Observation\", \"Visit\", \"Measurement\", \"Mood\", \"Device\", \"Scope\", \"Negation\", \"Drug\", \"Procedure\", \"Multiplier\"].\n\nPlease return the output as a JSON object of the format: {\"Condition\": [\"breast cancer\", ...], \"Device\": [\"ports\", ...], \"Drug\": [\"Atenolol\", ...], \"Measurement\": [\"blood pressure\", ...], \"Mood\": [\"suspected\", ...], \"Multiplier\": [\"more than two\", ...], \"Negation\": [\"no\", ...], \"Observation\": [\"history\", ...], \"Person\": [\"male\", ...], \"Procedure\": [\"surgery\", ...], \"Qualifier\": [\"severe\", ...], \"ReferencePoint\": [\"enrollment\", ...], \"Scope\": [\"infection\", ...], \"Temporal\": [\"one day\", ...], \"Value\": [\"50%\", ...], \"Visit\": [\"ICU\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nMen and women aged > 18 years\nAngiographically confirmed acute massive pulmonary embolism with involvement of Central pulmonary arteries.\nendovascular mechanical thrombus fragmentation + thrombolytic therapy (using recombinant tissue activator of plasminogen), performed for treatment of the above-mentioned pulmonary embolism in less than 48 hours before randomization. The patient should be randomized no earlier than 24 hours after procedures endovascular mechanical thrombus fragmentation + thrombolytic therapy\nWritten informed consent signed by patient.",
"role": "user"
},
{
"content": "{\"Condition\": [\"endovascular mechanical thrombus fragmentation\", \"involvement of Central pulmonary arteries\", \"pulmonary embolism\"], \"Device\": [], \"Drug\": [\"recombinant tissue activator of plasminogen\"], \"Measurement\": [], \"Mood\": [], \"Multiplier\": [], \"Negation\": [], \"Observation\": [], \"Person\": [\"women\", \"aged\", \"Men\"], \"Procedure\": [\"Angiographically\", \"thrombolytic therapy\", \"treatment\"], \"Qualifier\": [\"acute\", \"Angiographically confirmed\", \"massive\"], \"ReferencePoint\": [], \"Scope\": [\"endovascular mechanical thrombus fragmentation + thrombolytic therapy (using recombinant tissue activator of plasminogen)\"], \"Temporal\": [\"in less than 48 hours before randomization\"], \"Value\": [\"\\u003e 18 years\"], \"Visit\": []}",
"role": "assistant"
}
] |
science.scientific_papers_summarization_single_doc_arxiv | science.scientific_papers_summarization_single_doc_arxiv.70899 | [
{
"content": "Write an abstract for the research paper below. Your response should only include the abstract and no additional text.\n\nArticle text:\na first order phase transition from hadronic to exotic matter phases may proceed through the nucleation of droplets of the new phase .\nthe formation of droplets of quark matter @xcite and antikaon condensed matter @xcite in neutrino - free neutron stars ( ns ) was studied using the homogeneous nucleation theory of langer @xcite .\ndroplets of new phase may appear in the metastable nuclear matter due to thermal fluctuations .\nthe droplets of the stable phase with radii larger than a critical radius survives and grows if the latent heat is transported from the surface of the droplet to the metastable state .\nthis heat transportation occurs through thermal dissipation @xcite and viscous damping @xcite .\nwe have seen that the onset of antikaon condensate influences the shear viscosity of ns matter composed of neutron(n ) , proton(p ) , electron(e ) and muon(@xmath1 ) that can interact by strong or electromagnetic interactions @xcite .\neffect of shear viscosity on the nucleation of the antikaon condensed matter was recently been studied @xcite in deleptonised ns , after the neutrinos are emitted . here\nwe investigate the contribution of neutrinos to the shear viscosity and nucleation of antikaon condensates . in the pns where the temperature is of the order of a few 10s of mev , neutrinos\nare trapped because their mean free paths under these conditions are small compared to the radius of the star . on the other hand , they are very effective at transporting both heat and momentum because their mean free paths are orders of magnitude larger than that for other particles .\nwe adopt the homogeneous nucleation theory of langer @xcite to calculate the thermal fluctuation rate for a first order phase transition from the charge - neutral and beta - equilibrated nuclear matter to @xmath2 condensed matter in a neutrino - trapped pns .\nthe thermal nucleation rate is given by @xmath3 where @xmath4 is the change in free energy required to activate the formation of the critical droplet .\n@xmath5 , is the prefactor of which @xmath6 is the statistical prefactor and @xmath7 $ ] is the dynamical prefactor . here\n@xmath8 is the surface tension for the surface separating the two phases , @xmath9 is the correlation length for kaons , @xmath10 is the difference of the enthalpy of the two phases , @xmath11 the thermal conductivity , @xmath12 and @xmath13 are the shear and bulk viscosity respectively .\nthe free energy is maximum at this critical radius given by @xmath14 finally the thermal nucleation time is given by @xmath15 , where @xmath16 is the volume of the core , where the thermodynamic variables is assumed to remain constant . to calculate shear viscosity in the pns , which is mostly contributed by the neutrinos @xcite\n, we consider the scattering of neutrinos @xmath17 where n = n , p , e. shear viscosity of neutrinos due to scattering is calculated using the coupled boltzmann transport equation @xcite . for the deleptonised ns\n, total shear viscosity is given by @xmath18 as in ref .\n@xcite . 
in order to calculate the shear viscosity and critical radius ( @xmath19 ) ,\nwe need to know the eos , that we construct at finite temperature using the relativistic mean field model @xcite .\nthe interaction between baryons is mediated by the exchange of scalar ( @xmath8 ) and vector ( @xmath20 ) mesons .\nthis picture is consistently extended to include the kaons .\nwe use the parameter sets of gm1 model @xcite for nucleon - meson coupling constants .\nthe kaon - meson coupling constants are determined using quark model , isospin counting rule and the real part of @xmath2 potential depth , that we take as -160 mev in our calculation @xcite .\nthe total shear viscosity is shown as a function of normalised baryon density for different temperatures in fig .\nwe consider two cases i)the deleptonised ns matter , where the total shear viscosity has contribution from all the species such as n , p , e and @xmath1 ; ii)for the neutrino - trapped pns matter ( lepton fraction @xmath21 ) , where the major contribution comes from neutrinos . in both the cases shear viscosity decreases with rising temperature . the prefactor @xmath22 is plotted as a function of temperature for pns in fig . 2 and is compared when it is approximated by @xmath23 from dimensional analysis . here\nwe find the shear viscosity term changes the prefactor by a large order of magnitude compared to @xmath23 approximation .\nthis difference is much more pronounced in pns compared to ns @xcite . in fig .\n3 we display the nucleation time as a function of temperature for a set of values of surface tension at a fixed baryon density for lepton - trapped pns matter and find that both the droplet radii and the thermal nucleation time strongly depend on the surface tension .\nnucleation may not occur in pns for surface tension < 30 mev @xmath24 as the radius of droplet is less than the correlation length ( @xmath25 5 fm ) .\nwe approximate that the radius of the droplet should be greater than @xmath9 in this calculation@xcite . for ns case\n, nucleation is observed to be possible for @xmath26 mev @xmath24 @xcite . larger viscosity there leads to larger value of t which might melt the condensate .\nfinally in fig .\n4 we compare the results of thermal nucleation time taking into account the effect of shear viscosity in the prefactor with that of the prefactor approximated by @xmath23 . for pns we displayed the results for @xmath27 and @xmath28mev @xmath24 .\nwe find the result of the @xmath23 approximation a few orders of magnitude higher than those of our calculation .\nthese results demonstrate the importance of including the shear viscosity in the prefactor of eq .\n( 1 ) in the calculation of thermal nucleation time .\nwe already obtained similar results for ns matter @xcite .\nalso , it may be mentioned that nucleation of antikaon condensates is possible if @xmath29 .\nthat is possible for pns with @xmath30 mev @xmath24 .\nwe have investigated the role of shear viscosity on the thermal nucleation rate for the formation of a critical droplet of antikaon condensed matter . for this\nwe considered a first order phase transition from the nuclear to antikaon condensed matter in pns .\nwe have seen that the droplet radii increase with increasing surface tension .\nwe also compare the nucleation times for ns and pns and found that nucleation is possible for a lower value of surface tension @xmath31 20 mev @xmath24 in ns while it may be possible only for higher value in pns ( @xmath32 30 mev @xmath24 ) .",
"role": "user"
},
{
"content": "a first order phase transition from nuclear matter to antikaon condensed matter may proceed through thermal nucleation of a critical droplet of antikaon condensed matter during the early evolution of proto neutron stars ( pns ) . \n droplets of new phase having radii larger than a critical radius would survive and grow , if the latent heat is transported from the droplet surface to the metastable phase . \n we investigate the effect of shear viscosity on the thermal nucleation time of the droplets of antikaon condensed matter . in this connection \n we particularly study the contribution of neutrinos in the shear viscosity and nucleation in pns . \n address = saha institute of nuclear physics , 1@xmath0af bidhan nagar , kolkata -7000064 , india . \n address = saha institute of nuclear physics , 1@xmath0af bidhan nagar , kolkata -7000064 , india .",
"role": "assistant"
}
] |
science.qasper_extractive_qa | science.qasper_extractive_qa.503.1 | [
{
"content": "You will be shown sections from a scientific research paper, together with a question about the paper. This is an extractive question-answering task, where you must find and extract relevant text spans directly from the paper to answer the question. Your response should strictly be a `json` object with two fields:\n\n\"answer\": An array of strings extracted directly from the paper which, collectively, answer the question.\n\"evidence\": An array of strings. Each should be an excerpt from the paper, in which one or more of the extracted answers can be found.\n\nFor example, for the question \"What baselines did the authors compare against?\", a sample response might be:\n{\n \"answer\": [\"BERT\", \"RoBERTa\"],\n \"evidence\": [\"In our experiments, we compare the performance of our model against BERT and RoBERTa.\"]\n}\n\nDo not include any text in your response other than the json.\n\nIf the question is unanswerable given the provided excerpts, respond with the single word \"null\".\n\nPaper: Learning Explicit and Implicit Structures for Targeted Sentiment Analysis\n\nTargeted sentiment analysis is the task of jointly predicting target entities and their associated sentiment information. Existing research efforts mostly regard this joint task as a sequence labeling problem, building models that can capture explicit structures in the output space. However, the importance of capturing implicit global structural information that resides in the input space is largely unexplored. In this work, we argue that both types of information (implicit and explicit structural information) are crucial for building a successful targeted sentiment analysis model. Our experimental results show that properly capturing both information is able to lead to better performance than competitive existing approaches. We also conduct extensive experiments to investigate our model's effectiveness and robustness.\n\nIntroduction\nAccepted as a long paper in EMNLP 2019 (Conference on Empirical Methods in Natural Language Processing).\nTargeted sentiment analysis (TSA) is an important task useful for public opinion mining BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4. The task focuses on predicting the sentiment information towards a specific target phrase, which is usually a named entity, in a given input sentence. Currently, TSA in the literature may refer to either of the two possible tasks under two different setups: 1) predicting the sentiment polarity for a given specific target phrase BIBREF5, BIBREF6, BIBREF7, BIBREF8; 2) jointly predicting the targets together with the sentiment polarity assigned to each target BIBREF9, BIBREF10, BIBREF11, BIBREF12. In this paper, we focus on the latter setup which was originally proposed by BIBREF9. Figure FIGREF2 presents an example sentence containing three targets. Each target is associated with a sentiment, where we use $+$ for denoting positive polarity, 0 for neutral and $-$ for negative.\nExisting research efforts mostly regard this task as a sequence labeling problem by assigning a tag to each word token, where the tags are typically designed in a way that capture both the target boundary as well as the targeted sentiment polarity information together. 
Existing approaches BIBREF9, BIBREF10, BIBREF12 build models based on conditional random fields (CRF) BIBREF13 or structural support vector machines (SSVM) BIBREF14, BIBREF15 to explicitly model the sentiment information with structured outputs, where each targeted sentiment prediction corresponds to exactly one fixed output. While effective, such models suffer from their inability in capturing certain long-distance dependencies between sentiment keywords and their targets. To remedy this issue, BIBREF11 proposed their “sentiment scope’’ model to learn flexible output representations. For example, three text spans with their corresponding targets in bold are presented in Figure FIGREF2, where each target’s sentiment is characterized by the words appearing in the corresponding text span. They learn from data for each target a latent text span used for attributing its sentiment, resulting in flexible output structures.\nHowever, we note there are two major limitations with the approach of BIBREF11. First, their model requires a large number of hand-crafted discrete features. Second, the model relies on a strong assumption that the latent sentiment spans do not overlap with one another. For example, in Figure FIGREF2, their model will not be able to capture the interaction between the target word “OZ” in the first sentiment span and the keyword “amazing” due to the assumptions made on the explicit structures in the output space. One idea to resolve this issue is to design an alternative mechanism to capture such useful structural information that resides in the input space.\nOn the other hand, recent literature shows that feature learning mechanisms such as self-attention have been successful for the task of sentiment prediction when targets are given BIBREF16, BIBREF17, BIBREF18 (i.e., under the first setup mentioned above). Such approaches essentially attempt to learn rich implicit structural information in the input space that captures the interactions between a given target and all other word tokens within the sentence. Such implicit structures are then used to generate sentiment summary representation towards the given target, leading to the performance boost.\nHowever, to date capturing rich implicit structures in the joint prediction task that we focus on (i.e., the second setup) remains largely unexplored. Unlike the first setup, in our setup the targets are not given, we need to handle exponentially many possible combinations of targets in the joint task. This makes the design of an algorithm for capturing both implicit structural information from the input space and the explicit structural information from the output space challenging.\nMotivated by the limitations and challenges, we present a novel approach that is able to efficiently and effectively capture the explicit and implicit structural information for TSA. We make the following key contributions in this work:\nWe propose a model that is able to properly integrate both explicit and implicit structural information, called EI. 
The model is able to learn flexible explicit structural information in the output space while being able to efficiently learn rich implicit structures by LSTM and self-attention for exponentially many possible combinations of targets in a given sentence.\nWe conducted extensive experiments to validate our claim that both explicit and implicit structures are indispensable in such a task, and demonstrate the effectiveness and robustness of our model.\n\nExperimental Setup ::: Hyperparameters\nWe adopt pretrained embeddings from BIBREF22 and BIBREF23 for English data and Spanish data respectively. We use a 2-layer LSTM (for both directions) with a hidden dimension of 500 and 600 for English data and Spanish data respectively. The dimension of the attention weight $U$ is 300. As for optimization, we use the Adam BIBREF24 optimizer to optimize the model with batch size 1 and dropout rate $0.5$. All the neural weights are initialized by Xavier BIBREF25.\n\nExperimental Setup ::: Training and Implementation\nWe train our model for a maximal of 6 epochs. We select the best model parameters based on the best $F_1$ score on the development data after each epoch. Note that we split $10\\%$ of data from the training data as the development data. The selected model is then applied to the test data for evaluation. During testing, we map words not appearing in the training data to the UNK token. Following the previous works, we perform 10-fold cross validation and report the average results. Our models and variants are implemented using PyTorch BIBREF26.\n\nExperimental Setup ::: Baselines\nWe consider the following baselines:\nPipeline BIBREF10 and Collapse BIBREF10 both are linear-chain CRF models using discrete features and embeddings. The former predicts targets first and calculate targeted sentiment for each predicted target. The latter outputs a tag at each position by collapsing the target tag and sentiment tag together.\nJoint BIBREF10 is a linear-chain SSVM model using both discrete features and embeddings. Such a model jointly produces target tags and sentiment tags.\nBi-GRU BIBREF12 and MBi-GRU BIBREF12 are both linear-chain CRF models using word embeddings. The former uses bi-directional GRU and the latter uses multi-layer bi-directional GRU.\nHBi-GRU BIBREF12 and HMBi-GRU BIBREF12 are both linear-chain CRF models using word embeddings and character embedding. The former uses bi-directional GRU and the latter uses multi-layer bi-directional GRU.\nSS BIBREF11 and SS + emb BIBREF11 are both based on a latent CRF model to learn flexible explicit structures. The former uses discrete features and the latter uses both discrete features and word embeddings.\nSA-CRF is a linear-chain CRF model with self-attention. Such a model concatenates the hidden state from LSTM and a vector constructed by self-attention at each position, and feeds them into CRF as features. The model attempts to capture rich implicit structures in the input space, but it does not put effort on explicit structures in the output space.\nE-I is a weaker version of EI. Such a model removes the BMES sub-tags in the E tag, causing the model to learn less explicit structural information in the output space.\nEI- is a weaker version of EI. 
Such a model removes the self-attention from EI, causing the model to learn less expressive implicit structures in the input space.\n\nResults and Discussion ::: Main Results ::: Robustness\nOverall, all these comparisons above based on empirical results show the importance of capturing both flexible explicit structures in the output space and rich implicit structures by LSTM and self-attention in the input space.\nWe analyze the model robustness by assessing the performance on the targeted sentiment for targets of different lengths. For both English and Spanish, we group targets into 4 categories respectively, namely length of 1, 2, 3 and $\\ge 4$. Figure FIGREF32 reports the $F_1$ scores of targeted sentiment for such 4 groups on Spanish. See the English results in the supplementary material. As we can see EI outperforms all the baselines on all groups.\nFurthermore, following the comparisons in BIBREF10, we also measure the precision, recall and $F_1$ of subjectivity and non-neutral polarities on the Spanish dataset. Results are reported in Table TABREF29. The subjectivity measures whether a target phrase expresses an opinion or not according to BIBREF1. Comparing with the best-performing system's results reported in BIBREF10 and BIBREF11, our model EI can achieve higher $F_1$ scores on subjectivity and non-neutral polarities.\n\nResults and Discussion ::: Additional Experiments\nWe also conducted experiments on multi-lingual Restaurant datasets from SemEval 2016 Task 5 BIBREF28, where aspect target phrases and aspect sentiments are provided. We regard each aspect target phrase as a target and assign such a target with the corresponding aspect sentiment polarity in the data. Note that we remove all the instances which contain no targets in the training data. Following the main experiment, we split $10\\%$ of training data as development set for the selection of the best model during training.\nWe report the $F_1$ scores of target and targeted sentiment for English, Dutch and Russian respectively in Table TABREF43. The results show that EI achieves the best performance. The performance of SS BIBREF11 is much worse on Russian due to the inability of discrete features in SS to capture the complex morphology in Russian.\n\nFigure 2: The structured output for representing entities and their sentiments with boundaries.\n\nTable 2: Main Results. fixed stands for chain structures and flexible for latent structures. discrete, emb and char denote discrete features, word embeddings and character embeddings respectively. SA represents self-attention.\n\nTable 4: Effect of Implicit Structures\n\nFigure 7: Results of different lengths on English\n\nQuestion: How is the robustness of the model evaluated?",
"role": "user"
},
{
"content": "{\"answer\": [\"10-fold cross validation\"], \"evidence\": [\"We train our model for a maximal of 6 epochs. We select the best model parameters based on the best $F_1$ score on the development data after each epoch. Note that we split $10\\\\%$ of data from the training data as the development data. The selected model is then applied to the test data for evaluation. During testing, we map words not appearing in the training data to the UNK token. Following the previous works, we perform 10-fold cross validation and report the average results. Our models and variants are implemented using PyTorch BIBREF26.\"]}",
"role": "assistant"
}
] |
science.scientific_papers_summarization_single_doc_pubmed | science.scientific_papers_summarization_single_doc_pubmed.51198 | [
{
"content": "You will be presented with the full text of a biomedical research paper. Please write an abstract for this paper. Your response should only include the abstract and no additional text.\n\nPaper text:\na 54-year - old woman was admitted for evaluation of a polypoid gallbladder mass measuring 2.0 cm in diameter and a hepatic low - echoic nodule discovered by us examination during a check - up .\nserum cea , ca 19 - 9 , and alpha - fetoprotein levels were normal .\narterial- and portal - phase spiral ct revealed focal wall thickening with enhancement in the fundus of the gallbladder and a poorly - defined , ovoid hypoattenuating nodule with peripheral rim enhancement measuring 1.0 cm in diameter in the right lobe of the liver .\nfor evaluation of metastasis prior to surgery , the patients underwent ct during arterioportography ( ctap ) and during hepatic arteriography ( ctha ) .\n1a ) , and a highly enhancing nodular lesion in the same area of the right lobe of the liver by ctha ( fig .\nintraoperative us showed that the lesion was a target - appearing hypoechoic nodule measuring 1.0 cm in diameter .\nin addition , two other nodules each with a diameter of 0.5 cm , were present in the same segment of the liver ( fig .\nhistopathologic examination indicated that the gallbladder lesion was an adenocarcinoma with tubular , well - differentiated characteristics and had invaded the perimuscular connective tissue .\nthe hepatic nodules were diagnosed as a solitary necrotic nodule composed of a central necrotic core and a peripheral fibrotic capsule with inflammatory cells ( fig .\na 72-year - old man was admitted for vomiting and poor oral intake , which had lasted for six days .\nsix years earlier , due to gastric cancer , he had undergone a subtotoal gastrectomy with billroth ii anastomosis . 
on plain radiographs of the abdomen , diffuse dilatation of the proximal jejunal loop , with collapsed distal loop , suggesting jejunal obstruction , was seen .\nabdominal ct showed a dilated proximal jejunal loop , with an abrupt transitional zone and bowel wall thickening .\nin addition , an elliptical , hypoattenuating nodule measuring 1.0 0.3 cm in diameter was observed in the lateral segment of the left lobe of the liver ( fig .\nt2-weighted mr imaging ( t2wi ) indicated that a nodule at the same site was hyperintense , and t1wi that it was isointense , while gadolinium - enhanced study suggested the presence of a subtle , peripheral enhancing nodule ( fig .\nwe regarded these imaging findings as indicative of small bowel obstruction due to tumor recurrence , with hepatic metastasis , and later , as adhesive ileus and nontumorous lesion of liver .\nthe patient underwent surgery and this indicated that the proximal jejunum , about 10 cm from the treitz ligament , had adhered to the operative wound .\nin addition , a hard , whitish mass 1.0 cm in diameter , was found in the liver .\nthe patient underwent adhesiolysis of the jejunal loop and wedge resection of the liver lesion .\na 54-year - old woman was admitted for evaluation of a polypoid gallbladder mass measuring 2.0 cm in diameter and a hepatic low - echoic nodule discovered by us examination during a check - up .\nserum cea , ca 19 - 9 , and alpha - fetoprotein levels were normal .\narterial- and portal - phase spiral ct revealed focal wall thickening with enhancement in the fundus of the gallbladder and a poorly - defined , ovoid hypoattenuating nodule with peripheral rim enhancement measuring 1.0 cm in diameter in the right lobe of the liver .\nfor evaluation of metastasis prior to surgery , the patients underwent ct during arterioportography ( ctap ) and during hepatic arteriography ( ctha ) .\n1a ) , and a highly enhancing nodular lesion in the same area of the right lobe of the liver by ctha ( fig .\nintraoperative us showed that the lesion was a target - appearing hypoechoic nodule measuring 1.0 cm in diameter .\nin addition , two other nodules each with a diameter of 0.5 cm , were present in the same segment of the liver ( fig .\nhistopathologic examination indicated that the gallbladder lesion was an adenocarcinoma with tubular , well - differentiated characteristics and had invaded the perimuscular connective tissue .\nthe hepatic nodules were diagnosed as a solitary necrotic nodule composed of a central necrotic core and a peripheral fibrotic capsule with inflammatory cells ( fig .\na 72-year - old man was admitted for vomiting and poor oral intake , which had lasted for six days .\nsix years earlier , due to gastric cancer , he had undergone a subtotoal gastrectomy with billroth ii anastomosis . 
on plain radiographs of the abdomen , diffuse dilatation of the proximal jejunal loop ,\nabdominal ct showed a dilated proximal jejunal loop , with an abrupt transitional zone and bowel wall thickening .\nin addition , an elliptical , hypoattenuating nodule measuring 1.0 0.3 cm in diameter was observed in the lateral segment of the left lobe of the liver ( fig .\nt2-weighted mr imaging ( t2wi ) indicated that a nodule at the same site was hyperintense , and t1wi that it was isointense , while gadolinium - enhanced study suggested the presence of a subtle , peripheral enhancing nodule ( fig .\nwe regarded these imaging findings as indicative of small bowel obstruction due to tumor recurrence , with hepatic metastasis , and later , as adhesive ileus and nontumorous lesion of liver .\nthe patient underwent surgery and this indicated that the proximal jejunum , about 10 cm from the treitz ligament , had adhered to the operative wound .\nin addition , a hard , whitish mass 1.0 cm in diameter , was found in the liver .\nthe patient underwent adhesiolysis of the jejunal loop and wedge resection of the liver lesion .\nsolitary necrotic nodule of the liver is an uncommon lesion on the basis of us or ct examination , which has often been diagnosed as necrotic metastasis ( 1 - 4 ) . a solitary necrotic nodule , despite its name\n, may be multiple ( 5 ) , and in a large number of cases , may represent the end - stage of a variety of benign lesions such as larval infestation , sclerosed hemangioma or trauma ( 2 - 4 ) .\nhistologically , the lesion consists of an outer fibrotic capsule with inflammatory cells and a central core of amorphous necrotic material , and occurs in a subcapsular location or in deep liver parenchyma ( 3 ) .\nrecent advances in hepatic surgery have lead to in an increase in the number of candidates for partial hepatic resection due to metastatic diseases .\nthe major roles of imaging are to provide a clear and detailed map of hepatic metastasis and to make a careful preoperative selection of patients , thus avoiding unnecessary surgical exploration .\nnoninvasive imaging techniques , including us , ct , and mr imaging , have recently improved the detection of hepatic metastasis . among these modalities\n, ctap is known to be most sensitive for the detection of focal hepatic lesions , though its high false - positive rate is considered a disadvantage ( 6 ) .\nintraoperative us has also been used to detect small metastases not identified by direct palpation or by other preoperative imaging techniques ( 7 ) .\nhowever , the radiologic features of these modalities are not sufficiently specific for the accurate characterization of liver tumors .\non contrast - enhanced ct , hepatic metastases usually present as focal hypoattenuating areas relative to normal hepatic parenchyma , on ctap as oval or round areas of perfusion defect , and on us as target - appearing hypoechoic nodules . in patients with primary malignancy ,\nthe focal hepatic lesion mimicking hepatic metastasis include liver abscess , inflammatory pseudotumor ( 8) , and focal eosinophilic infiltration ( 9 , 10 ) . 
in our cases ,\nthe radiological findings of solitary necrotic nodule of the liver on each modality were similar to those of hepatic metastasis .\nbecause solitary necrotic nodule of the liver exhibits central hypoattenuation of the necrotic area and peripheral hyperattenuation due to the presence of fibrosis , differentiation by means of ct between this nodule and hepatic metastasis is very difficult or impossible .\nthe intraoperative us findings of the lesion also simulated hepatic metastasis , even though lesion multiplicity had been elucidated .\na hepatic metastatic tumor usually appears as a well - defined , round mass , but in our cases , a solitary necrotic nodule of the liver seen as a relatively ovoid or elliptical mass .\nsuch a finding is not specific for this condition , however , and for correct diagnosis , needle biopsy may therefore be necessary . in conclusion , solitary necrotic nodule of the liver may be revealed by on radiologic imaging as single or multiple lesions of the liver , a finding which may be misinterpreted as metastatic hepatic tumor .",
"role": "user"
},
{
"content": "we present two cases of solitary necrotic nodules of the liver which on radiologic images mimicked hepatic metastasis . \n solitary necrotic nodule of the liver is a rare but benign entity which histopathologically consists of an outer fibrotic capsule with inflammatory cells and a central core of amorphous necrotic material . \n the lesion was seen on contrast - enhanced ct as an ovoid - shaped hypoattenuating nodule ; on ct during hepatic arteriography as enhancing nodule ; on intraoperative us as a target - appearing hypoechoic nodule ; on t2wi as a hyperintensity nodule , and on dynamic mr as a subtle peripheral enhancing nodule . \n although the radiologic features are not specific , solitary necrotic nodule of the liver should be included in the differential diagnosis of hepatic metastasis .",
"role": "assistant"
}
] |
science.craftchem_ner | science.craftchem_ner.5597 | [
{
"content": "You will be presented with a sentence from biomedical research text. Extract all terms that correspond to chemical entities of biological interest (ChEBI).\nReturn your identified terms in a JSON array, formatted like this: [\"Entity1\", \"Entity2\", ...].\nIf no such entities are detected, respond by providing an empty JSON array [].\n\nBiomedical context:\nWildtypelittermatesweremaintainedonthesamediet.",
"role": "user"
},
{
"content": "[]",
"role": "assistant"
}
] |
science.scientific_papers_summarization_single_doc_arxiv | science.scientific_papers_summarization_single_doc_arxiv.126433 | [
{
"content": "Write an abstract for the research paper below. Your response should only include the abstract and no additional text.\n\nArticle text:\nthe study of relativistic heavy ion collisions provides an excellent means to study the existence of new form of matter at extreme conditions of energy density and temperature@xcite.the bulk properties of the system created in such collisions can be studied via the transverse momentum ( @xmath0 ) distribution of the produced charged particles which reflect the conditions during the kinetic freeze out together with the integrated effects of the collective expansion from very initial stages.the bulk properties of the medium created in such collisions is usually studied through the statistical approach.one of the well known approaches to the particle production is governed by hydrodynamics inspired models based on boltzman - gibbs ( bg ) statistics.the models assumes the occurrence of a local thermal equilibrium and hence has a common fixed temperature @xmath1 associated with the hadronizing system.the transverse momentum(@xmath0 ) distributions were explained by the boltzman - gibbs blast wave ( bgbw ) hydrodynamical model fits to the data to obtain a set of bulk parameters at freeze out such as temperature,@xmath1 and the radial flow velocity ( @xmath2 ) @xcite.the successful application of such hydrodynamical models in limited @xmath0 range also provided evidence of collective motion in the continuously expanding system created in heavy ion collisions .\nfurthermore , the measurement of large elliptic anisotropy in momentum in non - central collisions points towards a nearly thermalized and strongly interacting system.the equilibrium description based on ideal hydrodynamical models where bg statistics can be applied are generally limited to low @xmath0.it is not always a good idea to select or constrain the @xmath0 regime so that the model fitting procedure works .\nhowever , hadronizing systems experience hard qcd scattering processes during initial stages , intrinsic initial state fluctuations due to formation of initial color glass condensate ( cgc ) formation , fluctuations in temperature , initial energy density _ etc . _ which can be interpreted as dynamical non - equilibrium effects.these non - equilibrium processes\nmay not be washed away by multi - particle interactions in the initial qgp ( quark gluon plasma ) state or in the later hadronic phase and can manifest more dominantly in spectral shape at intermediate or high @xmath0@xcite .\nrecent studies on the generation of azimuthal anisotropy and @xmath3 mass splitting using multi - phase transport models suggest the dominant role of anisotropic parton escape mechanism to describe the measured azimuthal anisotropy in small system and heavy ion collisions.the escape mechanism shows that the most of the @xmath3 is developed during the parton cascade stage and the origin of mass splitting is attributed to hadron re - scatterings @xcite.this clearly shows that evolution of the system based on hydrodynamical scenario is not imperative.in the past , tsallis non - extensive statistics have been used extensively to understand the particle production and to describe the evolution of @xmath0 spectra in @xmath4 and heavy ion collisions at various energies@xcite.the @xmath5 parameter in the tsallis statistics characterize the deviation from the assumed conditions of local thermal equilibrium in systems created in such collisions . 
in previous studies , tsallis distribution have been incorporated into the blast wave model ( tbw ) and have been used successfully to describe the spectra at rhic energies@xcite.the distribution was also applied to describe the data in @xmath4 collisions at lhc energies to observe the onset of radial flow in smaller systems@xcite.however , the observation of finite radial flow in smaller systems hinting towards collectivity in such systems is still under investigation and it is highly debatable whether hydrodynamics can be applied in smaller systems .\nrecently , the weibull model was used to describe successfully the particle production in @xmath4(@xmath6 ) and @xmath7 collisions for a broad range of energies@xcite.a weibull - glauber approach was also developed to understand the charged particle multiplicity and transverse energy distribution in heavy ion collisions@xcite.this is natural to assume as the individual nucleon - nucleon collisions inside the nucleus - nucleus collision has an inherent pqcd parton cascade evolution and fragmentation.the weibull distribution is usually used to describe natural processes where fragmentation and sequential branching is one of the major ingredients of the dynamical evolution@xcite .\nthe aim of the letter is to propose a generalized distribution within the framework of non - equilibrium q - statistics which would describe the @xmath0 distribution successfully for both small systems ( @xmath4 and @xmath8 ) and the system created in heavy ion collisions for all ranges of measured @xmath0 .\nthe successful description of weibull model describing particle production can be utilized to incorporate the tsallis q - statistics to weibull distribution to obtain a more generalized q - weibull distribution.the q - weibull distribution has been previously applied to many complex system in different areas of interest@xcite in this letter , we use q - weibull distribution to describe the @xmath0 distribution of charged hadrons emitted in @xmath4 , @xmath8 and @xmath9 systems for broad range of energies.the distribution provides a generalized statistical framework where both small and large systems can be compared.apart from describing the spectral evolution , the physical interpretation of the model parameters will provide quantitative insight into the dynamical processes in such collisions and help in predicting the spectra for future studies .\nthe most common way to incorporate the q - statistic to the weibull model is to replace the exponential factors by their equivalent q - exponential@xcite .\nthe two parameter weibull distribution is given by the following expression @xmath10 where @xmath11 and @xmath12 are the free parameters .\nthe q - weibull distribution is given by @xcite @xmath13 where @xmath14 the value of the non - extensive @xmath5 parameter is related to the deviation of the system from thermal equilibrium.in general , @xmath15 value is attributed to the presence of intrinsic fluctuations in the system and depends upon the observable being measured .\nit is worthwhile to investigate whether one can describe the @xmath0 distribution of charged hadrons in @xmath4 , @xmath8 and @xmath9 ( heavy ion ) collisions for a broad range of available energies using the q - weibull approach.this would allow the q - weibull function the most generalized parametrization to explain the @xmath0 distribution . 
in previous studies @xcite ,\ntsallis distribution was successful in explaining the @xmath0 distribution of @xmath4 collisions for the entire @xmath0 range for most of the beam energies studied .\none expects that the q - weibull function should describe the @xmath0 spectrum for charged particles in @xmath4 collisions as it incorporates both the weibull model of particle production and the q - statistics which has been very successful so far.the @xmath0 distribution of charged hadrons were fitted with q - weibull function in @xmath6 and @xmath4 collisions as measured by ua1 experiment @xcite and alice experiment at various energies@xcite , respectively.this is depicted in figure [ f1].the values of the extracted parameters are shown in table1 and table2.it will be interesting to see whether one can extend the functional description to @xmath8 and heavy ion collisions where the system created has higher degree of complexity and richer features of particle production mechanism .\nfigure [ f2 ] shows the description for @xmath16 collisions at 5.02 tev @xcite .\nthe distribution for @xmath17 collisions at 2.76 tev @xcite and @xmath18 @xcite collisions at 200 gev for various centrality classes are shown in figure [ f3 ] and figure [ f4 ] respectively .\nas can be observed from the figures , the q - weibull fits provide an excellent description of the data for different systems and for broad range of energies in the measured @xmath0 range.as q - weibull appears to be the most generalized q - distribution to explain the features of @xmath0 distribution , it seems important to study the evolution of its parameters with beam energy , type of colliding systems and centrality classes considered.this would allow to attribute some dynamics associated with the collisions processes to the parameters and thus one can have better understanding of the complex phenomena and have some quantitative estimates for future studies .\nfigure [ f5 ] shows the centrality dependence of the @xmath5 parameter extracted in @xmath17 collisions at 2.76 tev .\nthe non - extensive parameter @xmath5 which quantifies the deviation from local equilibrium goes up as we move from peripheral to central collisions if we consider the entire @xmath0 regime for the fit.the behavior is on expected lines as the initial pqcd hard scattering processes followed by fragmentation and hadronization dominates the particle production mechanism in hadronic interactions and supports the non - equilibrium scenario in such collisions.the processes becomes more harder in more central collisions and the relative contributions from initial state fluctuations is also more in central collisions.hence , we observe an increasing trend of deviation from equilibrium with centrality .\nhowever , the @xmath5 values shows a slight decreasing trend or remains more or less constant around 1.0 if we constrain the @xmath0 limit of the fit to lower values ( up to 2 @xmath19).this is in agreement with the hydrodynamical expectation where one sees a local equilibrium to be attained as the low @xmath0 hadrons emanate from a nearly thermally equilibrated system.the @xmath5 parameter in @xmath4 collisions is consistently greater than 1.0 and one can see larger @xmath5 values at highest energies as shown in table 1.this supports a similar mechanism of particle production via initial hard - scatterings across all energies concerned with an increase of relative contributions of hard or semi hard processes with energy .\nfigure [ f6 ] shows the behavior of @xmath11 
parameter for different centrality in @xmath17 and @xmath18 collisions.the @xmath11 value shows an increase from peripheral to central collisions for both integrated @xmath0 regime and the low @xmath0 regime.this shows that the @xmath11 parameter is related to the mean @xmath0 of the distribution.this can also be related to a common radial velocity ( as per transverse expansion scenario in hydrodynamic evolution ) in the low @xmath0 regime which goes up with centrality .\nthe @xmath11 parameter in @xmath4 collisions is more or less constant across the beam energies considered if one restricts the fit to a common value ( 16 gev / c in this case ) which is expected .\nthe @xmath12 parameter shows a decrease as we move from central to peripheral collisions in both @xmath17 and @xmath18 collisions as shown in figure [ f7 ] .\nhowever , the trend is reversed in the low @xmath0 regime .\nthis indicates that @xmath12 parameter is related to the dynamics of particle production and its value increases with onset of hard qcd scatterings , initial fluctuations and other processes leading to non - equilibrium conditions . to understand the physical implications of the parameters better ,\none requires a systematic study of q - weibull model fits from isr to lhc energies in case of hadronic collisions and from sps to lhc energies for heavy ion collisions.the fits to identified particle spectra will allow deeper understanding of the parameters and characterize the system evolution based the species ( baryon /meson ) and mass of the hadrons.the successful description of the spectra and the detailed interpretation of the model parameters will certainly be useful in providing qualitative estimates for future studies .\ndistribution of charged hadrons in @xmath4 collisions at 0.9 tev , 2.76 tev and 7 tev from pseudo - rapidity region @xmath20 as measured by alice experiment at lhc @xcite . *\n( lower panel ) * @xmath0 distribution of charged hadrons in @xmath6 collisions at 0.2 tev , 0.5 tev and 0.9 tev in pseudo - rapidity region @xmath21 as measured by ua1 experiment.@xcite the solid line is the q - weibull fit to the data points .\nthe data points are properly scaled for visibility ] distribution of charged hadrons in @xmath16 collisions at 5.02 tev for @xmath22 as measured by alice experiment at lhc @xcite .\nthe solid line is the q - weibull fit to the data points .\nthe data points are properly scaled for visibility ] distribution of charged hadrons for various centrality classes in pbpb collisions at 2.76 tev from pseudo - rapidity region @xmath20 as measured by alice experiment at lhc @xcite .\nthe solid line is the q - weibull fit to the data points .\nthe data points are properly scaled for visibility .\nthe upper panel shows the data and fit where the entire range of measured @xmath0 is taken while the lower panel shoes the fit fot @xmath23 distribution of charged hadrons for various centrality classes in @xmath18 collisions at 200 gev in pseudo - rapidity region @xmath24 as measured by phenix experiment at rhic @xcite .\nthe solid line is the q - weibull fit to the data points .\nthe data points are properly scaled for visibility .\nthe upper panel shows the data and fit where the entire range of measured @xmath0 is taken while the lower panel shoes the fit fot @xmath25 . ] as a function of centrality for @xmath25 . *\n( lower panel ) * variation of @xmath5 as a function of centrality for the entire range of @xmath0 measured . ] as a function of centrality for @xmath25 . 
*\n( lower panel ) * variation of @xmath11 as a function of centrality for the entire range of @xmath0 measured . ] as a function of centrality for @xmath25 . *\n( lower panel ) * variation of @xmath12 as a function of centrality for the entire range of @xmath0 measured . ] .\n[ alicefit ] the parameters @xmath12 , @xmath11 and @xmath5 obtained from the fits of @xmath0 distributions using q - weibull function in @xmath4 collisions ( @xmath20 ) for different energies as measured by alice experiment at lhc .\nthe fits are constrained to an upper limit of @xmath0 for uniform comparison of parameters . [ cols=\"^,^,^,^ \" , ]\nwe have used the tsallis q- statistics in the weibull model of particle production and applied it for the first time to describe the transverse momentum distribution of charged hadrons in different colliding system for a broad range of energies.the q - weibull function successfully describes the @xmath0 distribution for all ranges of @xmath0 measured.the parameter @xmath5 , which characterizes the degree of deviation from thermal equilibrium in a system decreases systematically from peripheral to the central collisions in heavy ion collisions if the @xmath0 of the particles considered are constrained to lower values(less than @xmath26).this indicates an evolution from a non - equilibrated system in peripheral collisions towards a more thermalized system in central heavy ion collisions.however , the trend is reversed if we consider the all inclusive @xmath0 regime which supports an increase of relative contribution of hard pqcd processes in central collisions.the @xmath11 parameter can be associated with the mean @xmath0 or collective expansion velocity of hadrons which shows an expected increase with centrality of collisions.the @xmath12 parameter can be related to system dynamics associated with the collisions types and centrality classes.the quantitative evolution of the @xmath12 parameter and deeper physical interpretation requires systematic studies of identified particle spectra for a broad range of energies.we have successfully demonstrated that q - weibull is the most generalized statistical model which can be used to describe and predict the @xmath0 distribution of charged particles in different collision systems for broad range of energies .\nsadhana dash would like to thank the department of science and technology ( dst ) , india for supporting the present work .",
"role": "user"
},
{
"content": "we have implemented the tsallis q - statistics in the weibull model of particle production known as the q - weibull distribution to describe the transverse - momentum ( @xmath0 ) distribution of the charged hadrons at mid - rapidity measured at rhic and lhc energies . \n the model describes the data remarkably well for the entire @xmath0 range measured in nucleus - nucleus and nucleon - nucleon collisions.the proposed distribution is based on the non - extensive tsallis q - statistics which replaces the usual thermal equilibrium assumption of hydrodynamical models.the parameters of the distribution can be related to the various aspects of complex dynamics associated with such collision process .",
"role": "assistant"
}
] |
science.pico_ner | science.pico_ner.3119 | [
{
"content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Participant\", \"Outcome\", \"Intervention\"].\n\nPlease return the output as a JSON object of the format: {\"Participant\": [\"patients with COPD\", ...], \"Intervention\": [\"Atenolol\", ...], \"Outcome\": [\"blood pressure\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nEarly psychosocial intervention in Alzheimer 's disease : cost utility evaluation alongside the Danish Alzheimer 's Intervention Study ( DAISY ) . OBJECTIVE To assess the cost utility of early psychosocial intervention for patients with Alzheimer 's disease and their primary caregivers . DESIGN Cost utility evaluation alongside a multicentre , randomised controlled trial with 3 years of follow-up . SETTING Primary care and memory clinics in five Danish districts . PARTICIPANTS 330 community-dwelling patients and their primary caregivers . INTERVENTION Psychosocial counselling and support lasting 8-12 months after diagnosis and follow-up at 3 , 6 , 12 and 36 months in the intervention group or follow-up only in the control group . MAIN OUTCOME MEASURES The primary outcome measure was the cost of additional quality-adjusted life years ( QALYs ) . Costs were measured from a societal perspective , including the costs of healthcare , social care , informal care and production loss . QALYs were estimated separately for the patient and the caregiver before aggregation for the main analysis . RESULTS None of the observed cost and QALY measures were significantly different between the intervention and control groups , although a tendency was noted for psychosocial care leading to cost increases with informal care that was not outweighed by the tendency for cost savings with formal care . The probability of psychosocial intervention being cost-effective did not exceed 36 % for any threshold value . The alternative scenario analysis showed that the probability of cost-effectiveness increased over the range of threshold values used if the cost perspective was restricted to formal healthcare . CONCLUSIONS A multifaceted , psychosocial intervention programme was found unlikely to be cost-effective from a societal perspective . The recommendation for practice in settings that are similar to the Danish setting is to provide follow-up with referral to available local support programmes when needed , and to restrict large multifaceted intervention programmes to patients and caregivers with special needs until further evidence for cost-effectiveness emerges . TRIAL REGISTRATION The study was registered in the Clinical Trial Database as ISRCTN74848736 .",
"role": "user"
},
{
"content": "{\"Intervention\": [\"cost utility\", \"control group\", \"control groups\", \"Cost utility\", \"follow-up\", \"Psychosocial counselling and support\", \"cost utility evaluation\"], \"Outcome\": [\"cost\", \"QALY measures\", \"costs of healthcare\", \"social care\", \"informal care\", \"production loss\", \"QALYs\", \"probability of cost-effectiveness increased\", \"cost of additional quality-adjusted life years ( QALYs ) . Costs\", \"cost-effective\"], \"Participant\": [\"patients with Alzheimer \\u0027s disease and their primary caregivers\", \"Alzheimer \\u0027s disease :\", \"330\"]}",
"role": "assistant"
}
] |
science.scicite_classification | science.scicite_classification.185 | [
{
"content": "You will be presented with a citation sentence from a research paper. Classify the intent behind this citation by choosing from one of the following categories:\n\n- Method: Cites the paper for its methodology or procedure.\n- Background: Cites the paper to provide background information.\n- Result: Cites the paper for its findings or results.\n\nYour answer should be a single word from the following list of options: [\"Method\", \"Background\", \"Result\"]. Do not include any other text in your response.\n\nCitation sentence:\n\nUsing the identifying restrictions of Baker et al. (2012), I find that the estimated response of the economy to a shock to policy uncertainty is much larger when the last 4 years of data are included in the dataset.",
"role": "user"
},
{
"content": "Background",
"role": "assistant"
}
] |
science.multixscience_multidoc_summarization | science.multixscience_multidoc_summarization.14154 | [
{
"content": "You are provided with the abstract of a scientific paper and the abstracts of the papers it references. Your task is to write a Related Work section for the main paper. This section should concisely summarize and integrate the key points from the cited abstracts, reflecting how they relate to the main paper's contributions and context. In your summary, use numerical citation markers like [1], [2], etc., to refer to the corresponding papers.\n\nMain Abstract:\nWe design fast deterministic algorithms for distance computation in the congested clique model. Our key contributions include: -- A @math -approximation for all-pairs shortest paths in @math rounds on unweighted undirected graphs. With a small additional additive factor, this also applies for weighted graphs. This is the first sub-polynomial constant-factor approximation for APSP in this model. -- A @math -approximation for multi-source shortest paths from @math sources in @math rounds on weighted undirected graphs. This is the first sub-polynomial algorithm obtaining this approximation for a set of sources of polynomial size. Our main techniques are new distance tools that are obtained via improved algorithms for sparse matrix multiplication, which we leverage to construct efficient hopsets and shortest paths. Furthermore, our techniques extend to additional distance problems for which we improve upon the state-of-the-art, including diameter approximation, and an exact single-source shortest paths algorithm for weighted undirected graphs in @math rounds.\n\nCited Abstract(s):\n[1]: \n\n[2]: \n\n[3]: \n\n[4]: For a positive parameter @math , the @math -bounded distance between a pair of vertices @math in a weighted undirected graph @math is the length of the shortest @math path in @math with at most @math edges, aka hops . For @math as above and @math , a @math -hopset of @math is a graph @math on the same vertex set, such that all distances in @math are @math -approximated by @math -bounded distances in @math . Hopsets are a fundamental graph-theoretic and graph-algorithmic construct, and they are widely used for distance-related problems in a variety of computational settings. Currently existing constructions of hopsets produce hopsets either with @math edges, or with a hopbound @math . In this paper we devise a construction of linear-size hopsets with hopbound @math . This improves the previous bound almost exponentially. We also devise efficient implementations of our construction in PRAM and distributed settings. The only existing PRAM algorithm EN16 for computing hopsets with a constant (i.e., independent of @math ) hopbound requires @math time. We devise a PRAM algorithm with polylogarithmic running time for computing hopsets with a constant hopbound, i.e., our running time is exponentially better than the previous one. Moreover, these hopsets are also significantly sparser than their counterparts from EN16 . We use our hopsets to devise a distributed routing scheme that exhibits near-optimal tradeoff between individual memory requirement @math of vertices throughout preprocessing and routing phases of the algorithm, and stretch @math , along with a near-optimal construction time @math , where @math is the hop-diameter of the input graph.\n\n[5]: \n\n[6]: Let k ≥ 2 be an integer. We show that any undirected and unweighted graph G = (V, E) on n vertices has a subgraph G' = (V, E') with O(kn1+1 k) edges such that for any two vertices u, v ∈ V, if Δ G (u, v) = d, then Δ G' (u, v) = d+O(d1-1 k-1). 
Furthermore, we show that such subgraphs can be constructed in O(mn1 k) time, where m and n are the number of edges and vertices in the original graph. We also show that it is possible to construct a weighted graph G* = (V, E*) with O(kn1+1 (2k-1)) edges such that for every u, v ∈ V, if Δ G (u, v) = d, then Δ ≤ Δ G* (u, v) = d + O(d1-1 k-1). These are the first such results with additive error terms of the form o(d), i.e., additive error terms that are sublinear in the distance being approximated.\n\n[7]: Abstract A ( β , ϵ ) -hopset is, informally, a weighted edge set that, when added to a graph, allows one to get from point a to point b using a path with at most β edges (“hops”) and length ( 1 + ϵ ) dist ( a , b ) . In this paper we observe that Thorup and Zwick's sublinear additive emulators are also actually ( O ( k ϵ ) k , ϵ ) -hopsets for every ϵ > 0 , and that with a small change to the Thorup–Zwick construction, the size of the hopset can be made O ( n 1 + 1 2 k + 1 − 1 ) . As corollaries, we also shave “k” factors off the size of Thorup and Zwick's [20] sublinear additive emulators and the sparsest known ( 1 + ϵ , O ( k ϵ ) k − 1 ) -spanners, due to Abboud, Bodwin, and Pettie [1] .\n\n[8]: Spanners, emulators, and approximate distance oracles can be viewed as lossy compression schemes that represent an unweighted graph metric in small space, say @math bits. There ...\n\n[9]: \n\n[10]: \n\n[11]: \n\n[12]: We present a deterministic (1+o(1))-approximation O(n1 2+o(1)+D1+o(1))-time algorithm for solving the single-source shortest paths problem on distributed weighted networks (the CONGEST model); here n is the number of nodes in the network and D is its (hop) diameter. This is the first non-trivial deterministic algorithm for this problem. It also improves (i) the running time of the randomized (1+o(1))-approximation O(n1 2D1 4+D)-time algorithm of Nanongkai [STOC 2014] by a factor of as large as n1 8, and (ii) the O(є−1logє−1)-approximation factor of Lenzen and Patt-Shamir’s O(n1 2+є+D)-time algorithm [STOC 2013] within the same running time. Our running time matches the known time lower bound of Ω(n1 2 logn + D) [Das STOC 2011] modulo some lower-order terms, thus essentially settling the status of this problem which was raised at least a decade ago [Elkin SIGACT News 2004]. It also implies a (2+o(1))-approximation O(n1 2+o(1)+D1+o(1))-time algorithm for approximating a network’s weighted diameter which almost matches the lower bound by [PODC 2012]. In achieving this result, we develop two techniques which might be of independent interest and useful in other settings: (i) a deterministic process that replaces the “hitting set argument” commonly used for shortest paths computation in various settings, and (ii) a simple, deterministic, construction of an (no(1), o(1))-hop set of size O(n1+o(1)). We combine these techniques with many distributed algorithmic techniques, some of which from problems that are not directly related to shortest paths, e.g. ruling sets [ STOC 1987], source detection [Lenzen, Peleg PODC 2013], and partial distance estimation [Lenzen, Patt-Shamir PODC 2015]. Our hop set construction also leads to single-source shortest paths algorithms in two other settings: (i) a (1+o(1))-approximation O(no(1))-time algorithm on congested cliques, and (ii) a (1+o(1))-approximation O(no(1)logW)-pass O(n1+o(1)logW)-space streaming algorithm, when edge weights are in 1, 2, …, W . The first result answers an open problem in [Nanongkai, STOC 2014]. 
The second result partially answers an open problem raised by McGregor in 2006 [ sublinear.info , Problem 14].\n\n[13]:",
"role": "user"
},
{
"content": "Hopsets are a central building block in distance computation and are studied extensively in various computing models [7]8 [13] [11] [5] [2] [3]9 [1] [3] [9] . The most related to our work are two recent constructions of Elkin and Neiman [4] , and Huang and Pettie [7] , which are based on the emulators of Thorup and Zwick [6] , and are near optimal due to existential results [8] . Specifically, [7] construct @math -hopsets of size @math with @math , where recent existential results show that any construction of @math -hopsets with worst case size @math must have @math , where @math is an integer and @math . For a detailed discussion of hopsets see the introduction in [4] [2] .",
"role": "assistant"
}
] |
science.healthver_entailment | science.healthver_entailment.5140 | [
{
"content": "You will be shown a claim about public health and the abstract of a biomedical research paper. Each sentence from the abstract will be on a separate line. Your task is to return a JSON object with two fields:\n\n- \"verdict\": The fact-checking verdict. If the information in the abstract supports the claim, write \"SUPPORT\". If the abstract contradicts the claim, write \"CONTRADICT\". If the abstract does not provide enough information to arrive at a verdict, write \"NEI\" (for \"not enough information\").\n- \"evidence\": An array of sentences providing evidence for the verdict. Please copy all relevant sentences verbatim from the abstract. If the verdict was \"NEI\", then return an empty array.\n\nFor instance, if the model were given the claim \"wearing masks can prevent the spread of COVID\", the output might be:\n{\n \"verdict\": \"SUPPORT\",\n \"evidence\": [\"Our findings indicate that mass mask-wearing reduces the transmission rate for COVID-19.\"]\n}\n\nClaim: those who have a weakened immune system may be at a higher risk of developing a severe illness as a result of COVID-19.\n\nAbstract:\nThis review evaluates whether pregnancy is a risk factor for COVID-19 by looking at the expression of immune markers such as immune cells and cytokines in order to have a better understanding on the pathophysiology of the disease, thus reducing maternal deaths.\nPregnant women are more at risk of contracting COVID-19 due to their weakened immune system.\nStudies demonstrate that COVID-19 is an immune condition which is marked by reduced lymphocytes and elevated selected proinflammatory cytokines.\nSimilar immune expression has been demonstrated in pregnancy by several studies.\nIn addition, the placenta has been shown to possess ACE2 receptors on the villous cytotrophoblast and the syncytiotrophoblast and findings suggest that the coronavirus enters the host cells via these ACE2 receptors.\nThe immune response in pregnancy increases the risk of contracting COVID-19.\nBoth normal pregnancy and COVID-19 are marked by decreased lymphocytes, NKG2A inhibitory receptors, and increased ACE2, IL-8, IL-10, and IP-10 it therefore safer to conclude that pregnancy is a risk factor for COVID-19 development.\nFurthermore, the presence of the ACE2 receptors in the placenta may increase the risk of mother to baby transmission of the virus.\nTherefore, more studies investigating the link between pregnancy and COVID-19 are needed.",
"role": "user"
},
{
"content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"Pregnant women are more at risk of contracting COVID-19 due to their weakened immune system.\", \"Studies demonstrate that COVID-19 is an immune condition which is marked by reduced lymphocytes and elevated selected proinflammatory cytokines.\", \"Similar immune expression has been demonstrated in pregnancy by several studies.\"]}",
"role": "assistant"
}
] |
science.bc7_litcovid_topic_classification | science.bc7_litcovid_topic_classification.7048 | [
{
"content": "You will be presented with a paragraph from a COVID-19 literature. Your task is to classify the topics of this piece of text by choosing from the following categories:\n\n- Epidemic Forecasting: The content relates to predictions about the spread of COVID-19.\n- Treatment: The paragraph discusses therapeutic methods or medication for COVID-19.\n- Prevention: The content covers measures to prevent the spread of COVID-19.\n- Mechanism: The paragraph explains the biological mechanisms of the virus.\n- Case Report: The content is a report of a COVID-19 case.\n- Transmission: The paragraph discusses how COVID-19 is transmitted.\n- Diagnosis: The content is about diagnosing COVID-19 in individuals.\n\nYour classification should be provided as an element within a JSON array. The array should be formatted with topics enclosed in double quotes and separated by commas, following the JSON format, for example: [\"Topic1\", \"Topic2\", ...]. Include all applicable categories within the array. \nYour output should be a valid JSON array, and no additional text should be included in your response.\n\nArticle:\n\nIn early 2020, hospitals faced unprecedented patient volumes resulting from the COVID-19 outbreak. Nurse executives at a faith-based, not-for-profit health care system quickly responded to ensure safe staffing, conservation of personal protective equipment, and implementation of infection prevention strategies. A significant challenge was safe staffing for the expected patient surge. To address this, a team of nurse executives utilized the ADKAR change model to guide a transition from primary to team nursing. The processes varied between hospitals, but core principles and implementation strategies were the same. This article discusses the quick, but methodical, journey one health care system experienced.",
"role": "user"
},
{
"content": "[\"Prevention\"]",
"role": "assistant"
}
] |
science.pico_ner | science.pico_ner.3244 | [
{
"content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Participant\", \"Outcome\", \"Intervention\"].\n\nPlease return the output as a JSON object of the format: {\"Participant\": [\"patients with COPD\", ...], \"Intervention\": [\"Atenolol\", ...], \"Outcome\": [\"blood pressure\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nIntramedullary repair device against volar plating in the reconstruction of extra-articular and simple articular distal radius fractures ; a randomized pilot study . PURPOSE This prospective randomized pilot study reports our institutional experience and early results using Sonoma Wrx ( Sonoma Orthopedic Products , Santa Rosa , CA ) in the treatment of extra-articular and simple intra-articular distal radius fractures . MATERIAL AND METHODS A total of 64 patients , were enrolled in the study . They were considered eligible if they had ; unstable extra-articular distal radius fractures and simple intra-articular distal radius fractures suitable for closed reduction ( AO types ; A2.2 , A2.3 , A3.1 C2.1 , C2.2 ) . Patients in group I received intramedullary fixation using the Sonoma Wrx device and patients in group II received standard volar locking plate fixation . Radiographic criteria of acceptable healing were used for evaluation . RESULTS Two groups were similar in terms of baseline characteristics . Mean time of operation was significantly shorter in Group 1 vs. in group 2 ( 36.81 ± 7.11 vs. 48.97 ± 5.9 minutes , p = 0.001 ) . Time to healing of the fracture was not different between two groups ( 5.45 ± 1.09 vs. 5.70 ± 1.04 weeks for Group 1 vs. 2 , respectively p = 0.36 ) . Overall complications occurred in 9 patients in group 1 and in 15 patients in group 2 ( p = 0.17 ) . Follow-up was completed in all patients with a median time of 12 months and 13 months in group 1 and 2 , respectively . On radiographic evaluation radial inclination , radial height and volar tilt were not significantly different between group 1 and 2 , respectively . There were no significant differences between two groups in regard to wrist rotational degrees measured in last follow-up visit . CONCLUSION Sonoma Wrx Device is reliable and effective in terms of achieving satisfactory outcomes in treatment of distal radius fractures . It may be reasonable to use this device to prevent complications that are related to extensive soft tissue dissection .",
"role": "user"
},
{
"content": "{\"Intervention\": [\".\", \"Intramedullary repair\", \"standard volar locking plate fixation\", \"intramedullary fixation using the Sonoma Wrx device\"], \"Outcome\": [], \"Participant\": [\"fractures suitable\", \"64\"]}",
"role": "assistant"
}
] |
science.scientific_lay_summarisation_plos_single_doc_summ | science.scientific_lay_summarisation_plos_single_doc_summ.15306 | [
{
"content": "You will be presented with the abstract, introduction, and discussion section from a biomedical research article. Your task is to create a summary that a layperson can understand, capturing the essence of the research article. Keep essential scientific terms, but ensure that the language remains clear and the concepts are explained in an uncomplicated manner.\n\nTitle: Mononuclear cell dynamics in M. tuberculosis infection provide opportunities for therapeutic intervention\nArticle:\nAbstract:\nMycobacterium tuberculosis causes chronic infection of mononuclear phagocytes, especially resident( alveolar) macrophages, recruited macrophages, and dendritic cells. Despite the importance of these cells in tuberculosis( TB) pathogenesis and immunity, little is known about the population dynamics of these cells at the sites of infection. We used a combination of congenic monocyte adoptive transfer, and pulse-chase labeling of DNA, to determine the kinetics and characteristics of trafficking, differentiation, and infection of mononuclear phagocytes during the chronic, adaptive immune phase of M. tuberculosis infection in mice. We found that Ly6Chi monocytes traffic rapidly to the lungs, where a subpopulation become Ly6Clo and remain in the lung vascular space, while the remainder migrate into the lung parenchyma and differentiate into Ly6Chi dendritic cells, CD11b+ dendritic cells, and recruited macrophages. As in humans with TB, M. tuberculosis-infected mice have increased numbers of blood monocytes; this is due to increased egress from the bone marrow, and not delayed egress from the blood. Pulse-chase labeling of dividing cells and flow cytometry analysis revealed a T1/2 of ~15 hrs for Ly6Chi monocytes, indicating that they differentiate rapidly upon entry to the parenchyma of infected lungs; in contrast, cells that differentiate from Ly6Chi monocytes turn over more slowly, but diminish in frequency in less than one week. New cells( identified by pulse-chase labeling) acquire bacteria within 1–3 days of appearance in the lungs, indicating that bacteria regularly encounter new cellular niches, even during the chronic stage of infection. Our findings that mononuclear phagocyte populations at the site of M. tuberculosis infection are highly dynamic provide support for specific approaches for host-directed therapies directed at monocytes, including trained immunity, as potential interventions in TB, by replacing cells with limited antimycobacterial capabilities with newly-recruited cells better able to restrict and kill M. tuberculosis.\nIntroduction:\nMononuclear phagocytes( MNP) harbor Mycobacterium tuberculosis in tissues of humans[1] and experimental animals[2–4]; and MNP are essential elements of granulomas, the characteristic tissue lesions in tuberculosis[5, 6]. Although macrophages have been characterized as prominent cellular hosts for M. tuberculosis in vivo, recent studies have revealed the roles of distinct populations of MNP, including tissue-resident( i. e., lung alveolar) macrophages, monocyte-derived recruited macrophages, and dendritic cells, as host cells for the bacteria in experimental animals[4] and humans[7]. Although cells in these subsets exhibit functional differences during M. 
tuberculosis infection, including the ability to transport bacteria from the lungs to the local lymph nodes[8–10] and their ability to present antigens for activation of CD4 T cells[11], there is little known regarding the population dynamics of MNP in tuberculosis or any other chronic infection. Recent studies of blood monocytes that emigrate from the bone marrow during homeostasis have revealed the potential for these cells to differentiate from Ly6Chi monocytes to several distinct subsets of intravascular and tissue parenchymal cells. A proportion of Ly6Chi monocytes differentiate into Ly6Clo monocytes, which remain in the blood and vascular space of peripheral tissues, where they are considered to 'patrol' the vascular space and respond to inflammatory stimuli[12]. In addition, Ly6Chi monocytes emigrate from the vascular space during homeostasis and differentiate into lung macrophages and dendritic cells[13]. M. tuberculosis infection markedly increases accumulation of recruited macrophages and dendritic cells in the lungs[2, 4, 9, 14, 15], but it is unclear whether the recruited cells are long-lived, or whether they require continuous replenishment by recruitment, local proliferation, or both. Since M. tuberculosis infection is accompanied by apoptosis[16], necrosis[17], and egress from the lungs to the local lymph node[8–10], we hypothesized that mononuclear cell populations in the lungs are dynamic, and their abundance and differentiation may contribute to the outcomes of infection. Initial evidence that MNP populations can be dynamic at a tissue site was obtained in studies by Dannenberg, who used intravenous injection of tritiated thymidine to track the recruitment and decay of MNP at sites of M. bovis BCG injection in the skin of rabbits[18]. Although these experiments revealed evidence for MNP turnover, they could not reveal the differentiation of the labeled cells, and the decay of labeled cells at the site of BCG injection may have been at least partly due to clearing of the attenuated bacteria used as the stimulus. Here, we studied monocyte trafficking, differentiation, and turnover in multiple compartments: blood, the lung vasculature, lung parenchyma, lung-draining lymph nodes, and distant lymph nodes, in uninfected and M. tuberculosis-infected mice, using adoptive transfer and in vivo pulse-chase labeling, coupled with flow cytometry and immunostaining of tissue sections. We selected time points that followed the rapid accumulation of cells that occurs during the early response to infection and that followed the development of adaptive immune responses. Our results extend previously-published results[15] by revealing distinct characteristics of endovascular and tissue parenchymal cells, by revealing that the MNP population is actively turning over during the chronic phase of M. tuberculosis infection, and by finding that new MNP rapidly acquire bacteria at the site of infection. The findings that MNP populations are dynamic during the chronic stage of infection suggests that mechanisms that disrupt MNP trafficking or differentiation may underlie reactivation and progression of TB, and also indicate the potential to employ trained immunity[19] as a therapeutic intervention in tuberculosis, by replacing cells that support intracellular bacterial growth with new cells that are more capable of restricting bacterial growth.\nDiscussion:\nIn this work, we advanced the understanding of host cell and bacterial dynamics during the chronic stage of M. 
tuberculosis infection, after the development of adaptive immunity. Consistent with reports in sterile inflammation models, we found that Ly6Chi blood monocytes differentiate into multiple subsets of dendritic cells and macrophages in the lungs and lymph nodes of M. tuberculosis-infected mice. Using a recently-described method of distinguishing cells that reside in the vascular space from those in the tissue parenchyma, we determined that Ly6Chi monocytes differentiate into Ly6Clo monocytes that remain in the lung vascular space in M. tuberculosis-infected mice. Using the same method, we determined that differentiation of Ly6Chi cells into other subsets happens after egress from the vascular compartment, without transitioning through a Ly6Clo state. In particular, we determined that Ly6Chi monocytes rapidly differentiate into Ly6Chi CD11c+ MHCII+ population of DC, which is a transient intermediate state followed by differentiation into CD11b+ DC in the lungs. Although our evidence is indirect, we found kinetic evidence that both Ly6Chi and CD11b+ DC migrate and transport bacteria from the lungs to the lung-draining mediastinal lymph node, consistent with previous reports[8–10]. We also determined that the response to M. tuberculosis infection in mice includes peripheral blood monocytosis, as observed in children and adult humans with TB[23–25], and using pulse-chase DNA labeling, we determined that peripheral blood monocytosis is due to enhanced production and egress from the bone marrow and not due to delayed egress from the blood. Pulse-chase DNA labeling further confirmed the capacity and kinetics of differentiation of monocyte-derived cells and established that the monocyte-derived professional phagocyte populations in the lungs and lymph nodes of M. tuberculosis-infected mice are dynamic and turn over with a half-life of less than one week. The finding that the monocyte-derived cell subsets that harbor M. tuberculosis turn over frequently suggested that the bacteria must exchange their cellular niches frequently, and we confirmed this prediction by detecting M. tuberculosis in newly-arrived cells in the lungs, indicating that the bacteria must respond regularly and rapidly to distinct cellular environments. Although the dynamics of mononuclear cells in localized mycobacterial infections has gotten little recent attention, there are precedents in other animal models, and evidence to suggest similar events in humans. Nearly 50 years ago, administration of radiolabeled thymidine was used to characterize mononuclear cell recruitment and death at skin sites of injection of M. bovis BCG in rabbits, and revealed that in contrast to prior assumptions, granuloma mononuclear cell populations are dynamic[18, 36]. However, at the time those experiments were performed, it was not possible to distinguish distinct subsets of mononuclear cells or their differentiation other than by their morphology. More recent studies used adoptive transfer of bone marrow monocytes during M. tuberculosis infection of mice, and revealed tissue-dependent differentiation into multiple subsets of macrophages and dendritic cells, with distinct patterns of differentiation in the lungs and the lung-draining lymph nodes[15]. Our findings extend those results by revealing that Ly6Chi CD11c+ cells appear after Ly6Chi monocytes enter the lung parenchyma and are transient intermediates that subsequently differentiate into CD11chi CD11b+ dendritic cells. 
This observation helps reconcile apparently different results of analysis of the cells that transport M. tuberculosis from the lungs to the lung-draining mediastinal lymph node: one study attributed this property to inflammatory monocytes[8], while another reported that CD11b+ dendritic cells are responsible for bacterial transport[9]. Our present results indicate that the inflammatory monocytes credited with bacterial transport likely differentiate into Ly6Chi CD11c+ dendritic cells after entering the lung parenchyma, where they acquire bacteria and differentiate further into CD11b+ dendritic cells, which contain bacteria and appear in the draining lymph node[9, 10]. Other recent studies have emphasized the importance of tissue-dependent differentiation of monocytes in tuberculosis, and certain of these have revealed important functional differences of monocyte-derived cell subsets. In particular, administration of poly-ICLC, a potent inducer of type I interferon expression, impaired control of M. tuberculosis in the lungs of mice, due to CCR2-dependent recruitment of monocyte-derived cells that are especially permissive for intracellular bacterial survival and/or growth[14]. In contrast, a recent study reported that interstitial macrophages that depended on CCL2 for their recruitment, are more able to constrain M. tuberculosis fitness as reflected by use of fluorescent reporter plasmids[2]. These seemingly contradictory results emphasize the potential plasticity of cells that differentiate from monocytes, which is likely the product of early programming of monocytes as well as of signals for differentiation and activation at the site of infection. Notably, studies in zebrafish embryos infected with virulent M. marinum have revealed that recruited macrophages are less able to control intracellular mycobacterial growth compared with tissue-resident macrophages[37]. As bacterial spread to recruited macrophages was a function of bacterial phenolic glycolipid, these results indicate that pathogen virulent factors can also act by directing bacteria to cells that are poorly equipped to kill them. Although studies similar to ours have not been performed in humans or with human samples, it is noteworthy that cell subsets comparable to the ones studied here can be detected in human lung tissue[38]. Likewise, recent studies of human monocytes have revealed a comparable pattern of cell turnover and differentiation[39], indicating that in humans with tuberculosis, cell populations localized to the site of infection in granulomas may exhibit dynamic states of turnover and differentiation similar to those that we have found in M. tuberculosis-infected mice. The finding that mononuclear cell populations are in a dynamic state during the chronic stage of M. tuberculosis infection may have clinical value. First, our findings that MNP populations are dynamic, even during the chronic phase of M. tuberculosis infection suggests that mechanisms that perturb the dynamics of MNP trafficking and differentiation may underlie or contribute to the transition from latent TB infection to active TB disease and/or the severity of TB disease. Evidence consistent with this hypothesis is that pulmonary TB in patients with diabetes mellitus, which has poorer outcomes compared with those of pulmonary TB in those without diabetes[40–42] is characterized by decreased frequencies of blood monocyte and dendritic cell populations compared with those in patients without diabetes[43]. 
Second, a recent study revealed that 'trained hematopoietic stem cells' that produce epigenetically-modified monocytes and macrophages are capable of contributing to control of M. tuberculosis infection in mice, in a manner that is independent of T lymphocytes[19]. Our finding that recruited macrophage populations are dynamic and that newly-arrived cells have access to bacteria even during the chronic stage of infection, implies that modification of cell populations to better restrict and/or kill M. tuberculosis may have therapeutic value in humans with drug-resistant tuberculosis, and may shorten the length of effective treatment, even in those with drug-susceptible tuberculosis.",
"role": "user"
},
{
"content": "During certain chronic infections such as tuberculosis , inflammatory cells , including macrophages and dendritic cells , are recruited to infected tissues where they aggregate to form tissue lesions known as granulomas . Although granulomas can persist long term , the dynamics of the cell populations that comprise granulomas are not well understood . We used a combination of methods to discover that , during chronic infection of mice with Mycobacterium tuberculosis , the monocyte , macrophage , and dendritic cell populations are highly dynamic: recently-proliferated cells traffic rapidly to infected lung tissues , yet they persist with a half-life of less than one week . We also found that recently-proliferated cells become infected with M . tuberculosis as soon as one day after their arrival in the lungs , indicating that the bacteria are regularly moving to new cellular niches , even during the chronic stage of infection . The dynamic nature of the cell populations that encounter M . tuberculosis suggests that interventions such as trained immunity have potential therapeutic roles , by replacing cells that have poor antimycobacterial activity with cells with enhanced antimycobacterial activity . These interventions could improve the outcomes of treatment of drug resistant tuberculosis .",
"role": "assistant"
}
] |
science.scireviewgen_multidoc_summarization | science.scireviewgen_multidoc_summarization.35234 | [
{
"content": "You are presented with multiple paper abstracts, a literature review title, and a specific chapter title. Your task is to write a focused literature review chapter that aligns with the given chapter title. The review should synthesize the content from the provided abstracts, draw connections between different works, highlight major findings, and identify any gaps in the existing research.\nLiterature Review Title: Toward more accurate pan-specific MHC-peptide binding prediction: a review of current methods and tools. \nChapter Title: Evaluation criteria \n1. Abstract of Cited Paper (BIB001): Diagnostic systems of several kinds are used to distinguish between two classes of events, essentially \"signals\" and \"noise\". For them, analysis in terms of the \"relative operating characteristic\" of signal detection theory provides a precise and valid measure of diagnostic accuracy. It is the only measure available that is uninfluenced by decision biases and prior probabilities, and it places the performances of diverse systems on a common, easily interpreted scale. Representative values of this measure are reported here for systems in medical imaging, materials testing, weather forecasting, information retrieval, polygraph lie detection, and aptitude testing. Though the measure itself is sound, the values obtained from tests of diagnostic systems often require qualification because the test data on which they are based are of unsure quality. A common set of problems in testing is faced in all fields. How well these problems are handled, or can be handled in a given field, determines the degree of confidence that can be placed in a measured value of accuracy. Some fields fare much better than others. \n2. Abstract of Cited Paper (BIB002): BackgroundBinding of peptides to Major Histocompatibility class II (MHC-II) molecules play a central role in governing responses of the adaptive immune system. MHC-II molecules sample peptides from the extracellular space allowing the immune system to detect the presence of foreign microbes from this compartment. Predicting which peptides bind to an MHC-II molecule is therefore of pivotal importance for understanding the immune response and its effect on host-pathogen interactions. The experimental cost associated with characterizing the binding motif of an MHC-II molecule is significant and large efforts have therefore been placed in developing accurate computer methods capable of predicting this binding event. Prediction of peptide binding to MHC-II is complicated by the open binding cleft of the MHC-II molecule, allowing binding of peptides extending out of the binding groove. Moreover, the genes encoding the MHC molecules are immensely diverse leading to a large set of different MHC molecules each potentially binding a unique set of peptides. Characterizing each MHC-II molecule using peptide-screening binding assays is hence not a viable option.ResultsHere, we present an MHC-II binding prediction algorithm aiming at dealing with these challenges. The method is a pan-specific version of the earlier published allele-specific NN-align algorithm and does not require any pre-alignment of the input data. This allows the method to benefit also from information from alleles covered by limited binding data. The method is evaluated on a large and diverse set of benchmark data, and is shown to significantly out-perform state-of-the-art MHC-II prediction methods. 
In particular, the method is found to boost the performance for alleles characterized by limited binding data where conventional allele-specific methods tend to achieve poor prediction accuracy.ConclusionsThe method thus shows great potential for efficient boosting the accuracy of MHC-II binding prediction, as accurate predictions can be obtained for novel alleles at highly reduced experimental costs. Pan-specific binding predictions can be obtained for all alleles with know protein sequence and the method can benefit by including data in the training from alleles even where only few binders are known. The method and benchmark data are available at http://www.cbs.dtu.dk/services/NetMHCIIpan-2.0\n\nLiterature Review:",
"role": "user"
},
{
"content": "Three types of experimental data are used-MHC binding peptides, endogenously presented ligands and T-cell epitopes BIB003 , where typically MHC binding peptides are quantitative binding data and ligands/epitopes data are binary. Selection of evaluations measure depends upon types of data. However, area under the receiving operator characteristic (ROC) curve (AUC) BIB002 is a common evaluation criterion on both numerical and binary data, i.e. binding, ligand and epitope data. On the other hand, for evaluation based on quantitative binding data, Pearson correlation coefficient (PCC) BIB001 is used to assess linear correlation between prediction scores and (log-transformed) binding affinities. Spearman rank-order correlation (SRC) is also used when measuring nonlinear correlation BIB001 . When predicting peptides binding to MHC-II, the length of a given peptide can exceed nine, so another important evaluation measure is to identify the binding core that mediates MHCpeptide binding.",
"role": "assistant"
}
] |
science.data_reco_mcq_sc | science.data_reco_mcq_sc.881 | [
{
"content": "Given a short query with keyphrases about a research idea in 2015 in the domain of artificial intelligence, recommend the relevant datasets to validate the research idea from the options provided. A short description of the datasets is also provided to help you.\n\nQuery: We propose a novel deep convolutional network for accurate head pose estimation. By leveraging explicit 3D facial\n\nKeyphrases: face alignment images\n\nDataset description:\nWFLW: The Wider Facial Landmarks in the Wild or WFLW database contains 10000 faces (7500 for training and 2500 for testing) with 98 annotated landmarks. This database also features rich attribute annotations in terms of occlusion, head pose, make-up, illumination, blur and expressions.\nAFLW: The Annotated Facial Landmarks in the Wild (AFLW) is a large-scale collection of annotated face images gathered from Flickr, exhibiting a large variety in appearance (e.g., pose, expression, ethnicity, age, gender) as well as general imaging and environmental conditions. In total about 25K faces are annotated with up to 21 landmarks per image.\nFAT: Falling Things (FAT) is a dataset for advancing the state-of-the-art in object detection and 3D pose estimation in the context of robotics. It consists of generated photorealistic images with accurate 3D pose annotations for all objects in 60k images.\n300W: The 300-W is a face dataset that consists of 300 Indoor and 300 Outdoor in-the-wild images. It covers a large variation of identity, expression, illumination conditions, pose, occlusion and face size.\n\nOptions: A: WFLW B: AFLW C: FAT D: 300W",
"role": "user"
},
{
"content": "300W",
"role": "assistant"
}
] |
science.chemprot_ner | science.chemprot_ner.127 | [
{
"content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Chemical\", \"Protein\"].\n\nPlease return the output as a JSON object of the format: {\"Chemical\" : [\"Dexamethasone\", ...], \"Protein\" : [\"BRCA-1\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nAnti-diabetic property of Tinospora cordifolia and its active compound is mediated through the expression of Glut-4 in L6 myotubes.\nTinospora cordifolia is a well reported plant possessing numerous medicinal values including anti-diabetic property. Aim of the present study is to study the mechanism of action of Tinospora cordifolia and its active compound in differentiated myocytes, L6 cells. Key marker of diabetes in cells is the insulin dependent glucose transporter-4 (Glut-4) which also responds to exogenous chemicals, and is over expressed up to 5- and 4-fold, by Tinospora cordifolia and palmatine, respectively. Next to Glut-4, the predominant protein influencing glucose metabolism is PPARα and γ whose expressions were also positively modulated. Further, the inhibitors of insulin pathway prevented glucose uptake mediated by Tinospora cordifolia and palmatine which shows that the activity is majorly mediated through insulin pathway.",
"role": "user"
},
{
"content": "{\"Chemical\": [\"palmatine\", \"glucose\"], \"Protein\": [\"insulin dependent glucose transporter-4\", \"PPAR\\u03b1 and \\u03b3\", \"Glut-4\", \"insulin\"]}",
"role": "assistant"
}
] |
science.acl_arc_intent_classification | science.acl_arc_intent_classification.1322 | [
{
"content": "You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:\n- Background: provides context or foundational information related to the topic.\n- Extends: builds upon the cited work.\n- Uses: applies the methods or findings of the cited work.\n- Motivation: cites the work as inspiration or rationale for the research.\n- CompareOrContrast: compares or contrasts the cited work with others.\n- FutureWork: cites the work as a direction for future research.\n\nYour answer should be a single word from the following list of options: [\"Background\", \"Extends\", \"Uses\", \"Motivation\", \"CompareOrContrast\", \"FutureWork\"]. Do not include any other text in your response.\n\nSection Title:\nNone\n\nContext before the citation:\nFollowing Ponzetto and Strube (2006), we consider an anaphoric reference, NPi, correctly resolved if NPi and its closest antecedent are in the same coreference chain in the resulting partition. We report performance in terms of two metrics: (1) the Fmeasure score as computed by the commonly-used MUC scorer (Vilain et al., 1995), and (2) the accuracy on the anaphoric references, computed as the fraction of anaphoric references correctly resolved. As in SC induction, we use the ACE Phase 2 coreference corpus for evaluation purposes, acquiring the coreference classifiers on the 422 training texts and evaluating their output on the 97 test texts.\n\nCitation Sentence:\nFollowing Ponzetto and Strube ( 2006 ) , we consider an anaphoric reference , NPi , correctly resolved if NPi and its closest antecedent are in the same coreference chain in the resulting partition .\n\nContext after the citation:\nIn all of our experiments, we use NPs automatically extracted by an in-house NP chunker and IdentiFinder.",
"role": "user"
},
{
"content": "Uses",
"role": "assistant"
}
] |
science.bioasq_general_qa | science.bioasq_general_qa.773 | [
{
"content": "You will be shown a question, followed by excerpts from biomedical research papers. Please answer the question based on the provided context. Do not include any text in your response other than the answer.\n\nQuestion: What are invasomes\n\nContext: Ultra-flexible vesicles, e.g. invasomes and core-multishell (CMS) nanotransporters are efficient drug delivery systems\nInvasomes are vesicles containing in addition to phospholipids a mixture of terpenes (cineole, citral and d-limonene) or only one terpene (citral) and ethanol, as penetration enhancers.\ninvasomes and core-multishell (CMS) nanotransporters are efficient drug delivery systems for dermatological applications.\ninvasomes and core-multishell (CMS) nanotransporters are efficient drug delivery systems for dermatological applications.\nThe present study compares three vesicular systems, cationic LeciPlex, invasomes, and conventional liposomes for their ability to deliver drugs deep into the skin.\nThe present write up elaborates the mechanism of penetration, and compiles literature dental applications of the invasomes, the detection ability, use in photodynamic therapy, pilosebaceous targeting, and for delivery of macromolecules.\nThe present review focuses on the update of the research activity on effectiveness and permeation enhancing effects of invasomes for dermal and topical delivery.",
"role": "user"
},
{
"content": "invasomes and core-multishell (CMS) nanotransporters are efficient drug delivery systems for dermatological applications.",
"role": "assistant"
}
] |
science.pubmedqa_qa | science.pubmedqa_qa.36 | [
{
"content": "You will be shown an abstract from a research paper and a corresponding research question. Determine the answer to the research question based on the context provided. Your answer should be one of: \"YES\", \"NO\", or \"MAYBE\".\n\nAbstract:\n(BACKGROUND) The CLASS Act, which was part of the Affordable Care Act of 2010, established a voluntary personal assistance services (PAS) insurance program. However, concerns about enrollment and adverse selection led to repeal of the CLASS Act in 2013.\n(OBJECTIVES) To estimate the number of middle-aged adults interested in purchasing PAS insurance, the sociodemographic, socioeconomic and disability attributes of this population, and the maximum monthly premium they would be willing to pay for such coverage.\n(METHODS) A total of 13,384 adults aged 40-65 answered questions about their interest in PAS insurance in the 2011 Sample Adult National Health Interview Survey. We applied survey weights for the U.S. population and conducted logistic regression analyses to identify personal factors associated with interest in paying for the CLASS program.\n(RESULTS) An estimated 25.8 million adults aged 40-65 (26.7%) said they would be interested in paying for a public insurance program to cover PAS benefits. However, interest in PAS insurance varied by age, race, ethnicity, region, income, disability status, and family experience with ADL assistance. Only 1.6 million adults aged 40-65 (1.8%) said they would be willing to pay $100 per month or more for coverage.\n\nQuestion:\nAfter CLASS--Is a voluntary public insurance program a realistic way to meet the long-term support and service needs of adults with disabilities?\n\nAnswer:",
"role": "user"
},
{
"content": "NO",
"role": "assistant"
}
] |
science.scicite_classification | science.scicite_classification.573 | [
{
"content": "You will be presented with a citation sentence from a research paper. Classify the intent behind this citation by choosing from one of the following categories:\n\n- Method: Cites the paper for its methodology or procedure.\n- Background: Cites the paper to provide background information.\n- Result: Cites the paper for its findings or results.\n\nYour answer should be a single word from the following list of options: [\"Method\", \"Background\", \"Result\"]. Do not include any other text in your response.\n\nCitation sentence:\n\nPreviously, we reported that HMGA1 is a novel target of ATM following DNA damage (Pentimalli et al., 2008).",
"role": "user"
},
{
"content": "Background",
"role": "assistant"
}
] |
science.genia_ner | science.genia_ner.257 | [
{
"content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"RNA\", \"CellType\", \"CellLine\", \"Protein\", \"DNA\"].\n\nPlease return the output as a JSON object of the format: {\"DNA\" : [\"TAL1\", ...], \"RNA\": [\"mRNAs\", ...], \"CellLine\": [\"T cells\", ...], \"CellType\": [\"monocytes\", ...], \"Protein\": [\"IL-2\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nThrombin and thrombin receptor agonist peptide induce early events of T cell activation and synergize with TCR cross-linking for CD69 expression and interleukin 2 production. Thrombin stimulation of the T leukemic cell line Jurkat induced a transient increase in [ Ca2 + ]i. Proteolytic activity of the enzyme was required for this effect since diisopropyl fluorophosphate-thrombin failed to increase [ Ca2 + ]i. Furthermore, hirudin and anti-thrombin III inhibited the thrombin-induced [ Ca2 + ]i rise in Jurkat T cells. A synthetic thrombin receptor agonist peptide (TRP) of 7 residues (SFLLRNP) was found to be as effective as thrombin for [ Ca2 + ]i mobilization, and both agonists induced Ca2 + release exclusively from internal stores. Thrombin stimulated tyrosine phosphorylation of several proteins of molecular mass 40, 42, 70, 120, and 130 kDa. There was a good correlation between thrombin-induced tyrosine phosphorylation of the latter three proteins and Ca2 + mobilization. Thrombin and TRP also caused translocation of protein kinase C from the cytosol to the plasma membrane. As a likely consequence of these events, thrombin activated the nuclear factor NF-kB. Several cell lines of hematopoietic origin including the leukemic T cell line HPB. ALL and the erythroleukemic cell line K562 were responsive to thrombin, whereas others such as THP1, a myelomonocytic cell line, and BL2, a Burkitt lymphoma were refractory to thrombin or TRP stimulation. The magnitude of the thrombin response in the different cell types paralleled the expression of the thrombin receptor mRNA. We found that activation of Jurkat T cells by a combination of phytohemagglutinin and phorbol 12-myristate 13-acetate led to a dramatic inhibition of thrombin receptor mRNA expression and to a concomitant loss of the thrombin response. Finally, we demonstrate that thrombin and TRP enhanced CD69 expression and interleukin 2 production induced by T cell receptor cross-linking in both Jurkat T cells and peripheral blood lymphocytes. These findings highlight the role of thrombin as a potential regulator of T lymphocyte activation.",
"role": "user"
},
{
"content": "{\"CellLine\": [\"erythroleukemic cell line K562\", \"Jurkat T cells\", \"myelomonocytic cell line\", \"T leukemic cell line Jurkat\", \"leukemic T cell line HPB . ALL\", \"THP1\", \"BL2 ,\", \"Burkitt lymphoma\"], \"CellType\": [\"lymphocytes\", \"T lymphocyte\"], \"DNA\": [], \"Protein\": [\"thrombin receptor\", \"phytohemagglutinin\", \"anti - thrombin III\", \"molecular mass 40 , 42 , 70 , 120 , and 130 kDa\", \"nuclear factor NF - kB\", \"CD69\", \"TCR\", \"protein kinase C\", \"diisopropyl fluorophosphate - thrombin\", \"thrombin\", \"T cell receptor\", \"interleukin 2\", \"Thrombin\", \"enzyme\"], \"RNA\": [\"thrombin receptor mRNA\"]}",
"role": "assistant"
}
] |
science.gnormplus_ner | science.gnormplus_ner.87 | [
{
"content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Gene\"].\n\nPlease return the output as a JSON object of the format: {\"Gene\": [\"SSX\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nA human homologue of the yeast HDEL receptor. Retention of resident proteins in the lumen of the endoplasmic reticulum is achieved in both yeast and animal cells by their continual retrieval from the cis-Golgi, or a pre-Golgi compartment. Sorting of these proteins is dependent on a C-terminal tetrapeptide signal, usually Lys-Asp-Glu-Leu (KDEL in the single letter code) in animal cells, His-Asp-Glu-Leu (HDEL) in Saccharomyces cerevisiae. There is evidence that the ERD2 gene encodes the sorting receptor that recognizes HDEL in yeast; its product is an integral membrane protein of relative molecular mass 26,000 (26K) that is not glycosylated. In contrast, Vaux et al. suggest that the mammalian KDEL receptor is a 72K glycoprotein that they detected using an anti-idiotypic antibody approach. If this were so, it would indicate a surprising divergence of the retrieval machinery between yeast and animal cells. We report here that human cells express a protein similar in sequence, size and properties to the ERD2 product, and propose that this protein is the human KDEL receptor.",
"role": "user"
},
{
"content": "{\"Gene\": [\"ERD2\", \"HDEL receptor\", \"KDEL receptor\", \"Golgi\", \"KDEL\", \"HDEL\"]}",
"role": "assistant"
}
] |
science.chemprot_re | science.chemprot_re.291 | [
{
"content": "Given an abstract from a research paper, your task is to identify and label relationships between chemical compounds and genes mentioned in the abstract.\n\nThe relationship type must be one of ['DOWNREGULATOR', 'SUBSTRATE', 'INDIRECT-REGULATOR', 'PART-OF', 'MODULATOR-ACTIVATOR', 'REGULATOR', 'INHIBITOR', 'COFACTOR', 'UPREGULATOR', 'ACTIVATOR', 'ANTAGONIST', 'NOT', 'INDIRECT-DOWNREGULATOR', 'SUBSTRATE_PRODUCT-OF', 'INDIRECT-UPREGULATOR', 'AGONIST', 'PRODUCT-OF', 'MODULATOR', 'DIRECT-REGULATOR', 'UNDEFINED', 'AGONIST-INHIBITOR', 'AGONIST-ACTIVATOR', 'MODULATOR-INHIBITOR'].\n\nPlease format your output as a JSON array. Each entry in the array should express a single relation, formatted as [\"<Entity_A>\", \"<RELATION_A_B>\", \"<Entity_B>\"]. If no relations can be found, please output an empty JSON array [].\n\nAbstract:\n\nSorafenib (BAY 43-9006, Nexavar), a dual-action inhibitor that targets RAF/MEK/ERK pathway in tumor cells and tyrosine kinases VEGFR/PDGFR in tumor vasculature. Activating mutations in Ras and B-RAF were identified in several human cancers. In addition, several receptor tyrosine kinases, acting upstream of Ras, were found either mutated or overexpressed in human tumors. Because oncogenic activation of the Ras/RAF pathway may lead to a sustained proliferative signal resulting in tumor growth and progression, inhibition of this pathway represents an attractive approach for cancer drug discovery. A novel class of biaryl urea that inhibits C-RAF kinase was discovered using a combination of medicinal and combinatorial chemistry approaches. This effort culminated in the identification of the clinical candidate BAY 43-9006 (Sorafenib, Nexavar), which has recently been approved by the FDA for advanced renal cell carcinoma in phase III clinical trials. Sorafenib inhibited the kinase activity of both C-RAF and B-RAF (wild type and V600E mutant). It inhibited MEK and ERK phosphorylation in various cancer cell lines and tumor xenografts and exhibited potent oral antitumor activity in a broad spectrum of human tumor xenograft models. Further characterization of sorafenib revealed that this molecule was a multikinase inhibitor that targeted the vascular endothelial growth factor receptor family (VEGFR-2 and VEGFR-3) and platelet-derived growth factor receptor family (PDGFR-beta and Kit), which play key roles in tumor progression and angiogenesis. Thus, sorafenib may inhibit tumor growth by a dual mechanism, acting either directly on the tumor (through inhibition of Raf and Kit signaling) and/or on tumor angiogenesis (through inhibition of VEGFR and PDGFR signaling). In phase I and phase II clinical trials, sorafenib showed limited side effects and, more importantly, disease stabilization. This agent is currently being evaluated in phase III clinical trials in renal cell and hepatocellular carcinomas.",
"role": "user"
},
{
"content": "[[\"biaryl urea\", \"INHIBITOR\", \"kinase\"], [\"sorafenib\", \"INHIBITOR\", \"vascular endothelial growth factor receptor\"], [\"Sorafenib\", \"INHIBITOR\", \"C-RAF\"], [\"Nexavar\", \"INHIBITOR\", \"tyrosine kinases\"], [\"sorafenib\", \"INDIRECT-DOWNREGULATOR\", \"Raf\"], [\"sorafenib\", \"INHIBITOR\", \"VEGFR-3\"], [\"Sorafenib\", \"INHIBITOR\", \"B-RAF\"], [\"sorafenib\", \"INDIRECT-DOWNREGULATOR\", \"PDGFR\"], [\"sorafenib\", \"INHIBITOR\", \"VEGFR-2\"], [\"biaryl urea\", \"INHIBITOR\", \"C-RAF\"], [\"sorafenib\", \"INHIBITOR\", \"platelet-derived growth factor receptor\"], [\"Sorafenib\", \"INHIBITOR\", \"V600E\"], [\"sorafenib\", \"INDIRECT-DOWNREGULATOR\", \"VEGFR\"], [\"BAY 43 - 9006\", \"INHIBITOR\", \"tyrosine kinases\"], [\"sorafenib\", \"INHIBITOR\", \"PDGFR-beta\"], [\"sorafenib\", \"INDIRECT-DOWNREGULATOR\", \"Kit\"], [\"Sorafenib\", \"INHIBITOR\", \"kinase\"], [\"sorafenib\", \"INHIBITOR\", \"Kit\"], [\"Sorafenib\", \"INHIBITOR\", \"tyrosine kinases\"]]",
"role": "assistant"
}
] |
science.bioasq_yesno_qa | science.bioasq_yesno_qa.222 | [
{
"content": "You have to answer a biomedical question in binary format, i.e., only yes and no are acceptable answers. A list of paragraphs is provided as context to help you answer the question.\n\nQuestion: Is there evidence to suggest that triiodothyronine has neuroprotective properties in traumatic brain injury?\n\nContext: Exogenous T3 administration provides neuroprotection in a murine model of traumatic brain injury.\nTreatment with T3 (1.2μg/100g body weight, i.p.) 1h after TBI resulted in a significant improvement in motor and cognitive recovery after CCI, as well as in marked reduction of lesion volumes.\nWestern blot analysis revealed the ability of T3 to reduce brain trauma through modulation of cytoplasmic-nuclear shuttling of nuclear factor-κB (NF-κB). Twenty-four hours after brain trauma, T3-treated mice also showed significantly lower number of TUNEL(+) apoptotic neurons and curtailed induction of Bax, compared to vehicle control. In addition, T3 significantly enhanced the post-TBI expression of the neuroprotective neurotrophins (BDNF and GDNF) compared to vehicle.\nThe stimulating effect of T3 on peripheral nerve regeneration may have considerable therapeutic potential.\nThe present study provides evidence that the peripheral nervous system has its own system responsible for the local production of 3,5,3'-triiodothyronine, which may play a key role during the regeneration process.\nAlthough it has been hypothesized that T3 may facilitate neuronal regeneration after CNS injury, the 5'-D2 response to brain injury is unknown.\nThe outcome after brain injury is closely correlated with the intensity of these changes, particularly with catecholamine plasma levels and the severity of the low triiodothyronine syndrome.\nThe thyroid hormones triiodothyronine (T3) and L-thyroxine appear to enhance regeneration in the peripheral and central nervous system (CNS).\nT3 treatment influenced the general levels of incorporation of all treated groups over all days postoperation.\nT3 effects appear to involve an increased sensitivity of the cells of the injured nervous system to the hormone.\nT3, when administered over an 8 week period, stimulated axonal regeneration in the dorsal cortex and corpus callosum and promoted healing of the wound in the corpus callosum. The results of this investigation suggest that the use of T3 in the clinical treatment of injury to the central nervous system may be of less value than the work of earlier authors had indicated.\nIn addition, T3 significantly enhanced the post-TBI expression of the neuroprotective neurotrophins (BDNF and GDNF) compared to vehicle",
"role": "user"
},
{
"content": "yes",
"role": "assistant"
}
] |
science.medmentions_ner | science.medmentions_ner.1522 | [
{
"content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Finding\", \"Bacterium\", \"ProfessionalOrOccupationalGroup\", \"InjuryOrPoisoning\", \"Eukaryote\", \"BodySystem\", \"Virus\", \"ResearchActivity\", \"BodySubstance\", \"PopulationGroup\", \"Chemical\", \"AnatomicalStructure\", \"SpatialConcept\", \"HealthCareActivity\", \"Organization\", \"BiomedicalOccupationOrDiscipline\", \"Food\", \"ClinicalAttribute\", \"MedicalDevice\", \"BiologicFunction\", \"IntellectualProduct\"].\n\nPlease return the output as a JSON object of the format: {\"Virus\": [\"HIV\", ...], \"Bacterium\": [\"MRSA\", ...], \"AnatomicalStructure\": [\"Lung\", ...], \"BodySystem\": [\"CNS\", ...], \"BodySubstance\": [\"Serum\", ...], \"Finding\": [\"Headache\", ...], \"InjuryOrPoisoning\": [\"Fracture\", ...], \"BiologicFunction\": [\"Death\", ...], \"HealthCareActivity\": [\"Biopsy\", ...], \"ResearchActivity\": [\"Clinical trial\", ...], \"MedicalDevice\": [\"Lenses\", ...], \"SpatialConcept\": [\"Camps\", ...], \"BiomedicalOccupationOrDiscipline\": [\"Forensic medicine\", ...], \"Organization\": [\"WHO\", ...], \"ProfessionalOrOccupationalGroup\": [\"Provider\", ...], \"PopulationGroup\": [\"Swimmers\", ...], \"Chemical\": [\"Gold\", ...], \"Food\": [\"Rice\", ...], \"IntellectualProduct\": [\"RPAM\", ...], \"ClinicalAttribute\": [\"Biomarker\", ...], \"Eukaryote\": [\"Dogs\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nCotard's Syndrome in a Patient with Schizophrenia: Case Report and Review of the Literature Jules Cotard described, in 1880, the case of a patient characterized by delusions of negation, immortality, and guilt as well as melancholic anxiety among other clinical features. Later this constellation of symptoms was given the eponym Cotard's syndrome, going through a series of theoretical vicissitudes, considering itself currently as just the presence of nihilistic delusions. The presentation of the complete clinical features described by Cotard is a rare occurrence, especially in the context of schizophrenia. Here we present the case of a 50-year-old male patient with schizophrenia who developed Cotard's syndrome. The patient was treated with aripiprazole, showing improvement after two weeks of treatment. A review of the literature is performed about this case.",
"role": "user"
},
{
"content": "{\"AnatomicalStructure\": [], \"Bacterium\": [], \"BiologicFunction\": [\"Schizophrenia\", \"guilt\", \"nihilistic delusions\", \"schizophrenia\", \"Cotard\\u0027s Syndrome\", \"Cotard\\u0027s syndrome\", \"immortality\", \"melancholic anxiety\"], \"BiomedicalOccupationOrDiscipline\": [], \"BodySubstance\": [], \"BodySystem\": [], \"Chemical\": [\"aripiprazole\"], \"ClinicalAttribute\": [], \"Eukaryote\": [\"Jules Cotard\", \"Cotard\"], \"Finding\": [\"symptoms\", \"presence\"], \"Food\": [], \"HealthCareActivity\": [\"treatment\", \"treated with\"], \"InjuryOrPoisoning\": [], \"IntellectualProduct\": [\"Case Report\", \"eponym\", \"literature\", \"Literature\"], \"MedicalDevice\": [], \"Organization\": [], \"PopulationGroup\": [\"male\"], \"ProfessionalOrOccupationalGroup\": [], \"ResearchActivity\": [], \"SpatialConcept\": [], \"Virus\": []}",
"role": "assistant"
}
] |
science.covidfact_entailment | science.covidfact_entailment.110 | [
{
"content": "You will be shown a claim related to the COVID-19 pandemic, and the abstract of a biomedical research paper. Each sentence from the abstract will be on a separate line. Your task is to return a JSON object with two fields:\n\n- \"verdict\": The fact-checking verdict. If the information in the abstract supports the claim, write \"SUPPORT\". If the abstract contradicts the claim, write \"CONTRADICT\".\n- \"evidence\": An array of sentences providing evidence for the verdict. Please copy all relevant sentences verbatim from the abstract.\n\nFor instance, if the model were given the claim \"the COVID vaccine is safe for healthy adults\", the output might be:\n{\n \"verdict\": \"SUPPORT\",\n \"evidence\": [\"Our findings indicate that the risks of severe COVID vaccine side effects are low for healthy adults.\"]\n}\n\nClaim: A meta-ct and clinical chemistry based flowchart for rapid covid-19 triage at emergency departments\n\nAbstract:\nBackground Due to the large number of patients with coronavirus disease 19 (COVID-19), rapid diagnosis at the emergency department (ED) is of critical importance.\nIn this study we have developed a flowchart based on two well-known diagnostic methods: the 'corona-score' and the 'CO-RADS'.\nThis flowchart can be used in hospitals that use chest-CT, instead of chest X-ray, for COVID-19 suspected patients at the ED.\nMethods ED patients (n=1904) from the Jeroen Bosch Hospital, Amphia Hospital, HagaHospital, Elisabeth TweeSteden Hospital, Bernhoven Hospital and Slingeland Hospital were included.\nA laboratory-based 'corona-score', without radiology, called the 'lab-corona-score' was combined with a chest-CT based radiology scoring system (CO-RADS), to develop a flowchart.\nThe performance was assessed by sensitivity/specificity analyses using the RT-PCR outcome or the physician's final diagnosis as golden standard.\nResults Out of the 1904 patients, 611 (32.1%) patients tested positive for the SARS-CoV-2 virus.\nThe lab-corona-score alone had an AUC of 0.86, a sensitivity of 87% and a specificity of 88% using cut-off values of 0-2 (negative) and 8-10 (positive).\nOf 255 patients, from the Amphia and Slingeland Hospitals, a CO-RADS score was determined.\nThe flowchart, which combined the 'CO-RADS' with the 'lab-corona-score', was developed based on data from Slingeland Hospital (sensitivity 97%, specificity 96%).\nHereafter, the performance of the flowchart was validated using an independent dataset from Amphia hospital, and reached a sensitivity of 98% and specificity of 93%.\nA decision could be made in 79% of the patients, which was correct in 95% of the cases.\nConclusion This flowchart, based on radiology (CO-RADS) and clinical chemistry parameters (lab-corona-score), results in a rapid and accurate diagnosis of COVID-19 at the ED.",
"role": "user"
},
{
"content": "{\"verdict\": \"CONTRADICT\", \"evidence\": [\"This flowchart can be used in hospitals that use chest-CT, instead of chest X-ray, for COVID-19 suspected patients at the ED.\", \"Conclusion This flowchart, based on radiology (CO-RADS) and clinical chemistry parameters (lab-corona-score), results in a rapid and accurate diagnosis of COVID-19 at the ED.\"]}",
"role": "assistant"
}
] |
science.bioasq_general_qa | science.bioasq_general_qa.111 | [
{
"content": "You will be shown a question, followed by excerpts from biomedical research papers. Please answer the question based on the provided context. Do not include any text in your response other than the answer.\n\nQuestion: What is TFBSshape?\n\nContext: TFBSshape: a motif database for DNA shape features of transcription factor binding sites.\nTo utilize DNA shape information when analysing the DNA binding specificities of TFs, we developed a new tool, the TFBSshape database (available at http://rohslab.cmb.usc.edu/TFBSshape/), for calculating DNA structural features from nucleotide sequences provided by motif databases. The TFBSshape database can be used to generate heat maps and quantitative data for DNA structural features (i.e., minor groove width, roll, propeller twist and helix twist) for 739 TF datasets from 23 different species derived from the motif databases JASPAR and UniPROBE. As demonstrated for the basic helix-loop-helix and homeodomain TF families, our TFBSshape database can be used to compare, qualitatively and quantitatively, the DNA binding specificities of closely related TFs and, thus, uncover differential DNA binding specificities that are not apparent from nucleotide sequence alone.\nTo utilize DNA shape information when analysing the DNA binding specificities of TFs, we developed a new tool, the TFBSshape database (available at http://rohslab.cmb.usc.edu/TFBSshape/), for calculating DNA structural features from nucleotide sequences provided by motif databases.\nThe TFBSshape database can be used to generate heat maps and quantitative data for DNA structural features (i.e., minor groove width, roll, propeller twist and helix twist) for 739 TF datasets from 23 different species derived from the motif databases JASPAR and UniPROBE.\nAs demonstrated for the basic helix-loop-helix and homeodomain TF families, our TFBSshape database can be used to compare, qualitatively and quantitatively, the DNA binding specificities of closely related TFs and, thus, uncover differential DNA binding specificities that are not apparent from nucleotide sequence alone.\nDNA structural features refine the description of TF binding specificities and provide mechanistic insights into protein-DNA recognition. Existing motif databases contain extensive nucleotide sequences identified in binding experiments based on their selection by a TF. To utilize DNA shape information when analysing the DNA binding specificities of TFs, we developed a new tool, the TFBSshape database (available at http://rohslab.cmb.usc.edu/TFBSshape/), for calculating DNA structural features from nucleotide sequences provided by motif databases.\nTo utilize DNA shape information when analysing the DNA binding specificities of TFs, we developed a new tool, the TFBSshape database (available at http://rohslab.cmb.usc.edu/TFBSshape/), for calculating DNA structural features from nucleotide sequences provided by motif databases. The TFBSshape database can be used to generate heat maps and quantitative data for DNA structural features (i.e., minor groove width, roll, propeller twist and helix twist) for 739 TF datasets from 23 different species derived from the motif databases JASPAR and UniPROBE.\nExisting motif databases contain extensive nucleotide sequences identified in binding experiments based on their selection by a TF. 
To utilize DNA shape information when analysing the DNA binding specificities of TFs, we developed a new tool, the TFBSshape database (available at http://rohslab.cmb.usc.edu/TFBSshape/), for calculating DNA structural features from nucleotide sequences provided by motif databases.\nThe TFBSshape database can be used to generate heat maps and quantitative data for DNA structural features (i.e., minor groove width, roll, propeller twist and helix twist) for 739 TF datasets from 23 different species derived from the motif databases JASPAR and UniPROBE. As demonstrated for the basic helix-loop-helix and homeodomain TF families, our TFBSshape database can be used to compare, qualitatively and quantitatively, the DNA binding specificities of closely related TFs and, thus, uncover differential DNA binding specificities that are not apparent from nucleotide sequence alone.\nAs demonstrated for the basic helix-loop-helix and homeodomain TF families, our TFBSshape database can be used to compare, qualitatively and quantitatively, the DNA binding specificities of closely related TFs and, thus, uncover differential DNA binding specificities that are not apparent from nucleotide sequence alone. .",
"role": "user"
},
{
"content": "To utilize DNA shape information when analysing the DNA binding specificities of TFs, the TFBSshape database was developed for calculating DNA structural features from nucleotide sequences provided by motif databases. The TFBSshape database can be used to generate heat maps and quantitative data for DNA structural features (i.e., minor groove width, roll, propeller twist and helix twist) for 739 TF datasets from 23 different species derived from the motif databases JASPAR and UniPROBE. As demonstrated for the basic helix-loop-helix and homeodomain TF families, TFBSshape database can be used to compare, qualitatively and quantitatively, the DNA binding specificities of closely related TFs and, thus, uncover differential DNA binding specificities that are not apparent from nucleotide sequence alone.",
"role": "assistant"
}
] |
science.chemdner_ner | science.chemdner_ner.2910 | [
{
"content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Chemical\"].\n\nPlease return the output as a JSON object of the format: {\"Chemical\": [\"Arsenic\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nA micellar improved method for trace levels selenium quantification in food samples, alcoholic and nonalcoholic beverages through CPE/FAAS. A useful preconcentration and determination method was proposed for trace selenium in food samples. The procedure is based on complex formation of Pyronine B with Se(IV) ions in the presence of sodium dodecyl sulphate (SDS) and Ponpe 7.5. The variables affecting complex formation, extraction and phase separation were studied and optimised. Under the experimental conditions used, the calibration graph was linear in the range of 20-1700μgL(-1) for Se(IV) ions. The limit of detection was 3.81μgL(-1) of Se(IV) and the relative standard deviation for 5 replicate determinations at 250μgL(-1)concentration level was 2.45%. Recovery values were obtained between 97.8% and 102.8% for spiked samples. The method was successfully applied to the determination of total selenium in some food samples and alcoholic and nonalcoholic beverages. Its validity was checked by the analysis of four certified reference materials. The results obtained by the proposed method were quantitatively in good agreement with the certified values.",
"role": "user"
},
{
"content": "{\"Chemical\": [\"sodium dodecyl sulphate\", \"Se(IV)\", \"selenium\", \"SDS\", \"Pyronine B\"]}",
"role": "assistant"
}
] |
science.scitldr_aic | science.scitldr_aic.1132 | [
{
"content": "Below are the Abstract, Introduction, and Conclusion sections of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary sentence and no additional text.\n\nPaper text:\n\nMultivariate time series with missing values are common in areas such as healthcare and finance, and have grown in number and complexity over the years.\nThis raises the question whether deep learning methodologies can outperform classical data imputation methods in this domain.\nHowever, naive applications of deep learning fall short in giving reliable confidence estimates and lack interpretability.\nWe propose a new deep sequential latent variable model for dimensionality reduction and data imputation.\nOur modeling assumption is simple and interpretable: the high dimensional time series has a lower-dimensional representation which evolves smoothly in time according to a Gaussian process.\nThe non-linear dimensionality reduction in the presence of missing data is achieved using a VAE approach with a novel structured variational approximation.\nWe demonstrate that our approach outperforms several classical and deep learning-based data imputation methods on high-dimensional data from the domains of computer vision and healthcare, while additionally improving the smoothness of the imputations and providing interpretable uncertainty estimates.\nMultivariate medical time series, consisting of multiple correlated univariate time series or channels, give rise to two distinct ways of imputing missing information: (1) by exploiting temporal correlations within each channel, and (2) by exploiting correlations across channels, for example by using lower-dimensional representations of the data.\nAn ideal imputation model for medical time series should take both of these sources of information into account.\nAnother desirable property of such models is to offer a probabilistic interpretation, allowing for uncertainty estimation.\nUnfortunately, current imputation approaches fall short with respect to at least one of these desiderata.\nWhile there are many time-tested statistical methods for multivariate time series analysis (e.g., Gaussian processes (Roberts et al., 2013) ), these methods are generally not applicable when features are missing.\nOn the other hand, classical methods for time series imputation often do not take the potentially complex interactions between the different channels into account (Little and Rubin, 2002; Pedersen et al., 2017) .\nFinally, recent work has explored the use of non-linear dimensionality reduction using variational autoencoders for i.i.d. data points with missing values (Ainsworth et al., 2018; Ma et al., 2018; Nazabal et al., 2018) , but this work has not considered temporal data and strategies for sharing statistical strength across time.\nA more comprehensive analysis of existing approaches and their shortcomings is deferred to the appendix (Sec. 
A).\nIn this paper, we propose an architecture that combines deep variational autoencoders (VAEs) with Gaussian process (GP) to efficiently model the latent dynamics at multiple time scales.\nMoreover, our inference approach makes use of efficient structured variational approximations, where we fit another multivariate Gaussian process in order to approximate the intractable true posterior.\nWe make the following contributions:\n• A new model.\nWe propose a VAE architecture for multivariate time series imputation with a GP prior in the latent space to capture temporal dynamics.\n• Efficient inference.\nWe use a structured variational approximation that models posterior correlations in the time domain.\n• Benchmarking on real-world data.\nWe carry out extensive comparisons to classical imputation methods as well as state-of-the-art deep learning approaches, and perform experiments on data from different domains.",
"role": "user"
},
{
"content": "We perform amortized variational inference on a latent Gaussian process model to achieve superior imputation performance on multivariate time series with missing data.",
"role": "assistant"
}
] |
science.pico_ner | science.pico_ner.282 | [
{
"content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Participant\", \"Outcome\", \"Intervention\"].\n\nPlease return the output as a JSON object of the format: {\"Participant\": [\"patients with COPD\", ...], \"Intervention\": [\"Atenolol\", ...], \"Outcome\": [\"blood pressure\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nL-arginine and S-nitrosoglutathione reduce embolization in humans . BACKGROUND L-Arginine reduces platelet aggregation and adhesion in ex vivo studies , but there is no evidence as yet that it has a therapeutic effect on clinical end points . Doppler ultrasound can detect cerebral emboli noninvasively . Such embolic signals are common after carotid endarterectomy , and their frequency predicts risk of stroke recurrence . We used this situation to determine the antiplatelet efficacy of L-arginine and S-nitrosoglutathione ( GSNO ) , a physiological nitric oxide donor with possible platelet specificity . METHODS AND RESULTS Patients undergoing carotid endarterectomy were randomized in a double-blind manner between L-arginine ( n=14 ) , GSNO ( n=14 ) , or placebo ( n=14 ) administered intravenously for 90 minutes , starting 30 minutes after skin closure . All patients were pretreated with aspirin and given heparin during surgery . Transcranial Doppler recordings were made from the ipsilateral middle cerebral artery for 4 hours after surgery , beginning 30 minutes after skin closure , and also at 6 and 24 hours . There were highly significant reductions in the number of Doppler embolic signals in the L-arginine and GSNO groups ; first 4 hours , median ( range ) number of embolic signals , placebo 44.7 ( 6 to 778 ) , L-arginine 9.5 ( 0 to 225 ) , and GSNO 0.8 ( 0 to 8 ) , both P < 0.001 versus control values . The reduction in the signals persisted at the 24-hour recording . CONCLUSIONS Intravenous L-arginine and GSNO attenuate Doppler embolic signals in humans . Modulation of the NO system with these agents may have applications in the treatment of thromboembolic disease . This study demonstrates the potential application of ultrasonic embolic signal detection to examine the efficacy of new antiplatelet agents in relatively small numbers of patients .",
"role": "user"
},
{
"content": "{\"Intervention\": [\"GSNO\", \"S-nitrosoglutathione ( GSNO )\", \"L-arginine\", \"L-Arginine\", \"S-nitrosoglutathione\", \"heparin\", \"aspirin\", \"placebo\"], \"Outcome\": [\"attenuate Doppler embolic signals\", \"reduction in the signals\", \"antiplatelet efficacy\", \"platelet aggregation and adhesion\", \"number of embolic signals\", \"number of Doppler embolic signals in the L-arginine and GSNO\", \"embolization\"], \"Participant\": [\"undergoing carotid endarterectomy\"]}",
"role": "assistant"
}
] |
science.data_reco_mcq_mc | science.data_reco_mcq_mc.271 | [
{
"content": "You are provided with a research question, keyphrases about the question, a description of candidate datasets and dataset options. Read the description of popular datasets provided below and select the ones that can be used to validate the following research question. Use your knowledge of machine learning datasets to make the best judgement.\nYour response should be formatted as a `json` array. For instance, for the query \"Semi supervised image classification\", a sample response might be: [\"CIFAR-10\", \"CIFAR-100\"]. Do not include any extra text in the response other than the answer array.\n\nQuery: A lightweight variant of Neural Shuffle-Exchange network for long-range sequence modelling.\n\nKeyphrases: language modelling audio\n\nDataset description:\nPG-19: A new open-vocabulary language modelling benchmark derived from books.\nLAMBADA: The LAMBADA (LAnguage Modeling Broadened to Account for Discourse Aspects) benchmark is an open-ended cloze task which consists of about 10,000 passages from BooksCorpus where a missing target word is predicted in the last sentence of each passage. The missing word is constrained to always be the last word of the last sentence and there are no candidate words to choose from.\nMusicNet: MusicNet is a collection of 330 freely-licensed classical music recordings, together with over 1 million annotated labels indicating the precise time of each note in every recording, the instrument that plays each note, and the note's position in the metrical structure of the composition. The labels are acquired from musical scores aligned to recordings by dynamic time warping.\nFGVC-Aircraft: FGVC-Aircraft contains 10,200 images of aircraft, with 100 images for each of 102 different aircraft model variants, most of which are airplanes. The (main) aircraft in each image is annotated with a tight bounding box and a hierarchical airplane model label.\nSHREC: The SHREC dataset contains 14 dynamic gestures performed by 28 participants (all participants are right handed) and captured by the Intel RealSense short range depth camera. Each gesture is performed between 1 and 10 times by each participant in two way: using one finger and the whole hand.\nPenn Treebank: The English Penn Treebank (PTB) corpus, and in particular the section of the corpus corresponding to the articles of Wall Street Journal (WSJ), is one of the most known and used corpus for the evaluation of models for sequence labelling. The task consists of annotating each word with its Part-of-Speech tag.\n\nOptions:- A: PG-19 B: LAMBADA C: MusicNet D: FGVC-Aircraft E: SHREC F: Penn Treebank",
"role": "user"
},
{
"content": "[\"LAMBADA\", \"MusicNet\"]",
"role": "assistant"
}
] |
science.medmentions_ner | science.medmentions_ner.1749 | [
{
"content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Finding\", \"Bacterium\", \"ProfessionalOrOccupationalGroup\", \"InjuryOrPoisoning\", \"Eukaryote\", \"BodySystem\", \"Virus\", \"ResearchActivity\", \"BodySubstance\", \"PopulationGroup\", \"Chemical\", \"AnatomicalStructure\", \"SpatialConcept\", \"HealthCareActivity\", \"Organization\", \"BiomedicalOccupationOrDiscipline\", \"Food\", \"ClinicalAttribute\", \"MedicalDevice\", \"BiologicFunction\", \"IntellectualProduct\"].\n\nPlease return the output as a JSON object of the format: {\"Virus\": [\"HIV\", ...], \"Bacterium\": [\"MRSA\", ...], \"AnatomicalStructure\": [\"Lung\", ...], \"BodySystem\": [\"CNS\", ...], \"BodySubstance\": [\"Serum\", ...], \"Finding\": [\"Headache\", ...], \"InjuryOrPoisoning\": [\"Fracture\", ...], \"BiologicFunction\": [\"Death\", ...], \"HealthCareActivity\": [\"Biopsy\", ...], \"ResearchActivity\": [\"Clinical trial\", ...], \"MedicalDevice\": [\"Lenses\", ...], \"SpatialConcept\": [\"Camps\", ...], \"BiomedicalOccupationOrDiscipline\": [\"Forensic medicine\", ...], \"Organization\": [\"WHO\", ...], \"ProfessionalOrOccupationalGroup\": [\"Provider\", ...], \"PopulationGroup\": [\"Swimmers\", ...], \"Chemical\": [\"Gold\", ...], \"Food\": [\"Rice\", ...], \"IntellectualProduct\": [\"RPAM\", ...], \"ClinicalAttribute\": [\"Biomarker\", ...], \"Eukaryote\": [\"Dogs\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nSolifenacin in the Elderly: Results of an Observational Study Measuring Efficacy, Tolerability and Cognitive Effects The study aimed to evaluate the efficacy and safety of solifenacin in older patients with overactive bladder (OAB). Observational data on patients aged ≥70 years and the prescribed flexible dose of solifenacin for OAB were collected at 294 offices of German general practitioners. Baseline and week 12 data included type and severity of OAB symptoms, adverse events, quality of life, and change in cognitive function per Mini Mental State Examination (MMSE). Mean age of 774 patients was 78 ± 6 years. A decrease was observed in all OAB symptoms including a reduction of urinary urgency and micturition, each by 4 episodes per 24 h. No change in mean MMSE score s was apparent at week 12. Adverse events and treatment discontinuations were low at 5.8 and 0.5%, respectively. Solifenacin was well-tolerated while OAB symptoms declined at week 12. No relevant effect of solifenacin on cognitive function was observed in this elderly population.",
"role": "user"
},
{
"content": "{\"AnatomicalStructure\": [], \"Bacterium\": [], \"BiologicFunction\": [\"cognitive function\", \"urinary urgency\", \"micturition\", \"adverse events\", \"overactive bladder\", \"Adverse events\", \"OAB\"], \"BiomedicalOccupationOrDiscipline\": [], \"BodySubstance\": [], \"BodySystem\": [], \"Chemical\": [\"Solifenacin\", \"solifenacin\"], \"ClinicalAttribute\": [], \"Eukaryote\": [], \"Finding\": [\"Results\", \"symptoms\", \"No change\", \"MMSE score\"], \"Food\": [], \"HealthCareActivity\": [\"MMSE\", \"Mini Mental State Examination\", \"treatment discontinuations\"], \"InjuryOrPoisoning\": [], \"IntellectualProduct\": [], \"MedicalDevice\": [], \"Organization\": [\"offices\"], \"PopulationGroup\": [\"elderly population\", \"German\", \"Elderly\"], \"ProfessionalOrOccupationalGroup\": [\"general practitioners\"], \"ResearchActivity\": [\"study\", \"Observational Study\"], \"SpatialConcept\": [], \"Virus\": []}",
"role": "assistant"
}
] |
science.data_reco_mcq_sc | science.data_reco_mcq_sc.13578 | [
{
"content": "Given a short query with keyphrases about a research idea in 2018 in the domain of artificial intelligence, recommend the relevant datasets to validate the research idea from the options provided. A short description of the datasets is also provided to help you.\n\nQuery: In this paper, we propose a novel method to improve the performance of sparse stereo matching method based on\n\nKeyphrases: disparity calculation stereo images\n\nDataset description:\nMVSEC: The Multi Vehicle Stereo Event Camera (MVSEC) dataset is a collection of data designed for the development of novel 3D perception algorithms for event based cameras. Stereo event data is collected from car, motorbike, hexacopter and handheld data, and fused with lidar, IMU, motion capture and GPS to provide ground truth pose and depth images.\nKITTI: KITTI (Karlsruhe Institute of Technology and Toyota Technological Institute) is one of the most popular datasets for use in mobile robotics and autonomous driving. It consists of hours of traffic scenarios recorded with a variety of sensor modalities, including high-resolution RGB, grayscale stereo cameras, and a 3D laser scanner.\nUASOL: The UASOL an RGB-D stereo dataset, that contains 160902 frames, filmed at 33 different scenes, each with between 2 k and 10 k frames. The frames show different paths from the perspective of a pedestrian, including sidewalks, trails, roads, etc.\nLost and Found: Lost and Found is a novel lost-cargo image sequence dataset comprising more than two thousand frames with pixelwise annotations of obstacle and free-space and provide a thorough comparison to several stereo-based baseline methods. The dataset will be made available to the community to foster further research on this important topic.\n\nOptions: A: MVSEC B: KITTI C: UASOL D: Lost and Found",
"role": "user"
},
{
"content": "KITTI",
"role": "assistant"
}
] |
science.covidfact_entailment | science.covidfact_entailment.617 | [
{
"content": "You will be shown a claim related to the COVID-19 pandemic, and the abstract of a biomedical research paper. Each sentence from the abstract will be on a separate line. Your task is to return a JSON object with two fields:\n\n- \"verdict\": The fact-checking verdict. If the information in the abstract supports the claim, write \"SUPPORT\". If the abstract contradicts the claim, write \"CONTRADICT\".\n- \"evidence\": An array of sentences providing evidence for the verdict. Please copy all relevant sentences verbatim from the abstract.\n\nFor instance, if the model were given the claim \"the COVID vaccine is safe for healthy adults\", the output might be:\n{\n \"verdict\": \"SUPPORT\",\n \"evidence\": [\"Our findings indicate that the risks of severe COVID vaccine side effects are low for healthy adults.\"]\n}\n\nClaim: Disruption of the ccl5/rantes-ccr5 pathway restores immune homeostasis and reduces plasma viral load in critical covid-19\n\nAbstract:\nSevere acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the causative agent of coronavirus disease 2019 (COVID-19), is now pandemic with nearly three million cases reported to date.\nAlthough the majority of COVID-19 patients experience only mild or moderate symptoms, a subset will progress to severe disease with pneumonia and acute respiratory distress syndrome (ARDS) requiring mechanical ventilation.\nEmerging results indicate a dysregulated immune response characterized by runaway inflammation, including cytokine release syndrome (CRS), as the major driver of pathology in severe COVID-19.\nWith no treatments currently approved for COVID-19, therapeutics to prevent or treat the excessive inflammation in severe disease caused by SARS-CoV-2 infection are urgently needed.\nHere, in 10 terminally-ill, critical COVID-19 patients we report profound elevation of plasma IL-6 and CCL5 (RANTES), decreased CD8+ T cell levels, and SARS-CoV-2 plasma viremia.\nFollowing compassionate care treatment with the CCR5 blocking antibody leronlimab, we observed complete CCR5 receptor occupancy on macrophage and T cells, rapid reduction of plasma IL-6, restoration of the CD4/CD8 ratio, and a significant decrease in SARS-CoV-2 plasma viremia.\nConsistent with reduction of plasma IL-6, single-cell RNA-sequencing revealed declines in transcriptomic myeloid cell clusters expressing IL-6 and interferon-related genes.\nThese results demonstrate a novel approach to resolving unchecked inflammation, restoring immunologic deficiencies, and reducing SARS-CoV-2 plasma viral load via disruption of the CCL5-CCR5 axis, and support randomized clinical trials to assess clinical efficacy of leronlimab-mediated inhibition of CCR5 for COVID-19.",
"role": "user"
},
{
"content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"Here, in 10 terminally-ill, critical COVID-19 patients we report profound elevation of plasma IL-6 and CCL5 (RANTES), decreased CD8+ T cell levels, and SARS-CoV-2 plasma viremia.\", \"Following compassionate care treatment with the CCR5 blocking antibody leronlimab, we observed complete CCR5 receptor occupancy on macrophage and T cells, rapid reduction of plasma IL-6, restoration of the CD4/CD8 ratio, and a significant decrease in SARS-CoV-2 plasma viremia.\", \"These results demonstrate a novel approach to resolving unchecked inflammation, restoring immunologic deficiencies, and reducing SARS-CoV-2 plasma viral load via disruption of the CCL5-CCR5 axis, and support randomized clinical trials to assess clinical efficacy of leronlimab-mediated inhibition of CCR5 for COVID-19.\"]}",
"role": "assistant"
}
] |
science.healthver_entailment | science.healthver_entailment.696 | [
{
"content": "You will be shown a claim about public health and the abstract of a biomedical research paper. Each sentence from the abstract will be on a separate line. Your task is to return a JSON object with two fields:\n\n- \"verdict\": The fact-checking verdict. If the information in the abstract supports the claim, write \"SUPPORT\". If the abstract contradicts the claim, write \"CONTRADICT\". If the abstract does not provide enough information to arrive at a verdict, write \"NEI\" (for \"not enough information\").\n- \"evidence\": An array of sentences providing evidence for the verdict. Please copy all relevant sentences verbatim from the abstract. If the verdict was \"NEI\", then return an empty array.\n\nFor instance, if the model were given the claim \"wearing masks can prevent the spread of COVID\", the output might be:\n{\n \"verdict\": \"SUPPORT\",\n \"evidence\": [\"Our findings indicate that mass mask-wearing reduces the transmission rate for COVID-19.\"]\n}\n\nClaim: COVID-19 spreads mainly among people who are in close contact (within about 6 feet) for a prolonged period.\n\nAbstract:\nBackground: The Greek authorities implemented the strong social distancing measures within the first few weeks after the first confirmed case of the virus to curtail the COVID-19 growth rate.\nObjectives: To estimate the effect of the two-stage strong social distancing measures, the closure of all non-essential shopping centers and businesses on March 16 and the shelter in place orders (SIPOs) on March 23 on the COVID-19 growth rate in Greece Methods: We obtained data on COVID-19 cases in Greece from February 26th through May 4th from publicly available sources.\nAn interrupted time-series regression analysis was used to estimate the effect of the measures on the exponential growth of confirmed COVID-19 cases, controlling for the number of daily testing, and weekly fixed-effects.\nResults: The growth rate of the COVID-19 cases in the pre-policies implementation period was positive as expected (p=0.003).\nBased on the estimates of the interrupted time-series, our results indicate that the SIPO on March 23 significantly slowed the growth rate of COVID-19 in Greece (p=0.04).\nHowever, we did not find evidence on the effectiveness of standalone and partial measures such as the non-essential business closures implemented on March 16 on the COVID-19 spread reduction.\nDiscussion: The combined social distancing measures implemented by the Greek authorities within the first few weeks after the first confirmed case of the virus reduced the COVID-19 growth rate.\nThese findings provide evidence and highlight the effectiveness of these measures to flatten the curve and to slow the spread of the virus.",
"role": "user"
},
{
"content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"Discussion: The combined social distancing measures implemented by the Greek authorities within the first few weeks after the first confirmed case of the virus reduced the COVID-19 growth rate.\"]}",
"role": "assistant"
}
] |
science.mslr2022_cochrane_multidoc_summarization | science.mslr2022_cochrane_multidoc_summarization.3119 | [
{
"content": "You will be presented with a collection of medical randomized controlled trial (RCT) reports addressing a shared clinical question -- for instance, the efficacy of a medical intervention on a disease or condition. Given the titles and abstracts of these reports, your task is to generate the conclusion of the systematic review article synthesizing the key findings. Your conclusion should be a few sentences long. In your response, include only the conclusion and no other text. The RCT reports are below.\n\n\n\nTitle 1:\nCapping carious exposed pulps with potassium nitrate, dimethyl isosorbide, polycarboxylate cement.\nAbstract 1:\nnan\n\n\nTitle 2:\nA clinical and histological evaluation of conservative pulpal therapy in human teeth.\nAbstract 2:\nThree intermediary base materials, a zinc oxide-eugenol (Cavitec) and two calcium hydroxide liners (Life and Dycal), were selected at random for use as a base beneath amalgam or composite restorations on humans following complete caries removal. Life and Dycal, selected at random, were also used as direct and indirect pulp capping agents as clinically indicated. Clinical evaluations of signs and symptoms were made before treatment and at one-week, six-month, and one-year intervals following treatment. Histological evaluations were performed on three complete caries removal teeth and 18 direct pulp capping teeth six months following treatment. No significant differences in clinical symptomatology resulted between the materials in the complete caries removal group or the indirect and direct pulp capping groups.\n\n\nTitle 3:\nEndodontic complications after plastic restorations in general practice.\nAbstract 3:\nTo test the hypothesis that dentine and pulp protection by conditioning-and-sealing is no less effective than using a conventional calcium hydroxide lining.\n A cohort of healthy adults requiring a new or replacement restoration in a posterior tooth was recruited in six general practices. All procedures received local Ethics Committee approval. Exclusion criteria included signs and symptoms of pulp necrosis or inflammation, and patients unable to commit to a long-term trial. Cavity preparations were randomized to receive a calcium hydroxide lining or conditioning-and-sealing with a smear-removing bonding system. Choice of bulk restorative material (composite resin or amalgam) was at the discretion of the dentist. The key outcome measure was evidence of pulpal breakdown identified at unscheduled (emergency) or scheduled recall examinations. Postoperative sensitivity was recorded on 100 mm VAS at 24 h, 4 days and 7 days. Pulp status was assessed at 6, 12, 24 and 36 month recall, and at any emergency recall appointment. The relationship between pre-treatment and treatment variables and pulp breakdown was assessed by logistic regression (P = 0.05).\n A total of 602 teeth were recruited, with comparable numbers of cavities lined (288, 47.8%) or conditioned and sealed (314, 52.2%). The majority (492, 81.7%) were replacement restorations, and amalgam was the most common bulk restorative material (377, 62.6%). A total of 390 (64.8%) restored teeth were reviewed at 6 months, 307 (51%) at 12 months, 363 (60.3%) at 24 months, and 279 (46.3%) at 36 months post-restoration. Sixteen cases of pulp breakdown were identified within 36 months of restoration placement, 11 presenting as emergencies and five detected at routine recall examination. 
Logistic regression showed that preoperative pain, cavity treatment by lining or conditioning-and-sealing and the use of rubber dam isolation had no association with pulp breakdown. Pulp breakdown was associated with deep or pulpally exposed cavities (P < 0.001, odds ratio 7.8) and with composite rather than amalgam restorations (P = 0.001, odds ratio 2.13). Re-coding to identify teeth with pulp exposures revealed that pulpal exposure was the key determinant of adverse pulp outcomes (P < 0.0001, odds ratio 28.4) and that composite resin restorations were again more likely to be associated with pulp breakdown than amalgam (P = 0.017, odds ratio 3.92).\n Considered within the context of routine primary dental care: Dentists can be confident that pulps will be equally well protected from post-restorative breakdown up to 36 months by calcium hydroxide lining and conditioning-and-sealing with adhesive resins. Residual dentine thickness appears to be a key determinant of pulp responses after restorative dental treatment. In deep and pulpally exposed cavities in posterior teeth, composites were associated with more pulpal breakdown than amalgams.\n\n\nTitle 4:\nThe efficacy of pulp capping materials. A comparative trial.\nAbstract 4:\nnan",
"role": "user"
},
{
"content": "It was disappointing that there were so few studies which could be considered as being suitable for inclusion in this review. The findings from this review do not suggest that there should be any significant change from accepted conventional practice procedures when the pulp of the carious tooth is considered. Further well designed RCTs are needed to investigate the potential of contemporary materials which may be suitable when used in the management of carious teeth. It is recognised that it is difficult to establish the 'ideal' clinical study when ethical approval for new materials must be sought and strict attention to case selection, study protocol and interpretation of data is considered. It is also not easy to recruit sufficient numbers of patients meeting the necessary criteria.",
"role": "assistant"
}
] |
science.scitldr_aic | science.scitldr_aic.981 | [
{
"content": "Below are the Abstract, Introduction, and Conclusion sections of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary sentence and no additional text.\n\nPaper text:\n\nIncreasing model size when pretraining natural language representations often results in improved performance on downstream tasks.\nHowever, at some point further model increases become harder due to GPU/TPU memory limitations, longer training times, and unexpected model degradation.\nTo address these problems, we present two parameter-reduction techniques to lower memory consumption and increase the training speed of BERT.\nComprehensive empirical evidence shows that our proposed methods lead to models that scale much better compared to the original BERT.\nWe also use a self-supervised loss that focuses on modeling inter-sentence coherence, and show it consistently helps downstream tasks with multi-sentence inputs.\nAs a result, our best model establishes new state-of-the-art results on the GLUE, RACE, and SQuAD benchmarks while having fewer parameters compared to BERT-large.\nFull network pre-training (Dai & Le, 2015; Radford et al., 2018; Howard & Ruder, 2018 ) has led to a series of breakthroughs in language representation learning.\nMany nontrivial NLP tasks, including those that have limited training data, have greatly benefited from these pre-trained models.\nOne of the most compelling signs of these breakthroughs is the evolution of machine performance on a reading comprehension task designed for middle and high-school English exams in China, the RACE test (Lai et al., 2017) : the paper that originally describes the task and formulates the modeling challenge reports then state-of-the-art machine accuracy at 44.1%; the latest published result reports their model performance at 83.2% ; the work we present here pushes it even higher to 89.4%, a stunning 45.3% improvement that is mainly attributable to our current ability to build high-performance pretrained language representations.\nEvidence from these improvements reveals that a large network is of crucial importance for achieving state-of-the-art performance .\nIt has become common practice to pre-train large models and distill them down to smaller ones (Sun et al., 2019; Turc et al., 2019) for real applications.\nGiven the importance of model size, we ask: Is having better NLP models as easy as having larger models?\nAn obstacle to answering this question is the memory limitations of available hardware.\nGiven that current state-of-the-art models often have hundreds of millions or even billions of parameters, it is easy to hit these limitations as we try to scale our models.\nTraining speed can also be significantly hampered in distributed training, as the communication overhead is directly proportional to the number of parameters in the model.\nWe also observe that simply growing the hidden size of a model such as BERT-large can lead to worse performance.\nTable 1 and Fig. 
1 show a typical example, where we simply increase the hidden size of BERT-large to be 2x larger and get worse results with this BERT-xlarge model.\nModel Hidden Size Parameters RACE (Accuracy) BERT-large 1024 334M 72.0% BERT-large (ours) 1024 334M 73.9% BERT-xlarge (ours) 2048 1270M 54.3% Table 1 : Increasing hidden size of BERT-large leads to worse performance on RACE.\nExisting solutions to the aforementioned problems include model parallelization (Shoeybi et al., 2019) and clever memory management (Chen et al., 2016; .\nThese solutions address the memory limitation problem, but not the communication overhead and model degradation problem.\nIn this paper, we address all of the aforementioned problems, by designing A Lite BERT (ALBERT) architecture that has significantly fewer parameters than a traditional BERT architecture.\nALBERT incorporates two parameter reduction techniques that lift the major obstacles in scaling pre-trained models.\nThe first one is a factorized embedding parameterization.\nBy decomposing the large vocabulary embedding matrix into two small matrices, we separate the size of the hidden layers from the size of vocabulary embedding.\nThis separation makes it easier to grow the hidden size without significantly increasing the parameter size of the vocabulary embeddings.\nThe second technique is cross-layer parameter sharing.\nThis technique prevents the parameter from growing with the depth of the network.\nBoth techniques significantly reduce the number of parameters for BERT without seriously hurting performance, thus improving parameter-efficiency.\nAn ALBERT configuration similar to BERT-large has 18x fewer parameters and can be trained about 1.7x faster.\nThe parameter reduction techniques also act as a form of regularization that stabilizes the training and helps with generalization.\nTo further improve the performance of ALBERT, we also introduce a self-supervised loss for sentence-order prediction (SOP).\nSOP primary focuses on inter-sentence coherence and is designed to address the ineffectiveness of the next sentence prediction (NSP) loss proposed in the original BERT.\nAs a result of these design decisions, we are able to scale up to much larger ALBERT configurations that still have fewer parameters than BERT-large but achieve significantly better performance.\nWe establish new state-of-the-art results on the well-known GLUE, SQuAD, and RACE benchmarks for natural language understanding.\nSpecifically, we push the RACE accuracy to 89.4%, the GLUE benchmark to 89.4, and the F1 score of SQuAD 2.0 to 92.2.\nWhile ALBERT-xxlarge has less parameters than BERT-large and gets significantly better results, it is computationally more expensive due to its larger structure.\nAn important next step is thus to speed up the training and inference speed of ALBERT through methods like sparse attention and block attention (Shen et al., 2018 ).\nAn orthogonal line of research, which could provide additional representation power, includes hard example mining (Mikolov et al., 2013) efficient language modeling training .\nAdditionally, although we have convincing evidence that sentence order prediction is a more consistently-useful learning task that leads to better language representations, we hypothesize that there could be more dimensions not yet captured by the current self-supervised training losses that could create additional representation power for the resulting representations.\nRACE RACE is a large-scale dataset for multi-choice reading comprehension, collected from 
English examinations in China with nearly 100,000 questions.\nEach instance in RACE has 4 candidate answers.\nFollowing prior work , we use the concatenation of the passage, question, and each candidate answer as the input to models.\nThen, we use the representations from the \"[CLS]\" token for predicting the probability of each answer.\nThe dataset consists of two domains: middle school and high school.\nWe train our models on both domains and report accuracies on both the development set and test set.\nA.2\nHYPERPARAMETERS Hyperparameters for downstream tasks are shown in Table 15 .\nWe adapt these hyperparameters from , , and Yang et al. (2019",
"role": "user"
},
{
"content": "A new pretraining method that establishes new state-of-the-art results on the GLUE, RACE, and SQuAD benchmarks while having fewer parameters compared to BERT-large.",
"role": "assistant"
}
] |
science.bioasq_factoid_qa | science.bioasq_factoid_qa.1142 | [
{
"content": "You will be shown biomedical passages and your task is to answer the provided question. Provide a concise and accurate response and do not include any additional text in the response other than the answer.\n\nQuestion: Which was the first oral drug for the treatment of multiple sclerosis by the US Food and Drug Administration (FDA)?\n\nContext: FTY720 (Fingolimod) is a known sphingosine-1-phosphate (S1P) receptor agonist that exerts strong anti-inflammatory effects and was approved as the first oral drug for the treatment of multiple sclerosis by the US Food and Drug Administration (FDA) in 2010.",
"role": "user"
},
{
"content": "Fingolimod",
"role": "assistant"
}
] |
science.scientific_lay_summarisation_plos_single_doc_summ | science.scientific_lay_summarisation_plos_single_doc_summ.23373 | [
{
"content": "You will be presented with the abstract, introduction, and discussion section from a biomedical research article. Your task is to create a summary that a layperson can understand, capturing the essence of the research article. Keep essential scientific terms, but ensure that the language remains clear and the concepts are explained in an uncomplicated manner.\n\nTitle: Mouse Genetics Suggests Cell-Context Dependency for Myc-Regulated Metabolic Enzymes during Tumorigenesis\nArticle:\nAbstract:\nc-Myc( hereafter called Myc) belongs to a family of transcription factors that regulates cell growth, cell proliferation, and differentiation. Myc initiates the transcription of a large cast of genes involved in cell growth by stimulating metabolism and protein synthesis. Some of these, like those involved in glycolysis, may be part of the Warburg effect, which is defined as increased glucose uptake and lactate production in the presence of adequate oxygen supply. In this study, we have taken a mouse-genetics approach to challenge the role of select Myc-regulated metabolic enzymes in tumorigenesis in vivo. By breeding λ-Myc transgenic mice, ApcMin mice, and p53 knockout mice with mouse models carrying inactivating alleles of Lactate dehydrogenase A( Ldha), 3-Phosphoglycerate dehydrogenase( Phgdh) and Serine hydroxymethyltransferase 1( Shmt1), we obtained offspring that were monitored for tumor development. Very surprisingly, we found that these genes are dispensable for tumorigenesis in these genetic settings. However, experiments in fibroblasts and colon carcinoma cells expressing oncogenic Ras show that these cells are sensitive to Ldha knockdown. Our genetic models reveal cell context dependency and a remarkable ability of tumor cells to adapt to alterations in critical metabolic pathways. Thus, to achieve clinical success, it will be of importance to correctly stratify patients and to find synthetic lethal combinations of inhibitors targeting metabolic enzymes.\nIntroduction:\nActivation of one of the three MYC oncogenes is frequently selected for during tumorigenesis. These genes encode the transcription factors c-Myc, N-Myc and L-Myc that regulate a large number of downstream target genes. Although most of the work on MYC oncogenes has involved their role in cell proliferation, it is becoming clear that they may be involved in most aspects of oncogenic transformation[1]. As such, unravelling the mechanisms by which Myc proteins activate genes, and which are the essential genes, is paramount as studies resolving these mechanisms may open up new avenues of targeted intervention against various cancers. Some of Myc's earliest discovered transcriptional targets were genes encoding metabolic enzymes such as Ornithine decarboxylase[Odc][2],[3], Lactate dehydrogenase A[Ldha][4] and Carbamoyl-phosphate synthase/aspartate carbamoyltransferase/dihydroorotase[Cad][5]. Later studies using expression profiling identified even more of these genes, indicating that Myc is a master regulator of cellular metabolism and cell growth[6],[7]. Interestingly, inhibition of polyamine biosynthetic enzymes Odc and Spermidine synthase have shown efficacy in chemoprevention of several cancers in experimental models[8]–[14] and in colon cancer patients[15]. 
Furthermore, Myc-regulated Ldha, Pyruvate kinase M2 and Glutaminase have also emerged as promising targets based on experimental models of human cancer[16]–[23], suggesting that targeting various metabolic pathways regulated by Myc may prove beneficial in cancer therapies of patients. To gain in vivo support for this notion we performed genetic ablation experiments in mice to determine the individual contribution to tumorigenesis of three different Myc-regulated metabolic enzymes.\nDiscussion:\nWe are today beginning to appreciate the fact that oncogenes and tumor suppressor genes not only regulate cell proliferation, immortalization, apoptosis, metastasis and angiogenesis[39] but also cellular metabolism. The change in metabolism and the Warburg effect were for a long time believed to be self-evident and secondary to transformation. It is now known that the metabolic changes occur simultaneously and are governed by the same signal transduction pathways as those governing cell proliferation[40]. Since different tumor cells transform in response to variations of oncogenic mutations it is therefore likely that tumor cells can, or even have to, make different metabolic adaptations as well. Some of these adaptations may make the cells dependent on a certain enzyme, whereas others do not. Our study highlights the Ras oncogene as a potential pathway that requires Ldha, illustrated in Ras-transformed fibroblasts and Colon 26 cells, which carry an endogenous Kras mutation. Indeed, it has previously been shown that neu-transformed breast cancer cells are sensitive to Ldha inhibition by RNAi[37]. Since these cells have an activated Ras pathway[41], this may explain why knockdown of Ldha sensitizes these cells. The potential explanation why Ras-induced fibrosarcomas are sensitive to the Ldha mutation in vivo is that they are not inherently angiogenic, making them sensitive to metabolic perturbation before angiogenesis has occurred. The fact that Myc can stimulate angiogenesis independently of hypoxia-inducible factors[42]–[44] may account for the lack of impact on lymphomagenesis upon mutation of e. g. Ldha. Indeed, we show here that lymphomas from λ-Myc mice are very sensitive to hypoxia, most likely since they are Myc-driven and therefore rely on the TCA cycle and oxidative phosphorylation[45]. Folate biosynthesis has been linked to cancer both from studies on dietary supplements and by the identification of polymorphisms in genes encoding enzymes in folate biosynthesis like SHMT1, MTHFR and TS[46]. In addition, certain drugs like methotrexate target the folate biosynthetic pathway suggesting that this pathway is of critical importance for tumor cell survival. Interestingly, Shmt2 was first identified as a target in a screen for genes that can rescue the growth defect of Myc null rat fibroblasts[47]. In the same study Shmt1 was also shown to be a Myc transcriptional target gene but it was not further functionally characterized. Herein, we provide evidence that Shmt1 is dispensable for Myc-induced lymphomagenesis and that its deletion even accelerates tumorigenesis. The reason for this acceleration is unknown. It could involve effects on senescence or B-cell development as deletion of genes like Suv39h1 and E2f2 accelerates tumorigenesis by these mechanisms, respectively[48],[49]. However, our data corroborate other very recent studies. 
Using an Shmt1 knockout mouse[50], the Stover group showed that deletion of one allele of Shmt1 promotes adenomagenesis in the ApcMin mice when administered a special diet[26]. In our study the Shmt1 heterozygous ApcMin mice also have the largest mean amount of adenomas, albeit we did not investigate the impact of diet on this model. Interestingly, homozygous deletion of Shmt1 did not impact adenoma formation in ApcMin mice since there was a compensatory increase in thymidylate kinase( TK1) expression[47]: a salvage pathway for thymidine synthesis. We observed a stronger effect on acceleration of Myc-induced lymphomagenesis in homozygous mutant Shmt1 mice, suggesting that the salvage pathway is not completely penetrant. Taken together, we would argue that Shmt1 is a tumor-suppressing modifier in the context of B-cell lymphomas and colorectal adenomas. One of the most important reasons for the systematic analysis of Myc target genes in tumorigenesis is the potential of identifying or validating future drug targets. Our lymphoma and adenoma data cast doubt on the utility of developing targeted interventions against Ldha, Phgdh and Shmt1. On the other hand, in these models, tumors arise in mice carrying germline mutations of both the oncogenic lesion and the genes encoding the metabolic enzymes. It is thus plausible that adaptations have occurred during development that would not have occurred in cells acutely exposed to an inhibitor. Nevertheless, our data suggest that tumor cells eventually will develop resistance to putative treatments directed against metabolic enzymes since tumor growth undoubtedly can occur in the absence of Shmt1, Phgdh or Ldha. Therefore, a correct stratification is needed to identify patients whose tumors would be sensitive to inhibitors that are under development, for instance against Ldh[6],[18]. Such stratification can be performed based on which oncogenic driver mutation the tumor has acquired. As shown here, oncogenic Ras or pathways utilizing this circuit could be a potential parameter. Moreover, two independent studies published while revising this manuscript suggest that PHGDH amplification could be another oncogenic driver mutation in breast cancer and melanoma, which would sensitize cells to inhibition of Phgdh[51],[52]. As shown here and in these two studies, Phgdh is important in some but not all contexts. To date, very few inhibitors against Ldh have been identified and those known are either poorly bioavailable and/or have other targets. For instance, sodium oxamate is used in high millimolar concentrations but inhibits aspartate aminotransferase at concentrations where lactate production is not even affected when tumor cell growth is[53]. Gossypol, a natural compound from cottonseed first identified as a male contraceptive, also inhibits anti-apoptotic proteins of the Bcl-2 family making interpretation of anti-cancer activity difficult[54]. Even if improved inhibitors are developed the issue whether or not Ldh is a good target is unresolved. Our genetic study shows that cells carrying a defective Ldha are capable of forming tumors, albeit hindered by hypoxia. We and others also show that ablation of Ldha by RNAi can be detrimental for the cell. It is thus possible that either cell context determines sensitivity, or that the ablation of Ldha protein( RNAi) is more severe than inhibition of its activity( D223H mutation). This notion would lend support to the idea that Ldha may have other functions, potentially disconnected from its activity[55]. 
For instance, Ldha can be phosphorylated( Tyr238) and localized to the nucleus[56] and has recently been shown to exist in transcription complexes in ES cells[57]. Future studies should address if glycolysis-independent functions of Ldha, as suggested in transcription[58],[59], are the most important functions in some tumors. If so, focus on the development of new therapies should aim at blocking all activities of Ldha.",
"role": "user"
},
{
"content": "Cancer occurs when cells change their behavior and start to divide in an uncontrolled manner . To achieve this altered behavior , cells need to change their metabolism to be able to grow even when nutrient and oxygen supplies are limiting . Therefore , targeting metabolic pathways could be used to treat patients suffering from cancer . Here we studied a gene called MYC , which can regulate many metabolic pathways . By using genetically modified mice we can show that tumors have a remarkable ability to change their metabolism , even if key enzymes are removed . Taken together , our data suggest that metabolic disturbance by drugs in the clinic may present a future challenge .",
"role": "assistant"
}
] |
science.scientific_lay_summarisation_plos_single_doc_summ | science.scientific_lay_summarisation_plos_single_doc_summ.5723 | [
{
"content": "You will be presented with the abstract, introduction, and discussion section from a biomedical research article. Your task is to create a summary that a layperson can understand, capturing the essence of the research article. Keep essential scientific terms, but ensure that the language remains clear and the concepts are explained in an uncomplicated manner.\n\nTitle: Nmf9 Encodes a Highly Conserved Protein Important to Neurological Function in Mice and Flies\nArticle:\nAbstract:\nMany protein-coding genes identified by genome sequencing remain without functional annotation or biological context. Here we define a novel protein-coding gene, Nmf9, based on a forward genetic screen for neurological function. ENU-induced and genome-edited null mutations in mice produce deficits in vestibular function, fear learning and circadian behavior, which correlated with Nmf9 expression in inner ear, amygdala, and suprachiasmatic nuclei. Homologous genes from unicellular organisms and invertebrate animals predict interactions with small GTPases, but the corresponding domains are absent in mammalian Nmf9. Intriguingly, homozygotes for null mutations in the Drosophila homolog, CG45058, show profound locomotor defects and premature death, while heterozygotes show striking effects on sleep and activity phenotypes. These results link a novel gene orthology group to discrete neurological functions, and show conserved requirement across wide phylogenetic distance and domain level structural changes.\nIntroduction:\nThe biological functions of many protein-coding genes remain unknown, often despite conservation across considerable evolutionary distance. Such “orphan” molecules may include genes whose functions affect fitness but not overt phenotypes in experimental settings, genes that affect overt but under-studied phenotypes, and/or genes that have been difficult to study due to unusual molecular properties, such as low level expression or poor physical annotation in genomes of well-studied organisms. Forward genetics offers one entry point into functional studies by highlighting those genes whose alterations produce measurable effects[1, 2]. A key challenge in de-orphanizing novel genes is to integrate their functions with observable sites of action, organism-level phenotypes and known cellular activities. Nmf9 and its homologs were essentially unknown prior to our studies. The nmf9 mutation was recovered in a N-ethyl-N-nitrosourea( ENU) screen at the Neuroscience Mutagenesis Facility( NMF) of the Jackson laboratory[3, 4] based on tremor and vestibular phenotypes. Several exons of the gene we identify as Nmf9 had been systematically annotated as Ankfn1( for ANK and FN3 domain containing), while other exons annotated as separate genes were identified only by clone names, based on non-overlapping partial cDNA clones from high-throughput screens[5, 6]. We selected nmf9 for study as part of a long-standing interest in mice with unusual tremors and ataxias[7–11]. The unusual conservation pattern we identified among Nmf9 homologs, along with recent innovations in genome editing[12–18], motivated additional studies to test conservation of function in mice and flies, including the generation of equivalent mutations at the same amino acid position in both species. While our work was in progress, two groups reported identification of the Drosophila homolog in very different contexts. 
A transposon-based screen for low-sleeping mutants by Mark Wu and colleagues found insertions in a poorly conserved, variably included 5’ region of the gene, but concluded based on antibody studies that the resulting wide-awake( wake) alleles were null[19]. The same group also found that the fly gene interacts with and promotes cell surface localization of the fly GABA-A receptor encoded by Resistance-to-dieldrin( Rdl), through physical and genetic interactions. Independently, an RNAi screen for genes required for proper segregation of Numb during asymmetric cell division of sensory organ precursors carried out by Juergen Knoblich and colleagues identified the same gene, which they named Banderuola( Bnd). Intragenic deletion of most of the Bnd coding sequence resulted in pupal and early adult lethality[20]. This group also showed physical, genetic and functional interactions with Discs-large 1( Dlg1), a prototypical membrane-associated guanylate kinase( MAGUK). We will refer to the Drosophila gene by its current annotation symbol, CG45058, and to 5’ P element alleles as wake. Here we use positional cloning, expression-guided behavioral studies, deep evolutionary constraint, and genome editing to define novel activities of Nmf9 and its Drosophila homolog. Positional cloning of mouse nmf9 identified a single point mutation at a splice donor sequence, resulting in exon skipping of a frame-shifting exon in Nmf9. The expression pattern of Nmf9 in mouse brain suggested neural circuits that might be compromised in mutant animals. Behavioral tests confirmed deficits in vestibular function, fear learning, and circadian behavior. Female mice were more severely affected than males for several phenotypes. Sliding window analysis of relative constraint across the protein coding sequence showed that the skipped exon encodes the most evolutionarily constrained peptide sequence among Nmf9 homologs and genome editing of a single glycine to alanine at this site was sufficient to generate a non-complementing allele. Genome editing of the Drosophila homolog at three different sites produced null alleles that resulted in premature lethality and severe locomotor retardation in homozygotes. Remarkably, heterozygous flies showed mild activity and sleep-related phenotypes. Our studies thus show the first genetic, behavioral and molecular information for mammalian Nmf9, highlight functional importance of the most conserved sequence among its homologs, and resolve competing views regarding CG45058 null phenotypes. These genetic, behavioral, and comparative studies provide a foundation for understanding activities of Nmf9 homologues in broad context.\nDiscussion:\nThrough a combination of genetic and molecular approaches, we showed that Nmf9 encodes a novel yet highly conserved protein that is functionally important to a distinct set of neurological phenotypes in both mice and flies. Starting with the mutation, we identified Nmf9 transcripts using positional cloning and experimental validation of predicted exons. RT-PCR and in situ hybridization experiments defined timing and location of gene expression. Sites of expression predicted, in addition to readily observed vestibular abnormalities, substantial phenotypes in fear learning and circadian behavior that were not obvious from the initial description or gross observation. Sequence information from >100 animal homologs identified distinct regions of strong evolutionary constraint, including ANK and FN3 motifs and three novel segments that lack motif annotation. 
Induced mutations at most conserved peptide sequence among the novel conserved domains produced strong phenotypes in both mice and flies, including a single glycine to alanine substitution in mice that failed to complement the original nmf9 allele. Among animal and non-animal Holozoan homologues some strongly predicted functional motifs appeared to be modular across phyla, including two small GTPase-binding motifs, CRIB and RA. Among animals, the Nmf9 homology group appears to have been lost only in urochordates, based on four genomes available in that taxon. The N-terminal CRIB domain was found only in choanoflagellate and filasterean sequences and was either gained in these lineages or lost in the lineage leading to animals. The RA domain appears to have been lost in a derived paralog during or soon after a gene duplication event at the basal vertebrate lineage and the ancestral paralog was lost in the basal mammalian lineage. Nmf9 showed a complex pattern of expression in the vestibular system, olfactory system, and regions of the brain implicated in satiety and metabolism, innate anxiety, fear learning, and circadian rhythm. We tested the functional integrity of each of these systems with a battery of standardized behavioral assays. Deficits in vestibular, circadian and fear learning measures were clear despite significant sex differences. Abnormalities in measures of anxiety and appetitive behavior were nominally significant, but confounded by hyperactivity in the mutant mice–particularly after sustained handling, which may suggest either an anxiety-related aspect or perhaps feedback from disturbed sensory input from the vestibular system. Although mutants and non-mutant littermates were easily distinguished by overall behavior, mutant phenotypes for any single formalized test were not fully penetrant and tended toward higher variance than control littermate values, which suggests that Nmf9 may be important to the robustness of these pathways, but not essential to their basic function. On this interpretation, loss of Nmf9 activity in AMY, LS, and the olfactory system may not be disruptive enough to these circuits for behavioral phenotypes to be detected on a standard inbred background, but might have consequences that would be subject to selection under the more competitive fitness constraints of wild populations. Although we observed striking Nmf9 expression in cortical ventricular zones, the adult cortex appeared normal on gross inspection, with no obvious loss of cells in any of the Nmf9-enriched sites. Females were more severely affected than males by the nmf9 mutation in several tests( Figs 1 and 4). While both sexes express Nmf9 RNA, we have not explicitly tested for quantitative differences in specific pattern elements. Sex-dependence is known for several mouse behaviors both at baseline and in response to perturbations[29]. Interestingly, mutations of Ntrk2( TrkB) also induce vestibular defects that are substantially more pronounced in female mice[30], although any shared mechanism remains to be explored. Our analysis of CG45058 in Drosophila resolves a conflict in the current literature and extends both sets of previous findings. While our work was nearing completion, two other groups reported putative null mutations in CG45058, but with very different primary outcomes. 
In a P-element screen for sleep phenotypes, Mark Wu and colleagues reported viable P-element and imprecise excision alleles, which they termed wide-awake( wake) based on increased latency to sleep and decreased total sleep by mutant animals[19]. In contrast, through an RNAi screen for regulators of asymmetric cell division, Juergen Knoblich and colleagues reported pupal-lethal gene deletion alleles of CG45058, which they termed banderuola( bnd) based on the cytological appearance of dividing sensory organ precursor cells[20]. A major difference between these studies is the nature of the alleles examined. CG45058 has several annotated transcripts, which primarily differ in transcriptional start site and utilization of 5’ exons that are not well-conserved outside of Diptera. The wake alleles occur in the variant 5’ region of the gene and appear likely to affect only those transcripts, but not several others that include all of the conserved protein coding sequences. Alternatively, the bnd gene deletion allele might have removed regulatory sequences or barriers that influence expression of neighboring genes resulting in a synthetic phenotype. Our results resolve this conflict by creating discrete loss-of-function alleles induced at three distinct, functionally important sites in the predicted protein, encoded by separate exons that are each included in all known CG45058 transcripts. These alleles strikingly reduced adult viability and locomotor function of surviving adults, demonstrating that this is the null phenotype, in support of Mauri et al. Heterozygotes for these null alleles had reduced waking activity, increased latencies to first sleep bout, and effects on total sleep, confirming and extending the behavioral results of Liu et al. for wake alleles. Our data show sexually dimorphic effects on total sleep in null heterozygotes, in contrast to decreased sleep found in both sexes of wake mutants. This allelic difference suggests either distinct cellular functions of CG45058 isoforms or, perhaps more likely, isoform-specific expression in cell types that impinge on sleep regulation. While the Nmf9 orthology group shows extraordinary conservation–including highly constrained sequence domains that had no prior functional or motif annotation–the reorganization of putative GTPase-binding domains across major phylogenetic boundaries, loss of the homology group in urochordates, and duplication and rescission events in the vertebrate lineage all suggest a degree of adaptive plasticity that may be reflected in both shared and lineage-specific requirements. Indeed, mutations in the Nmf9 homologues of both mouse and fly resulted in phenotypes related to daily cycles of activity, as well as locomotor abnormalities, but with some clear differences. While mouse nmf9 null alleles showed hyperactivity and tremor that increased over the course of six months, fly mutations at distinct conserved domains( ANK, FN3, and conserved domain 2) produced severe locomotor retardation and early death, within one week of eclosion. These studies lay a foundation for understanding both common functions of Nmf9 homologs and lineage-restricted activities that might relate to derived loss of the RA domain or other lineage-specific features. How differences in domain architecture and expression patterns reshape the functional networks in which Nmf9/CG45058 proteins act will surely be of interest in the evolutionary development of the nervous system.",
"role": "user"
},
{
"content": "Genome sequencing projects have identified large numbers of genes that encode proteins of unknown function . Many of these genes show strong evolutionary conservation , predicting important and well-conserved functions . A fraction of these show strong conservation of core domains but dynamic changes in other domains , predicting both conserved and lineage-dependent functions . Here we identify neurological functions associated with one such gene identified by a forward genetic screen in mice . We use recently developed genome editing tools both to confirm the mouse studies and to test comparative functions in a model insect , the fruit fly Drosophila melanogaster . Each of these species has a single homolog of this gene family , but differ by inclusion of a ras-association ( RA ) domain present in most invertebrate species but missing in mammals . Null mutations in both mice and flies produce neurological phenotypes , but while the mouse phenotypes are comparatively mild ( vestibular deficits , mild tremor , hyperactivity , mild circadian phenotypes and abnormal fear learning–but normal viability and breeding ) , null flies rarely survive to adulthood and surviving flies have severe locomotor deficits . Interestingly , heterozygous flies have significant sleep-related phenotypes . Together , our results provide a detailed first look at comparative function for a gene lineage with an unusual evolutionary history .",
"role": "assistant"
}
] |
science.chemdner_ner | science.chemdner_ner.2085 | [
{
"content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Chemical\"].\n\nPlease return the output as a JSON object of the format: {\"Chemical\": [\"Arsenic\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nAnti-inflammatory profile of paricalcitol in hemodialysis patients: a prospective, open-label, pilot study. Inflammation is a strong predictor of increased morbidity and mortality in hemodialysis (HD) patients. Paricalcitol, a selective vitamin D receptor activator used for prevention and treatment of secondary hyperparathyroidism, has shown anti-inflammatory properties in experimental studies, although clinical data are scarce. In an open-label, prospective, single center, pilot study, 25 stable HD patients, previously receiving calcitriol, completed 12 weeks of therapy with oral paricalcitol. Serum and peripheral blood mononuclear cell (PBMC) expression profiles of inflammatory cytokines were analyzed. Serum interleukin (IL)-1, IL-10, and IL-18 did not change, unlike high-sensitivity C-reactive protein (hs-CRP), tumor necrosis factor-α (TNF-α), and IL-6, which experienced a significant mean percent decrease of 14.3%, 4.7%, and 5%, respectively. There was a significant reduction in the TNF-α/IL-10 and the IL-6/IL-10 ratios (P < .05). Serum intact parathyroid hormone concentration experienced a mild but significant reduction. In addition, expression levels of TNF-α and IL-6 decreased by 19.1% (P < .01) and 17.5% (P < .001), respectively, whereas expression of IL-10 increased by 17.7% (P < .01) after treatment. In conclusion, paricalcitol administration to HD patients is associated with a beneficial effect on the inflammatory cytokine serum and gene expression profile of PBMC. This effect may contribute to the survival benefits of paricalcitol observed in clinical studies.",
"role": "user"
},
{
"content": "{\"Chemical\": [\"calcitriol\", \"vitamin D\", \"paricalcitol\", \"Paricalcitol\"]}",
"role": "assistant"
}
] |
science.scireviewgen_multidoc_summarization | science.scireviewgen_multidoc_summarization.54191 | [
{
"content": "You are presented with multiple paper abstracts, a literature review title, and a specific chapter title. Your task is to write a focused literature review chapter that aligns with the given chapter title. The review should synthesize the content from the provided abstracts, draw connections between different works, highlight major findings, and identify any gaps in the existing research.\nLiterature Review Title: A review of copula models for economic time series \nChapter Title: Empirical illustration, continued \n1. Abstract of Cited Paper (BIB001): This paper studies the estimation of a class of copula-based semiparametric stationary Markov models. These models are characterized by nonparametric invariant (or marginal) distributions and parametric copula functions that capture the temporal dependence of the processes; the implied transition distributions are all semiparametric. Models in this class are easy to simulate, and can be expressed as semiparametric regression transformation models. One advantage of this copula approach is to separate out the temporal dependence(such as tail dependence) from the marginal behavior (such as fat tailedness) of a time series. We present conditions under which processes generated by models in this class are $\\beta $-mixing; naturally, these conditions depend only on the copula specification. Simple estimators of the marginal distribution and the copula parameter are provided, and their asymptotic properties are established under easily verifiable conditions. Estimators of important features of the transition distribution such as the (nonlinear) conditional moments and conditional quantiles are easily obtained from estimators of the marginal distribution and the copula parameter; their $\\sqrt{n}-$ consistency and asymptotic normality can be obtained using the Delta method. In addition, the semiparametric conditional quantile estimators are automatically monotonic across quantiles. \n2. Abstract of Cited Paper (BIB002): Copula-based models provide a great deal of flexibility in modeling multivariate distributions, allowing the researcher to specify the models for the marginal distributions separately from the dependence structure (copula) that links them to form a joint distribution. In addition to flexibility, this often also facilitates estimation of the model in stages, reducing the computational burden. This chapter reviews the growing literature on copula-based models for economic and financial time series data, and discusses in detail methods for estimation, inference, goodness-of-fit testing, and model selection that are useful when working with these models. A representative data set of two daily equity index returns is used to illustrate all of the main results. \n3. Abstract of Cited Paper (BIB003): In this paper, we study the asymptotic behavior of the sequential empirical process and the sequential empirical copula process, both constructed from residuals of multivariate stochastic volatility models. Applications for the detection of structural changes and specification tests of the distribution of innovations are discussed. It is also shown that if the stochastic volatility matrices are diagonal, which is the case if the univariate time series are estimated separately instead of being jointly estimated, then the empirical copula process behaves as if the innovations were observed; a remarkable property. 
As a by-product, one also obtains the asymptotic behavior of rank-based measures of dependence applied to residuals of these time series models.\n\nLiterature Review:",
"role": "user"
},
{
"content": "To illustrate some of the different estimation methods described above, we consider now the problem of estimating a parametric model for the copula of the standardized residuals on the SP100 and SP600 indices, described in Section 2.1.2. To illustrate a fully parametric model we combine the AR-GARCH models described above with the skewed t distribution of Hansen BIB001 for the marginal distributions. For the semiparametric model we will use the EDF to estimate the marginal distributions. A variety of different standard errors are computed for the estimated parameters, implementation details of which are presented in Patton BIB003 . For both the parametric and semiparametric cases, we will consider (i) naïve standard errors, where the estimation error from the earlier stages of estimation (AR, GARCH and marginal distributions) is ignored; (ii) multi-stage MLE or multi-stage semiparametric MLE (MSML) standard errors, using the asymptotic distribution theory BIB004 . For the parametric case we can consider one further type of standard error, namely a simulation-based standard error, where the model is simulated many times using the estimated parameters, and on each of the simulated samples the parameters are re-estimated. With a sufficient number of simulations, this latter approach yields correct finite-sample standard errors, unlike the other approaches which are all based on asymptotic arguments. For the bootstrap and the simulation-based standard errors 1000 replications are used. For the parametric model, the computation times for these (using MATLAB R2011a, on a 3 GHz machine) were 1.5 and 9.4 h respectively. The semiparametric model bootstrap standard errors had computation time of 11 min. The results are presented in Table 3 . Table 3 shows that the naïve standard errors are too small relative to the correct MSML standard errors, a predictable outcome given that naï ve standard errors ignore the additional estimation error arising from the estimation of marginal distribution parameters. In the parametric case the naïve standard errors are on average about half as large as the MSML standard errors (average ratio is 0.54), while for the semiparametric case the ratio is 0.84. The relatively better performance in the semiparametric case is possibly attributable to the fact that the MSML standard errors in that case can, correctly, ignore the estimation error coming from the AR-GARCH models for the conditional mean and variance; only the estimation error from the EDF needs to be accounted for. In the fully parametric case, the marginal distribution shape parameters and the parameters of the AR-GARCH models must be accounted for. In both the parametric and the semiparametric cases, the bootstrap standard errors are very close to the MSML standard errors, with the ratio of the former to the latter being 0.98 and 0.97 respectively. This is what we expect asymptotically, and confirms that the researcher may use either ''analytical'' MSML standard errors or more computationally-intensive bootstrap standard errors for inference on the estimated copula parameters. In the parametric case, where simulation-based standard errors can be computed, we see that these are smaller than the MSML and bootstrap standard errors, with the average ratio being around 0.7. Asymptotically we expect this ratio to go to 1, but in finite samples this value of ratio will depend on the particular model being used. 
While Table 3 shows the ratio to be less than 1 for all four models considered, this is not a general result and need not hold in other applications. Consider now the estimation of a semiparametric first-order Markov copula model for the S&P 100 index returns described in Section 2.1.2. As in Chen and Fan BIB002 , we use the EDF to estimate the marginal distribution of these returns. Given the rank autocorrelation estimates presented in Table 1 , it is clear that we should allow for negative dependence between consecutive returns. This is accommodated easily by some copula models (such as the Normal and Student's t, where the correlation parameter can be either positive or negative) but requires more work for other copula models, such as the commonly-used Clayton and Gumbel-Hougaard copulas. To accommodate negative dependence with a Clayton copula, one may consider a ''half rotation'' of the copula (where we ''flip'' just the first variable). Larger values of κ in this model imply stronger negative dependence, and with this rotation it implies tail dependence in the second quadrant (i.e., as u_{t−1} → 0 and u_t → 1) rather than the third quadrant as for the usual Clayton copula. The other ''half rotation'' of this copula also implies negative dependence, but with tail dependence in the fourth quadrant (as u_{t−1} → 1 and u_t → 0). Estimating both of these on the data indicated strongly that the first rotation is preferred (loglikelihood of 30.6 vs 2.2), and both of these are preferred to the original Clayton copula. A similar rotation improves the fit of the Gumbel-Hougaard copula on this data, though given that the original Gumbel-Hougaard copula implies tail dependence in the joint upper tail (rather than the joint lower tail for the Clayton copula) a rotation of the u_{t−1} variable provides the better fit. These two copulas, along with the Normal and Student's t copulas, are estimated on the S&P 100 returns and the results are presented in Table 4 . The values of the log-likelihood indicate that the Student's t copula is preferred, with a log-likelihood of 76.9, compared with the next best model, the ''half-rotated'' Gumbel-Hougaard with a log-likelihood of 38.8. Table 4 also presents three different estimates of the standard errors for the copula parameters. The first is a ''naïve'' estimate based solely on the inverse Hessian of the copula likelihood, where the estimation error from the use of the EDF in the first stage is ignored. The second estimate is that of BIB002 , which does account for the estimation error due to the EDF. The third estimate is based on 1000 simulations of this copula model, and which also incorporates the impact of estimation error from the EDF. (The computation time for these standard errors was 2.3 h.) Unlike the naïve standard errors in the multivariate application considered in Section 2.2.5, which were found to be too small relative to the correct standard errors, in this univariate application the naïve standard errors are not very different from the correct standard errors. Of course, it should be noted that this need not be true in other applications, and the current application is perhaps special in that the overall dependence between consecutive returns is relatively weak (first order rank autocorrelation of just −0.05). In Table 5 we present the results of GoF tests applied to the four copula models estimated on the standardized residuals of the S&P 100 and S&P 600 stock index returns, described in Section 2.2.5.
The subscripts ''C'' and ''R'' on the column headings indicate whether the test was applied using the empirical copula directly or the empirical copula of the Rosenblatt transforms. In all cases a simulation-based approach was used, as described above, with 100 simulations. For the fully parametric models we see that the Clayton copula is rejected by all four tests. The Normal copula is rejected by both tests using a Rosenblatt transform, while the rotated Gumbel-Hougaard and Student's t copulas are each rejected by one out of the four tests. For the semiparametric models all four copulas are strongly rejected by all four tests. Thus we do not find empirical support for any of these copulas. Fig. 3 . This figure shows the sample quantile dependence between the standardized residuals for the S&P 100 index and the S&P 600 index and 90% bootstrap confidence intervals (shaded), as well as the quantile dependence implied by four copula models. The vertical axis measures the probability of one variable lying below (for q < 0.5), or above (for q > 0.5), its q quantile, given that the other variable also lies below/above its q quantile. It is at this point that a drawback of ''blanket'' tests, such as KS and CvM tests, becomes clear: these tests are consistent (in that they have power against all possible deviations from the null of correct specification), but upon rejecting a model they provide no guidance on the direction in which the model might be improved. An alternative approach to GoF testing involves looking for misspecification in particular directions, such as tests based on particular moments or measures of dependence (see Bontemps et al. for example), which may be more informative but are not consistent. To try to see why these models are all rejected, one might look at the quantile dependence plot implied by each of these models, compared with the sample quantile dependence plot. This is presented in Fig. 3 . From this figure we can see that the Clayton copula is too asymmetric compared with the data, and this is also the case for the rotated Gumbel-Hougaard copula. The Normal and Student's t copulas appear to fit the sample upper quantile dependence estimates, but are (slightly) too low in the lower tail. Thus it appears that a more flexible copula, allowing for some asymmetry but not imposing too much, may be required for this data. In Table 6 we report GoF tests for the copula models of semiparametric first-order Markov copula model for the S&P 100 index returns described in Section 2.3.1. In this case we focus only the KS and CvM tests based on the Rosenblatt transform, given the time series nature of the model. We again use a simulation-based method to obtain critical values, based on 100 simulations. Table 6 shows that the Normal and ''half rotated'' Clayton copulas are both rejected, while neither the ''half rotated'' Gumbel-Hougaard nor the Student's t copula models can be rejected using either test.",
"role": "assistant"
}
] |
science.scientific_lay_summarisation_plos_single_doc_summ | science.scientific_lay_summarisation_plos_single_doc_summ.18731 | [
{
"content": "You will be presented with the abstract, introduction, and discussion section from a biomedical research article. Your task is to create a summary that a layperson can understand, capturing the essence of the research article. Keep essential scientific terms, but ensure that the language remains clear and the concepts are explained in an uncomplicated manner.\n\nTitle: Enhancing Human Spermine Synthase Activity by Engineered Mutations\nArticle:\nAbstract:\nSpermine synthase( SMS) is an enzyme which function is to convert spermidine into spermine. It was shown that gene defects resulting in amino acid changes of the wild type SMS cause Snyder-Robinson syndrome, which is a mild-to-moderate mental disability associated with osteoporosis, facial asymmetry, thin habitus, hypotonia, and a nonspecific movement disorder. These disease-causing missense mutations were demonstrated, both in silico and in vitro, to affect the wild type function of SMS by either destabilizing the SMS dimer/monomer or directly affecting the hydrogen bond network of the active site of SMS. In contrast to these studies, here we report an artificial engineering of a more efficient SMS variant by transferring sequence information from another organism. It is confirmed experimentally that the variant, bearing four amino acid substitutions, is catalytically more active than the wild type. The increased functionality is attributed to enhanced monomer stability, lowering the pKa of proton donor catalytic residue, optimized spatial distribution of the electrostatic potential around the SMS with respect to substrates, and increase of the frequency of mechanical vibration of the clefts presumed to be the gates toward the active sites. The study demonstrates that wild type SMS is not particularly evolutionarily optimized with respect to the reaction spermidine → spermine. Having in mind that currently there are no variations( non-synonymous single nucleotide polymorphism, nsSNP) detected in healthy individuals, it can be speculated that the human SMS function is precisely tuned toward its wild type and any deviation is unwanted and disease-causing.\nIntroduction:\nPolyamines are widely present in many organisms including mammals, plants as well as some bacteria, and play important roles in normal cell growth, differentiation, programmed death, and tissue repair[1],[2]. A particular polyamine of interest in this manuscript, the spermine( SPM), was shown to be synthesized by the aminopropyltransferase enzyme- spermine synthase( SMS) through conversion of spermidine( SPD) to SPM[3]. Further structural and biochemical studies have revealed that SMS exists as a dimer and that the dimerization is crucial for its normal function[2],[4]. Structurally each monomer of SMS contains three domains: an N-terminal domain to form the dimer contacts, a central domain serving as a lid for the C-terminal domain, and a C-terminal catalytic domain[2],[4]. In wild type SMS, the amine substrate is coordinated via interactions between its terminal nitrogen atoms and acidic residues in the protein and by hydrophobic interactions of SMS with the alkane portions of the polyamine[2]. The importance of SMS for normal cell function is manifested by the fact that a malfunctioning of SMS results in the deficiency of SPM and affects the normal development in mice, humans and other organisms. In humans, this deficiency results in the Snyder-Robinson syndrome( SRS), which is a disease caused by the X-linked SMS gene defects. 
Patients with SMS show mild-to-moderate mental retardation, osteoporosis, facial asymmetry, thin habitus, hypotonia, and a nonspecific movement disorder[5]. The malfunction of SMS in patients with SRS was attributed to several missense mutations as c. 267G>A( p. G56S)[6], c. 496T>G( p. V132G)[7], c. 550T>C( p. I150T)[8]–[10] and c. 1084A>G( p. Y328C)( Zhang et al., in preparation). Computational modeling has revealed that these missense mutations affect SMS stability, flexibility, interactions, and formation of the dimer structures[9]–[12]. As mentioned above, the spermidine synthase( SRM) is widespread from E. coli, mammals, and plants to yeast. It is an aminopropyltransferase with a very high degree of specificity for putrescine as the amine acceptor and synthases SPD[13]. It was shown that the SRM of Thermotoga maritima, which is the only bacterium known to grow at a high temperature as well as 90°C[14], is remarkably stable to thermal denaturation particularly in the presence of the amine acceptor substrate putrescine with a half-life larger than 25 h at 90°C. In contrast, the human SRM( HsSRM) and SMS( HsSMS) are less stable under the same temperature[4],[15]. The goal of this study is to probe the effect of transferring sequence information from Thermotoga maritima SRM( TmSRM) and SMS( TmSMS) to HsSMS and to reveal the effect of the amino acid substitutions on its stability and function. Previous in silico studies have indicated that mutations of the wild type amino acids may not be degrading the native properties( monomer stability, dimer affinity, hydrogen bond network and ionized states) of HsSMS even if they are introduced at sites known to harbor disease-causing mutations[10]. Here the mutation sites are chosen not to coincide with disease-causing mutation sites and the amino acid substitutions are aimed at enhancing wild type activity of HsSMS.\nDiscussion:\nThe experimental data shows that the designed HsSMS mutant is more active than the WT HsSMS and computational investigations indicate that biophysical characteristics of the mutant are altered as compared with the WT characteristics. What are the structural origins for the predicted changes? The MSA among homologous proteins implied that the sites 175 and 178 are correlated. In the WT HsSMS, L175 and T178 are quite close, and the polar hydrogen of T178 makes a hydrogen bond( hydrogen bond length 1. 81 Å) with OD2 of Asp 201( Figure 3A), and thus providing support for the ionized form of Asp 201. The substitution with a charged residue Glu at site 175 introduces extra negative charge, but the side chain of E175 is almost fully exposed to the water phase and thus is solvated. However, the substitution causes slight backbone reengagement and at the same time, the negative potential of E175 suppresses the ionized form of D201 and makes D201 protonated. There is no hydrogen acceptor in the vicinity of D201 and Thr at position 178 is replaced with hydrogen donor/acceptor with a longer side chain, His, to make a hydrogen bond with the polar hydrogen of D201( Figure 3B). The negative charge of E175 also supports the tautameric orientation of the side chain of His at position 178 by orienting the polar proton of H178 toward the negatively charged OE atoms of E175( Figure 3B). 
It can be speculated that by favoring the protonated form of D201, the mutations weaken the interactions between the product, the spermine, and the protein moiety and thus facilitates the release of the product( note that the product, the spermine, has one extra positive charge as compared with the reactant, the spermidine). In addition, the protonation of D201 lowers a bit the pKa value of D276( from pKa( WT) = 9. 7 to pKa( Fmut) = 9. 4) and thus reduces the work need to be done to protonate D276 upon substrate binding. It can be speculated that it will enhance the turnover of the reaction and will increase the reaction rate. The energy calculations indicated that all mutations stabilize the monomeric structure of HsSMS. The couple, E175 and H178, increases stability by lowering the desolvation penalty for D201, which is protonated in the mutant, and by providing stabilizing hydrogen bonds. The other two mutations, S165D and C206R, also stabilize the monomer, especially C206R is predicted to increase the monomer stability by 15. 77 kcal/mol. In the WT structure, C206 forms a hydrogen bond with D239( Figure 4A); while in the CRmut structure, R206 forms hydrogen bonds with the backbone oxygens of both D239 and G238( Figure 4B). The extra hydrogen bond is the main reason for increased stability of CRmut. The S165D stabilizes the monomer as well, because the WT residue, Ser 165 is not involved in any specific interactions, while in the mutant, the Asp 165 interacts with Lys 163 and Lys 167 while being exposed, does not pay any desolvation penalty. At the same time, the analysis showed that neither of the mutations nor the Fmut affects the HsSMS dimer affinity. Structurally they are far away from the dimer interface and are situated in the C-terminal domain. Because of that, from structural perspective they are not expected to alter the dimer affinity. The fact that computation protocol predicted no effect is reassuring and indicates that numerical protocol is robust. How the increased stability may be related to enhanced activity is not an obvious question. Typically a decrease of the stability( or folding free energy) caused by mutation is considered to be bad for the function of proteins and frequently associated with diseases[17]–[22]. However, the opposite effect of increasing stability was also shown to cause disease[23],[24]. Perhaps the key factor of estimating the relations between stability and function is the understanding the mechanism of the corresponding reaction. The HsSMS needs to bind two reactants and to release two products, while being a dimer. The less conformational changes are involved in the reaction, the faster the reaction will be. It can be speculated that such predicted overall rigidification will enhance the reaction rate, while allowing the necessary local and small conformational changes still to be allowed to carry the conversion SPD→SPM. Similar effect, an increase of the reaction rate upon mutations to be associated with structural stabilization was reported before[25],[26]. One of the most important finding is that the four mutations, Fmut, result in a structure which carries more negative charge than the WT. The distribution of the extra negative charge is not uniform and results in stronger( negative) potential within the dimer cleft( path “A”) and within a patch at the bottom of the C-terminal domain( path “B”)( Figure 2A, Figure 2B). 
There is no experimental data to provide insights why the dimerization is necessary for the HsSMS function, except the original work which showed that if N-domain is removed the HsSMS does not form dimer and does not function[4]. However, each of the HsSMS monomers carries an active site and in principle should be capable to function as a monomer. Perhaps the dimerization is needed not for the reaction itself, but to promote the delivery of the reactants and the release of the products of the reaction. Electrostatic calculations support such a notion. Indeed in both the WT and the mutant, we identified two plausible electrostatic funnels, marked as path “A” and “B” in Figure 2A, however the potential is more negative in the mutant. It can be speculated that this extra negative potential provides better steering of the SPD to enter the dimer cleft and to increase the concentration of SPD in the proximity of the active sites. Indeed, the experimental data suggests that the affinity of HsSMS to SPD is higher in the mutant( Table 7). In addition, the dramatic increase of the reaction rate could be attributed to the increase of the maximal rate( Vm) in the mutant, which in turn can be effect of the increased effective concentration of SPD at the entrance of the active site. From the normal mode analysis, we can see the Fmut has a slightly higher frequency of domain motion than the WT, the difference being 170∼260 MHz. Such difference most probably will affect the rate and efficiency of the reaction by affecting the substrate approach to and release from the active site. Following the original work of Zhou and coworkers[27], it can be speculated that HsSMS dimer forms a conformation gate. During the reaction, the substrate, the SPD, follows the electrostatic funnels and makes repeated attempts to go inside the active pocket. As pointed by Zhou and co-workers[27], the higher the frequency of the gate is, the better is the chance that the substrate will successfully pass the gate, since the gate will be more frequently open when the SPD happens to be at the entrance. The same speculation can be made for the release of the products, i. e. the increased frequency of gate switching should increase the efficiency of product releasing[27]. Taking all these effects together, seems to us that increased activity in the mutant is due to a combinations of factors, ranging from enhanced access of the substrates to the active site via better electrostatic guiding and gate switching, structural stabilization and better protonation environment for the reaction. This indicates the complexity of the reaction process taking place in SMS and shows the complex interplay between structural and thermodynamic factors. At the same time, it should be noticed that the amino acid sequence of HsSMS currently does not have known variants. A search in various databases( as for example dbSNP at NCBI, NIH[28]–[30]) resulted in no hits. The only mutations found in human population are the missense mutations causing Snyder-Robinson syndrome[6]–[9]. Our recent computations analysis[10] indicated that HsSMS should be able to tolerate mutations even at the disease-causing sites; however, such variants are not seen in the human population yet. Combining these observations with the result of this manuscript, it can be speculated that the function of HsSMS is precisely tuned toward its wild type value to satisfy some currently unknown constraints in vivo. 
Perhaps, any deviation from the wild type sequence is unwanted and disease-causing. However, caution should be used in the interpretation of the in silico results, since the threshold indicating a deviation is perhaps specific for each protein and reaction involved.",
"role": "user"
},
{
"content": "Proteins are constantly subjected to evolutionary pressure to assure the organism's survival and reproduction . At the same time , the proteins' amino acid sequence undergoes mutations , some of which may cause diseases while others may be reflecting natural differences within the population ( non-synonymous single nucleotide polymorphism , nsSNP ) . In this study we examine the human spermine synthase ( HsSMS ) , for which currently there are no nsSNPs , while rare disease mutations are known to cause Snyder-Robinson syndrome . What is so special with this protein ? Maybe the HsSMS is so well optimized for its function that any change of the wild type sequence should be degrading its performance . To check such a possibility , we engineered a mutant of HsSMS with enhanced stability , electrostatic and mechanical properties . The mutant was confirmed experimentally to be a better enzyme than the wild type . Thus , the HsSMS is not evolutionally optimized with respect to its enzymatic reaction , its amino acid sequence differs only in sick individuals and so far its sequence was found to be identical in all healthy individuals . Therefore , it can be speculated that the HsSMS function is precisely tuned toward the wild type characteristics such so any deviation is unwanted and is disease-causing .",
"role": "assistant"
}
] |
science.mslr2022_ms2_multidoc_summarization | science.mslr2022_ms2_multidoc_summarization.12764 | [
{
"content": "You will be presented with a collection of medical randomized controlled trial (RCT) reports addressing a shared clinical question -- for instance, the efficacy of a medical intervention on a disease or condition. Given the titles and abstracts of these reports, your task is to generate the conclusion of the systematic review article synthesizing the key findings. Your conclusion should be a few sentences long. In your response, include only the conclusion and no other text. The RCT reports are below.\n\n\n\nTitle 1:\nQUARITE (quality of care, risk management and technology in obstetrics): a cluster-randomized trial of a multifaceted intervention to improve emergency obstetric care in Senegal and Mali\nAbstract 1:\nBackground Maternal and perinatal mortality are major problems for which progress in sub-Saharan Africa has been inadequate , even though childbirth services are available , even in the poorest countries . Reducing them is the aim of two of the main Millennium Development Goals . Many initiatives have been undertaken to remedy this situation , such as the Advances in Labour and Risk Management ( ALARM ) International Program , whose purpose is to improve the quality of obstetric services in low-income countries . However , few interventions have been evaluated , in this context , using rigorous methods for analyzing effectiveness in terms of health outcomes . The objective of this trial is to evaluate the effectiveness of the ALARM International Program ( AIP ) in reducing maternal mortality in referral hospitals in Senegal and Mali . Secondary goals include evaluation of the relationships between effectiveness and re source availability , service organization , medical practice s , and satisfaction among health personnel . Methods / Design This is an international , multi-centre , controlled cluster-r and omized trial of a complex intervention . The intervention is based on the concept of evidence -based practice and on a combination of two approaches aim ed at improving the performance of health personnel : 1 ) Educational outreach visits ; and 2 ) the implementation of facility-based maternal death review s . The unit of intervention is the public health facility equipped with a functional operating room . On the basis of consent provided by hospital authorities , 46 centres out of 49 eligible were selected in Mali and Senegal . Using r and omization stratified by country and by level of care , 23 centres will be allocated to the intervention group and 23 to the control group . The intervention will last two years . It will be preceded by a pre-intervention one-year period for baseline data collection . A continuous clinical data collection system has been set up in all participating centres . This , along with the inventory of re sources and the satisfaction surveys administered to the health personnel , will allow us to measure results before , during , and after the intervention . The overall rate of maternal mortality measured in hospitals during the post-intervention period ( Year 4 ) is the primary outcome . The evaluation will also include cost-effectiveness . Trial Registration The QUARITE trial is registered on the Current Controlled Trials website under the number IS RCT N46950658\n\n\nTitle 2:\nMeasuring the Quality of Physician Practice by Using Clinical Vignettes: A Prospective Validation Study\nAbstract 2:\nAccurate , affordable , and valid measurements of clinical practice are the basis for quality -of-care assessment s ( 1 ) . 
However , to date , most measurement tools have relied on incomplete data sources , such as medical records or administrative data ; require highly trained and expensive personnel to implement ; and are difficult to vali date ( 2 - 5 ) . Comparisons of clinical practice across different sites and health care systems are also difficult because they require relatively complex instrument design s or statistical techniques to adjust for variations in case mix among the underlying patient population s ( 6 , 7 ) . We have developed a measurement tool , computerized clinical vignettes , that overcomes these limitations and measures physicians ' clinical practice against a predefined set of explicit quality criteria . These vignettes simulate patient visits and can be given to physicians to measure their ability to evaluate , diagnose , and treat specific medical conditions . Each vignette-simulated case contains realistic clinical detail , allowing an identical clinical scenario to be presented to many physicians . Each physician can be asked to complete several vignettes to simulate diverse clinical conditions . This instrument design obviates the need to adjust quality scores for the variation in disease severity and comorbid conditions found in actual patient population s. Our vignettes are also distinct from other quality measurements of clinical practice because they do not focus on a single task , or even a limited set of tasks , but instead comprehensively evaluate the range of skills needed to care for a patient . Vignettes are particularly well-suited for quality assessment s of clinical practice that are used for large-scale ( 8 , 9 ) , cross-system comparisons ( 10 , 11 ) or for cases in which ethical issues preclude involving patients or their records ( 7 , 12 , 13 ) . They are also ideal for evaluations that require holding patient variation constant ( 14 , 15 ) or manipulating patient-level variables ( 15 - 17 ) . The appeal of vignettes has result ed in their extensive use in medical school education ( 18 , 19 ) , as well as various studies that explicitly evaluate the quality of clinical practice in real-life setting s and comparative analysis among national health care systems ( 10 , 20 - 23 ) . Before vignette-measured quality can be used confidently in these setting s , however , 2 important questions must be answered : How valid are vignettes as a measure of actual clinical practice ? Can vignettes discriminate among variations in the quality of clinical practice ? This has led to a search to define a gold st and ard for validation ( 24 - 26 ) . We and others have used st and ardized patients as this st and ard . St and ardized patients are trained actors who present unannounced to outpatient clinics as patients with a given clinical condition . Immediately after meeting with a physician , the st and ardized patient records on a checklist what the physician did during the visit ( 26 - 28 ) . Rigorous methods , which we have described in detail elsewhere ( 29 ) , ensure that st and ardized patients can be considered a gold st and ard . In addition , we have demonstrated the validity of st and ardized patients as a gold st and ard by concealing audio recorders on st and ardized patients during visits . The overall rate of agreement between the st and ardized patients ' checklists and the independent assessment of the audio transcripts was 91 % ( 26 ) . 
We previously used paper- and -pen vignettes in a study limited to only 1 health care system , the Veterans Administration , and found that they seemed to be a valid measure of the quality of clinical practice according to their rate of agreement with st and ardized patient checklists ( 26 ) . For this study , we wanted to confirm the validity of vignettes by using a more complex study design that introduced many more r and omly assigned physicians , a broader range of clinical cases , and several sites representing different health care systems . We also wanted to test a refined , computerized version of vignettes , which we believe are more realistic and streamline data collection and scoring . We were particularly interested in determining whether the vignettes accurately capture variation in the quality of clinical practice , which has become increasingly prominent in the national debate on quality of care ( 30 , 31 ) . We hoped that vignettes could contribute to this debate by providing a low-cost measure of variation across different health care systems . Methods Sites The study was conducted in 4 general internal medicine clinics : 2 Veterans Affairs ( VA ) medical centers and 2 large , private medical centers . One private site is a closed group model , and the other , primarily staffed by employed physicians , contracts with managed care plans . All sites are located in California , and each has an internal medicine residency training program . One VA medical center and 1 private site are located in 1 of 2 cities . The 2 VA medical centers are large , academically affiliated hospitals with large primary care general internal medicine practice s. We chose the 2 private sites that were generally similar to the VA medical centers and to each other ; each had large primary care practice s and capitated reimbursement systems that provide primary care general internists with a broad scope of clinical decision-making authority . Study Design At each site , all attending physicians and second- and third-year residents who were actively engaged in the care of general internal medicine out patients were eligible to participate in the study . We excluded only interns . Of 163 eligible physicians , 144 agreed to participate . We informed consenting physicians that 6 to 10 st and ardized patients might be introduced unannounced into their clinics over the course of a year and that they might be asked to complete an equal number of vignettes . Sixty physicians were r and omly selected to see st and ardized patients : 5 physicians from each of the 3 training levels at each of the 4 sites ( Figure 1 ) . We assigned st and ardized patients to each selected physician for 8 clinical casessimple and complex cases of chronic obstructive pulmonary disease , diabetes , vascular disease , and depression . We abstract ed the medical records from the 480 st and ardized patient visits . Each selected physician also completed a computerized clinical vignette for each of the 8 cases . For st and ardized patient visits that a selected physician did not complete , a replacement physician , who was r and omly selected from the same training level at the same site , completed the visit . Eleven physicians required replacements . The 11 replacement physicians completed 24 st and ardized patient visits . Each replacement physician completed vignettes for all 8 cases . Finally , we r and omly selected 45 additional physicians to serve as controls and complete vignettes ( only ) for all 8 cases . 
A total of 116 physicians participated in the study by seeing st and ardized patients , completing vignettes , or both . St and ardized patients presented to the clinics between March and July 2000 , and physicians completed vignettes between May and August 2000 . Figure 1 . Planned study design showing sites and physician sample by level of training and clinical case for the 3 quality measurement methods . Vignette Data Collection We developed the vignettes by using a st and ardized protocol . We first selected relatively common medical conditions frequently seen by internists . All selected conditions had explicit , evidence -based quality criteria and accepted st and ards of practice that could be used to score the vignettes , as well as be measured by st and ardized patients and chart abstract ion . We developed written scenarios that described a typical patient with 1 of the same 4 diseases ( chronic obstructive pulmonary disease , diabetes , vascular disease , or depression ) . For each disease , we developed a simple ( uncomplicated ) case and a more complex case with a comorbid condition of either hypertension or hypercholesterolemia . This yielded a total of 8 clinical cases . ( A sample vignette and scoring sheet are available online . ) Supplement . Appendix Figure : Vignette scoring sheet . Published online with permission from John W. Peabody , MD , PhD The physician completing the vignette sees the patient on a computer . Each vignette is organized into 5 sections , or domains , which , when completed in sequential order , recreate the normal sequence of events in an actual patient visit : taking the patient 's history , performing the physical examination , ordering radiologic or laboratory tests , making a diagnosis , and administering a treatment plan . For example , the computerized vignette first states the presenting problem to the physician and prompts the physician to take the patient 's history ( that is , ask questions that would determine the history of the present illness ; past medical history , including prevention ; and social history ) . Physicians can record components of the history in any order without penalty . The entire format is open-ended : The physician enters the history questions directly into the computer and , in the most recent computerized versions , receives realtime responses . When the history is completed , the computer confirms that the physician has finished and then provides key responses typical of a patient with the specific case . The same process is repeated for the 4 remaining domains . In addition to the open-ended format , we have taken 3 steps to avoid potential inflation of vignette scores . First , physicians are not allowed to return to a previous domain and change their queries after they have seen the computerized response . Second , the number of queries is limited in the history and physical examination domains . For example , in the physical examination domain , physicians are asked to list only the 6 to 10 essential elements of the examination that they would perform . Third , they are given limited time to complete the vignette ( just as time is limited during an actual patient visit\n\n\nTitle 3:\nUsing vignettes to compare the quality of clinical care variation in economically divergent countries.\nAbstract 3:\nOBJECTIVE To determine whether clinical vignettes can measure variations in the quality of clinical care in two economically divergent countries . 
DATA SOURCE / STUDY SETTING Primary data collected between February 1997 and February 1998 at two Veterans Affairs facilities in the United States and four government-run outpatient facilities in Macedonia . STUDY DESIGN R and omly selected , eligible Macedonian and U.S. physicians ( > 97 percent participation rate ) completed vignettes for four common outpatient conditions . Responses were judged against a master list of explicit quality criteria and scored as percent correct . DATA COLLECTION / EXTRACTION : An ANOVA model and two-tailed t-tests were used to compare overall scores by case , study site , and country . Principal Findings . The mean score for U.S. physicians was 67 percent ( + /-11 percent ) compared to 48 percent ( + /-11 percent ) for Macedonian physicians . The quality of clinical practice , which emphasizes basic skills , varied greatly in both sites , but more so in Macedonia . However , the top Macedonian physicians in all sites approached or-in one case-exceeded the median score in the U.S. sites . CONCLUSIONS Vignettes are a useful method for making cross-national comparisons of the quality of care provided in very different setting s. The vignette measurements revealed that some physicians in Macedonia performed at a st and ard comparable to that of their counterparts in the United States , despite the disparity of the two health systems . We infer that in poorer countries , policy that promotes improvements in the quality of clinical practice -not just structural inputs-could lead to rapid improvements in health",
"role": "user"
},
{
"content": "Re source availability positively correlated with mean hospital CBCA scores and maternal and perinatal mortality were inversely correlated with hospital CBCA scores .\nConclusion Our CBCA has substantial inter-rater reliability and there is compelling evidence of its validity as the tool performs according to theory .",
"role": "assistant"
}
] |
science.cdr_ner | science.cdr_ner.385 | [
{
"content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Chemical\", \"Disease\"].\n\nPlease return the output as a JSON object of the format: {\"Chemical\": [\"Naloxone\", ...], \"Disease\": [\"Hypotensive\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nNephrotoxicity of combined cephalothin-gentamicin regimen. Two patients developed acute tubular necrosis, characterized clinically by acute oliguric renal failure, while they were receiving a combination of cephalothin sodium and gentamicin sulfate therapy. Patients who are given this drug regimen should be observed very carefully for early signs of nephrotoxicity. High doses of this antibiotic combination should be avoided especially in elderly patients. Patients with renal insufficiency should not be given this regimen.",
"role": "user"
},
{
"content": "{\"Chemical\": [\"cephalothin\", \"cephalothin sodium\", \"gentamicin\", \"gentamicin sulfate\"], \"Disease\": [\"renal failure\", \"oliguric renal failure\", \"oliguric\", \"acute tubular necrosis\", \"nephrotoxicity\", \"Nephrotoxicity\", \"renal insufficiency\"]}",
"role": "assistant"
}
] |
science.acl_arc_intent_classification | science.acl_arc_intent_classification.1280 | [
{
"content": "You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:\n- Background: provides context or foundational information related to the topic.\n- Extends: builds upon the cited work.\n- Uses: applies the methods or findings of the cited work.\n- Motivation: cites the work as inspiration or rationale for the research.\n- CompareOrContrast: compares or contrasts the cited work with others.\n- FutureWork: cites the work as a direction for future research.\n\nYour answer should be a single word from the following list of options: [\"Background\", \"Extends\", \"Uses\", \"Motivation\", \"CompareOrContrast\", \"FutureWork\"]. Do not include any other text in your response.\n\nSection Title:\nmethod\n\nContext before the citation:\nIn addition to its explanatory capacity, this symbolic acquisition technique has obtained good results for other acquisition tasks when compared to existing statistical techniques (Bouillon et al., 2002). highlighting of the corpus-specific structures conveying the target element. understanding of the results, that is, why a specific element has been retrieved or not; 2.\n\nCitation Sentence:\nIn addition to its explanatory capacity , this symbolic acquisition technique has obtained good results for other acquisition tasks when compared to existing statistical techniques ( Bouillon et al. , 2002 ) .\n\nContext after the citation:",
"role": "user"
},
{
"content": "Motivation",
"role": "assistant"
}
] |
science.qasper_extractive_qa | science.qasper_extractive_qa.269.0 | [
{
"content": "You will be shown sections from a scientific research paper, together with a question about the paper. This is an extractive question-answering task, where you must find and extract relevant text spans directly from the paper to answer the question. Your response should strictly be a `json` object with two fields:\n\n\"answer\": An array of strings extracted directly from the paper which, collectively, answer the question.\n\"evidence\": An array of strings. Each should be an excerpt from the paper, in which one or more of the extracted answers can be found.\n\nFor example, for the question \"What baselines did the authors compare against?\", a sample response might be:\n{\n \"answer\": [\"BERT\", \"RoBERTa\"],\n \"evidence\": [\"In our experiments, we compare the performance of our model against BERT and RoBERTa.\"]\n}\n\nDo not include any text in your response other than the json.\n\nIf the question is unanswerable given the provided excerpts, respond with the single word \"null\".\n\nPaper: Contextualized Word Embeddings Enhanced Event Temporal Relation Extraction for Story Understanding\n\nLearning causal and temporal relationships between events is an important step towards deeper story and commonsense understanding. Though there are abundant datasets annotated with event relations for story comprehension, many have no empirical results associated with them. In this work, we establish strong baselines for event temporal relation extraction on two under-explored story narrative datasets: Richer Event Description (RED) and Causal and Temporal Relation Scheme (CaTeRS). To the best of our knowledge, these are the first results reported on these two datasets. We demonstrate that neural network-based models can outperform some strong traditional linguistic feature-based models. We also conduct comparative studies to show the contribution of adopting contextualized word embeddings (BERT) for event temporal relation extraction from stories. Detailed analyses are offered to better understand the results.\n\nIntroduction\nEvent temporal relation understanding is a major component of story/narrative comprehension. It is an important natural language understanding (NLU) task with broad applications to downstream tasks such as story understanding BIBREF0 , BIBREF1 , BIBREF2 , question answering BIBREF3 , BIBREF4 , and text summarization BIBREF5 , BIBREF6 .\nThe goal of event temporal relation extraction is to build a directed graph where nodes correspond to events, and edges reflect temporal relations between the events. Figure FIGREF1 illustrates an example of such a graph for the text shown above. Different types of edges specify different temporal relations: the event assassination is before slaughtered, slaughtered is included in rampage, and the relation between rampage and war is vague.\nModeling event temporal relations is crucial for story/narrative understanding and storytelling, because a story is typically composed of a sequence of events BIBREF7 . Several story corpora are thus annotated with various event-event relations to understand commonsense event knowledge. CaTeRS BIBREF8 is created by annotating 320 five-sentence stories sampled from ROCStories BIBREF7 dataset. 
RED BIBREF9 contains annotations of rich relations between event pairs for storyline understanding, including co-reference and partial co-reference relations, temporal; causal, and sub-event relations.\nDespite multiple productive research threads on temporal and causal relation modeling among events BIBREF10 , BIBREF11 , BIBREF12 and event relation annotation for story understanding BIBREF8 , the intersection of these two threads seems flimsy. To the best of our knowledge, no event relation extraction results have been reported on CaTeRS and RED.\nWe apply neural network models that leverage recent advances in contextualized embeddings (BERT BIBREF13 ) to event-event relation extraction tasks for CaTeRS and RED. Our goal in this paper is to increase understanding of how well the state-of-the-art event relation models work for story/narrative comprehension.\nIn this paper, we report the first results of event temporal relation extraction on two under-explored story comprehension datasets: CaTeRS and RED. We establish strong baselines with neural network models enhanced by recent breakthrough of contextualized embeddings, BERT BIBREF13 . We summarize the contributions of the paper as follows:\n\nModels\nWe investigate both neural network-based models and traditional feature-based models. We briefly introduce them in this section.\n\nData\nis created by annotating 1600 sentences of 320 five-sentence stories sampled from ROCStories BIBREF7 dataset. CaTeRS contains both temporal and causal relations in an effort to understand and predict commonsense relations between events.\nAs demonstrated in Table TABREF16 , we split all stories into 220 training and 80 test. We do not construct the development set because the dataset is small. Note that some relations have compounded labels such as “CAUSE_BEFORE”, “ENABLE_BEFORE”, etc. We only take the temporal portion of the annotations.\nannotates a wide range of relations of event pairs including their coreference and partial coreference relations, and temporal, causal and subevent relationships. We split data according to the standard train, development, test sets, and only focus on the temporal relations.\nThe common issue of these two datasets is that they are not densely annotated – not every pair of events is annotated with a relation. We provide one way to handle negative (unannotated) pairs in this paper. When constructing negative examples, we take all event pairs that occur within the same or neighboring sentences with no annotations, labeling them as “NONE”. The negative to positive samples ratio is 1.00 and 11.5 for CaTeRS and RED respectively. Note that RED data has much higher negative ratio (as shown in Table TABREF16 ) because it contains longer articles, more complicated sentence structures, and richer entity types than CaTeRS where all stories consist of 5 (mostly short) sentences.\nIn both the development and test sets, we add all negative pairs as candidates for the relation prediction. During training, the number of negative pairs we add is based on a hyper-parameter that we tune to control the negative-to-positive sample ratio.\nTo justify our decision of selecting negative pairs within the same or neighboring sentences, we show the distribution of distances across positive sentence pairs in Table TABREF18 . 
Although CaTeRS data has pair distance more evenly distributed than RED, we observe that the vast majority (85.87% and 93.99% respectively) of positive pairs have sentence distance less than or equal to one.\nTo handle negative pairs that are more than two sentences away, we automatically predict all out-of-window pairs as “NONE”. This means that some positive pairs will be automatically labeled as negative pairs. Since the percentage of out-of-window positive pairs is small, we believe the impact on performance is small. We can investigate expanding the prediction window in future research, but the trade-off is that we will get more negative pairs that are hard to predict.\n\nImplementation Details\nCAEVO consists of both linguistic-rule-based sieves and feature-based trainable sieves. We train CAEVO sieves with our train set and evaluate them on both dev and test sets. CAEVO is an end-to-end system that automatically annotates both events and relations. In order to resolve label annotation mismatch between CAEVO and our gold data, we create our own final input files to CAEVO system. Default parameter settings are used when running the CAEVO system.\nIn an effort of building a general model and reducing the number of hand-crafted features, we leverage pre-trained (GloVe 300) embeddings in place of linguistic features. The only linguistic feature we use in our experiment is token distance. We notice in our experiments that hidden layer size, dropout ratio and negative sample ratio impact model performance significantly. We conduct grid search to find the best hyper-parameter combination according to the performance of the development set.\nNote that since the CaTeRS data is small and there is no standard train, development, and test splits, we conduct cross-validation on training data to choose the best hyper-parameters and predict on test. For RED data, the standard train, development, test splits are used.\nAs we mentioned briefly in the introduction, using BERT output as word embeddings could provide an additional performance boost in our NN architecture. We pre-process our raw data by feeding original sentences into a pre-trained BERT model and output the last layer of BERT as token representations. In this experiment, we fix the negative sample ratio according to the result obtained from the previous step and only search for the best hidden layer size and dropout ratio.\n\nResult and Analysis\nTable TABREF25 contains the best hyper-parameters and Table TABREF26 contains micro-average F1 scores for both datasets on dev and test sets. We only consider positive pairs, i.e. correct predictions on NONE pairs are excluded for evaluation. In general, the baseline model CAEVO is outperformed by both NN models, and NN model with BERT embedding achieves the greatest performance. We now provide more detailed analysis and discussion for each dataset.\n\nTemporal Relation Data\nCollecting dense TempRel corpora with event pairs fully annotated has been reported challenging since annotators could easily overlook some pairs BIBREF18 , BIBREF19 , BIBREF10 . TimeBank BIBREF20 is an example with events and their relations annotated sparsely. TB-Dense dataset mitigates this issue by forcing annotators to examine all pairs of events within the same or neighboring sentences. 
However, densely annotated datasets are relatively small both in terms of number of documents and event pairs, which restricts the complexity of machine learning models used in previous research.\n\nFeature-based Models\nThe series of TempEval competitions BIBREF21 , BIBREF22 , BIBREF23 have attracted many research interests in predicting event temporal relations. Early attempts by BIBREF24 , BIBREF21 , BIBREF25 , BIBREF26 only use pair-wise classification models. State-of-the-art local methods, such as ClearTK BIBREF27 , UTTime BIBREF28 , and NavyTime BIBREF29 improve on earlier work by feature engineering with linguistic and syntactic rules. As we mention in the Section 2, CAEVO is the current state-of-the-art system for feature-based temporal event relation extraction BIBREF10 . It's widely used as the baseline for evaluating TB-Dense data. We adopt it as our baseline for evaluating CaTeRS and RED datasets. Additionally, several models BramsenDLB2006, ChambersJ2008, DoLuRo12, NingWuRo18, P18-1212 have successfully incorporated global inference to impose global prediction consistency such as temporal transitivity.\n\nNeural Network Model\nNeural network-based methods have been employed for event temporal relation extraction BIBREF14 , BIBREF15 , BIBREF16 , BIBREF12 which achieved impressive results. However, the dataset they focus on is TB-Dense. We have explored neural network models on CaTeRS and RED, which are more related to story narrative understanding and generation.\nIn our NN model, we also leverage Bidrectional Encoder Representations from Transformers (BERT) BIBREF30 which has shown significant improvement in many NLP tasks by allowing fine-tuning of pre-trained language representations. Unlike the Generative Pre-trained Transformer (OpenAI GPT) BIBREF31 , BERT uses a biderctional Transformer BIBREF32 instead of a unidirectional (left-to-right) Transformer to incorporate context from both directions. As mentioned earlier, we do not fine-tune BERT in our experiments and simply leverage the last layer as our contextualized word representations.\n\nConclusion\nWe established strong baselines for two story narrative understanding datasets: CaTeRS and RED. We have shown that neural network-based models can outperform feature-based models with wide margins, and we conducted an ablation study to show that contextualized representation learning can boost performance of NN models. Further research can focus on more systematic study or build stronger NN models over the same datasets used in this work. Exploring possibilities to directly apply temporal relation extraction to enhance performance of story generation systems is another promising research direction.\n\nAcknowledgement\nWe thank the anonymous reviewers for their constructive comments, as well as the members of the USC PLUS lab for their early feedback. This work is supported by Contract W911NF-15-1-0543 with the US Defense Advanced Research Projects Agency (DARPA).\n\nFigure 1: An example paragraph with its (partial) temporal graphs. Some events are removed for clarity.\n\nFigure 2: Deep neural network architecture for event relation prediction\n\nTable 2: Token sentence distance breakdown. 0: a pair of events in the same sentence; 1: a pair of events in the neighboring sentence (2 sentence span); 2: a pair of events in 3 sentence span, etc.\n\nTable 1: Data overview: the number of documents in CaTeRS refers to the number of stories. 
“Negative” denotes negative pairs (missing annotations) within two sentence span we construct for the whole dataset.\n\nTable 3: Best hyper-parameters: C: controls for the strength of L1 penalty; balanced: is a binary indicator of whether training on “balanced” labels; max iter: early stopping criteria.\n\nTable 4: F1 Scores on development and test set for the two datasets. Note for CaTeRS data, we didn’t conduct cross-validation on CAEVO, but instead train the model with default parameter settings. Hence the dev performance doesn’t apply here.\n\nFigure 3: NN model (with GloVe embedding) performance with different negative sample ratio for CaTeRS.\n\nTable 5: NN performances with GloVe and BERT embeddings respectively.\n\nFigure 4: NN model (with GloVe embedding) performance with different negative sample ratio for RED.\n\nTable 6: Examples of temporal relations misclassified with GloVe embedding but correct with BERT embedding.\n\nQuestion: What conclusions do the authors draw from their detailed analyses?",
"role": "user"
},
{
"content": "{\"answer\": [\"neural network-based models can outperform feature-based models with wide margins\", \"contextualized representation learning can boost performance of NN models\"], \"evidence\": [\"We established strong baselines for two story narrative understanding datasets: CaTeRS and RED. We have shown that neural network-based models can outperform feature-based models with wide margins, and we conducted an ablation study to show that contextualized representation learning can boost performance of NN models. Further research can focus on more systematic study or build stronger NN models over the same datasets used in this work. Exploring possibilities to directly apply temporal relation extraction to enhance performance of story generation systems is another promising research direction.\"]}",
"role": "assistant"
}
] |
science.scientific_papers_summarization_single_doc_arxiv | science.scientific_papers_summarization_single_doc_arxiv.8772 | [
{
"content": "Write an abstract for the research paper below. Your response should only include the abstract and no additional text.\n\nArticle text:\ndistributions of matter in form of shells have been a useful tool in astrophysics and general relativity , often providing simplified but analytically tractable models in cosmology , gravitational collapse and supernovae . in general relativity ,\nthe pioneering work on constructing solutions of the einstein field equations representing shells was done by israel @xcite . in his formalism , the shells are constructed by joining two manifolds across a three - dimensional time - like surface , and the matter representing the shell is interpreted as a discontinuity of the extrinsic curvature on the surface .\nthis formalism has been used to study several models of relativistic shells . @xcite\ngive a brief review of some of these models and also discuss shells which admit closed equations of state , including non - linear polytropic models .\nan exact solution of a static shell surrounding a black hole was obtained by @xcite , and the stability of such a configuration against radial perturbations was studied by @xcite , while non - radial perturbations were analysed by @xcite .\nnon - radial perturbations of self - gravitating static fluid shells in the newtonian gravity were studied by @xcite . in this work ,\nwe propose a newtonian analytical potential - density pair that represents a family of spherical shells with varying thickness .\nfor this potential , we calculate the rotation curves of test particles and make a first study of stability of the shells by considering the stability of circular orbits of the test particles .\nthis is done in section [ sec_ns ] . in section [ sec_gr ]\n, we consider a general relativistic version of the newtonian shells . for this task , we employ a particular isotropic form of a metric in spherical coordinates .\nwe calculate the components of the energy - momentum tensor of the shell s matter and also analyse the geodesic circular motion of test particles and discuss their stability .\nfinally , in section [ sec_dis ] we summarize our results .\nin this section , we present a simple potential - density pair that describes a family of spherical shells of matter\n. we shall consider the following potential : @xmath0 where @xmath1 is a parameter with dimensions of length and @xmath2 is a non - negative integer . by using the poisson equation in spherical coordinates ,\n@xmath3 one obtains a mass density given by @xmath4 for @xmath5 , the pair ( [ eq_phi_n ] ) and ( [ eq_rho_n ] ) is reduced to the plummer model @xcite that has a monotone decreasing density profile .\nhowever , for @xmath6 the density vanishes on @xmath7 and thus we have a shell - like distribution of matter . fig .\n[ fig1](a ) shows some curves of the dimensionless mass density @xmath8 as function of @xmath9 for @xmath10 , @xmath11 and @xmath12 . for higher values of @xmath2 ,\nthe mass density becomes more concentrated ( thinner shell ) , with maximum at @xmath13 for large values of @xmath14 , the density ( [ eq_phi_n ] ) decays as @xmath15 , thus very fast for large @xmath2 , as can be seen in fig .\n[ fig1](a ) .\n( equation [ eq_rho_n ] ) , as function of @xmath9 for @xmath10 , @xmath11 and @xmath12 .\n( b ) the circular velocity @xmath16 ( equation [ eq_vc_n ] ) , and ( c ) the angular momentum @xmath17 ( equation [ eq_h_n ] ) as functions of @xmath9 for @xmath10 , @xmath11 and @xmath12 . 
] the circular velocity @xmath18 and angular momentum per unit mass @xmath19 of test particles moving in circular orbits are given by @xmath20 curves of the circular velocity @xmath16 and of the angular momentum @xmath17 for @xmath10 , @xmath11 and @xmath12 are depicted in figs [ fig1](b ) and ( c ) , respectively .\nthe stability of circular orbits can be determined using the rayleigh criterion of stability for a rotating fluid ( @xcite ; see also @xcite ) , and that can be adapted to study circular orbits of test particles in self - gravitating systems ( see e. g. @xcite for an application of the rayleigh criterion of stability in general relativity ) .\nthe condition for a stable orbit is @xmath21 using equation ( [ eq_h_n ] ) , it is straigthforward to check that the orbits are always stable .\nit is instructive to consider the limit @xmath22 . in this case\n, we have an infinitesimal thin shell on @xmath23 , and the limits of the potential ( [ eq_phi_n ] ) and circular velocity ( [ eq_vc_n ] ) are @xmath24 thus , for @xmath25 the potential is constant and a test particle is at rest inside the shell , and if @xmath26 the shell exerts on the test particle the same gravitational force as a point mass located on @xmath7 , in agreement with newton s shell theorem .\nnow , we study a general relativistic version of the newtonian potential - density pair discussed in the previous section . for this , it is convenient to choose a particular metric in isotropic form in spherical coordinates @xmath27 , which we write as @xmath28 where @xmath29 . in this particular form of metric\n, the schwarzschild solution is given by @xmath30 .\na similar form of the metric ( [ eq_metric ] ) in cylindrical coordinates was used by @xcite to construct general relativistic versions of newtonian models for distributions of matter in galaxies . for metric ( [ eq_metric ] ) ,\nthe einstein equations @xmath31 yield the following expressions for the non - zero components of the energy - momentum tensor @xmath32 : @xmath33 \\mbox{. } \\label{eq_tthth}\\end{gathered}\\ ] ] because @xmath32 has only diagonal non - zero components , the energy density is directly given by @xmath34 and the pressures or tensions along a direction @xmath35 read @xmath36 .\nthe ` effective newtonian density ' @xmath37 reads @xmath38 the relation between the function @xmath39 and the newtonian potential @xmath40 is obtained by comparing equation ( [ eq_rho_n1 ] ) or ( [ eq_ttt ] ) with equation ( [ eq_poisson ] ) in the non - relativistic limit when @xmath41 , @xmath42 now , using equation ( [ eq_f_phi ] ) with the potential ( [ eq_phi_n ] ) , equations ( [ eq_ttt])([eq_rho_n1 ] ) can be cast as @xmath43 where the dimensionless variables and parameters are as follows : @xmath44 , @xmath45 , @xmath46 , @xmath47 , @xmath48 , and where we defined @xmath49 and @xmath50 .\nwe assume again that @xmath6 . for @xmath5 , equations ( [ eq_eps])([eq_rho_n2 ] ) reduce to the perfect fluid sphere derived by @xcite , which has as equation of state similar to the classical polytrope of index 5 .\nnote that the relation between @xmath51 and @xmath52 is independent of the radius and is simply @xmath53 . 
in order to be physically meaningful\n, the components of the energy - momentum tensor should satisfy the energy conditions .\nthe strong energy condition states that @xmath54 , whereas the weak energy condition imposes the condition @xmath55 .\nthe dominant energy condition requires @xmath56 .\nequations ( [ eq_eps])([eq_rho_n2 ] ) show that the weak energy condition is always satisfied , and the strong condition holds if @xmath57 .\nthis condition also ensures that we have pressures everywhere .\nthe inequality @xmath58 is satisfied when @xmath59 , and we have @xmath60 if @xmath61 .\nthus , there exists an interval of the parameter @xmath62 where all energy conditions are satisfied .\n( equation [ eq_eps ] ) , ( b ) the effective newtonian density @xmath47 ( equation [ eq_rho_n2 ] ) , ( c ) the radial pressure @xmath63 ( equation [ eq_p_r ] ) and ( d ) the azimuthal / polar pressure @xmath64 ( equation [ eq_p_phi ] ) as functions of @xmath44 for @xmath10 , and parameter @xmath65 , @xmath66 and @xmath67 . ] ( equation [ eq_eps ] ) , ( b ) the effective newtonian density @xmath47 ( equation [ eq_rho_n2 ] ) , ( c ) the radial pressure @xmath63 ( equation [ eq_p_r ] ) and ( d ) the azimuthal / polar pressure @xmath64 ( equation [ eq_p_phi ] ) as functions of @xmath44 for @xmath11 , and parameter @xmath68 , @xmath67 and @xmath69 . ] figs [ fig2](a)(d ) display , respectively , some curves of the energy density @xmath70 , the effective newtonian density @xmath71 , the radial pressure @xmath51 and the azimuthal / polar pressure @xmath52 as functions of the radial coordinate for the shell with @xmath10 and parameter values @xmath72 , @xmath66 and @xmath67 . in figs [ fig3](a)(d ) , the same quantities are displayed for the shell with @xmath11 and parameter values @xmath73 , @xmath67 and @xmath69 .\nall quantities have the profile of a shell - like distribution of matter . as the value of the parameter @xmath62\nis increased , the densities and pressures are smoothed out .\nthe densities and pressures become more concentrated when @xmath2 is increased . as we did in the newtonian model\n, we also study the circular orbits and stability of test particles . without loss of generatity , if the geodesic motion is confined on the @xmath74 plane , the expressions for the circular velocity and for the angular momentum are formally the same as in cylindrical coordinates , and we have @xcite @xmath75 } } \\mbox{. } \\label{eq_h_r1}\\end{gathered}\\ ] ] for the shell potential ( [ eq_phi_n ] ) , we get @xmath76 where @xmath77 , @xmath78 and @xmath50 .\n( equation [ eq_vc_r2 ] ) , as a function of @xmath44 for the relativistic shell model with ( a ) @xmath10 and ( b ) @xmath11 .\n( c)(d ) the angular momentum @xmath78 ( equation [ eq_h_r2 ] ) , as a function of @xmath44 for the relativistic shell model with ( c ) @xmath10 and ( d ) @xmath11 . ]\nsome curves of the circular velocity @xmath77 and angular momentum @xmath78 for relativistic shells are shown in figs [ fig4](a)(d ) . in figs [ fig4](a)(b ) , the curves of rotation are displayed for the shells with @xmath10 and @xmath11 , respectively , and the corresponding curves of angular momentum are plotted in figs [ fig4](c)(d ) . 
as the values of the parameter @xmath45 is increased , the velocities becomes less relativistic .\nthe minimum value of @xmath62 such that the rotation curve is sublumial has to be found numerically .\ntable [ tab1 ] shows some approximate minimum values for @xmath62 for shells with @xmath10 to @xmath12 .\nnote that these minimum values are always larger than those required by the energy conditions .\n.approximate minimum values of the parameter @xmath45 that ensures sublumial rotation curves for some values of @xmath2 .\n[ cols=\"<,^,^,^,^,^,^,^ \" , ] as functions of @xmath79 and @xmath62 for circular geodesic orbits in shells with @xmath10 to @xmath11 . ]\nthe curves of angular momentum in figs [ fig4](c)(d ) show regions of instability at high relativistic velocities . in the newtonian limit ,\nthe orbits are always stable , thus this instability is a purely relativistic effect . using the rayleigh criterion of stability , curves of @xmath80 as functions of @xmath79 and @xmath62 are shown in fig .\n[ fig5 ] for models of shells with @xmath2 from @xmath10 to @xmath11 . in each graph\n, the minimum value of @xmath62 is that given in table [ tab1 ] . in the inner part of the curves ,\norbits are unstable .\nthe regions of instability become larger as the parameter @xmath2 is increased .\nwe proposed a very simple potential - density pair in newtonian gravity that describes a family of spherical shells whose thickness is function of a parameter @xmath2 . in the limit @xmath22 ,\none obtains an infinitesimal thin shell .\nthis family of shells is found to be stable by the rayleigh criterion of stability .\nwe also studied a general relativistic version of the newtonian model of shells by using a particular metric in isotropic form in spherical coordinates .\nthese relativistic shells have radial pressures that are different from the equal azimuthal and polar pressures , but are proportional to for a fixed value of @xmath2 .\nalso , parameters can be chosen such that the matter in the shells satisfies all the energy conditions .\nthe rayleigh criterion of stability shows that the relativistic shells have regions of unstable circular orbits of test particles , and this instability is a pure relativistic effect .\nd. v. thanks fapesp for financial support , and p. s. l. thanks fapesp and cnpq for financial support .\n99 israel w. , 1966 , nuovo cimento b , 44 , 1 nuovo cimento b , 48 , 463 ( erratum ) .\nkijowski j. , magli g. , malafarina d. , 2006 , gen .\ngravitation , 38 , 1697 .\nfrauendiener j. , hoenselaers c. , konrad w. , 1990 , classical quantum gravity , 7 , 585 .\nbrady p.r . , louko j. , poisson e. , 1991 , phys . rev .\nd , 44 , 1891 .\nschmidt b.g . , 1998 ,\nd , 59 , 024005 .\nbik j. , schmidt b.g .\n, 1999 , apj , 521 , 708 .\nplummer h.c . , 1911 , mnras , 71 , 460 .\nbinney j. , tremaine s. , 2008 , galactic dynamics , 2nd edn .\nprinceton univ . press ,\nprinceton , nj . rayleigh j.w.s . , 1917 ,\nlondon a , 93 , 148 .\nlandau l.d .\n, lifshitz e.m . , fluid mechanics , 2nd edn .\npergamon press , oxford .\nletelier p.s . , 2003 ,\nd , 68 , 104002 .\nvogt d. , letelier p.s . , 2005 , mnras , 363 , 268 .\nbuchdahl h.a . , 1964 ,\napj , 140 , 1512 .",
"role": "user"
},
{
"content": "a family of spherical shells with varying thickness is derived by using a simple newtonian potential - density pair . \n then , a particular isotropic form of a metric in spherical coordinates is used to construct a general relativistic version of the newtonian family of shells . \n the matter of these relativistic shells presents equal azimuthal and polar pressures , while the radial pressure is a constant times the tangential pressure . \n we also make a first study of stability of both the newtonian and relativistic families of shells . \n _ keywords _ : gravitation .",
"role": "assistant"
}
] |
science.data_reco_mcq_mc | science.data_reco_mcq_mc.537 | [
{
"content": "You are provided with a research question, keyphrases about the question, a description of candidate datasets and dataset options. Read the description of popular datasets provided below and select the ones that can be used to validate the following research question. Use your knowledge of machine learning datasets to make the best judgement.\nYour response should be formatted as a `json` array. For instance, for the query \"Semi supervised image classification\", a sample response might be: [\"CIFAR-10\", \"CIFAR-100\"]. Do not include any extra text in the response other than the answer array.\n\nQuery: I want to train a model for detailed face reconstruction from monocular video or a single\n\nKeyphrases: face reconstruction monocular video single image\n\nDataset description:\nMake3D: The Make3D dataset is a monocular Depth Estimation dataset that contains 400 single training RGB and depth map pairs, and 134 test samples. The RGB images have high resolution, while the depth maps are provided at low resolution.\nMuCo-3DHP: MuCo-3DHP is a large scale training data set showing real images of sophisticated multi-person interactions and occlusions.\n300W: The 300-W is a face dataset that consists of 300 Indoor and 300 Outdoor in-the-wild images. It covers a large variation of identity, expression, illumination conditions, pose, occlusion and face size.\nAFLW: The Annotated Facial Landmarks in the Wild (AFLW) is a large-scale collection of annotated face images gathered from Flickr, exhibiting a large variety in appearance (e.g., pose, expression, ethnicity, age, gender) as well as general imaging and environmental conditions. In total about 25K faces are annotated with up to 21 landmarks per image.\nFaceForensics: FaceForensics is a video dataset consisting of more than 500,000 frames containing faces from 1004 videos that can be used to study image or video forgeries. All videos are downloaded from Youtube and are cut down to short continuous clips that contain mostly frontal faces.\n\nOptions:- A: Make3D B: MuCo-3DHP C: 300W D: AFLW E: FaceForensics",
"role": "user"
},
{
"content": "[\"300W\", \"AFLW\"]",
"role": "assistant"
}
] |
science.bioasq_general_qa | science.bioasq_general_qa.1018 | [
{
"content": "You will be shown a question, followed by excerpts from biomedical research papers. Please answer the question based on the provided context. Do not include any text in your response other than the answer.\n\nQuestion: What is Etizolam?\n\nContext: Etizolam is a thienodiazepine derivative, with high affinity for the benzodiazepine site of GABAA receptors. It is often referred to as a new (or novel) psychoactive substance, a 'designer' benzodiazepine or a 'street benzodiazepine'.\nEtizolam is a novel psychoactive substance and novel benzodiazepine of the thienotriazolodiazepine class that has recently seen an increasing trend in use worldwide.\nEtizolam is a benzodiazepine analogue that is approved for use in Japan, Italy and India\nEtizolam is a drug from the thienotriazoldiazepine class, widely prescribed as anxiolytic due to its apparently secure toxicological profile.\nThe prevalence of benzodiazepine consumption in Japan is one of the highest worldwide. Etizolam is the most abused drug of the benzodiazepine class.\nEtizolam is an anxiolytic drug with a pharmacologic profile similar to that of the classic benzodiazepines. Neurochemical research suggests that etizolam may have selectivity for the subpopulation of Y-aminobutyric acid type A receptors associated with anxiety (ie, alpha1, beta2, gamma2).\nEtizolam is a novel psychoactive substance and novel benzodiazepine of the thienotriazolodiazepine class that has recently seen an increasing trend in use worldwide. W\nISSUES: Etizolam is a thienodiazepine derivative, with high affinity for the benzodiazepine site of GABAA receptors. It is often referred\nEtizolam is a drug from the thienotriazoldiazepine class, widely prescribed as anxiolytic due to its apparently secure toxicological profile.\nEtizolam is a benzodiazepine analogue that is approved for use in Japan, Italy and India but has recently appeared as a nonapproved product on the illicit drug market in Europe and North America. Et\nEtizolam is a thienodiazepine that although licensed for clinical usage in Japan, India and South Korea is commonly abused and detected in post-mortem cases around the world. T\nr the years. Etizolam is a designer benzodiazepine (BZD) that has raised concern because of its growing non-medical use, liability to tolerance and dependence, and related\nEtizolam is a novel psychoactive substance and novel benzodiazepine of the thienotriazolodiazepine class that has recently seen an increasing trend in use worldwide. We\nEtizolam is a drug from the thienotriazoldiazepine class, widely prescribed as anxiolytic due to its apparently secure toxicological profile. N\nEtizolam is a thienodiazepine anxiolytic which is said to have lower dependence potential than other benzodiazepines. 
W\nISSUES: Etizolam is a thienodiazepine derivative, with high affinity for the benzodiazepine site of GABAA rec\nBACKGROUND: Etizolam is an anxiolytic drug with a pharmacologic profile similar to that of the classic benzod\nEtizolam (ETZ) is an antidepressive thienodiazepine drug that is used worldwide.\nBACKGROUND: Etizolam is an anxiolytic drug with a pharmacologic profile similar to that of the classic benz\nEtizolam is a thienodiazepine that although licensed for clinical usage in Japan, India and South Korea is commonly abused and detected in post-mortem cases around the world.\nEtizolam is a benzodiazepine-based anti-anxiety agent.\nISSUES: Etizolam is a thienodiazepine derivative, with high affinity for the benzodiazepine site of GABAA re\nEtizolam is a benzodiazepine analogue that is approved for use in Japan, Italy and India but has recently appeared as a nonapproved product on the illicit drug market in Europe and North America.\nEtizolam in the treatment of generalized anxiety disorder associated with depressive symptoms.\nEtizolam is a designer benzodiazepine (BZD) that has raised concern because of its growing non-medical use, liability to tolerance and dependence, and related harms.\nEtizolam is a thienodiazepine anxiolytic which is said to have lower dependence potential than other benzodiazepines.\nEtizolam, an anti-anxiety agent which is an antagonist of platelet-activating factor receptors, was administered to patients with chronic subdural hematoma (CSH) after hematoma removal to assess the effectiveness for preventing recurrence compared with control patients not given the drug after surgery.\nEtizolam, an anti-anxiety agent, attenuates recurrence of chronic subdural hematoma--evaluation by computed tomography.",
"role": "user"
},
{
"content": "Etizolam is a benzodiazepine analogue that is approved for use in Japan, Italy and India as an anxiolytic drug with a pharmacologic profile similar to that of the classic benzodiazepines. Neurochemical research suggests that etizolam may have selectivity for the subpopulation of Y-aminobutyric acid type A receptors associated with anxiety (ie, alpha1, beta2, gamma2).",
"role": "assistant"
}
] |
science.scientific_lay_summarisation_plos_single_doc_summ | science.scientific_lay_summarisation_plos_single_doc_summ.19976 | [
{
"content": "You will be presented with the abstract, introduction, and discussion section from a biomedical research article. Your task is to create a summary that a layperson can understand, capturing the essence of the research article. Keep essential scientific terms, but ensure that the language remains clear and the concepts are explained in an uncomplicated manner.\n\nTitle: A Neurotoxic Glycerophosphocholine Impacts PtdIns-4, 5-Bisphosphate and TORC2 Signaling by Altering Ceramide Biosynthesis in Yeast\nArticle:\nAbstract:\nUnbiased lipidomic approaches have identified impairments in glycerophosphocholine second messenger metabolism in patients with Alzheimer's disease. Specifically, we have shown that amyloid-β42 signals the intraneuronal accumulation of PC( O-16:0/2:0) which is associated with neurotoxicity. Similar to neuronal cells, intracellular accumulation of PC( O-16:0/2:0) is also toxic to Saccharomyces cerevisiae, making yeast an excellent model to decipher the pathological effects of this lipid. We previously reported that phospholipase D, a phosphatidylinositol-4, 5-bisphosphate( PtdIns( 4, 5) P2)-binding protein, was relocalized in response to PC( O-16:0/2:0), suggesting that this neurotoxic lipid may remodel lipid signaling networks. Here we show that PC( O-16:0/2:0) regulates the distribution of the PtdIns( 4) P 5-kinase Mss4 and its product PtdIns( 4, 5) P2 leading to the formation of invaginations at the plasma membrane( PM). We further demonstrate that the effects of PC( O-16:0/2:0) on the distribution of PM PtdIns( 4, 5) P2 pools are in part mediated by changes in the biosynthesis of long chain bases( LCBs) and ceramides. A combination of genetic, biochemical and cell imaging approaches revealed that PC( O-16:0/2:0) is also a potent inhibitor of signaling through the Target of rampamycin complex 2( TORC2). Together, these data provide mechanistic insight into how specific disruptions in phosphocholine second messenger metabolism associated with Alzheimer's disease may trigger larger network-wide disruptions in ceramide and phosphoinositide second messenger biosynthesis and signaling which have been previously implicated in disease progression.\nIntroduction:\nRemodeling of lipid species is required for maintaining normal cellular function and disruptions in lipid homeostasis are believed to contribute to aberrant cellular processes and toxicity associated with specific diseases[1]. Although significant advances have been made in characterizing the changes in lipid composition that occur in pathological conditions, it has proven difficult to connect these changes with relevant signaling networks that regulate cellular growth and viability. This is especially true for Alzheimer's disease( AD) for which there is increasing evidence that lipid dyshomeostatsis is playing a central role in the disease progression[2],[3]. Recent lipidomic studies on both post mortem brain tissue and AD mouse models have not only detected dramatic changes in lipid species of most of the major lipid subclasses including ceramides, cholesterols, sphingolipids, phosphatidic acids and glycerophospholipids, but have also reported the presence of distinct changes between regions of the brain[4]. Although these dramatic alterations in lipid homeostasis correlate with the disease, it is imperative to identify the specific subspecies that are critical in contributing to the AD pathology by identifying their impact on signaling networks, which contribute to cellular toxicity. 
One lipid metabolite with neurotoxic properties that is of particular interest in AD is 1-O-hexadecyl-2-acetyl-sn-glycerophosphocholine or PC( O-16:0/2:0), also known as C16:0 Platelet Activating Factor( PAF). We have shown that amyloid-β42 signals the intraneuronal accumulation of PC( O-16:0/2:0) in AD and that this lipid second messenger, in turn, signals tau-hyperphosphorylation and induces caspase-dependent cell death independently of the G-protein coupled PAF receptor( PAFR)[5]–[7]. However, the underlying signaling pathways mediating the receptor-independent toxicity of PC( O-16:0/2:0) remain enigmatic. The budding yeast Saccharomyces cerevisiae has been a valuable tool for identifying basic elements of lipid signaling networks associated with diseases as many of the fundamental processes of lipid metabolism and signaling are remarkably well conserved with mammalian cells[8]. Previously we employed a chemical genomic screen to identify signaling networks involved in regulating the receptor independent toxicity of PC( O-16:0/2:0). Using this approach we identified a conserved role for phospholipase D( PLD)( S. cerevisiae Spo14) in buffering against the toxicity of PC( O-16:0/2:0) in both yeast and cultured neuronal cells[9]. We also reported relocalization of GFP-tagged Spo14 to distinct foci juxtaposed to the PM upon PC( O-16:0/2:0) treatment. Since PLD activation and localization depends upon the binding to PtdIns( 4, 5) P2[10]–[12], our findings suggested that the toxic accumulation of PC( O-16:0/2:0) may elicit effects upon signaling networks that regulate the PM distribution of PtdIns( 4, 5) P2. Here we provide more precise mechanistic insights by showing that PC( O-16:0/2:0) promotes the redistribution of the sole yeast PtdIns( 4) P-5 kinase, Mss4, which gives rise to the formation of large invaginations of the PM that we have called PtdIns( 4, 5) P2-enriched structures( PES). We also show that PC( O-16:0/2:0) remodeling of the PtdIns( 4, 5) P2 PM pool is associated with the potent inhibition of Tor2 signaling. Consistent with these findings we observed that the effects of PC( O-16:0/2:0) upon Mss4 distribution and PES formation depend on the accumulation of LCBs and ceramides. Together these findings identify a novel signaling network wherein toxic levels of PC( O-16:0/2:0) modulate LCBs and ceramide metabolism, which in turn promotes the redistribution of PM PtdIns( 4, 5) P2 and the inhibition of Tor2 signaling. Our work provides further information into how the toxic accumulation of PC( O-16:0/2:0), as observed in AD patients[6], may impact other lipid signaling networks( i. e., ceramide, PtdIns( 4, 5) P2) which have previously been implicated in the progression of this disease[13]–[19].\nDiscussion:\nAberrant glycerophosphocholine metabolism in AD leading to the intraneuronal accumulation of specific lipid second messengers, including PC( O-16:0/2:0) is linked to neuronal dysfunction, neurotoxicity, and accelerated cognitive decline[6],[50]–[52]. In this report we have used S. cerevisiae to further characterize the mechanisms underlying receptor-independent toxicity of PC( O-16:0/2:0). Our work suggests a model( Fig. 7) wherein exposure to toxic concentrations of PC( O-16:0/2:0) promotes the accumulation of LCBs and ceramides, which leads to changes in the subcellular localization of Mss4 and formation of PtdIns( 4, 5) P2 enriched invaginations of the PM. 
Ultimately the PC( O-16:0/2:0)-dependent remodeling of PtdIns( 4, 5) P2 affects downstream PtdIns( 4, 5) P2-dependent cellular processes such as PLD localization, which is critical for buffering against the toxic effects of PC( O-16:0/2:0)[9]. However, the inhibition of TORC2 by PC( O-16:0/2:0) also suggests that the toxic properties of PC( O-16:0/2:0) are only partly due to disruptions in PtdIns( 4, 5) P2 signaling and that this lipid impacts other signaling pathways through distinct second messengers that remain to be identified. Given that PtdIns( 4, 5) P2 and downstream signaling events buffer against PC( O-16:0/2:0) toxicity, it was important to investigate the factors underlying the relocalization of Mss4-GFP and PES formation to elucidate potential endogenous mechanisms of neuroprotection. The molecular details that contribute to the localization of Mss4 into distinct phosphatidylinositol kinase or PIK patches in yeast are not completely understood. However, the availability of its substrate, PtdIns( 4) P, a recently identified interacting partner Opy1 and sphingolipid biosynthesis have been implicated[30],[32],[33]. Our data suggests that the PC( O-16:0/2:0)-induced accumulation of LCBs and ceramides( precursor molecules in the sphingolipid biosynthetic pathway Fig. S4) are at least partly responsible for the changes in PM PtdIns( 4, 5) P2 distribution as treatment with myriocin, an inhibitor of sphingolipid biosynthesis, was sufficient to prevent the redistribution of Mss4-GFP and PES formation( Fig. 4 and Fig. S4). The mechanism by which the observed changes in LCBs and ceramide might regulate Mss4 PM localization are not clear but previous reports have suggested that both MIPC and dihydrosphingosine-1 phosphate( DHS-1P) can interact with Mss4[32],[33]. The relocalization of Mss4-GFP, however, is likely not due to interactions with MIPC as neither the total levels of this lipid nor the abundance of individual species was significantly impacted by PC( O-16:0/2:0) at any time point( Fig. 4, Fig. S4 and Dataset 1). In contrast, the accumulation of one DHS and two PHS species displayed similar kinetics to the PtdIns( 4, 5) P2 redistribution and PES formation suggesting that these lipids may be involved in mediating the observed changes( Fig. S4B and Dataset 1). Certainly, this observation must be interpreted with caution as the reported in vitro interaction between Mss4 and LCBs has not been evaluated in vivo[32]. Furthermore, the role of LCB-Ps in mediating Mss4 localization at the PM must also be reconciled with the fact that LCB-Ps do not appear to be trafficked to the PM under normal circumstances[53]. Whether PC( O-16:0/2:0)-induced changes in Mss4-GFP localization are dependent upon the improper trafficking of LCB-Ps or another mechanism remains an open question in need of further study. The spatial distribution of PtdIns( 4, 5) P2 at the PM is critical for regulating the activity of downstream signaling pathways. Our biochemical, genetic and cell biology based-assays suggest that the inhibition of Tor signaling plays a critical role in mediating the sensitivity to the toxic effects of PC( O-16:0/2:0)( Fig. 5 and 6). The results of our in vitro kinase assay do not identify PC( O-16:0/2:0) as a direct inhibitor of Tor kinase activity and suggests that another mechanism is responsible for the inhibition of Tor signaling( Fig. S6). How else might PC( O-16:0/2:0) inhibit Tor signaling? 
Although the cellular inputs which impinge upon Tor signaling are still being identified and the molecular mechanisms which translate these stimuli into activation/inhibition of Tor signaling are not completely understood, the TORC2-dependent phosphorylation of Ypk1/2 is sensitive to changes in PM PtdIns( 4, 5) P2 levels[48]. Our work demonstrating the relocalization of Slm1-GFP to the PES in response to PC( O-16:0/2:0) is consistent with previous reports describing the interactions of Slm proteins with PtdIns( 4, 5) P2( Fig. 6 and S6)[1],[20]. Because our data indicate that relocalization of Slm1, and presumably Slm2, to the PES is not required for the inhibition of TORC2-dependent Ypk phosphorylation, they suggest that an additional mechanism( s) exists for the regulation of TORC2 signaling( Fig. 6). Collectively, our results provide insight into how a disruption in phosphocholine metabolism signals network-wide lipid metabolic disturbances that may play defining roles into how neurons respond to accumulating Aβ42. Interestingly, accumulating evidence suggests that disruptions in both PtdIns( 4, 5) P2 signaling and ceramide metabolism are contributing factors in the neuronal cell dysfunction and death observed in AD[13]–[19]. Whether the disruptions in PtdIns( 4, 5) P2 signaling and ceramide metabolism homeostasis observed in neurons are dependent upon an increase in PC( O-16:0/2:0) concentrations is an intriguing question in need of further investigation.",
"role": "user"
},
{
"content": "Accelerated cognitive decline in Alzheimer's patients is associated with distinct changes in the abundance of choline-containing lipids belonging to the platelet activating factor family . In particular , PC ( O-16:0/2:0 ) or C16:0 platelet activating factor ( PAF ) , is specifically elevated in brains of Alzheimer's patients . Since elevated intraneuronal levels of PC ( O-16:0/2:0 ) are thought to contribute to the loss of neuronal cells it is imperative to identify the underlying mechanisms contributing to the toxic effects of PC ( O-16:0/2:0 ) . In this study , we have determined that elevated levels of PC ( O-16:0/2:0 ) has negative effects upon the distribution of phosphoinositides at the plasma membrane leading to a potent inhibition of target of rapamycin ( TOR ) signaling . We further show that the changes in phosphoinositide distribution are due to changes in ceramide metabolism . In conclusion , our study suggests that the toxicity associated with aberrant metabolism of glycerophosphocholine lipids species is likely due to the remodeling of phosphoinositide and ceramide metabolism and that therapeutic strategies which target these disruptions may be effective in ameliorating Alzheimer's Disease pathology .",
"role": "assistant"
}
] |
science.craftchem_ner | science.craftchem_ner.8726 | [
{
"content": "You will be presented with a sentence from biomedical research text. Extract all terms that correspond to chemical entities of biological interest (ChEBI).\nReturn your identified terms in a JSON array, formatted like this: [\"Entity1\", \"Entity2\", ...].\nIf no such entities are detected, respond by providing an empty JSON array [].\n\nBiomedical context:\nWhilethecontrolsdisplayedaninterruptionofthecarotidductatE12andE13asexpected(Fig.2C),themutantsdemonstratedanuncharacteristicbreakofthedorsalaortabetweenthe4thand6thpairsofPAAs(Fig.2D).",
"role": "user"
},
{
"content": "[]",
"role": "assistant"
}
] |
science.scitldr_aic | science.scitldr_aic.370 | [
{
"content": "Below are the Abstract, Introduction, and Conclusion sections of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary sentence and no additional text.\n\nPaper text:\n\nPruning neural network parameters is often viewed as a means to compress models, but pruning has also been motivated by the desire to prevent overfitting.\nThis motivation is particularly relevant given the perhaps surprising observation that a wide variety of pruning approaches increase test accuracy despite sometimes massive reductions in parameter counts.\nTo better understand this phenomenon, we analyze the behavior of pruning over the course of training, finding that pruning's effect on generalization relies more on the instability it generates (defined as the drops in test accuracy immediately following pruning) than on the final size of the pruned model.\nWe demonstrate that even the pruning of unimportant parameters can lead to such instability, and show similarities between pruning and regularizing by injecting noise, suggesting a mechanism for pruning-based generalization improvements that is compatible with the strong generalization recently observed in over-parameterized networks.\nPruning weights and/or convolutional filters from deep neural networks (DNNs) can substantially shrink parameter counts with minimal loss in accuracy (LeCun et al., 1990; Hassibi & Stork, 1993; Han et al., 2015a; Li et al., 2016; Molchanov et al., 2017; Louizos et al., 2017; Liu et al., 2017; Ye et al., 2018) , enabling broader application of DNNs via reductions in memory-footprint and inference-FLOPs requirements.\nMoreover, many pruning methods have been found to actually improve generalization (measured by model accuracy on previously unobserved inputs) (Narang et al., 2017; Frankle & Carbin, 2018; You et al., 2019) .\nConsistent with this, pruning was originally motivated as a means to prevent over-parameterized networks from overfitting to comparatively small datasets (LeCun et al., 1990) .\nConcern about over-parameterizing models has weakened, however, as many recent studies have found that adding parameters can actually reduce a DNN's generalization-gap (the drop in performance when moving from previously seen to previously unseen inputs), even though it has been shown that the same networks have enough parameters to fit large datasets of randomized data (Neyshabur et al., 2014; Zhang et al., 2016) .\nPotential explanations for this unintuitive phenomenon have come via experiments (Keskar et al., 2016; Morcos et al., 2018; Yao et al., 2018; Belkin et al., 2018; Nagarajan & Kolter, 2019) , and the derivation of bounds on DNN generalization-gaps that suggest less overfitting might occur as parameter counts increase (Neyshabur et al., 2018) .\nThis research has implications for neural network pruning, where a puzzling question has arisen: if larger parameter counts don't increase overfitting, how does pruning parameters throughout training improve generalization?\nTo address this question we first introduce the notion of pruning instability, which we define to be the size of the drop in network accuracy caused by a pruning iteration (Section 3).\nWe then empirically analyze the instability and generalization associated with various magnitude-pruning (Han et al., 2015b) algorithms in different settings, making the following contributions:\n1. 
We find a tradeoff between the stability and potential generalization benefits of pruning, and show iterative pruning's similarity to regularizing with noise-suggesting a mechanism unrelated to parameter counts through which pruning appears to affect generalization.\n2. We characterize the properties of pruning algorithms which lead to instability and correspondingly higher generalization.\nIn this study, we defined the notion of pruning algorithm instability, and applied several pruning approaches 5 to multiple neural networks, assessing the approaches' effects on instability and generalization.\nThroughout these experiments, we observed that pruning algorithms that generated more instability led to better generalization (as measured by test accuracy).\nFor a given pruning target and total pruning percentage, instability and generalization could be fueled by raising iterative pruning rates (Figure 4, Section 4.3) .\nAdditionally, targeting more important weights, again holding total parameters pruned constant, led to more instability and generalization than targeting less important weights (Figure 1, Section 4.1) .\nThese results support the idea that the generalization benefits of pruning cannot be explained solely by pruning's effect on parameter counts-the properties of the pruning algorithm must be taken into account.\nOur analysis also suggests that the capacity effects of weight-removal may not even be necessary to explain how pruning improves generalization.\nIndeed, we provide an interpretation of iterative pruning as noise injection, a popular approach to regularizing DNNs, and find that making pruning noise impermanent provides pruning-like generalization benefits while not removing as much capacity as permanent pruning ( Figure 5 , Section 4.4).",
"role": "user"
},
{
"content": "We demonstrate that pruning methods which introduce greater instability into the loss also confer improved generalization, and explore the mechanisms underlying this effect.",
"role": "assistant"
}
] |
science.scireviewgen_multidoc_summarization | science.scireviewgen_multidoc_summarization.50723 | [
{
"content": "You are presented with multiple paper abstracts, a literature review title, and a specific chapter title. Your task is to write a focused literature review chapter that aligns with the given chapter title. The review should synthesize the content from the provided abstracts, draw connections between different works, highlight major findings, and identify any gaps in the existing research.\nLiterature Review Title: AutoML: A Survey of the State-of-the-Art \nChapter Title: B. Transfer learning \n1. Abstract of Cited Paper (BIB001): Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model. The cell can also be transferred to the character language modeling task on PTB and achieves a state-of-the-art perplexity of 1.214. \n2. Abstract of Cited Paper (BIB002): We present a systematic study on how to morph a well-trained neural network to a new one so that its network function can be completely preserved. We define this as network morphism in this research. After morphing a parent network, the child network is expected to inherit the knowledge from its parent network and also has the potential to continue growing into a more powerful one with much shortened training time. The first requirement for this network morphism is its ability to handle diverse morphing types of networks, including changes of depth, width, kernel size, and even subnet. To meet this requirement, we first introduce the network morphism equations, and then develop novel morphing algorithms for all these morphing types for both classic and convolutional neural networks. The second requirement is its ability to deal with non-linearity in a network. We propose a family of parametric-activation functions to facilitate the morphing of any continuous nonlinear activation neurons. Experimental results on benchmark datasets and typical neural networks demonstrate the effectiveness of the proposed network morphism scheme. \n3. Abstract of Cited Paper (BIB003): We reduce the computational cost of Neural AutoML with transfer learning. AutoML relieves human effort by automating the design of ML algorithms. Neural AutoML has become popular for the design of deep learning architectures, however, this method has a high computation cost. To address this we propose Transfer Neural AutoML that uses knowledge from prior tasks to speed up network design. We extend RL-based architecture search methods to support parallel training on multiple tasks and then transfer the search strategy to new tasks. 
On language and image classification tasks, Transfer Neural AutoML reduces convergence time over single-task training by over an order of magnitude on many tasks. \n4. Abstract of Cited Paper (BIB004): We propose Efficient Neural Architecture Search (ENAS), a fast and inexpensive approach for automatic model design. In ENAS, a controller learns to discover neural network architectures by searching for an optimal subgraph within a large computational graph. The controller is trained with policy gradient to select a subgraph that maximizes the expected reward on the validation set. Meanwhile the model corresponding to the selected subgraph is trained to minimize a canonical cross entropy loss. Thanks to parameter sharing between child models, ENAS is fast: it delivers strong empirical performances using much fewer GPU-hours than all existing automatic model design approaches, and notably, 1000x less expensive than standard Neural Architecture Search. On the Penn Treebank dataset, ENAS discovers a novel architecture that achieves a test perplexity of 55.8, establishing a new state-of-the-art among all methods without post-training processing. On the CIFAR-10 dataset, ENAS designs novel architectures that achieve a test error of 2.89%, which is on par with NASNet (Zoph et al., 2018), whose test error is 2.65%. \n5. Abstract of Cited Paper (BIB005): Neural Architecture Search aims at automatically finding neural architectures that are competitive with architectures designed by human experts. While recent approaches have achieved state-of-the-art predictive performance for image recognition, they are problematic under resource constraints for two reasons: (1)the neural architectures found are solely optimized for high predictive performance, without penalizing excessive resource consumption, (2) most architecture search methods require vast computational resources. We address the first shortcoming by proposing LEMONADE, an evolutionary algorithm for multi-objective architecture search that allows approximating the entire Pareto-front of architectures under multiple objectives, such as predictive performance and number of parameters, in a single run of the method. We address the second shortcoming by proposing a Lamarckian inheritance mechanism for LEMONADE which generates children networks that are warmstarted with the predictive performance of their trained parents. This is accomplished by using (approximate) network morphism operators for generating children. The combination of these two contributions allows finding models that are on par or even outperform both hand-crafted as well as automatically-designed networks\n\nLiterature Review:",
"role": "user"
},
{
"content": "At the beginning of the development of NAS algorithms, like BIB002 , each network architecture, which is trained for a long time to converge, is dropped after evaluating its performance. Hence, the technique of transfer learning is used to accelerate the process of NAS. For example, Wong and Lu et al. BIB004 propose Transfer Neural AutoML that uses knowledge from prior tasks to speed up network design. ENAS BIB005 shares parameters among child networks, leading to 1000 times faster than BIB002 . The network morphism based algorithms BIB001 , BIB006 , BIB003 can also inherit the weights of previous architectures.",
"role": "assistant"
}
] |
science.bioasq_general_qa | science.bioasq_general_qa.495 | [
{
"content": "You will be shown a question, followed by excerpts from biomedical research papers. Please answer the question based on the provided context. Do not include any text in your response other than the answer.\n\nQuestion: What is the mechanism of action of verubecestat?\n\nContext: The BACE1 inhibitor verubecestat (MK-8931) reduces CNS β-amyloid in animal models and in Alzheimer's disease patients.\nWe show that verubecestat (MK-8931) is a potent, selective, structurally unique BACE1 inhibitor that reduced plasma, cerebrospinal fluid (CSF), and brain concentrations of Aβ40, Aβ42, and sAPPβ (a direct product of BACE1 enzymatic activity) after acute and chronic administration to rats and monkeys.\nDiscovery of the 3-Imino-1,2,4-thiadiazinane 1,1-Dioxide Derivative Verubecestat (MK-8931)-A β-Site Amyloid Precursor Protein Cleaving Enzyme 1 Inhibitor for the Treatment of Alzheimer's Disease.\nVerubecestat 3 (MK-8931), a diaryl amide-substituted 3-imino-1,2,4-thiadiazinane 1,1-dioxide derivative, is a high-affinity β-site amyloid precursor protein cleaving enzyme 1 (BACE1) inhibitor currently undergoing Phase 3 clinical evaluation for the treatment of mild to moderate and prodromal Alzheimer's disease.\nSynthesis of Verubecestat, a BACE1 Inhibitor for the Treatment of Alzheimer's Disease.\nVerubecestat is an inhibitor of β-secretase being evaluated for the treatment of Alzheimer's disease.\nVerubecestat is an inhibitor of β-secretase being evaluated for the treatment of Alzheimer's disease.\nAfter giving an update on the development and current status of new AD therapeutics, this review will focus on BACE inhibitors and, in particular, will discuss the prospects of verubecestat (MK-8931), which has reached phase III clinical trials.\nAlthough not selective over the closely related aspartyl protease BACE2, verubecestat has high selectivity for BACE1 over other key aspartyl proteases, notably cathepsin D, and profoundly lowers CSF and brain Aβ levels in rats and nonhuman primates and CSF Aβ levels in humans.\nWe show that verubecestat (MK-8931) is a potent, selective, structurally unique BACE1 inhibitor that reduced plasma, cerebrospinal fluid (CSF), and brain concentrations of Aβ40, Aβ42, and sAPPβ (a direct product of BACE1 enzymatic activity) after acute and chronic administration to rats and monkeys.\nVerubecestat 3 (MK-8931), a diaryl amide-substituted 3-imino-1,2,4-thiadiazinane 1,1-dioxide derivative, is a high-affinity β-site amyloid precursor protein cleaving enzyme 1 (BACE1) inhibitor currently undergoing Phase 3 clinical evaluation for the treatment of mild to moderate and prodromal Alzheimer's disease.\nChronic treatment of rats and monkeys with verubecestat achieved exposures >40-fold higher than those being tested in clinical trials in AD patients yet did not elicit many of the adverse effects previously attributed to BACE inhibition, such as reduced nerve myelination, neurodegeneration, altered glucose homeostasis, or hepatotoxicity.\nExpert opinion: Drugs in Phase III clinical development for AD include one inhibitor of the β-secretase cleaving enzyme (BACE) (verubecestat), three anti-Aβ monoclonal antibodies (solanezumab, gantenerumab, and aducanumab), an inhibitor of receptor for advanced glycation end products (RAGE) (azeliragon) and the combination of cromolyn sodium and ibuprofen (ALZT-OP1).\nSynthesis of Verubecestat, a BACE1 Inhibitor for the Treatment of Alzheimer's Disease.\nThe BACE1 inhibitor verubecestat (MK-8931) reduces CNS β-amyloid in animal models and in 
Alzheimer's disease patients.\nDiscovery of the 3-Imino-1,2,4-thiadiazinane 1,1-Dioxide Derivative Verubecestat (MK-8931)-A β-Site Amyloid Precursor Protein Cleaving Enzyme 1 Inhibitor for the Treatment of Alzheimer's Disease.\nAfter giving an update on the development and current status of new AD therapeutics, this review will focus on BACE inhibitors and, in particular, will discuss the prospects of verubecestat (MK-8931), which has reached phase III clinical trials.\nAlthough not selective over the closely related aspartyl protease BACE2, verubecestat has high selectivity for BACE1 over other key aspartyl proteases, notably cathepsin D, and profoundly lowers CSF and brain Aβ levels in rats and nonhuman primates and CSF Aβ levels in humans.",
"role": "user"
},
{
"content": "Verubecestat (MK-8931), a diaryl amide-substituted 3-imino-1,2,4-thiadiazinane 1,1-dioxide derivative, is a potent, selective, structurally unique BACE1 inhibitor that reduced plasma, cerebrospinal fluid (CSF), and brain concentrations of Aβ40, Aβ42, and sAPPβ (a direct product of BACE1 enzymatic activity).",
"role": "assistant"
}
] |
science.craftchem_ner | science.craftchem_ner.4443 | [
{
"content": "You will be presented with a sentence from biomedical research text. Extract all terms that correspond to chemical entities of biological interest (ChEBI).\nReturn your identified terms in a JSON array, formatted like this: [\"Entity1\", \"Entity2\", ...].\nIf no such entities are detected, respond by providing an empty JSON array [].\n\nBiomedical context:\nMetabolicGeneExpressioninLiverofFedandFastedPGC-1alpha+/+andPGC-1alpha-/-Mice",
"role": "user"
},
{
"content": "[]",
"role": "assistant"
}
] |
science.multixscience_multidoc_summarization | science.multixscience_multidoc_summarization.5674 | [
{
"content": "You are provided with the abstract of a scientific paper and the abstracts of the papers it references. Your task is to write a Related Work section for the main paper. This section should concisely summarize and integrate the key points from the cited abstracts, reflecting how they relate to the main paper's contributions and context. In your summary, use numerical citation markers like [1], [2], etc., to refer to the corresponding papers.\n\nMain Abstract:\nWe present an online landmark selection method for distributed long-term visual localization systems in bandwidth-constrained environments. Sharing a common map for online localization provides a fleet of au- tonomous vehicles with the possibility to maintain and access a consistent map source, and therefore reduce redundancy while increasing efficiency. However, connectivity over a mobile network imposes strict bandwidth constraints and thus the need to minimize the amount of exchanged data. The wide range of varying appearance conditions encountered during long-term visual localization offers the potential to reduce data usage by extracting only those visual cues which are relevant at the given time. Motivated by this, we propose an unsupervised method of adaptively selecting landmarks according to how likely these landmarks are to be observable under the prevailing appear- ance condition. The ranking function this selection is based upon exploits landmark co-observability statistics collected in past traversals through the mapped area. Evaluation is per- formed over different outdoor environments, large time-scales and varying appearance conditions, including the extreme tran- sition from day-time to night-time, demonstrating that with our appearance-dependent selection method, we can significantly reduce the amount of landmarks used for localization while maintaining or even improving the localization performance.\n\nCited Abstract(s):\n[1]: United States. Army Research Office. Multidisciplinary University Research Initiative (Grant W911NF-11-1-0391)",
"role": "user"
},
{
"content": "Landmark selection has also been studied in connection with specific tasks like path-planning and obstacle avoidance. The method presented by [1] selects those landmark measurements from a map, which maximize the utility wrt. a predefined task, such as collision-free navigation. In contrast, the method we present selects landmarks based on the appearance condition the robot is exposed to during operation.",
"role": "assistant"
}
] |
science.mslr2022_ms2_multidoc_summarization | science.mslr2022_ms2_multidoc_summarization.7580 | [
{
"content": "You will be presented with a collection of medical randomized controlled trial (RCT) reports addressing a shared clinical question -- for instance, the efficacy of a medical intervention on a disease or condition. Given the titles and abstracts of these reports, your task is to generate the conclusion of the systematic review article synthesizing the key findings. Your conclusion should be a few sentences long. In your response, include only the conclusion and no other text. The RCT reports are below.\n\n\n\nTitle 1:\nEffectiveness of ear syringing in general practice: a randomised controlled trial and patients' experiences.\nAbstract 1:\nBACKGROUND Ear syringing is a common procedure performed for a variety of symptoms in primary care . Reports of its effectiveness vary considerably and no r and omised controlled trials ( RCTs ) have been performed . AIM To estimate the effect of ear syringing on hearing thresholds and on symptoms leading to ear syringing in general practice . DESIGN OF STUDY R and omised single-blind controlled trial . Before- and -after self- assessment s of symptoms . SETTING Patients from three general practice s in the Bristol area attending twice-weekly clinics dedicated to ear syringing over a 12-week period . METHOD Patients were r and omly assigned to have their hearing tested before and after ear syringing , or twice before ear syringing . Changes in hearing threshold were measured by pure tone audiometry ( PTA ) . All patients completed sef- assessment forms of symptoms using Likert scales before , and one week after , ear syringing . RESULTS Hearing threshold improved by 10 dB or more in 34 % ( 95 % confidence interval [ CI ] = 21 % to 47 % ) of the intervention group and 1.6 % of control group ( number needed to treat = 3.1 , 95 % CI = 2.2 to 5.2 , P<0.001 ) . The levels of improvement in the intervention group ranged between 15 dB and 36 dB. The symptoms that most commonly improved included hearing on the phone , pain , a feeling of blocked ears , and hearing one-to-one . There was a strong relationship between the change thresholds , as measure using PTA , and self-reports of hearing improvement . Secondary analysis was unable to identify predictors of objective ly measured improvement . CONCLUSION Ear syringing improved hearing threshold in a substantial proportion of patients . An even larger proportion reported an improvement in symptoms . It was not possible to predict which patients would benefit\n\n\nTitle 2:\nA comparison of the efficacy of two ear drop preparations ('Audax' and 'Earex') in the softening and removal of impacted ear wax.\nAbstract 2:\nThirty-six patients with symptoms of impacted ear wax were recruited to an open , r and omized , parallel group study of ' Audax ' ear drops and ' Earex ' ear drops . Patients had had their symptoms for several weeks and they were assessed on entry for the degree of impaction in each ear . After using the drops , morning and evening for 4 days , they were assessed on the fifth day for degree of impaction , ease of syringing , side-effects or discomfort , and the investigator 's and patient 's own global impression of efficacy of the ear drops . A trend was seen showing less impaction post-treatment in the ' Audax ' group than in the ' Earex ' group although the difference did not reach statistical significance . A significant difference was seen in favour of ' Audax ' for the frequency and ease of syringing ( p < 0.005 ) . 
No patients in the ' Audax ' group reported any side-effect or discomfort although 1 patient using ' Earex ' reported slight irritation whilst another found the smell unacceptable . The results of the investigators ' and patients ' own global impression of efficacy were significantly in favour of ' Audax ' ear drops ( p < 0.01 )\n\n\nTitle 3:\nRandomized clinical trial of docusate, triethanolamine polypeptide, and irrigation in cerumen removal in children.\nAbstract 3:\nBACKGROUND Cerumen obstructing visualization of the tympanic membrane in children is a common and frustrating problem . Docusate sodium , triethanolamine polypeptide , and saline were compared to determine their effectiveness in relieving cerumen obstruction in children . METHODS A r and omized , controlled , double-blind trial was performed on pediatric patients aged 6 months through 5 years with cerumen obstruction . The enrolling physician determined whether the cerumen completely or partially obstructed visualization of the tympanic membrane . One milliliter of docusate sodium , triethanolamine polypeptide , or normal saline as control was placed into the patient 's ear canal . If the tympanic membrane was not completely visualized after 15 minutes , the ear was irrigated with 50 mL of tepid water . Irrigation was repeated a second time if needed . The main outcome was the proportion of tympanic membranes that were completely visualized after cerumeno-eblytic agents or control saline , alone or with irrigation if needed . RESULTS Of 92 patients enrolled , 34 received docusate sodium ; 30 , triethanolamine polypeptide ; and 28 , saline . Mean + /- SD patient age was 34.7 + /- 18.1 months , and 50 ( 54 % ) of the patients were girls . Groups were similar in age , race , sex , ear enrolled , wax consistency , and degree of obstruction . There was no significant difference in the proportion of tympanic membranes completely visualized after treatment with docusate ( 18/34 ; 53 % ) , triethanolamine polypeptide ( 13/30 ; 43 % ) , or saline ( 19/28 ; 68 % ) ( P = .17 ) . CONCLUSION Application of docusate sodium or triethanolamine polypeptide did not significantly improve the proportion of tympanic membranes that were completely visualized vs application of the saline control\n\n\nTitle 4:\nCeruminolytic effects of docusate sodium: a randomized, controlled trial.\nAbstract 4:\nSTUDY OBJECTIVE Assessment of the tympanic membrane is often impeded by the presence of cerumen . We compared the ceruminolytic effects of triethanolamine polypeptide and docusate sodium in patients with cerumen . METHODS We conducted a prospect i ve , r and omized , controlled , double-blind trial on a convenience sample of cooperative adult and pediatric patients presenting to a university-based emergency department who required removal of cerumen to visualize the tympanic membrane . Structured data collection was performed , and the physician determined whether visualization of the tympanic membrane was partially or totally obscured by cerumen ( interobserver agreement , rho=0.79 ) . Patients received intra-aural instillation of 1mL of either docusate sodium or triethanolamine polypeptide in a liquid form . If not completely cleared within 15 minutes , the external ear canal was irrigated with 50 or 100 mL of normal saline solution and additional attempts to visualize the tympanic membrane were made . 
The main outcome was the proportion of ears in which the tympanic membrane could be totally visualized after ceruminolytic instillation with or without irrigation . This study had 80 % power to detect a 40 % difference between groups in the proportion of totally visualized tympanic membranes ( chi(2 ) test , alpha=.05 ) . RESULTS Of 50 enrolled patients , 23 received triethanolamine polypeptide and 27 received docusate sodium . Mean patient age was 40 years ( range 1 to 81 years ) ; 35 % were female . Groups were similar in age , sex , and proportion of completely obscured tympanic membranes at presentation ( 78 % ) . The ability to completely visualize the tympanic membrane was significantly greater after treatment with docusate sodium versus triethanolamine polypeptide ( 81 % versus 35 % ; difference in proportions 47 % ; 95 % confidence interval [ CI ] , 22 to 71 ) particularly in children aged 5 or less ( 90 % versus 0 % ; difference in proportions 89 % ; 95 % CI 50 to 100 ) . CONCLUSION Docusate sodium solution is a more effective ceruminolytic than triethanolamine polypeptide , allowing complete or partial visualization of the tympanic membrane in most patients after a single application when followed with irrigation . Use of docusate sodium as a ceruminolytic should be encouraged , particularly in children\n\n\nTitle 5:\nA single-blind, randomized study to compare the efficacy of two ear drop preparations ('Audax' and 'Cerumol') in the softening of ear wax.\nAbstract 5:\nA parallel group , single-blind , r and omized study was carried out in a general practice to compare the effectiveness and tolerability of two ear drop preparations ( ' Audax ' and ' Cerumol ' ) in the softening of ear wax in 50 adult patients with impacted or hardened ear wax . Assessment s were made on entry of the amount , colour and consistency of the ear wax , symptoms , and objective hearing . Patients were then allocated at r and om to receive one or other preparation and instructed to use the drops , morning and evening , for 4 days after which they were reassessed . Details were recorded of any side-effects or discomfort caused by the study medication and both physician and patients were asked to give their overall opinion of treatment efficacy . Both treatments were shown to be effective in the softening of ear wax and were well tolerated , there being no significant difference between the two groups in these parameters . However , patients who had abnormal hearing before treatment had a significantly greater improvement in objective hearing after treatment with ' Audax ' ear drops compared to those patients treated with ' Cerumol ' ear drops . There were no between-treatment differences in either either the physician 's or patient 's overall assessment s of effectiveness\n\n\nTitle 6:\nAn evaluation of a nurse-led ear care service in primary care: benefits and costs.\nAbstract 6:\nBACKGROUND Nurses trained in ear care provide a new model for the provision of services in general practice , with the aim of cost-effective treatment of minor ear and hearing problems that affect well-being and quality of life . AIM To compare a prospect i ve observational cohort study measuring health outcomes and re source use for patients with ear or hearing problems treated by nurses trained in ear care with similar patients treated by st and ard practice . 
METHOD A total of 438 Rotherham and 196 Barnsley patients aged 16 years or over received two self-completion question naires : question naire 1 ( Q1 ) on the day of consultation and question naire 2 ( Q2 ) after three weeks . Primary measured outcomes were changes in discomfort and pain ; secondary outcomes included the effect on normal life , health status , patient satisfaction , and re sources used . RESULTS After adjusting for differences at Q1 , by Q2 there was no statistical evidence of a difference in discomfort and pain reduction , or differential change in health status between areas . Satisfaction with treatment was significantly higher ( P = 0.0001 ) in Rotherham ( 91 % ) than in Barnsley ( 82 % ) . Average total general practitioner ( GP ) consultations were lower in Rotherham at 0.4 per patient with an average cost of 6.28 Pounds compared with Barnsley at 1.4 per patient and an average cost of 22.53 Pounds ( P = 0.04 ) . Barnsley GPs prescribed more drugs per case ( 6 % of total costs compared with 1.5 % ) and used more systemic antibiotics ( P = 0.001 ) . CONCLUSIONS Nurses trained in ear care reduce costs , GP workload , and the use of systemic antibiotics , while increasing patient satisfaction with care . With underst and ing and support from GPs , such nurses are an example of how exp and ed nursing roles bring benefits to general practice . Nurses trained in ear care reduce treatment costs , reduce the use of antibiotics , educate patients in ear care , increase patient satisfaction , and raise ear awareness\n\n\nTitle 7:\nCeruminolytic efficacy in adults versus children.\nAbstract 7:\nOBJECTIVES An inexpensive , nontoxic ceruminolytic with reasonable efficacy that can be used by the patient or their caregivers at home could represent significant financial saving for patients and health care systems . The purpose of this study was to compare two simple ceruminolytics for in vitro efficacy . STUDY DESIGN R and omized controlled trial . METHODS Thirty-six children and 33 adults presenting to a community family practice clinic who had cerumen occluding at least one external auditory canal were r and omly assigned to compare use of 10 % aqueous sodium bicarbonate and 2.5 % aqueous acetic acid as ceruminolytics in occlusive cerumen . RESULTS We were unable to demonstrate a difference between in vivo efficacy of these two ceruminolytic solutions , but both solutions were significantly more efficacious in children than adults . CONCLUSIONS There is a role for these ceruminolytics in children with occlusive cerumen , but adults would be better treated in another way\n\n\nTitle 8:\nA Multicentric Clinical Trial Comparing Otocerol® with Cerumol® as Cerumenolytics\nAbstract 8:\nFifteen general practitioners conducted a r and omized , double-blind comparative trial of two cerumenolytics , namely , Otocerol ® and Cerumol ® . A total of 106 patients were entered into the study ( fifty-three in each group ) . Otocerol was shown to be marginally better than Cerumol in all parameters evaluated",
"role": "user"
},
{
"content": "A non-water- , non-oil-based preparation appeared more effective than an oil-based preparation at both clearing earwax without syringing , and facilitating successful syringing .\nImmediate syringing after application of a preparation may be as effective as using eardrops for several days and delaying syringing .\nAlthough immediate ear syringing is effective and convenient for patients , it may be less cost-effective than using eardrops and perhaps avoiding syringing .",
"role": "assistant"
}
] |
science.chemsum_single_document_summarization | science.chemsum_single_document_summarization.49224 | [
{
"content": "You will be shown the title of a chemistry paper, together with sections excerpted from the body of the paper. The paper sections may include HTML tags like `<p>` and `</p>`; please ignore these. Your task is to write an abstract for the paper. Your response should include the abstract and no other text.\n\nTitle:\nSynthesis of pyrrolidine-based hamamelitannin analogues as quorum sensing inhibitors in Staphylococcus aureus\n\nPaper sections:\n\n<p>This article is part of the thematic issue \"Antibacterials, bacterial small molecule interactions and quorum sensing\".</p>\n\nIntroduction\n<p>Antimicrobial resistance is rapidly becoming a global threat [1–2]. It is estimated that worldwide, at least 700 000 people die every year from drug-resistant strains of common bacterial infections. Strategies to deal with the global antimicrobial resistance problem need to be multifactorial. Next to disease prevention and the development of new antibiotics, it is essential to investigate innovative strategies to combat bacterial infections [3–4]. Recently, targeting bacterial virulence has gained a lot of attention [5–7]. It has been hypothesized that by \"disarming\" the pathogen, rather than inhibiting its growth, selective pressure for resistance development will be much lower. Furthermore, reduction of bacterial virulence directly protects the host, and at the same time renders the bacteria more susceptible towards the host defense system and antibiotics.</p><p>The Centers for Disease Control and Prevention (CDC) have listed a number of bacteria that present serious, urgent and concerning threats [8]. One of these problematic bacteria is methicillin-resistant Staphylococcus aureus (MRSA), a human pathogen that causes a wide range of clinical infections. In S. aureus, virulence is mainly mediated by quorum sensing, a bacterial cell-to-cell communication system based on the secretion of signal molecules [9–11]. The natural product hamamelitannin (1) has been identified as a non-peptide analogue of RIP (RNAIII-inhibiting protein), an inhibitor of the RAP/TRAP (RNAIII-activating protein/target of RAP) quorum sensing system in S. aureus (Figure 1) [12–14]. Furthermore, hamamelitannin has been shown to inhibit biofilm formation and to potentiate the activity of antibiotics against staphylococcal biofilms in vitro and in vivo [12,15]. Structural optimization of hamamelitannin by our group resulted in several more potent and more druglike analogues of which compound 2 emerged as a promising starting point for further optimization and subsequent development (Figure 1) [16–19]. Our earlier work revealed that the optimal side chain substituents are an o-chlorobenzamide on the 5-position and a non-substituted benzamide on the 2'-position. In absence of any structural information of the inhibitor–target interaction, we were interested in replacing the core tetrahydrofuran scaffold by a pyrrolidine ring in order to further elucidate the structure–activity relationship. The pyrrolidine nitrogen atom provides an extra point of diversification, allowing further elaboration of the scaffold. Substituents on the ring nitrogen might lead to additional interactions with the target and therefore provide more potent analogues. Moreover, the O-to-N replacement is expected to increase solubility and possible polar interactions with the target. 
In this work we report the design, synthesis and biological evaluation of a number of pyrrolidine-based hamamelitannin analogues.</p>\n\n\n<p>Structures of hamamelitannin (1), lead compound 2 and target compounds 3.</p>\n\nIntroduction\n<p>The envisioned strategy for the synthesis of the target pyrrolidine-based hamamelitannin analogues is depicted in Scheme 1. The synthesis of 4 as a key intermediate allows to gain access to a diverse set of analogues by chemoselective late-stage derivatization of the pyrrolidine nitrogen. Previously, we used the iminosugar 5 to prepare a series of 2'-homoazanucleosides. This possible precursor was synthesized convergently in 12 steps [20]. The pyrrolidine ring was constructed via alkylation of 6 and 7, followed by ring-closing metathesis. Stereoselective dihydroxylation of the resulting alkene then furnished the protected iminosugar 5. However, using intermediate 5 as a starting point for the hamamelitannin analogues would render the synthetic route very linear and impractical to produce sufficient amounts required to prepare a series of analogues.</p>\n\n\n<p>Proposed strategy for the synthesis of the target analogues.</p>\n\nIntroduction\n<p>Therefore, we envisioned a modification of the previously developed synthetic route, where the two fully functionalized side chains are introduced prior to coupling and ring closure to afford 8. This new synthetic route would be more convergent and efficient for the preparation of 3. Retrosynthetic analysis led to 9 and 10 as two key synthons, which would be assembled via a Mitsunobu–Fukuyama reaction, and subsequently the secondary amine converted to the pyrrolidine ring via a ring-closing metathesis reaction [21].</p>\n\nChemistry\n<p>The synthesis of fragment 9 is depicted in Scheme 2 and proved to be challenging. It was essential to introduce the nosyl group only in the last step, since several previous attempts to synthesize 9 failed due to side reactions caused by the strongly electron-withdrawing properties of the nosyl group. The successful synthesis starts with a Pd-catalyzed dynamic kinetic asymmetric transformation of racemic butadiene monoepoxide to 12, employing phthalimide as nucleophile [22–23]. Attempts to substitute the alcohol functionality of 12 via displacement of the derived mesylate with NaN3 failed, similar to previously reported difficulties by Trost et al. [24]. The benzamide substituent was therefore introduced via Mitsunobu reaction with N-Boc-protected ortho-chlorobenzamide 13. Removal of the Boc-group with TFA resulted in 14. Next, the phthalimide was removed via refluxing with ethylenediamine and the resulting amine protected as para-nitrobenzenesulfonamide to obtain the desired fragment 9.</p>\n\n\n<p>Synthesis of fragment 9. Reagents and conditions: a) phthalimide, Pd2(allyl)2Cl2, ligand 15, Na2CO3, CH2Cl2, rt; b) i) 13, DEAD, PPh3, toluene, 0 °C to rt; ii) TFA, CH2Cl2, H2O, rt; c) i) ethylenediamine, EtOH/THF 7:3, reflux; ii) p-Ns-Cl, Et3N, THF, 0 °C.</p>\n\nChemistry\n<p>The synthesis of 10 starts from commercially available 2-methylene-1,3-propanediol (16), which was selectively monoprotected in high yield as TBS ether (Scheme 3) [25]. The remaining alcohol was then substituted for a phthalimide via Mitsunobu reaction. Phthalimide deprotection, acylation with benzoic acid, and removal of the silyl protecting group furnished 10.</p>\n\n\n<p>Synthesis of fragment 10. 
Reagents and conditions: a) TBSCl, NaH, THF, rt; b) phthalimide, PPh3, DEAD, THF, 0 °C to rt; c) H2NNH2·H2O, MeOH, reflux; d) BzCl, Et3N, CH2Cl2, 0 °C; e) TBAF, THF, rt.</p>\n\nChemistry\n<p>Fragments 9 and 10 were coupled under Mitsunobu conditions (Scheme 4), affording 20 contaminated with Mitsunobu byproducts. Unfortunately, attempted ring-closing metathesis of 20 using the Grubbs–Hoveyda II catalyst failed to produce any product, probably due to the insolubility of 20 in solvents suitable for metathesis reactions (1,2-DCE, toluene) and/or the coordinating ability of the three (sulfon)amide functionalities [26–27].</p>\n\n\n<p>Coupling of 9 and 10 and attempted ring-closing metathesis. Reagents and conditions: a) PPh3, DEAD, THF/DMF 2:1, 0 °C to rt; b) 5 mol % Grubbs–Hoveyda II, 1,2-DCE, 50 °C.</p>\n\nChemistry\n<p>To circumvent this problem, we were forced to alter the initial synthetic strategy and used a different eastern fragment for the ring-closing metathesis reaction. The ortho-chlorobenzamide substituent would then be introduced in a later stage. We chose to protect alcohol 12 as TBS ether and used the derived nosyl-protected fragment 23 as the coupling partner (Scheme 5).</p>\n\n\n<p>Synthesis of alternative eastern fragment 23. Reagents and conditions: a) TBSCl, imidazole, CH2Cl2, rt; b) H2NNH2·H2O, MeOH, reflux; c) p-Ns-Cl, Et3N, CH2Cl2, 0 °C.</p>\n\nChemistry\n<p>Fragments 23 and 10 were coupled via Mitsunobu reaction, yielding 24 in 49% yield (Scheme 6). Fortunately, ring-closing metathesis of 24 now smoothly afforded 25 in high yield. Dihydroxylation selectively yielded the desired stereoisomer of diol 26, which was subsequently protected as isopropylidene acetal. In the next step, after removal of the TBS group and mesylation, attempted substitution with NaN3 resulted only in an elimination product. This led us to replace the electron-withdrawing nosyl protecting group with a Boc group. After removal of the TBS ether and mesylation of the resulting alcohol, substitution with NaN3 now smoothly provided azide 29. The azide was then reduced under classical Staudinger conditions, followed by amide formation with ortho-chlorobenzoic acid and a final acidic deprotection step to provide the desired analogue 4. Despite the change of the initial strategy and resulting elongation of the synthetic route, the overall synthesis still proved to be very efficient, delivering 4 in 6% yield in 17 steps from 16.</p>\n\n\n<p>Synthesis of 4. Reagents and conditions: a) PPh3, DEAD, THF, 0 °C to rt; b) Grubbs–Hoveyda II (5 mol %), 1,2-DCE, 50 °C; c) K2OsO4·2H2O, NMO, acetone/H2O 3:1, rt; d) 2-methoxypropene, CSA (cat.), THF, rt; e) PhSH, K2CO3, MeCN, 50 °C; f) Boc2O, Et3N, CH2Cl2, rt; g) TBAF, THF, rt; h) Ms-Cl, Et3N, CH2Cl2, 0 °C; i) NaN3, DMF, 60 °C; j) PMe3, H2O, THF, rt; k) 2-chlorobenzoyl chloride, Et3N, CH2Cl2, 0 °C; l) conc. HCl, MeOH/H2O 1:1, reflux.</p>\n\nChemistry\n<p>The pyrrolidine nitrogen was then further derivatized with several small substituents (Scheme 7). Reductive amination with several aldehydes resulted in 3a–c. The N-methyl analogue 3d was synthesized via methylation with MeI. The methanesulfonamide 3e and acetamide 3f, in which the cationic character of the pyrrolidine nitrogen is removed, were also synthesized.</p>\n\n\n<p>Late stage functionalization of the pyrrolidine nitrogen. 
Reagents and conditions: A) (masked) aldehyde, NaBH3CN, AcOH, MeOH, 60 °C; B) MeI, DIPEA, THF, 0 °C; C) Ms-Cl, Et3N, THF, 0 °C; D) AcOH, DIPEA, HATU, DMF, rt.</p>\n\nChemistry\n<p>The correct stereochemistry of the synthesized analogues was unequivocally proven via X-ray structural analysis of compound 3a (Figure 2).</p>\n\n\n<p>Molecular X-ray structure of 3a, showing thermal displacement ellipsoids at the 50% probability level. Positional disorder of the chlorophenyl ring and the water solvent molecule are not shown.</p>\n\nBiological evaluation\n<p>The synthesized analogues were tested in a S. aureus biofilm model, but were all inactive (see Supporting Information File 1).</p>\n\nConclusion\n<p>A convergent synthetic route for the synthesis of pyrrolidine-based hamamelitannin analogues was developed. The originally envisioned strategy failed due to difficulties in the ring-closing metathesis reaction, but modification of one of the coupling partners solved this issue. The desired pyrrolidine-based analogue was synthesized in 17 steps and chemoselective modification of the nitrogen atom provided 6 analogues. Unfortunately, these analogues were inactive in inhibiting S. aureus biofilm formation.</p>\n\n\n<p>Experimental details.</p>",
"role": "user"
},
{
"content": "Interfering with bacterial cell-to-cell communication is a promising strategy to combat antimicrobial resistance. The natural product hamamelitannin and several of its analogues have been identified as quorum sensing inhibitors. In this paper the synthesis of pyrrolidine-based analogues of a more lead-like hamamelitannin analogue is reported. A convergent synthetic route based on a key ring-closing metathesis reaction was developed and delivered the pyrrolidine analogue in 17 steps in high yield. Chemoselective derivatization of the pyrrolidine nitrogen atom resulted in 6 more compounds. The synthesized compounds were evaluated in a biofilm model, but were all inactive.",
"role": "assistant"
}
] |
science.scientific_lay_summarisation_elife_single_doc_summ | science.scientific_lay_summarisation_elife_single_doc_summ.2752 | [
{
"content": "You will be presented with the abstract, introduction, and discussion section from a biomedical research article. Your task is to create a summary that a layperson can understand, capturing the essence of the research article. Keep essential scientific terms, but ensure that the language remains clear and the concepts are explained in an uncomplicated manner.\n\nTitle: Vestigial auriculomotor activity indicates the direction of auditory attention in humans\nArticle:\nAbstract:\nUnlike dogs and cats, people do not point their ears as they focus attention on novel, salient, or task-relevant stimuli. Our species may nevertheless have retained a vestigial pinna-orienting system that has persisted as a 'neural fossil’ within in the brain for about 25 million years. Consistent with this hypothesis, we demonstrate that the direction of auditory attention is reflected in sustained electrical activity of muscles within the vestigial auriculomotor system. Surface electromyograms( EMGs) were taken from muscles that either move the pinna or alter its shape. To assess reflexive, stimulus-driven attention we presented novel sounds from speakers at four different lateral locations while the participants silently read a boring text in front of them. To test voluntary, goal-directed attention we instructed participants to listen to a short story coming from one of these speakers, while ignoring a competing story from the corresponding speaker on the opposite side. In both experiments, EMG recordings showed larger activity at the ear on the side of the attended stimulus, but with slightly different patterns. Upward movement( perking) differed according to the lateral focus of attention only during voluntary orienting; rearward folding of the pinna’s upper-lateral edge exhibited such differences only during reflexive orienting. The existence of a pinna-orienting system in humans, one that is experimentally accessible, offers opportunities for basic as well as applied science.\nIntroduction:\nWatching the ears allows an equestrian to gauge their mount’s shifting attention. Ear movements are not a useful cue in humans or apes because higher primates have lost the ability to orient by adjusting pinna shape and focal direction. Instead we judge a person’s attention by their gaze direction. In thousands of research reports each year, though, casual observation of ocular orienting is replaced by sophisticated recording techniques. We show in the present paper that similar electrical and optical techniques allow us to extract muscular correlates of pinna-orienting in our species and even render subtle pinna-orienting movements visible. Activation of the ear muscles is directionally specific and it occurs during voluntary as well as reflexive attention. A review of research in Hackley, 2015 on pinna-orienting in humans identified three relevant findings scattered across the preceding 100-or-so years. The first was Wilson’s oculo-auricular phenomenon( Wilson, 1908), in which shifting the gaze hard to one side elicits a 1 to 4 mm deflection of the lateral rim of both ears. The relevance to spatial attention is uncertain, though, with diverging results across studies, for example see Gerstle and Wilkinson, 1929; Urban et al., 1993; O'Beirne and Patuzzi, 1999. Additional evidence comes from a 1987 study( Hackley et al., 1987) of the bilateral postauricular muscle( PAM) reflex( onset latency = 10 ms) to acoustic onset transients. 
Increased amplitudes were observed when subjects directed their attention to a stream of tones on the same side as the recorded muscle while ignoring a competing, contralateral stream. Comparisons across left/right stimulus, attention, and PAM combinations localized modulation to the motor limb of the reflex arc. This pattern could indicate that the muscle behind an ear is primed when attention is directed toward that side. Finally, an experiment in Stekelenburg and van Boxtel, 2002 found that the automatic capture of attention by unexpected sounds coming from a speaker hidden to the left of the participant elicited greater activity in the left than right PAM. Apart from the research just described, functional studies of the human auriculomotor system have been mainly limited to the PAM reflex, in the context of audiometry or affective psychophysiology. The auriculomotor system lies essentially untouched in the literature. Here we present evidence that our brains retain vestigial circuitry for orienting the pinnae during both exogenous, stimulus-driven attention to brief, novel sounds and endogenous, goal-directed attention to sustained speech. We also demonstrate a complex interplay of different auricular muscles which may be causally linked to subtle movements of the pinnae.\nDiscussion:\nThe ability to swivel and point the pinnae seems to have been lost during the transition from the primarily nocturnal lifestyles of prosimians to the diurnal ones of New World monkeys, and then, Old World monkeys( Coleman and Ross, 2004). Mobility continued to decline as the ears became shorter and more rigid, see Waller et al., 2008; Coleman and Ross, 2004. The musculature degenerated. For example, an inferior auricular muscle to oppose SAM still exists in lesser apes such as gibbons and siamangs( Burrows et al., 2011), but not in chimpanzees( Burrows et al., 2011) or humans( Cattaneo and Pavesi, 2014). Given that head rotation has little effect on PAM activity( Cook and Patuzzi, 2014), it seems likely that the vestibulo–auricular reflex as documented in cats( Tollin et al., 2009) has not been conserved in our species. Also presumably lost is the ability to use proprioceptive information to adjust auditory processing in accordance with pinna position, orientation, and shape as documented in cats, see Kanold and Young, 2001. Although the ear muscles of Old World monkeys have spindles( Lovell et al., 1977), those of humans do not( Cattaneo and Pavesi, 2014). When pinna-orienting movements became too small to modify acoustic input substantially, possibly 25 million years ago when lesser apes branched off from Old World monkeys, see Gibbs et al., 2007 as discussed in Hackley, 2015, selective environmental pressure ceased. The neural system became more-or-less 'frozen’ in a form optimized for controlling taller, more flexible ears, mounted on a smaller, more spherical head. This evolutionary perspective helps us to understand the surprising finding that AAM, which pulls the base of the pinna forward, was activated in Experiment one by novel sounds coming from the rear. Co-activation of opposing muscles AAM and PAM in our remote ancestors would have reduced occlusion of the ear canal by the tragus. Note that this occlusion occurs in monkeys when contraction of the PAM homolog is unopposed, see Waller et al., 2008, supplementary video clip 17. 
In addition, PAM-AAM co-activation would have stabilized the base of the pinna and reduced myotendinous elasticity, thereby allowing quick changes in position or orientation. This perspective also illuminates our unexpected finding of ipsilateral SAM suppression in Experiment 1. A study of pinna orienting in cats, whose tall ears resemble those of prosimians, showed that they tend to tilt the ear downward slightly when orienting to a lateral target, see Populin and Yin, 1998. Neurobiologists have distinguished two types of pinna-orienting movements in cats, based on onset latency, see Siegmund and Santibáñez, 1982. The short-latency response is specific to auditory stimuli and is chronometrically uncorrelated with saccades toward the target. By contrast, the long-latency response can be elicited by visual as well as auditory stimuli and it is roughly synchronous with ocular orienting, see Populin and Yin, 1998. Using a 4-speaker set-up similar to that of the present Experiment 1, Siegmund and Santibáñez, 1982 found cats’ unconditioned pinna responses to have an EMG onset latency that averaged 78 ms, similar to our value of about 70 ms. The animals were then trained to make gaze shifts toward the sound sources. Onset latency of the auriculomotor responses dropped to a remarkable 29 ms and the responses were resistant to extinction over the course of 125 trials. Both findings were replicated by Populin and Yin, 1998( mean = 26 ms; failure to extinguish across 10, 000 unreinforced trials). The latter authors obtained an even more rapid response( mean = 21 ms) when the sound was preceded by a visual stimulus that served as warning signal and indicated that the cat should maintain gaze at a fixation point. They argued that the short-latency pinna response is too rapid to be mediated by the brain region most centrally involved in orienting, the superior colliculus( SC). This is because an earlier study, Populin and Yin, 1997, had found the average first-spike latency in the relevant portion of this structure to be 19 ms. Comparisons with these cat studies suggest that our participants' auriculomotor responses may have been primarily also of the short-latency variety that is not mediated by the SC. Two of the conditions tested by Populin and Yin, 1998 involved brief, lateralized auditory stimuli that, as in the present Experiment 1, were not task-relevant. The stimuli elicited short-latency ipsilateral pinna movements that were temporally uncorrelated with gaze shifts( see their Figures 6 and 8). During the delayed-saccade condition of their study, laterally presented sounds were task-relevant and forward fixation was required, as in our Experiment 2. Ipsilateral pinna movements triggered by onset of these sounds were of the short-latency variety( 21 ms, as noted above). Subsequent, smaller movements were then observed in synchrony with ocular orienting, roughly 400 ms after the fixation point was extinguished( Figure 7). It is long-latency pinna movements of this sort that Populin and Yin, 1998 suggested might be mediated by the SC. Given the major role of the SC in controlling eye fixation( Krauzlis et al., 2017), this structure may also be responsible for sustained maintenance of pinna orientation, such as in Experiment 2. Pinna movements can be triggered by electrical stimulation of the deep and intermediate layers of the SC in accordance with a topographical pattern that is in register with that of eye movements( Stein and Clamann, 1981). 
Lesions of this structure reduce the likelihood of pinna orienting as well as its accuracy, see Czihak et al., 1983. Although monosynaptic connections from SC to auriculomotor neurons in the facial nucleus do exist( Vidal et al., 1988), pinna control is dominated by disynaptic pathways from the SC that include the paralemniscal, oculomotor, or pontine reticular zones, see Henkel and Edwards, 1978; Takeuchi et al., 1979; Vidal et al., 1988. Among these, the paralemniscal zone appears to be the most important, and its auditory input originates in the nearby nucleus sagulum, see Henkel, 1981. Portions of neocortex also play a role in controlling pinna movements. Lesions of auditory cortex reduce the kinematic complexity of pinna orienting and slow its habituation, see Alvarado and Santibañez, 1971. Stimulation and recording studies in the macaque have identified a premotor ear-eye field( area 8B), which is connected with both auditory cortical areas and the SC, see Lanzilotto et al., 2013. These animal neuroanatomy and physiology studies, coupled with Wilson, 1908 seminal report, make it clear that the eyes and pinnae work together during endogenously cued attentional orienting. A recent study in humans by Gruters et al., 2018, showed a close relationship between movements of the left and right eardrums and multiple parameters of task-related left- and right-directed eye movements. It will be important in future research to test whether muscular responses of the middle and outer ears are linked in a coordinated manner to ocular orienting. Furthermore, exploration of the relationship between human auriculomotor activity and subtle markers of covert attention( see the recent review given in van Ede et al., 2019) corticofugal modulation of ascending auditory pathways( Perrot et al., 2006) in endogenous attention, and the neural mechanisms of orienting discussed in the preceding section has scarcely begun. Our results have implications for applied science, as well. They suggest that patterns of auricular muscle activity might serve as an easily accessible correlate of top-down processing in endogenous modes of attention. As such, the described effects might complement electroencephalographic indices of attentional focus( de Cheveigné et al., 2018; Schäfer et al., 2018) in that their sensitivity is exclusively spatial, rather than reflecting a context-specific mixture of modality, feature, location, and object representations. Registration of pinna-orienting might better support near real-time decoding of the attentional focus and, as compared to EEG-based stimulus reconstruction approaches, does not require the exogenous sound source, for example see the discussion in Schäfer et al., 2018. Thus, auricular muscle monitoring might support the decoding of auditory attention in technical applications such as attentionally controlled hearing aids that preferentially amplify sounds the user is attempting to listen to. We wish to underscore, though, that the development of such applications would benefit crucially from a better understanding of how the auditory and visual attention systems interact. We hope that the results presented here will stimulate research in this direction.",
"role": "user"
},
{
"content": "Dogs , cats , monkeys and other animals perk their ears in the direction of sounds they are interested in . Humans and their closest ape relatives , however , appear to have lost this ability . Some humans are able to wiggle their ears , suggesting that some of the brain circuits and muscles that allow automatic ear movements towards sounds are still present . This may be a ‘vestigial feature’ , an ability that is maintained even though it no longer serves its original purpose . Now , Strauss et al . show that vestigial movements of muscles around the ear indicate the direction of sounds a person is paying attention to . In the experiments , human volunteers tried to read a boring text while surprising sounds like a traffic jam , a baby crying , or footsteps played . During this exercise , Strauss et al . recorded the electrical activity in the muscles of their ears to see if they moved in response to the direction the sound came from . In a second set of experiments , the same electrical recordings were made as participants listened to a podcast while a second podcast was playing from a different direction . The individuals’ ears were also recorded using high resolution video . Both sets of experiments revealed tiny involuntary movements in muscles surrounding the ear closest to the direction of a sound the person is listening to . When the participants tried to listen to one podcast and tune out another , they also made ear ‘perking’ movements in the direction of their preferred podcast . The results suggest that movements of the vestigial muscles in the human ear indicate the direction of sounds a person is paying attention to . These tiny movements could be used to develop better hearing aids that sense the electrical activity in the ear muscles and amplify sounds the person is trying to focus on , while minimizing other sounds .",
"role": "assistant"
}
] |
science.bc7_litcovid_topic_classification | science.bc7_litcovid_topic_classification.3691 | [
{
"content": "You will be presented with a paragraph from a COVID-19 literature. Your task is to classify the topics of this piece of text by choosing from the following categories:\n\n- Epidemic Forecasting: The content relates to predictions about the spread of COVID-19.\n- Treatment: The paragraph discusses therapeutic methods or medication for COVID-19.\n- Prevention: The content covers measures to prevent the spread of COVID-19.\n- Mechanism: The paragraph explains the biological mechanisms of the virus.\n- Case Report: The content is a report of a COVID-19 case.\n- Transmission: The paragraph discusses how COVID-19 is transmitted.\n- Diagnosis: The content is about diagnosing COVID-19 in individuals.\n\nYour classification should be provided as an element within a JSON array. The array should be formatted with topics enclosed in double quotes and separated by commas, following the JSON format, for example: [\"Topic1\", \"Topic2\", ...]. Include all applicable categories within the array. \nYour output should be a valid JSON array, and no additional text should be included in your response.\n\nArticle:\n\nPURPOSE: The novel coronavirus 2019 (COVID-19) outbreak which was first reported in Wuhan, China has been declared a pandemic by the World Health Organization on March 11, 2020. Otorhinolaryngologists deal intimately with pathologies of the head and neck region and upper respiratory tract and have been reported as a vulnerable group of healthcare workers who may be more susceptible to COVID-19 nosocomial infection. METHODS: In this article, we provide a comprehensive overview of the adaptations of Singapore's largest tertiary Otorhinolaryngology department during the COVID-19 outbreak. This was undertaken via an evidence-based approach. The relevant medical literature and evidence underlying our adaptations are highlighted. RESULTS: A four-pronged strategy including (1) personnel segregation, (2) triaging and decantment, (3) use of personal protective equipment and (4) changes in clinical practice was employed. The strategy was bolstered by drawing upon a collective learnt experience from the 2003 Severe Acute Respiratory Syndrome (SARS) outbreak. CONCLUSION: A rigorous framework which can preserve operationality while navigating the heightened risks during this outbreak is critical for every Otorhinolaryngology department. As the pandemic continues to evolve and more scientific reports of this disease are made available, approaches will need to be morphed.",
"role": "user"
},
{
"content": "[\"Prevention\"]",
"role": "assistant"
}
] |
science.multixscience_multidoc_summarization | science.multixscience_multidoc_summarization.20421 | [
{
"content": "You are provided with the abstract of a scientific paper and the abstracts of the papers it references. Your task is to write a Related Work section for the main paper. This section should concisely summarize and integrate the key points from the cited abstracts, reflecting how they relate to the main paper's contributions and context. In your summary, use numerical citation markers like [1], [2], etc., to refer to the corresponding papers.\n\nMain Abstract:\nRecent advances in image captioning task have led to increasing interests in video captioning task. However, most works on video captioning are focused on generating single input of aggregated features, which hardly deviates from image captioning process and does not fully take advantage of dynamic contents present in videos. We attempt to generate video captions that convey richer contents by temporally segmenting the video with action localization, generating multiple captions from multiple frames, and connecting them with natural language processing techniques, in order to generate a story-like caption. We show that our proposed method can generate captions that are richer in contents and can compete with state-of-the-art method without explicitly using video-level features as input.\n\nCited Abstract(s):\n[1]: Books are a rich source of both fine-grained information, how a character, an object or a scene looks like, as well as high-level semantics, what someone is thinking, feeling and how these states evolve through a story. This paper aims to align books to their movie releases in order to provide rich descriptive explanations for visual content that go semantically far beyond the captions available in current datasets. To align movies and books we exploit a neural sentence embedding that is trained in an unsupervised way from a large corpus of books, as well as a video-text neural embedding for computing similarities between movie clips and sentences in the book. We propose a context-aware CNN to combine information from multiple sources. We demonstrate good quantitative performance for movie book alignment and show several qualitative examples that showcase the diversity of tasks our model can be used for.",
"role": "user"
},
{
"content": "In terms of the story-like characteristic of the outcome, [1] are more intimate to our motivation. By aligning scenes with subtitles, and subtitles with passages in the books, they were able to generate story-like description of the scenes. This, however, has an obvious drawback that both books and movies should exist, along with subtitles. Alternatively, one may be able to rely on story-like, human-written ground truth captions for images or videos, but such data are not yet sufficiently available in the research community, and are expensive to collect. Since our work treats the frames from each segment independently, we do not need to rely on additional textual data to generate a story-like caption.",
"role": "assistant"
}
] |
science.covid_deepset_qa | science.covid_deepset_qa.951 | [
{
"content": "You will be shown a paragraph from a biomedical research paper, followed by a question about the paper. Please extract text from the paragraph to answer the question. Do not include any text in your response other than the answer.\n\nParagraph: A Google search for 2019-nCoV diagnostics (as of 6 February 2020; Table S2 ) yielded five webpage links from government and international bodies with official information and guidelines (WHO, Europe CDC, US CDC, US FDA), three webpage links on diagnostic protocols and scientific commentaries, and five webpage links on market news and press releases. Six protocols for diagnostics using reverse transcriptase polymerase chain reaction (RT-PCR) from six countries were published on WHO's website [9] . Google search for 2019-nCoV vaccines yielded 19 relevant articles. With the emergence of 2019-nCoV, real time RT-PCR remains the primary means for diagnosing the new virus strain among the many diagnostic platforms available ( [10] [11] [12] [13] [14] [15] [16] [17] [18] [19] ; Table S3 ). Among the 16 diagnostics studies selected, one study discussed the use of RT-PCR in diagnosing patients with 2019-nCoV [11] ( Table 1 ). The period and type of specimen collected for RT-PCR play an important role in the diagnosis of 2019-nCoV. It was found that the respiratory specimens were positive for the virus while serum was negative in the early period. It has also suggested that in the early days of illness, patients have high levels of virus despite the mild symptoms. Apart from the commonly used RT-PCR in diagnosing MERS-CoV, four studies identified various diagnostic methods such as reverse transcription loop-mediated isothermal amplification (RT-LAMP), RT-insulated isothermal PCR (RT-iiPCR) and a one-step rRT-PCR assay based on specific TaqMan probes. RT-LAMP has similar sensitivity as real time RT-PCR. It is also highly specific and is used to detect MERS-CoV. It is comparable to the usual diagnostic tests and is rapid, simple and convenient. Likewise, RT-iiPCR and a one-step rRT-PCR assay have also shown similar sensitivity and high specificity for MER-CoV. Lastly, one study focused on the validation of the six commercial real RT-PCR kits, with high accuracy. Although real time RT-PCR is a primary method for diagnosing MERS-CoV, high levels of PCR inhibition may hinder PCR sensitivity (Table 1) . There are eleven studies that focus on SARS-CoV diagnostic testing (Table 1) . These papers described diagnostic methods to detect the virus with the majority of them using molecular testing for diagnosis. Comparison between the molecular test (i.e RT-PCR) and serological test (i.e., ELISA) showed that the molecular test has better sensitivity and specificity. Hence, enhancements to the current molecular test were conducted to improve the diagnosis. Studies looked at using nested PCR to include a pre-amplification step or incorporating N gene as an additional sensitive molecular marker to improve on the sensitivity (Table 1 ). In addition, there are seven potential rapid diagnostic kits (as of 24 January 2020; Table 2 ) available on the market for 2019-nCoV. Six of these are only for research purposes. Only one kit from Beijing Genome Institute (BGI) is approved for use in the clinical setting for rapid diagnosis. Most of the kits are for RT-PCR. There were two kits (BGI, China and Veredus, Singapore) with the capability to detect multiple pathogens using sequencing and microarray technologies, respectively. 
The limit of detection of the enhanced realtime PCR method was 10 2 -fold higher than the standard real-time PCR assay and 10 7fold higher than conventional PCR methods In the clinical aspect, the enhanced realtime PCR method was able to detect 6 cases of SARS-CoV positive samples that were not confirmed by any other assay [25] • The real time PCR has a threshold sensitivity of 10 genome equivalents per reaction and it has a good reproducibility with the inter-assay coefficients of variation of 1.73 to 2.72%. • 13 specimens from 6 patients were positive with viral load range from 362 to 36,240,000 genome equivalents/mL. The real-time RT-PCR reaction was more sensitive than the nested PCR reaction, as the detection limit for the nested PCR reaction was about 10 3 genome equivalents in the standard cDNA control. [34] Real-time reverse-transcription PCR (rRT-PCR); RNA-dependent RNA polymerase (RdRp); open reading frame 1a (ORF1a); Loop-mediated isothermal amplification (LAMP); enzyme-linked immunosorbent assay (ELISA); immunofluorescent assay (IFA); immunochromatographic test (ICT); nasopharyngeal aspirate (NPA). With the emergence of 2019-nCoV, there are about 15 potential vaccine candidates in the pipeline globally (Table 3 ), in which a wide range of technology (such as messenger RNA, DNA-based, nanoparticle, synthetic and modified virus-like particle) was applied. It will likely take about a year for most candidates to start phase 1 clinical trials except for those funded by Coalition for Epidemic Preparedness Innovations (CEPI). However, the kit developed by the BGI have passed emergency approval procedure of the National Medical Products Administration, and are currently used in clinical and surveillance centers of China [40] . Of the total of 570 unique studies on 2019-nCoV, SARS CoV or MERS-CoV vaccines screened, only four were eventually included in the review. Most studies on SARS and MERS vaccines were excluded as they were performed in cell or animal models ( Figure 1 ). The four studies included in this review were Phase I clinical trials on SARS or MERS vaccines (Table 4 ) [44] [45] [46] [47] . There were no studies of any population type (cell, animal, human) on the 2019-nCoV at the point of screening. The published clinical trials were mostly done in United States except for one on the SARS vaccine done in China [44] . All vaccine candidates for SARS and MERS were reported to be safe, well-tolerated and able to trigger the relevant and appropriate immune responses in the participants. In addition, we highlight six ongoing Phase I clinical trials identified in the ClinicalTrials.gov register ( [48, 49] ); Table S4 ) [50] [51] [52] . These trials are all testing the safety and immunogenicity of their respective MERS-CoV vaccine candidates but were excluded as there are no results published yet. The trials are projected to complete in December 2020 (two studies in Russia [50, 51] ) and December 2021 (in Germany [52] ). Existing literature search did not return any results on completed 2019-nCoV trials at the time of writing. Among 23 trials found from the systematic review (Table 5) , there are nine clinical trials registered under the clinical trials registry (ClinicalTrials.gov) for 2019-nCoV therapeutics [53] [54] [55] [56] [57] [58] [59] [60] [61] . Of which five studies on hydroxychloroquine, lopinavir plus ritonavir and arbidol, mesenchymal stem cells, traditional Chinese medicine and glucocorticoid therapy usage have commenced recruitment. 
The remaining four studies encompass investigation of antivirals, interferon atomization, darunavir and cobicistat, arbidol, and remdesivir usage for 2019-nCoV patients (Table 5) . Seroconversion measured by S1-ELISA occurred in 86% and 94% participants after 2 and 3 doses, respectively, and was maintained in 79% participants up to study end at week 60. Neutralising antibodies were detected in 50% participants at one or more time points during the study, but only 3% maintained neutralisation activity to end of study. T-cell responses were detected in 71% and 76% participants after 2 and 3 doses, respectively. There were no differences in immune responses between dose groups after 6 weeks and vaccine-induced humoral and cellular responses were respectively detected in 77% and 64% participants at week 60. [47] Molecules developed by the university scientists inhibit two coronavirus enzymes and prevent its replication. The discovered drug targets are said to be more than 95% similar to enzyme targets found on the SARS virus. Researchers note that identified drugs may not be available to address the ongoing outbreak but they hope to make it accessible for future outbreaks. [85] Besides the six completed randomized controlled trials (RCT) selected from the systematic review (Table 6) , there is only one ongoing randomized controlled trial targeted at SARS therapeutics [92] . The studies found from ClinicalTrials.gov have not been updated since 2013. While many prospective and retrospective cohort studies conducted during the epidemic centered on usage of ribavirin with lopinavir/ritonavir or ribavirin only, there has yet to be well-designed clinical trials investigating their usage. Three completed randomized controlled trials were conducted during the SARS epidemic-3 in China, 1 in Taiwan and 2 in Hong Kong [93] [94] [95] [96] [97] . The studies respectively investigated antibiotic usage involving 190 participants, combination of western and Chinese treatment vs. Chinese treatment in 123 participants, integrative Chinese and Western treatment in 49 patients, usage of a specific Chinese medicine in four participants and early use of corticosteroid in 16 participants. Another notable study was an open non-randomized study investigating ribavirin/lopinavir/ritonavir usage in 152 participants [98] . One randomized controlled trial investigating integrative western and Chinese treatment during the SARS epidemic was excluded as it was a Chinese article [94] . There is only one ongoing randomized controlled trial targeted at MERS therapeutics [99] . It investigates the usage of Lopinavir/Ritonavir and Interferon Beta 1B. Likewise, many prospective and retrospective cohort studies conducted during the epidemic centered on usage of ribavirin with lopinavir/ritonavir/ribavirin, interferon, and convalescent plasma usage. To date, only one trial has been completed. One phase 1 clinical trial investigating the safety and tolerability of a fully human polyclonal IgG immunoglobulin (SAB-301) was found in available literature [46] . The trial conducted in the United States in 2017 demonstrated SAB-301 to be safe and well-tolerated at single doses. Another trial on MERS therapeutics was found on ClinicalTrials.gov-a phase 2/3 trial in the United States evaluating the safety, tolerability, pharmacokinetics (PK), and immunogenicity on coadministered MERS-CoV antibodies REGN3048 & REGN3051 [100]. Rapid diagnostics plays an important role in disease and outbreak management. 
The fast and accurate diagnosis of a specific viral infection enables prompt and accurate public health surveillance, prevention and control measures. Local transmission and clusters can be prevented or delayed by isolation of laboratory-confirmed cases and their close contacts quarantined and monitored at home. Rapid diagnostic also facilitates other specific public health interventions such as closure of high-risk facilities and areas associated with the confirmed cases for prompt infection control and environmental decontamination [11, 101] . Laboratory diagnosis can be performed by: (a) detecting the genetic material of the virus, (b) detecting the antibodies that neutralize the viral particles of interest, (c) detecting the viral epitopes of interest with antibodies (serological testing), or (d) culture and isolation of viable virus particles. The key limitations of genetic material detection are the lack of knowledge of the presence of viable virus, the potential cross-reactivity with non-specific genetic regions and the short timeframe for accurate detection during the acute infection phase. The key limitations of serological testing is the need to collect paired serum samples (in the acute and convalescent phases) from cases under investigation for confirmation to eliminate potential cross-reactivity from non-specific antibodies from past exposure and/or infection by other coronaviruses. The limitation of virus culture and isolation is the long duration and the highly specialized skills required of the technicians to process the samples. All patients recovered. Significantly shorted time from the disease onset to the symptom improvement in treatment (5.10 ± 2.83 days) compared to control group (7.62 ± 2.27 days) (p < 0.05) No significant difference in blood routine improvement, pulmonary chest shadow in chest film improvement and corticosteroid usgae between the 2 groups. However, particularly in the respect of improving clinical symptoms, elevating quality of life, promoting immune function recovery, promoting absorption of pulmonary inflammation, reducing the dosage of cortisteroid and shortening the therapeutic course, treatment with integrative chinese and western medicine treatment had obvious superiority compared with using control treatment alone. Single infusions of SAB-301 up to 50 mg/kg appear to be safe and well-tolerated in healthy participants. [46] Where the biological samples are taken from also play a role in the sensitivity of these tests. For SARS-CoV and MERS-CoV, specimens collected from the lower respiratory tract such as sputum and tracheal aspirates have higher and more prolonged levels of viral RNA because of the tropism of the virus. MERS-CoV viral loads are also higher for severe cases and have longer viral shedding compared to mild cases. Although upper respiratory tract specimens such as nasopharyngeal or oropharyngeal swabs can be used, they have potentially lower viral loads and may have higher risk of false-negatives among the mild MERS and SARS cases [102, 103] , and likely among the 2019-nCoV cases. The existing practices in detecting genetic material of coronaviruses such as SARS-CoV and MERS-CoV include (a) reverse transcription-polymerase chain reaction (RT-PCR), (b) real-time RT-PCR (rRT-PCR), (c) reverse transcription loop-mediated isothermal amplification (RT-LAMP) and (d) real-time RT-LAMP [104] . 
Nucleic amplification tests (NAAT) are usually preferred as in the case of MERS-CoV diagnosis as it has the highest sensitivity at the earliest time point in the acute phase of infection [102] . Chinese health authorities have recently posted the full genome of 2019-nCoV in the GenBank and in GISAID portal to facilitate in the detection of the virus [11] .\n\nQuestion: Why is RT-PCR not the best method sometimes?",
"role": "user"
},
{
"content": "high levels of PCR inhibition may hinder PCR sensitivity",
"role": "assistant"
}
] |
science.scicite_classification | science.scicite_classification.6780 | [
{
"content": "You will be presented with a citation sentence from a research paper. Classify the intent behind this citation by choosing from one of the following categories:\n\n- Method: Cites the paper for its methodology or procedure.\n- Background: Cites the paper to provide background information.\n- Result: Cites the paper for its findings or results.\n\nYour answer should be a single word from the following list of options: [\"Method\", \"Background\", \"Result\"]. Do not include any other text in your response.\n\nCitation sentence:\n\nHowever, leaf detachment experiments demonstrated that it takes 4 h in cucumber (Rasmussen et al., 1991) and 4–6 h in Arabidopsis (Truman et al., 2007; Chaturvedi et al., 2012) for the SAR signal to move out of the induced leaf.",
"role": "user"
},
{
"content": "Background",
"role": "assistant"
}
] |
science.chia_ner | science.chia_ner.1181 | [
{
"content": "You will be shown inclusion or exclusion criteria for a clinical trial. Given these criteria, your task is to extract all unique entities of the following types: [\"Person\", \"Condition\", \"Value\", \"ReferencePoint\", \"Temporal\", \"Qualifier\", \"Observation\", \"Visit\", \"Measurement\", \"Mood\", \"Device\", \"Scope\", \"Negation\", \"Drug\", \"Procedure\", \"Multiplier\"].\n\nPlease return the output as a JSON object of the format: {\"Condition\": [\"breast cancer\", ...], \"Device\": [\"ports\", ...], \"Drug\": [\"Atenolol\", ...], \"Measurement\": [\"blood pressure\", ...], \"Mood\": [\"suspected\", ...], \"Multiplier\": [\"more than two\", ...], \"Negation\": [\"no\", ...], \"Observation\": [\"history\", ...], \"Person\": [\"male\", ...], \"Procedure\": [\"surgery\", ...], \"Qualifier\": [\"severe\", ...], \"ReferencePoint\": [\"enrollment\", ...], \"Scope\": [\"infection\", ...], \"Temporal\": [\"one day\", ...], \"Value\": [\"50%\", ...], \"Visit\": [\"ICU\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nPremenopausal women\n18-35 years old\nFSH levels < 10 mIU/ml\nAFC> 10\nRegular cycles\nBMI < 28\nSigned informed consent",
"role": "user"
},
{
"content": "{\"Condition\": [\"Regular cycles\"], \"Device\": [], \"Drug\": [], \"Measurement\": [\"AFC\", \"FSH levels\", \"BMI\"], \"Mood\": [], \"Multiplier\": [], \"Negation\": [], \"Observation\": [], \"Person\": [\"old\", \"Premenopausal\", \"women\"], \"Procedure\": [], \"Qualifier\": [], \"ReferencePoint\": [], \"Scope\": [], \"Temporal\": [], \"Value\": [\"\\u003c 10 mIU/ml\", \"\\u003c 28\", \"18-35 years\", \"\\u003e 10\"], \"Visit\": []}",
"role": "assistant"
}
] |
science.pico_ner | science.pico_ner.1223 | [
{
"content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Participant\", \"Outcome\", \"Intervention\"].\n\nPlease return the output as a JSON object of the format: {\"Participant\": [\"patients with COPD\", ...], \"Intervention\": [\"Atenolol\", ...], \"Outcome\": [\"blood pressure\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nFactors that influence cancer patients ' anxiety following a medical consultation : impact of a communication skills training programme for physicians . BACKGROUND No study has yet assessed the impact of physicians ' skills acquisition after a communication skills training programme on the evolution of patients ' anxiety following a medical consultation . This study aimed to compare the impact , on patients ' anxiety , of a basic communication skills training programme ( BT ) and the same programme consolidated by consolidation workshops ( CW ) , and to investigate physicians ' communication variables associated with patients ' anxiety . PATIENTS AND METHODS Physicians , after attending the BT , were randomly assigned to CW or to a waiting list . The control group was not a non-intervention group . Consultations with a cancer patient were recorded . Patients ' anxiety was assessed with the State Trait Anxiety Inventory before and after a consultation . Communication skills were analysed according to the Cancer Research Campaign Workshop Evaluation Manual . RESULTS No statistically significant change over time and between groups was observed . Mixed-effects modelling showed that a decrease in patients ' anxiety was linked with screening questions ( P = 0.045 ) , physicians ' satisfaction about support given ( P = 0.004 ) and with patients ' distress ( P < 0.001 ) . An increase in anxiety was linked with breaking bad news ( P = 0.050 ) and with supportive skills ( P = 0.013 ) . No impact of the training programme was observed . CONCLUSIONS This study shows the influence of some communication skills on the evolution of patients ' anxiety . Physicians should be aware of these influences .",
"role": "user"
},
{
"content": "{\"Intervention\": [\"consolidation workshops\", \"communication skills training programme\", \"BT\", \"basic communication skills training programme ( BT )\", \"CW\", \"waiting list\"], \"Outcome\": [\"physicians \\u0027 satisfaction\", \"anxiety\", \"time\", \"patients \\u0027 anxiety\", \"Communication skills\", \"State Trait Anxiety Inventory\"], \"Participant\": [\"Physicians\", \"cancer patients \\u0027 anxiety\"]}",
"role": "assistant"
}
] |
science.multixscience_multidoc_summarization | science.multixscience_multidoc_summarization.3990 | [
{
"content": "You are provided with the abstract of a scientific paper and the abstracts of the papers it references. Your task is to write a Related Work section for the main paper. This section should concisely summarize and integrate the key points from the cited abstracts, reflecting how they relate to the main paper's contributions and context. In your summary, use numerical citation markers like [1], [2], etc., to refer to the corresponding papers.\n\nMain Abstract:\nWhen reading a text, it is common to become stuck on unfamiliar words and phrases, such as polysemous words with novel senses, rarely used idioms, internet slang, or emerging entities. If we humans cannot figure out the meaning of those expressions from the immediate local context, we consult dictionaries for definitions or search documents or the web to find other global context to help in interpretation. Can machines help us do this work? Which type of context is more important for machines to solve the problem? To answer these questions, we undertake a task of describing a given phrase in natural language based on its local and global contexts. To solve this task, we propose a neural description model that consists of two context encoders and a description decoder. In contrast to the existing methods for non-standard English explanation [Ni+ 2017] and definition generation [Noraset+ 2017; Gadetsky+ 2018], our model appropriately takes important clues from both local and global contexts. Experimental results on three existing datasets (including WordNet, Oxford and Urban Dictionaries) and a dataset newly created from Wikipedia demonstrate the effectiveness of our method over previous work.\n\nCited Abstract(s):\n[1]: Unsupervised word sense disambiguation (WSD) methods are an attractive approach to all-words WSD due to their non-reliance on expensive annotated data. Unsupervised estimates of sense frequency have been shown to be very useful for WSD due to the skewed nature of word sense distributions. This paper presents a fully unsupervised topic modelling-based approach to sense frequency estimation, which is highly portable to different corpora and sense inventories, in being applicable to any part of speech, and not requiring a hierarchical sense inventory, parsing or parallel text. We demonstrate the effectiveness of the method over the tasks of predominant sense learning and sense distribution acquisition, and also the novel tasks of detecting senses which aren’t attested in the corpus, and identifying novel senses in the corpus which aren’t captured in the sense inventory.\n\n[2]: We address the problem of unknown word sense detection: the identification of corpus occurrences that are not covered by a given sense inventory. We model this as an instance of outlier detection, using a simple nearest neighbor-based approach to measuring the resemblance of a new item to a training set. In combination with a method that alleviates data sparseness by sharing training data across lemmas, the approach achieves a precision of 0.77 and recall of 0.82.\n\n[3]: Systems and methods for word sense disambiguation, including discerning one or more senses or occurrences, distinguishing between senses or occurrences, and determining a meaning for a sense or occurrence of a subject term. In a collection of documents containing terms and a reference collection containing at least one meaning associated with a term, the method includes forming a vector space representation of terms and documents. 
In some embodiments, the vector space is a latent semantic index vector space. In some embodiments, occurrences are clustered to discern or distinguish a sense of a term. In preferred embodiments, meaning of a sense or occurrence is assigned based on either correlation with an external reference source, or proximity to a reference source that has been indexed into the space.",
"role": "user"
},
{
"content": "Our task is closely related to word sense disambiguation () [3] , which identifies a pre-defined sense for the target word with its context. Although we can use it to solve our task by retrieving the definition sentence for the sense identified by , it requires a substantial amount of training data to handle a different set of meanings of each word, and cannot handle words (or senses) which are not registered in the dictionary. Although some studies have attempted to detect novel senses of words for given contexts [2] [1] , they do not provide definition sentences. Our task avoids these difficulties in by directly generating descriptions for phrases or words with their contexts. It also allows us to flexibly tailor a fine-grained definition for the specific context.",
"role": "assistant"
}
] |
science.scientific_papers_summarization_single_doc_pubmed | science.scientific_papers_summarization_single_doc_pubmed.7886 | [
{
"content": "You will be presented with the full text of a biomedical research paper. Please write an abstract for this paper. Your response should only include the abstract and no additional text.\n\nPaper text:\nabdomen is the third most commonly injured region of the body , involving 20% of all injured civilians who undergo surgical intervention . penetrating wounds represent 25% of all urban traumas which include gunshot , shotgun and stab wounds .\nthese injuries are potentially life threatening and often warrant exploration . penetrating wounds presenting with in situ objects\nare rarely encountered , and anesthetic and surgical management is complex due to inability in positioning the patient and the risk of sudden hemorrhage .\nthe anaesthetic management of these types of injuries is challenging with respect to airway control , urgency for the surgery and associated hemodynamic instability .\nwe report the management of a patient , who presented with through and through penetrating injury of the abdomen with an intact penetrating object projecting posteriorly .\na 50-year - old male patient was brought to the emergency medicine department ( emd ) with an in situ 2.8 heavy metallic bar projecting over the back in a through and through penetrating abdomen injury following road traffic accident ( rta ) .\nthe entry and exit wounds were in the back in the midline at l3 level and right iliac fossa , respectively [ figures 1 and 2 ] .\nthe penetrating heavy metallic bar had to be cut from the three wheeler vehicle prior to transporting the patient to hospital . penetrated object visible in anaesthetized patient . note a )\nicd tube b ) inability to support the fractured pelvis . penetrating foreign body and nylon bag visible through anterior abdominal wall the patient presented with acute dyspnea , cough and abdominal pain .\nhe was in hypovolemic shock with blood pressure 76/30 mm of hg and pulse rate 140/minute and with cold extremities .\nauscultation revealed decreased air entry in the right thorax with oxygen saturation on room air of 80% .\noxygen supplementation ( 8 l / minute ) with face mask and intravenous fluids were started .\nabdomen displayed an iron bar which had entered through the back , and at its tip , another foreign body ( nylon bag ! ) was visible through an open anterior abdominal wall wound [ figures 13 ] .\nintercostal drainage ( icd ) was planned but insertion was difficult owing to difficulty in positioning the restless patient .\nthe patient was rushed immediately to operating room ( or ) in a lateral position .\nhe was positioned supine over two operating tables arranged in parallel [ figure 4 ] , such that the metallic rod was positioned in between them .\nicd was placed in the right 4th intercostal space and simultaneously anaesthesia was induced with intravenous ketamine 100 mg .\nsuccinylcholine 75 mg i.v . 
was administered and modified rapid sequence tracheal intubation was performed .\nanaesthesia was maintained with 50% air - oxygen mixture , fentanyl 2 mcg / kg , sevoflurane and vecuronium .\nvolume replacement with colloids and crystalloids along with dopamine 5 - 8 mcg / kg / minute were needed to maintain intraoperative blood pressures .\na central venous catheter was placed in the right subclavian vein and central venous pressure ( cvp ) was monitored .\ntwo and a half liters of crystalloids , 2 units of plasma and 4 units of whole blood were replaced .\nthe patient was repositioned semi - laterally so as to visualize both ends of the foreign body which was then accessed and removed by anterolateral laparotomy approach .\nlaparotomy revealed retroperitoneal haematoma , lower renal pole injury , multiple mesenteric vascular tears and bowel injury .\npostoperatively , the patient was mechanically ventilated for 8 hours and trachea was extubated , once hemodynamically stable .\nrusted metallic bar with nylon bag operating room ( or ) tables in parallel , avoiding the foreign body hinging\nalthough penetrating injuries of abdomen are common , presentation with projecting in - situ foreign bodies is rare .\nvarious types of objects causing such injury include glass , knife , barbed wire , bearing scrapper , plank , forklift , broomstick and metal hooks .\nsurvival following a through and through penetrating thoraco - abdominal injury with a large heavy iron rod is rare and rarer still when associated with polytrauma .\ndifficulties in managing such a patient are manifold due to hypoxia , hemorrhagic shock , urgency of surgical intervention and inability to position the patient supine .\nairway control may be challenging if the projecting object in the back precludes supine positioning and any inadvertent manipulation can result in sudden collapse of the patient . resuscitation and close monitoring prior to and during surgery , especially at\nthe time of removal of penetrating object , is vital with anticipation of major vascular injuries .\nthere was delay in reaching the hospital as the penetrated metallic rod had to be cut from the vehicle .\nprogressive dyspnea , due to chest injury with fracture ribs mandated emergency icd insertion to relieve hemopneumothorax . however , icd insertion was not possible in the emd as the intoxicated , full stomach patient , with worsening hypoxia and hypotension was getting progressively restless .\ncvp measurement and stomach tube insertion ( to empty the gastric contents ) , though ideally required prior to anaesthetic administration , were not possible due to the above reasons .\ninduction of anaesthesia in the emd was required , but inability to position the patient supine was a major hurdle .\ntracheal intubation with the patient in lateral position was not practical as the long metal bar was projecting and there was associated chest injury and pelvic fracture . to position the patient supine without disturbing the projecting metal rod on the back , a two - table technique \ntwo parallel tables were arranged and patient was positioned in such a way that the projecting rod takes the space between the two .\nthis two - table technique is a safe and simple solution for this life - threatening situation with projecting objects over back of chest and abdomen . 
during the removal of foreign body , anticipation of sudden gushing out of blood from an injured major vessel\nsurgical removal of the foreign body may relieve the plugging effect of the foreign body on the breach in the injured blood vessel wall .\nfortuitously , our patient did not have any injury to the inferior vena cava or aorta although such association is well known . continued hypotension may indicate bleeding elsewhere and our patient also had an associated pelvic fracture with hematoma .\nmodified rapid sequence induction - intubation with ketamine in this circumstance may be the choice.[57 ] emergently placed icd relieved the pneumothorax , improved the blood pressure and saturation .\nright side was preferred for subclavian central venous catheter as icd tube was already in place for the pneumothorax .\ndopamine infusion was started along with volume replacement to maintain adequate blood pressure and organ perfusion .\nadvanced trauma life support ( atls ) curriculum advocates rapid infusion of up to 2 l of warmed isotonic crystalloid solution for any hypovolemic , hypothermic patient to restore normal bp and urine output .",
"role": "user"
},
{
"content": "penetrating abdominal injuries are potentially life threatening due to the associated hemorrhagic shock and visceral injury . through and through penetrating injury with polytrauma is rarely encountered . \n we report a case presenting with in situ projecting heavy metallic rod in a through and through penetrating abdominal injury along with foreign body in a road traffic accident . \n anaesthetic management was difficult due to inability to position in supine , rapidly progressing hemorrhagic shock and hypoxia due hemopneumothorax . \n two operating tables were used with adequate intervening space to accommodate the posteriorly projecting metallic rod during intubation in supine position . \n intensive monitoring and resuscitation resulted in uneventful successful outcome .",
"role": "assistant"
}
] |
science.covidfact_entailment | science.covidfact_entailment.429 | [
{
"content": "You will be shown a claim related to the COVID-19 pandemic, and the abstract of a biomedical research paper. Each sentence from the abstract will be on a separate line. Your task is to return a JSON object with two fields:\n\n- \"verdict\": The fact-checking verdict. If the information in the abstract supports the claim, write \"SUPPORT\". If the abstract contradicts the claim, write \"CONTRADICT\".\n- \"evidence\": An array of sentences providing evidence for the verdict. Please copy all relevant sentences verbatim from the abstract.\n\nFor instance, if the model were given the claim \"the COVID vaccine is safe for healthy adults\", the output might be:\n{\n \"verdict\": \"SUPPORT\",\n \"evidence\": [\"Our findings indicate that the risks of severe COVID vaccine side effects are low for healthy adults.\"]\n}\n\nClaim: Triple combination nitazoxanide, ribavirin, and hydroxychloroquine decrease in the multiplicative reduction of in vitro sars-cov-2 viral replication.\n\nAbstract:\nBackground An immediate unmet medical need exists to test and develop existing approved drugs against SARS-COV-2.\nDespite many efforts, very little progress has been made regarding finding low-cost oral medicines that can be made widely available worldwide to address the global pandemic.\nMethods We sought to examine if a triple combination of nitazoxanide (using its active metabolite tizoxanide), ribavirin, and hydroxychloroquine would lead to a multiplicative effects on viral replication of SARS-COV-2 resulting in a significant reduction of virus yield using VERO E6 cells as a model of viral replication.\nResults Virus yield measured in PFU/ml was ~ 2 logs lower with triple combination versus either drug alone, resulting in the prolongation of time to peak cytopathic effects (CPE).\nThe time to produce 50% CPE increased from 2.8 days for viral controls versus 5.3 days for triple combination therapy.\nFinally, for each 1-log reduction in virus yield 24 hours post-infection, there was an additional 0.7-day delay in onset of CPE.\nConclusions A triple combination of tizoxanide, ribavirin, and hydroxychloroquine produced a reduction in SARS-COV-2 viral replication in Vero E6 cells, warranting exploration in additional cell lines as well as human clinical trials.",
"role": "user"
},
{
"content": "{\"verdict\": \"CONTRADICT\", \"evidence\": [\"Conclusions A triple combination of tizoxanide, ribavirin, and hydroxychloroquine produced a reduction in SARS-COV-2 viral replication in Vero E6 cells, warranting exploration in additional cell lines as well as human clinical trials.\"]}",
"role": "assistant"
}
] |
science.medmentions_ner | science.medmentions_ner.1257 | [
{
"content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Finding\", \"Bacterium\", \"ProfessionalOrOccupationalGroup\", \"InjuryOrPoisoning\", \"Eukaryote\", \"BodySystem\", \"Virus\", \"ResearchActivity\", \"BodySubstance\", \"PopulationGroup\", \"Chemical\", \"AnatomicalStructure\", \"SpatialConcept\", \"HealthCareActivity\", \"Organization\", \"BiomedicalOccupationOrDiscipline\", \"Food\", \"ClinicalAttribute\", \"MedicalDevice\", \"BiologicFunction\", \"IntellectualProduct\"].\n\nPlease return the output as a JSON object of the format: {\"Virus\": [\"HIV\", ...], \"Bacterium\": [\"MRSA\", ...], \"AnatomicalStructure\": [\"Lung\", ...], \"BodySystem\": [\"CNS\", ...], \"BodySubstance\": [\"Serum\", ...], \"Finding\": [\"Headache\", ...], \"InjuryOrPoisoning\": [\"Fracture\", ...], \"BiologicFunction\": [\"Death\", ...], \"HealthCareActivity\": [\"Biopsy\", ...], \"ResearchActivity\": [\"Clinical trial\", ...], \"MedicalDevice\": [\"Lenses\", ...], \"SpatialConcept\": [\"Camps\", ...], \"BiomedicalOccupationOrDiscipline\": [\"Forensic medicine\", ...], \"Organization\": [\"WHO\", ...], \"ProfessionalOrOccupationalGroup\": [\"Provider\", ...], \"PopulationGroup\": [\"Swimmers\", ...], \"Chemical\": [\"Gold\", ...], \"Food\": [\"Rice\", ...], \"IntellectualProduct\": [\"RPAM\", ...], \"ClinicalAttribute\": [\"Biomarker\", ...], \"Eukaryote\": [\"Dogs\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nElectrospray deposition device used to precisely control the matrix crystal to improve the performance of MALDI MSI MALDI MSI has been recently applied as an innovative tool for detection of molecular distribution within a specific tissue. MALDI MSI requires deposition of an organic compound, known as matrix, on the tissue of interest to assist analyte desorption and ionization, in which the matrix crystal homogeneity and size greatly influence the imaging reproducibility and spatial resolution in MALDI MSI. In this work, a homemade electrospray deposition device was developed for deposition of matrix in MALDI MSI. The device could be used to achieve 1 μm homogeneous matrix crystals in MALDI MSI analysis. Moreover, it was found, for the first time, that the electrospray deposition device could be used to precisely control the matrix crystal size, and the imaging spatial resolution was increased greatly as the matrix crystals size becoming smaller. In addition, the easily-built electrospray deposition device was durable for acid, base or organic solvent, and even could be used for deposition of nanoparticles matrix, which made it unparalleled for MALDI MSI analysis. The feasibility of the electrospray deposition device was investigated by combination with MALDI FTICR MSI to analyze the distributions of lipids in mouse brain and liver cancer tissue section.",
"role": "user"
},
{
"content": "{\"AnatomicalStructure\": [\"tissue\"], \"Bacterium\": [], \"BiologicFunction\": [\"brain\", \"distributions\", \"distribution\", \"liver cancer\"], \"BiomedicalOccupationOrDiscipline\": [], \"BodySubstance\": [], \"BodySystem\": [], \"Chemical\": [\"crystals\", \"lipids\", \"crystal\", \"organic solvent\", \"matrix\", \"organic compound\", \"acid\", \"base\"], \"ClinicalAttribute\": [], \"Eukaryote\": [\"mouse\"], \"Finding\": [], \"Food\": [], \"HealthCareActivity\": [\"analysis\", \"detection\", \"imaging\", \"MALDI MSI\", \"MALDI FTICR MSI\"], \"InjuryOrPoisoning\": [], \"IntellectualProduct\": [], \"MedicalDevice\": [\"Electrospray deposition device\", \"electrospray deposition device\", \"device\"], \"Organization\": [], \"PopulationGroup\": [], \"ProfessionalOrOccupationalGroup\": [], \"ResearchActivity\": [], \"SpatialConcept\": [\"spatial\", \"size\"], \"Virus\": []}",
"role": "assistant"
}
] |
science.genia_ner | science.genia_ner.1324 | [
{
"content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"RNA\", \"CellType\", \"CellLine\", \"Protein\", \"DNA\"].\n\nPlease return the output as a JSON object of the format: {\"DNA\" : [\"TAL1\", ...], \"RNA\": [\"mRNAs\", ...], \"CellLine\": [\"T cells\", ...], \"CellType\": [\"monocytes\", ...], \"Protein\": [\"IL-2\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nEffect of environmental estrogens on IL-1beta promoter activity in a macrophage cell line. Environmental estrogens or estrogen disrupters have recently received a great deal of attention because of their potential health impact on reproductive tissues. Few, if any, studies have been made on the impact of these compounds on the immune system. We sought to determine the activities of various environmental estrogens on the modulation of the interleukin-1beta (IL-1beta) gene in a model monocytic cell line, hER + IL-1beta-CAT +. This cell line stably transfected with the human estrogen receptor, and an IL-1beta promoter construct fused to the CAT reporter gene allows us to monitor the effect of estrogenic compounds on IL-1beta promoter activity. 17beta-estradiol (E2) markedly enhanced lipopolysaccharide- (LPS) induced IL-1beta promoter-driven CAT activity in a dose-dependent manner. The mycotoxins alpha-zearalenol and zearalenone both exhibited full agonist activity, but at lower potencies, with EC50 values of 1. 8 and 54 nM, respectively, compared with E2 at 0. 5 nM. In addition, genistein was a very low-potency agonist, having an EC50 of 1. 5 microM. Similar to the E2 response, the slope factors for alpha-zearalenol, zearalenone, and genistein were close to 3. 0, suggesting positive cooperativity in the estrogenic response. The activity of the mycotoxins appeared to be mediated through the estrogen receptor, since both the antiestrogens H1285 and ICI 182, 780 effectively inhibited their agonist activity in a dose-dependent manner. Representative environmental estrogenic compounds both from plant and industrial sources were also tested. Unlike the mycoestrogens, none of the compounds, with the exception of genistein, synergized with LPS to enhance IL-1beta promoter activity. When tested for antiestrogenic activity, the industrial compound 4-octylphenol was able to antagonize the response to E2; however, the response was three orders of magnitude less potent than H 1285. Naringenin, a plant flavonoid, showed little or no ability to antagonize the response to E2. Overall, the results show that some environmental estrogens that display agonist activity in reproductive tissue also have an effect on IL-1 gene expression in hemopoietic-derived tissue.",
"role": "user"
},
{
"content": "{\"CellLine\": [\"model monocytic cell line\", \"hER + IL - 1beta - CAT +\", \"macrophage cell line\"], \"CellType\": [], \"DNA\": [\"IL - 1 gene\", \"CAT reporter gene\", \"IL - 1beta promoter\", \"interleukin - 1beta (IL - 1beta) gene\"], \"Protein\": [\"human estrogen receptor\", \"CAT\", \"IL - 1beta\", \"interleukin - 1beta\"], \"RNA\": []}",
"role": "assistant"
}
] |
science.covidfact_entailment | science.covidfact_entailment.96 | [
{
"content": "You will be shown a claim related to the COVID-19 pandemic, and the abstract of a biomedical research paper. Each sentence from the abstract will be on a separate line. Your task is to return a JSON object with two fields:\n\n- \"verdict\": The fact-checking verdict. If the information in the abstract supports the claim, write \"SUPPORT\". If the abstract contradicts the claim, write \"CONTRADICT\".\n- \"evidence\": An array of sentences providing evidence for the verdict. Please copy all relevant sentences verbatim from the abstract.\n\nFor instance, if the model were given the claim \"the COVID vaccine is safe for healthy adults\", the output might be:\n{\n \"verdict\": \"SUPPORT\",\n \"evidence\": [\"Our findings indicate that the risks of severe COVID vaccine side effects are low for healthy adults.\"]\n}\n\nClaim: Cryptic transmission of novel coronavirus sars-cov-2 revealed by genomic epidemiology\n\nAbstract:\nFollowing its emergence in Wuhan, China, in late November or early December 2019, the SARS-CoV-2 virus has rapidly spread throughout the world.\nGenome sequencing of SARS-CoV-2 strains allows for the reconstruction of transmission history connecting these infections.\nHere, we analyze 346 SARS-CoV-2 genomes from samples collected between 20 February and 15 March 2020 from infected patients in Washington State, USA.\nWe found that the large majority of SARS-CoV-2 infections sampled during this time frame appeared to have derived from a single introduction event into the state in late January or early February 2020 and subsequent local spread, indicating cryptic spread of COVID-19 before active community surveillance was implemented.\nWe estimate a common ancestor of this outbreak clade as occurring between 18 January and 9 February 2020.\nFrom genomic data, we estimate an exponential doubling between 2.4 and 5.1 days.\nThese results highlight the need for large-scale community surveillance for SARS-CoV-2 and the power of pathogen genomics to inform epidemiological understanding.",
"role": "user"
},
{
"content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"Genome sequencing of SARS-CoV-2 strains allows for the reconstruction of transmission history connecting these infections.\"]}",
"role": "assistant"
}
] |
science.bioasq_yesno_qa | science.bioasq_yesno_qa.146 | [
{
"content": "You have to answer a biomedical question in binary format, i.e., only yes and no are acceptable answers. A list of paragraphs is provided as context to help you answer the question.\n\nQuestion: Is Alpers disease inherited in an autosomal recessive mode?\n\nContext: Alpers-Huttenlocher syndrome (AHS) is a very rare autosomal recessive disorder\nAlpers syndrome is an autosomal recessive mitochondrial DNA depletion disorder that affects children and young adults\nAlpers' syndrome is a fatal neurogenetic disorder first described more than 70 years ago. It is an autosomal recessive, developmental mitochondrial DNA depletion disorder characterized by deficiency in mitochondrial DNA polymerase gamma (POLG) catalytic activity, refractory seizures, neurodegeneration, and liver disease\nHistopathological findings in both patients ((a) chronic hepatitis with prominent bile duct proliferation, fatty change, and fibrosis; (b) in the brain a patchy destruction of the cerebral cortex, predominantly involving striate cortex) were characteristic of progressive neuronal degeneration of childhood with liver disease--Alpers-Huttenlocher syndrome--a rare autosomal recessive disorder usually seen in infants and young children\nHistopathological findings in both patients ((a) chronic hepatitis with prominent bile duct proliferation, fatty change, and fibrosis; (b) in the brain a patchy destruction of the cerebral cortex, predominantly involving striate cortex) were characteristic of progressive neuronal degeneration of childhood with liver disease--Alpers-Huttenlocher syndrome--a rare autosomal recessive disorder usually seen in infants and young children.\nAlpers syndrome is a rare autosomal recessive hepatocerebral degenerative disorder.\nAlpers disease is a recessive mitochondrial disorder caused by mutations in POLG1 and characterized primarily by progressive neurological and hepatic degeneration.\nAlpers syndrome is an autosomal recessive mitochondrial DNA depletion disorder that affects children and young adults.\nWe conclude that Alpers disease can be a cause of rapidly progressive liver failure in early childhood. Although the cause of this autosomal recessive disease is not known, it does not appear to be related to peroxisomal dysfunction.",
"role": "user"
},
{
"content": "yes",
"role": "assistant"
}
] |
science.healthver_entailment | science.healthver_entailment.3677 | [
{
"content": "You will be shown a claim about public health and the abstract of a biomedical research paper. Each sentence from the abstract will be on a separate line. Your task is to return a JSON object with two fields:\n\n- \"verdict\": The fact-checking verdict. If the information in the abstract supports the claim, write \"SUPPORT\". If the abstract contradicts the claim, write \"CONTRADICT\". If the abstract does not provide enough information to arrive at a verdict, write \"NEI\" (for \"not enough information\").\n- \"evidence\": An array of sentences providing evidence for the verdict. Please copy all relevant sentences verbatim from the abstract. If the verdict was \"NEI\", then return an empty array.\n\nFor instance, if the model were given the claim \"wearing masks can prevent the spread of COVID\", the output might be:\n{\n \"verdict\": \"SUPPORT\",\n \"evidence\": [\"Our findings indicate that mass mask-wearing reduces the transmission rate for COVID-19.\"]\n}\n\nClaim: Vitamin D can Reduce COVID-19 Risk\n\nAbstract:\nBackground: Following emerge of a novel coronavirus from Wuhan, China, in December 2019, it has affected the whole world and after months of efforts by the medical communities, there is still no specific approach for prevention and treatment against the Coronavirus Disease 2019 (COVID-19).\nEvidence recommends that vitamin D might be an important supportive agent for the immune system, mainly in cytokine response regulation against COVID-19.\nHence, we carried out a rapid systematic review and meta-analysis along with an ecological investigation in order to maximize the use of everything that exists about the role of vitamin D in the COVID-19.\nMethods: A systematic search was performed in PubMed, Scopus, Embase, Cochrane Library, Web of Science and Google Scholar (intitle) as well as preprint database of medRxiv, bioRxiv, Research Square, preprints.org, search engine of ScienceDirect and a rapid search through famous journals up to May 26, 2020.\nStudies focused on the role of vitamin D in confirmed COVID-19 patients were entered into the systematic review.\nAlong with our main aim, to find the second objective: correlation of global vitamin D status and COVID-19 recovery and mortality we carried out a literature search in PubMed database to identify the national or regional studies reported the vitamin D status globally.\nCMA v. 
2.2.064 and SPSS v.16 were used for data analysis.\nResults: Out of nine studies entered into our systematic review, six studies containing 3,822 participants entered into the meta-analysis.\nThe meta-analysis indicated that 46.5% of COVID-19 patients were suffering from vitamin D deficiency (95% CI, 28.2%-65.8%) and in 43.3% of patients, levels of vitamin D were insufficient (95% CI, 27.4%-60.8%).\nIn regard to our ecological investigation on 51 countries including 408,748 participants, analyses indicated no correlation between vitamin D levels and recovery rate (r= 0.041) as well as mortality rate (r=-0.073) globally.\nHowever, given latitude, a small reverse correlation between mortality rate and vitamin D status was observed throughout the globe (r= -0.177).\nIn Asia, a medium direct correlation was observed for recovery rate (r= 0.317) and a significant reveres correlation for mortality rate (r= -0.700) with vitamin D status in such patients.\nIn Europe, there were no correlations for both recovery (r= 0.040) and mortality rate (r= -0.035).\nIn Middle East, the recovery rate (r= 0.267) and mortality rate (r= -0.217) showed a medium correlation.\nIn North and Sought America, surprisingly, both recovery and mortality rate demonstrated a direct correlation respectively (r= 1.000, r=0.500).\nIn Oceania, unexpectedly, recovery (r= -1.000) and mortality (r= -1.000) rates were in considerable reverse correlation with vitamin D levels.\nConclusion: In this systematic review and meta-analysis with an ecological approach, we found a high percentage of COVID-19 patients who suffer from vitamin D deficiency or insufficiency.\nMuch more important, our ecological investigation resulted in substantial direct and reverse correlations between recovery and mortality rates of COVID-19 patients with vitamin D status in different countries.\nConsidering latitudes, a small reverse correlation between vitamin D status and mortality rate was found globally.\nIt seems that populations with lower levels of vitamin D might be more susceptible to the novel coronavirus infection.\nNevertheless, due to multiple limitations, if this study does not allow to quantify a value of the Vitamin D with full confidence, it allows at least to know what the Vitamin D might be and that it would be prudent to invest in this direction through comprehensive large randomized clinical trials.",
"role": "user"
},
{
"content": "{\"verdict\": \"CONTRADICT\", \"evidence\": [\"In regard to our ecological investigation on 51 countries including 408,748 participants, analyses indicated no correlation between vitamin D levels and recovery rate (r= 0.041) as well as mortality rate (r=-0.073) globally.\", \"However, given latitude, a small reverse correlation between mortality rate and vitamin D status was observed throughout the globe (r= -0.177).\", \"In Asia, a medium direct correlation was observed for recovery rate (r= 0.317) and a significant reveres correlation for mortality rate (r= -0.700) with vitamin D status in such patients.\", \"In Europe, there were no correlations for both recovery (r= 0.040) and mortality rate (r= -0.035).\"]}",
"role": "assistant"
}
] |
science.nlmgene_ner | science.nlmgene_ner.346 | [
{
"content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Gene\"].\n\nPlease return the output as a JSON object of the format: {\"Gene\": [\"IGF\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nTAK1 is an essential regulator of BMP signalling in cartilage. TGFbeta activated kinase 1 (TAK1), a member of the MAPKKK family, controls diverse functions ranging from innate and adaptive immune system activation to vascular development and apoptosis. To analyse the in vivo function of TAK1 in cartilage, we generated mice with a conditional deletion of Tak1 driven by the collagen 2 promoter. Tak1(col2) mice displayed severe chondrodysplasia with runting, impaired formation of secondary centres of ossification, and joint abnormalities including elbow dislocation and tarsal fusion. This phenotype resembled that of bone morphogenetic protein receptor (BMPR)1 and Gdf5-deficient mice. BMPR signalling was markedly impaired in TAK1-deficient chondrocytes as evidenced by reduced expression of known BMP target genes as well as reduced phosphorylation of Smad1/5/8 and p38/Jnk/Erk MAP kinases. TAK1 mediates Smad1 phosphorylation at C-terminal serine residues. These findings provide the first in vivo evidence in a mammalian system that TAK1 is required for BMP signalling and functions as an upstream activating kinase for Smad1/5/8 in addition to its known role in regulating MAP kinase pathways. Our experiments reveal an essential role for TAK1 in the morphogenesis, growth, and maintenance of cartilage.",
"role": "user"
},
{
"content": "{\"Gene\": [\"MAP kinase\", \"TGFbeta activated kinase 1\", \"col2\", \"p38\", \"Erk\", \"Smad1\", \"Smad1/5/8\", \"Gdf5\", \"bone morphogenetic protein receptor (BMPR)1\", \"MAPKKK\", \"Tak1\", \"MAP kinases\", \"collagen 2\", \"BMP\", \"TAK1\", \"BMPR\", \"Jnk\"]}",
"role": "assistant"
}
] |
science.medmentions_ner | science.medmentions_ner.2096 | [
{
"content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Finding\", \"Bacterium\", \"ProfessionalOrOccupationalGroup\", \"InjuryOrPoisoning\", \"Eukaryote\", \"BodySystem\", \"Virus\", \"ResearchActivity\", \"BodySubstance\", \"PopulationGroup\", \"Chemical\", \"AnatomicalStructure\", \"SpatialConcept\", \"HealthCareActivity\", \"Organization\", \"BiomedicalOccupationOrDiscipline\", \"Food\", \"ClinicalAttribute\", \"MedicalDevice\", \"BiologicFunction\", \"IntellectualProduct\"].\n\nPlease return the output as a JSON object of the format: {\"Virus\": [\"HIV\", ...], \"Bacterium\": [\"MRSA\", ...], \"AnatomicalStructure\": [\"Lung\", ...], \"BodySystem\": [\"CNS\", ...], \"BodySubstance\": [\"Serum\", ...], \"Finding\": [\"Headache\", ...], \"InjuryOrPoisoning\": [\"Fracture\", ...], \"BiologicFunction\": [\"Death\", ...], \"HealthCareActivity\": [\"Biopsy\", ...], \"ResearchActivity\": [\"Clinical trial\", ...], \"MedicalDevice\": [\"Lenses\", ...], \"SpatialConcept\": [\"Camps\", ...], \"BiomedicalOccupationOrDiscipline\": [\"Forensic medicine\", ...], \"Organization\": [\"WHO\", ...], \"ProfessionalOrOccupationalGroup\": [\"Provider\", ...], \"PopulationGroup\": [\"Swimmers\", ...], \"Chemical\": [\"Gold\", ...], \"Food\": [\"Rice\", ...], \"IntellectualProduct\": [\"RPAM\", ...], \"ClinicalAttribute\": [\"Biomarker\", ...], \"Eukaryote\": [\"Dogs\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nTranscriptome analysis of IL-10 - stimulated (M2c) macrophages by next-generation sequencing Alternatively activated \" M2\" macrophages are believed to function during late stages of wound healing, behaving in an anti-inflammatory manner to mediate the resolution of the pro-inflammatory response caused by \"M1\" macrophages. However, the differences between two main subtypes of M2 macrophages, namely interleukin-4 (IL-4)- stimulated \"M2a\" macrophages and IL-10- stimulated \"M2c\" macrophages, are not well understood. M2a macrophages are characterized by their ability to inhibit inflammation and contribute to the stabilization of angiogenesis. However, the role and temporal profile of M2c macrophages in wound healing are not known. Therefore, we performed next generation sequencing (RNA-seq) to identify biological functions and gene expression signatures of macrophages polarized in vitro with IL-10 to the M2c phenotype in comparison to M1 and M2a macrophages and an unactivated control (M0). We then explored the expression of these gene signatures in a publicly available data set of human wound healing. RNA-seq analysis showed that hundreds of genes were upregulated in M2c macrophages compared to the M0 control, with thousands of alternative splicing events. Following validation by Nanostring, 39 genes were found to be upregulated by M2c macrophages compared to the M0 control, and 17 genes were significantly upregulated relative to the M0, M1, and M2a phenotypes (using an adjusted p-value cutoff of 0.05 and fold change cutoff of 1.5). Many of the identified M2c-specific genes are associated with angiogenesis, matrix remodeling, and phagocytosis, including CD163, MMP8, TIMP1, VCAN, SERPINA1, MARCO, PLOD2, PCOCLE2 and F5. 
Analysis of the macrophage - conditioned media for secretion of matrix-remodeling proteins showed that M2c macrophages secreted higher levels of MMP7, MMP8, and TIMP1 compared to the other phenotypes. Interestingly, temporal gene expression analysis of a publicly available microarray data set of human wound healing showed that M2c-related genes were upregulated at early times after injury, similar to M1-related genes, while M2a-related genes appeared at later stages or were downregulated after injury. While further studies are required to confirm the timing and role of M2c macrophages in vivo, these results suggest that M2c macrophages may function at early stages of wound healing. Identification of markers of the M2c phenotype will allow more detailed investigations into the role of M2c macrophages in vivo.",
"role": "user"
},
{
"content": "{\"AnatomicalStructure\": [\"genes\", \"M2\\\" macrophages\", \"\\\"M1\\\" macrophages\", \"PLOD2\", \"MARCO\", \"\\\"M2a\\\" macrophages\", \"(M2c) macrophages\", \"macrophage\", \"M2a macrophages\", \"\\\"M2c\\\" macrophages\", \"PCOCLE2\", \"SERPINA1\", \"macrophages\", \"CD163\", \"M2c macrophages\", \"VCAN\", \"F5\", \"M1\", \"TIMP1\", \"M2 macrophages\", \"MMP8\"], \"Bacterium\": [], \"BiologicFunction\": [\"biological functions\", \"inflammation\", \"secretion\", \"downregulated\", \"upregulated\", \"expression\", \"wound healing\", \"angiogenesis\", \"phagocytosis\", \"secreted\", \"alternative splicing\"], \"BiomedicalOccupationOrDiscipline\": [], \"BodySubstance\": [], \"BodySystem\": [], \"Chemical\": [\"MMP7\", \"proteins\", \"IL-10\", \"TIMP1\", \"MMP8\", \"conditioned media\"], \"ClinicalAttribute\": [\"markers\"], \"Eukaryote\": [\"human\"], \"Finding\": [], \"Food\": [], \"HealthCareActivity\": [\"stabilization\", \"RNA-seq analysis\", \"Transcriptome analysis\"], \"InjuryOrPoisoning\": [\"injury\"], \"IntellectualProduct\": [\"subtypes\", \"data set\", \"microarray data set\"], \"MedicalDevice\": [], \"Organization\": [], \"PopulationGroup\": [], \"ProfessionalOrOccupationalGroup\": [], \"ResearchActivity\": [\"next generation sequencing\", \"validation by Nanostring\", \"next-generation sequencing\", \"gene expression analysis\"], \"SpatialConcept\": [\"RNA-seq\", \"in vivo\"], \"Virus\": []}",
"role": "assistant"
}
] |
science.covid_deepset_qa | science.covid_deepset_qa.725 | [
{
"content": "You will be shown a paragraph from a biomedical research paper, followed by a question about the paper. Please extract text from the paragraph to answer the question. Do not include any text in your response other than the answer.\n\nParagraph: However, some of the commonly employed vectors, for example Salmonella and adenovirus, often have pre-existing immune responses in the host and this has the potential to modify the subsequent immune response to a vectored antigen. This review examines the literature on this topic, and concludes that for bacterial vectors there can in fact, in some cases, be an enhancement in immunogenicity, typically humoral, while for viral vectors pre-existing immunity is a hindrance for subsequent induction of cell-mediated responses. Text: In the fields of medicine and veterinary medicine, there are numerous live, attenuated bacterial and viral vaccines in use today worldwide. The safety and efficacy of such vaccines is well established and allows further development as vector systems to deliver antigen originating from other pathogens. Various attenuated bacteria, including Escherichia coli, Vibrio cholerae, lactic acid bacteria (LAB), specifically Lactococcus lactis, Mycobacterium, Listeria, Shigella and Salmonella, have been tested for the targeted delivery of heterologous antigens of bacterial, viral and parasitic origin into a variety of animal hosts (Bahey-El-Din et al., 2010; Innocentin et al., 2009; Johnson et al., 2011; Tobias et al., 2008 Tobias et al., , 2010 Tobias & Svennerholm, 2012) . Bacteria such as E. coli and lactic acid bacteria have recently gained favour, as E. coli is a commensal and lactic acid bacteria are present in most fermented food items and are therefore naturally present in the host. They are also a much safer option than traditional attenuated vaccines in children and immunecompromised people. As this review discusses the effects of pre-existing immune responses to attenuated vaccines, further discussion of LAB and E. coli as potential vectors will not be undertaken; however, the reader is directed to several interesting reviews (Bermú dez-Humarán et al., 2011; Wells & Mercenier, 2008) . Intracellular bacteria from the genera Mycobacterium (Guleria et al., 1996) , Listeria (Gentschev et al., 2001) , Shigella (Levine et al., 1997) and Salmonella (Dougan et al., 1987) are considered to be suitable candidates for the delivery of vaccine antigens due to their capability to induce robust T cell immune responses (Alderton et al., 1991; Lo et al., 1999; Mastroeni et al., 2001; Mittrücker & Kaufmann, 2000; Nauciel, 1990) . Salmonella is one genus that has been well examined as a vector, building on the extensive research available on the micro-organism's physiology and pathogenesis (Basso et al., 2000; Killeen & DiRita, 2000; Sirard et al., 1999; Ward et al., 1999) . There exist several commercial vaccines that are used as anti-Salmonella vaccines in humans and animals (e.g. Ty21a for typhoid fever in humans, several Salmonella serovars against salmonellosis in chickens and other animals). The general strategy for vectoring heterologous antigen is depicted in Fig. 1 . The first clinical trial of a recombinant, which was conducted over 20 years ago using an attenuated Salmonella as a delivery vector, led to the widespread testing of this bacterium as a mucosal delivery system for antigens from non-Salmonella pathogens (Dougan et al., 1987) . 
These studies have demonstrated the utility of live bacteria to deliver expressed antigens and DNA vaccines to the host immune system (Atkins et al., 2006; Husseiny & Hensel, 2008; Jiang et al., 2004; Kirby et al., 2004) . Since then several other intracellular bacterial vectors have been successfully tested for their capability to deliver a variety of antigens from various pathogens, as well as vaccination against cancer. One genus which has been widely tested as vector is Listeria. Listeria species are Gram-positive intracellular food-borne pathogens. The advantages of Listeria are that it can invade a variety of cells, including antigen presenting cells (APCs). After invading the host cell, Listeria resides inside the phagosome; however, it can escape the phagosome with the help of listeriolysin O (LLO; Hly) and reside in the cytoplasm of the cells, thereby efficiently presenting antigen to both CD8 and CD4 T cells (Cossart & Mengaud, 1989; Kaufmann, 1993; Pamer et al., 1997) . Several studies have demonstrated the effectiveness and ease of using Listeria monocytogenes to deliver heterologous vaccine antigens and DNA vaccines Jensen et al., 1997; Johnson et al., 2011; Peters et al., 2003; Shen et al., 1995; Yin et al., 2011) . Similarly, various viral vectors have been successfully tested for their capability to deliver heterologous vaccine antigens, and this generally results in the induction of strong CTL immune responses. In the veterinary field, there are numerous viral vector vaccines that are currently licensed for use in livestock and domesticated animals. These recombinant vaccines are based on both DNA viruses (such as fowlpox virus-based vaccines which target avian influenza virus and fowlpox virus, or vaccinia virusbased vectors against the rabies virus in wildlife) and RNA viruses [such as Newcastle disease virus-based vaccines to be used in poultry or yellow fever virus (YFV)-based vaccines to be used in horses against West Nile virus] (Draper & Heeney, 2010) . Based on the safety record in the veterinary field, many viruses have been studied for human use as a vector in vaccine development (Beukema et al., 2006; Esteban, 2009; Schirrmacher & Fournier, 2009; Stoyanov et al., 2010; Weli & Tryland, 2011) . Amongst them, YFV (YF-17D strain) was the first to be licensed for use in humans, where the cDNAs encoding the envelope proteins of YFV were replaced with the corresponding genes of an attenuated Japanese encephalitis virus strain, SA14-14-2 (Appaiahgari & Vrati, 2010; Rollier et al., 2011) . Poxviruses are also studied extensively as candidate vectors for human use, among which attenuated derivatives of vaccinia virus [such as modified vaccinia virus Ankara (MVA) and New York attenuated vaccinia virus NYVAC strains] are the most promising vectors (Esteban, 2009; Gó mez et al., 2008; Rimmelzwaan & Sutter, 2009 ). They are ideal candidate vectors due to their large DNA-packing capacity and their thermal and genetic stability (Minke et al., 2004) . The NYVAC vector has been shown to induce CD4 + T cell-dominant responses, and MVA induces both CD4 + and CD8 + T cell responses (Mooij et al., 2008) . 
The adenovirus (Ad) vector is another of the most widely evaluated vectors to date to express heterologous antigens, due to ease of production, safety profile, genetic stability, the ease of DNA genome manipulation, and the ability to stimulate both innate and adaptive immune responses and induce both T and B cell responses (Alexander et al., 2012; Fitzgerald et al., 2003; Gabitzsch & Jones, 2011; Lasaro & Ertl, 2009; Vemula & Mittal, 2010; Weyer et al., 2009) . They have been extensively examined as a delivery vector in several preclinical and clinical studies for infectious diseases such as anthrax, hepatitis B, human immunodeficiency virus (HIV)-1, influenza, measles, severe acute respiratory syndrome (SARS), malaria and tuberculosis M. Saxena and others (Chengalvala et al., 1994; Gao et al., 2006; Hashimoto et al., 2005; Hsu et al., 1992; Limbach & Richie, 2009; Radosevic et al., 2007; Shiver et al., 2002) . However, before vectored vaccines can be used in the human population they need to satisfy several important criteria. Safety is a major concern, as even a low level of toxicity is unacceptable (of course the minor discomfort that accompanies many vaccinations is normal). Secondly, a vaccine should be inexpensive, so that it can be administered to a large population at minimal cost, and this is particularly important in resource-poor countries (Killeen & DiRita, 2000) . Similar constraints apply to veterinary vaccines, with cost often an even more important consideration. Finally, long-lasting cellular and (where appropriate) humoral immune responses to the vectored antigen must be induced following administration of these vaccines, preferably with a single dose (Atkins et al., 2006) . As some of the vectors in use will have been seen by the host immune system prior to vaccination, whether the presence of pre-existing immune responses is detrimental for the further development of a vector-based vaccine scheme, or can augment responses to the vectored antigen, needs to be considered in detail. This is the subject of this review. In discussing the possible effects on pre-existing immunity, the natural immunity to the vector needs to be considered. Therefore, considering a vector such as Salmonella, if a host has previously been infected there will exist robust B and T memory responses, and as such, when a vaccination is delivered, an anamnestic response to the Salmonella antigens will be induced (while the response to the vectored antigen will be a primary response). This will theoretically reduce the exposure of the heterologous antigen to the immune system, as the vector is rapidly cleared. Surprisingly, as will be seen in some of the examples given below, this can have results that differ depending on the magnitude of the response to the vectored antigen. Similarly, for virally vectored antigens, the existence of pre-existing immunity to the vector (particularly neutralizing antibody) will restrict delivery of the virus into cells, thereby effectively reducing the dose of the vectored antigen. Again, this might be expected to result in a reduction in the antigenicity of the vectored antigen. In the case of bacterial vectors, the effect of pre-existing immune responses has only been tested using Salmonella serovars and Listeria spp. Concern that prior immunological experience of the host with either the homologous Salmonella vector strain or a related strain might compromise its ability to deliver heterologous vaccine antigen was first raised in 1987 (Dougan et al., 1987) . 
Bao and Clements subsequently reported experimental evidence of the consequences of prior exposure of animals to the vector strain (Bao & Clements, 1991) . This work showed that both serum and mucosal antibody responses against the foreign antigen were in fact upregulated in animals with prior exposure to the vector strain. Whittle & Verma (1997) reported similar findings. Mice immunized via the intra-peritoneal route with a Salmonella dublin aroA mutant expressing heterologous antigen after being exposed to the same vector showed a higher immune response to the vectored antigen in comparison to mice without any immunological memory against the vector. Subsequently, several studies have been conducted to examine the effect of pre-existing immunity in the host against Salmonella. These results are summarized in Table 1 . The various reports are contradictory in their findings and seem to paint a rather confusing picture. Some studies concluded that pre-existing immunity against the Salmonella vector leads to stronger immune responses against the delivered antigen (Bao & Clements, 1991; Jespersgaard et al., 2001; Kohler et al., 2000a, b; Metzger et al., 2004; Saxena et al., 2009; Sevil Domènech et al., 2008; Whittle & Verma, 1997) , with others considering pre-existing immunity to be a limiting factor in the long-term use of Salmonella as an efficient vector for antigen delivery (Attridge et al., 1997; Gahan et al., 2008; Roberts et al., 1999; Sevil Domènech et al., 2007; Vindurampulle & Attridge, 2003a, b) . A slight majority of the studies listed in Table 1 (10 versus eight) indicate the upregulation of immune responses after animals have been exposed to either homologous or related strains before the delivery of heterologous antigen using a Salmonella vector. A study by Metzger and co-workers on human volunteers using Salmonella Typhi as a vector suggested that there was no change in the T cell immune response against the heterologous antigen in human volunteers who were exposed to empty vector in comparison with volunteers who were immunologically naive of the vector strain (Metzger et al., 2004) . In these subjects, humoral responses were moderately elevated in preexposed individuals. Similarly, Saxena et al. (2009) indicated higher humoral and T cell responses in mice pre-exposed to homologous or heterologous Salmonella strains. The interleukin 4 (IL4) response was significantly higher when the animal host was exposed to the homologous strain, whereas pre-exposure to a related species did not have such an impact on IL4 responses. Conversely interferon (IFN)-c responses were higher, irrespective of the strain to which mice were pre-exposed. This study also indicated that the presence of homologous or heterologous opsonizing antibodies leads to a higher uptake of Salmonella by macrophages in vitro, which may explain the higher immune responses in exposed mice. As may be expected, uptake was higher when homologous sera were used as the opsonin rather than heterologous sera. This is depicted in Fig. 2 . Conversely, there are reports that indicate that pre-existing immunity against the bacterial vector downregulates immune responses against the delivered heterologous antigen using similar or related vectors. Attridge and coworkers reported that the presence of immunity against the bacterial vector prior to the delivery of vectored antigenic \n\nMicrobiology 159 protein can downregulate immune responses in mice against the delivered antigen (Attridge et al., 1997) . 
Similar results were reported by Roberts et al. (1999) and Vindurampulle & Attridge (2003a, b) . However, the latter authors found that the hypo-responsiveness could be largely eliminated by exposing animals to the foreign antigen prior to vectorpriming (Vindurampulle & Attridge, 2003b) .\n\nQuestion: What can be a factor in using common vectors for the delivery of vaccines?",
"role": "user"
},
{
"content": "commonly employed vectors, for example Salmonella and adenovirus, often have pre-existing immune responses in the host and this has the potential to modify the subsequent immune response to a vectored antigen.",
"role": "assistant"
}
] |
science.craftchem_ner | science.craftchem_ner.6892 | [
{
"content": "You will be presented with a sentence from biomedical research text. Extract all terms that correspond to chemical entities of biological interest (ChEBI).\nReturn your identified terms in a JSON array, formatted like this: [\"Entity1\", \"Entity2\", ...].\nIf no such entities are detected, respond by providing an empty JSON array [].\n\nBiomedical context:\nTheCapn2knockoutfitsintothelattercategory.",
"role": "user"
},
{
"content": "[]",
"role": "assistant"
}
] |
science.ddi_ner | science.ddi_ner.417 | [
{
"content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"UnapprovedDrug\", \"Brand\", \"DrugGroup\", \"Drug\"].\n\nPlease return the output as a JSON object of the format: {\"Brand\": [\"Aspirin\", ...], \"DrugGroup\": [\"Salicylate\", ...], \"Drug\": [\"Phenobarbital\", ...], \"UnapprovedDrug\": [\"Heroin\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nCo-administration with antifungal agents such as ketoconazole or itraconazole is not recommended. Nafazodone, fluvoxamine, cimetidine (consider Xanax dose reduction). Fluoxetine, OCs, sertraline, diltiazem, macrolide antibiotics (exercise caution).",
"role": "user"
},
{
"content": "{\"Brand\": [\"Xanax\"], \"Drug\": [\"fluvoxamine\", \"itraconazole\", \"Nafazodone\", \"cimetidine\", \"Fluoxetine\", \"ketoconazole\", \"sertraline\", \"diltiazem\"], \"DrugGroup\": [\"macrolide antibiotics\", \"antifungal agents\", \"OCs\"], \"UnapprovedDrug\": []}",
"role": "assistant"
}
] |
science.multixscience_multidoc_summarization | science.multixscience_multidoc_summarization.4812 | [
{
"content": "You are provided with the abstract of a scientific paper and the abstracts of the papers it references. Your task is to write a Related Work section for the main paper. This section should concisely summarize and integrate the key points from the cited abstracts, reflecting how they relate to the main paper's contributions and context. In your summary, use numerical citation markers like [1], [2], etc., to refer to the corresponding papers.\n\nMain Abstract:\nConstrained shortest distance (CSD) querying is one of the fundamental graph query primitives, which finds the shortest distance from an origin to a destination in a graph with a constraint that the total cost does not exceed a given threshold. CSD querying has a wide range of applications, such as routing in telecommunications and transportation. With an increasing prevalence of cloud computing paradigm, graph owners desire to outsource their graphs to cloud servers. In order to protect sensitive information, these graphs are usually encrypted before being outsourced to the cloud. This, however, imposes a great challenge to CSD querying over encrypted graphs. Since performing constraint filtering is an intractable task, existing work mainly focuses on unconstrained shortest distance queries. CSD querying over encrypted graphs remains an open research problem. In this paper, we propose Connor , a novel graph encryption scheme that enables approximate CSD querying. Connor is built based on an efficient, tree-based ciphertext comparison protocol, and makes use of symmetric-key primitives and the somewhat homomorphic encryption, making it computationally efficient. Using Connor , a graph owner can first encrypt privacy-sensitive graphs and then outsource them to the cloud server, achieving the necessary privacy without losing the ability of querying. Extensive experiments with real-world data sets demonstrate the effectiveness and efficiency of the proposed graph encryption scheme.\n\nCited Abstract(s):\n[1]: We investigate the problem of solving traditional combinatorial graph problems using secure multi-party computation techniques, focusing on the shortest path and the maximum flow problems. To the best of our knowledge, this is the first time these problems have been addressed in a general multi-party computation setting. Our study highlights several complexity gaps and suggests the exploration of various trade-offs, while also offering protocols that are efficient enough to solve real-world problems.\n\n[2]: This work treats the problem of designing data-oblivious algorithms for classical and widely used graph problems. A data-oblivious algorithm is defined as having the same sequence of operations regardless of the input data and data-independent memory accesses. Such algorithms are suitable for secure processing in outsourced and similar environments, which serves as the main motivation for this work. We provide data-oblivious algorithms for breadth-first search, single-source single-destination shortest path, minimum spanning tree, and maximum flow, the asymptotic complexities of which are optimal, or close to optimal, for dense graphs.\n\n[3]: \n\n[4]: \n\n[5]: Interdomain routing involves coordination among mutually distrustful parties, leading to the requirements that BGP provide policy autonomy, flexibility, and privacy. BGP provides these properties via the distributed execution of policy-based decisions during the iterative route computation process. 
This approach has poor convergence properties, makes planning and failover difficult, and is extremely difficult to change. To rectify these and other problems, we propose a radically different approach to interdomain-route computation, based on secure multi-party computation (SMPC). Our approach provides stronger privacy guarantees than BGP and enables the deployment of new policy paradigms. We report on an initial exploration of this idea and outline future directions for research.\n\n[6]: \n\n[7]: We present oblivious implementations of several data structures for secure multiparty computation (MPC) such as arrays, dictionaries, and priority queues. The resulting oblivious data structures have only polylogarithmic overhead compared with their classical counterparts. To achieve this, we give secure multiparty protocols for the ORAM of (Asiacrypt ‘11) and the Path ORAM scheme of (CCS ‘13), and we compare the resulting implementations. We subsequently use our oblivious priority queue for secure computation of Dijkstra’s shortest path algorithm on general graphs, where the graph structure is secret. To the best of our knowledge, this is the first implementation of a non-trivial graph algorithm in multiparty computation with polylogarithmic overhead.\n\n[8]: We present three private fingerprint alignment and matching protocols, based on what are considered to be the most precise and efficient fingerprint recognition algorithms, which use minutia points. Our protocols allow two or more honest-but-curious parties to compare their respective privately-held fingerprints in a secure way such that they each learn nothing more than an accurate score of how well the fingerprints match. To the best of our knowledge, this is the first time fingerprint alignment based on minutiae is considered in a secure computation framework. We build secure fingerprint alignment and matching protocols in both the two-party setting using garbled circuit evaluation and in the multi-party setting using secret sharing techniques. In addition to providing precise and efficient secure fingerprint alignment and matching, our contributions include the design of a number of secure sub-protocols for complex operations such as sine, cosine, arctangent, square root, and selection, which are likely to be of independent interest.",
"role": "user"
},
{
"content": "Secure multi-party computation (SMC) techniques have been widely applied to address the privacy-preserving shortest path problem [2] [6] [3] [4] , as well as other secure computation problems [8] . [1] focused on the shortest path problem over traditional combinatorial graph in a general multi-party computation setting, and proposed two protocols for securely computing shortest paths in the graphs. [2] designed data-oblivious algorithms to securely solve the single-source single-destination shortest path problem, which achieved the optimal or near-optimal performance on dense graphs. Keller and Scholl [7] designed several oblivious data structures (e.g., priority queues) for SMC and utilized them to compute shortest paths on general graphs. [5] proposed an SMC-based approach for finding policy-compliant paths that have the least routing cost or satisfy bandwidth demands among different network domains. However, existing general-purpose SMC solutions for the shortest path problem may result in heavy communication overhead.",
"role": "assistant"
}
] |
science.scicite_classification | science.scicite_classification.6085 | [
{
"content": "You will be presented with a citation sentence from a research paper. Classify the intent behind this citation by choosing from one of the following categories:\n\n- Method: Cites the paper for its methodology or procedure.\n- Background: Cites the paper to provide background information.\n- Result: Cites the paper for its findings or results.\n\nYour answer should be a single word from the following list of options: [\"Method\", \"Background\", \"Result\"]. Do not include any other text in your response.\n\nCitation sentence:\n\nSimilar to our results, the heart exhibited a high AhRR mRNA content (Korkalainen et al. 2004).",
"role": "user"
},
{
"content": "Result",
"role": "assistant"
}
] |
science.medmentions_ner | science.medmentions_ner.1848 | [
{
"content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Finding\", \"Bacterium\", \"ProfessionalOrOccupationalGroup\", \"InjuryOrPoisoning\", \"Eukaryote\", \"BodySystem\", \"Virus\", \"ResearchActivity\", \"BodySubstance\", \"PopulationGroup\", \"Chemical\", \"AnatomicalStructure\", \"SpatialConcept\", \"HealthCareActivity\", \"Organization\", \"BiomedicalOccupationOrDiscipline\", \"Food\", \"ClinicalAttribute\", \"MedicalDevice\", \"BiologicFunction\", \"IntellectualProduct\"].\n\nPlease return the output as a JSON object of the format: {\"Virus\": [\"HIV\", ...], \"Bacterium\": [\"MRSA\", ...], \"AnatomicalStructure\": [\"Lung\", ...], \"BodySystem\": [\"CNS\", ...], \"BodySubstance\": [\"Serum\", ...], \"Finding\": [\"Headache\", ...], \"InjuryOrPoisoning\": [\"Fracture\", ...], \"BiologicFunction\": [\"Death\", ...], \"HealthCareActivity\": [\"Biopsy\", ...], \"ResearchActivity\": [\"Clinical trial\", ...], \"MedicalDevice\": [\"Lenses\", ...], \"SpatialConcept\": [\"Camps\", ...], \"BiomedicalOccupationOrDiscipline\": [\"Forensic medicine\", ...], \"Organization\": [\"WHO\", ...], \"ProfessionalOrOccupationalGroup\": [\"Provider\", ...], \"PopulationGroup\": [\"Swimmers\", ...], \"Chemical\": [\"Gold\", ...], \"Food\": [\"Rice\", ...], \"IntellectualProduct\": [\"RPAM\", ...], \"ClinicalAttribute\": [\"Biomarker\", ...], \"Eukaryote\": [\"Dogs\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nMixed corticomedullary adrenal carcinoma - case report: Comparison in features, treatment and prognosis with the other two reported cases Adrenal corticomedullary adenoma was reviewed in many cases in PubMed Library, While the coincidence corticomedullary adrenal carcinoma in the same gland was just described in two cases in the medical literature. Our case is the third to be reported and was treated with surgery and adjuvant chemotherapy and followed for two years. A 50- year -old man suffered from a mass effect in the left abdominal side. While the laboratory showed a mild elevation in the levels of both serum cortisol and 24h urine cortisol, radiological images were highly suggested an adrenal malignant tumor without metastasis. At surgery a 22cm sized mass was completely resected. Immunohistochemical study identified expression of both adrenocortical carcinoma and pheochromocytoma markers. Cases of coincidence corticomedullary tumor have been published in many reviews, cortical and/or medulllary hypersecretion were not always detected preoperatively by biochemical tests. Mixed corticomedullary carcinoma are exceedingly rare, we came across three reported cases in medical literature, in one case laboratory tests confirmed both cortical and medulla hypersecretion, while the two others detected only cortical hypersecretion. The final diagnosis was always confirmed by immunohistochemical staining. It could be noted that this is the first comparison of presentation, diagnosis, treatments and follow-up of the three cases of Mixed corticomedullary carcinoma. This could contribute to understanding the behavior and management of this rare malignancy and make it more familiar in clinical practice.",
"role": "user"
},
{
"content": "{\"AnatomicalStructure\": [\"left abdominal side\", \"gland\", \"corticomedullary\"], \"Bacterium\": [], \"BiologicFunction\": [\"malignancy\", \"pheochromocytoma\", \"adrenal malignant tumor\", \"carcinoma\", \"adrenocortical carcinoma\", \"metastasis\", \"tumor\", \"adenoma\", \"adrenal carcinoma\"], \"BiomedicalOccupationOrDiscipline\": [], \"BodySubstance\": [], \"BodySystem\": [], \"Chemical\": [], \"ClinicalAttribute\": [\"markers\"], \"Eukaryote\": [], \"Finding\": [\"mass effect\", \"mass\", \"detected\"], \"Food\": [], \"HealthCareActivity\": [\"urine cortisol\", \"serum cortisol\", \"laboratory\", \"surgery\", \"prognosis\", \"laboratory tests\", \"management\", \"Immunohistochemical\", \"biochemical tests\", \"adjuvant chemotherapy\"], \"InjuryOrPoisoning\": [], \"IntellectualProduct\": [\"case report\", \"PubMed Library\", \"published\", \"radiological images\"], \"MedicalDevice\": [], \"Organization\": [], \"PopulationGroup\": [], \"ProfessionalOrOccupationalGroup\": [], \"ResearchActivity\": [\"diagnosis\", \"clinical practice\"], \"SpatialConcept\": [\"Adrenal corticomedullary\", \"mild elevation\", \"sized\"], \"Virus\": []}",
"role": "assistant"
}
] |
science.healthver_entailment | science.healthver_entailment.215 | [
{
"content": "You will be shown a claim about public health and the abstract of a biomedical research paper. Each sentence from the abstract will be on a separate line. Your task is to return a JSON object with two fields:\n\n- \"verdict\": The fact-checking verdict. If the information in the abstract supports the claim, write \"SUPPORT\". If the abstract contradicts the claim, write \"CONTRADICT\". If the abstract does not provide enough information to arrive at a verdict, write \"NEI\" (for \"not enough information\").\n- \"evidence\": An array of sentences providing evidence for the verdict. Please copy all relevant sentences verbatim from the abstract. If the verdict was \"NEI\", then return an empty array.\n\nFor instance, if the model were given the claim \"wearing masks can prevent the spread of COVID\", the output might be:\n{\n \"verdict\": \"SUPPORT\",\n \"evidence\": [\"Our findings indicate that mass mask-wearing reduces the transmission rate for COVID-19.\"]\n}\n\nClaim: After all, we know smoking is bad for our health. It's a leading risk factor for heart disease, lung disease and many cancers. Smoking also reduces our immunity, and makes us more susceptible to respiratory infections including pneumonia. And smokers touch their mouth and face more, a risk for COVID-19 infection.\n\nAbstract:\nImportance.\nCovid-19 infection has major international health and economic impacts and risk factors for infection are not completely understood.\nCannabis smoking is linked with poor respiratory health, immunosuppression and multiple contaminants.\nPotential synergism between the two epidemics would represent a major public health convergence.\nCigarettes were implicated with disease severity in Wuhan, China.\nObjective.\nIs cannabis use epidemiologically associated with coronavirus incidence rate (CVIR)?\nDesign.\nCross-sectional state-based multivariable study.\nSetting.\nUSA.\nPrimary and Secondary Outcome Measures.\nCVIR.\nMultivariable-adjusted geospatially-weighted regression models.\nAs the American cannabis epidemic is characterized by a recent doubling of daily cannabis use it was considered important to characterize the contribution of high intensity use.\nResults.\nSignificant associations of daily cannabis use quintile with CVIR were identified with the highest quintile having a prevalence ratio 5.11 (95%C.I. 4.90-5.33), an attributable fraction in the exposed (AFE) 80.45% (79.61-81.25%) and an attributable fraction in the population of 77.80% (76.88-78.68%) with Chi-squared-for-trend (14,782, df=4) significant at P<10-500.\nSimilarly when cannabis legalization was considered decriminalization was associated with an elevated CVIR prevalence ratio 4.51 (95%C.I. 
4.45-4.58), AFE 77.84% (77.50-78.17%) and Chi-squared-for-trend (56,679, df=2) significant at P<10-500.\nMonthly and daily use were linked with CVIR in bivariate geospatial regression models (P=0.0027, P=0.0059).\nIn multivariable additive models number of flight origins and population density were significant.\nIn interactive geospatial models adjusted for international travel, ethnicity, income, population, population density and drug use, terms including last month cannabis were significant from P=7.3x10-15, daily cannabis use from P=7.3x10-11 and last month cannabis was independently associated (P=0.0365).\nConclusions and Relevance.\nData indicate CVIR demonstrates significant trends across cannabis use intensity quintiles and with relaxed cannabis legislation.\nRecent cannabis use is independently predictive of CVIR in bivariate and multivariable adjusted models and intensity of use is interactively significant.\nCannabis thus joins tobacco as a SARS2-CoV-2 risk factor.",
"role": "user"
},
{
"content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"Cannabis smoking is linked with poor respiratory health, immunosuppression and multiple contaminants.\", \"Potential synergism between the two epidemics would represent a major public health convergence.\", \"Cigarettes were implicated with disease severity in Wuhan, China.\", \"Data indicate CVIR demonstrates significant trends across cannabis use intensity quintiles and with relaxed cannabis legislation.\", \"Recent cannabis use is independently predictive of CVIR in bivariate and multivariable adjusted models and intensity of use is interactively significant.\", \"Cannabis thus joins tobacco as a SARS2-CoV-2 risk factor.\"]}",
"role": "assistant"
}
] |
science.mslr2022_ms2_multidoc_summarization | science.mslr2022_ms2_multidoc_summarization.9136 | [
{
"content": "You will be presented with a collection of medical randomized controlled trial (RCT) reports addressing a shared clinical question -- for instance, the efficacy of a medical intervention on a disease or condition. Given the titles and abstracts of these reports, your task is to generate the conclusion of the systematic review article synthesizing the key findings. Your conclusion should be a few sentences long. In your response, include only the conclusion and no other text. The RCT reports are below.\n\n\n\nTitle 1:\nPerceptions of terminally ill patients and family members regarding home and hospice as places of care at the end of life.\nAbstract 1:\nTo enable dem and -based palliative care , it is important to know the perceptions of terminally ill patients and their family members regarding home and hospice as places of care at the end of life . Eight women and five men suffering from cancer and with a life expectancy of 3 months or less were interviewed . In each case one of the family members was also interviewed . Four patients spent their last phase of life at home , nine in a hospice . This paper provides further insight in the patient perspective in palliative care . The results reveal that a cohabiting partner seems an important prerequisite for terminally ill patients to stay at home . For spouses it is an obvious choice to facilitate the patients ' stay at home , even when it becomes too dem and ing , something not discussed between spouse and patient . When sufficient care at home seems impossible and the negotiation between patients and family members results in the opinion that living at home is no longer an option , it is decided that the patient moves to a hospice . The choice for the specific setting of the patients ' new residence seems to be r and om ; one possibility is pointed out to them and seems appropriate\n\n\nTitle 2:\nDignity Through Integrated Symptom Management: Lessons From the Breathlessness Support Service.\nAbstract 2:\nCONTEXT Dignity is poorly conceptualized and little empirically explored in end-of-life care . A qualitative evaluation of a service offering integrated palliative and respiratory care for patients with advanced disease and refractory breathlessness uncovered an unexpected outcome , it enhanced patients ' dignity . OBJECTIVES To analyze what constitutes dignity for people suffering from refractory breathlessness with advanced disease , and its implication s for the concept of dignity . METHODS Qualitative study of cross-sectional interviews with 20 patients as part of a Phase III evaluation of a r and omized controlled fast-track trial . The interviews were transcribed verbatim , imported into NVivo , and analyzed through constant comparison . The findings were compared with Chochinov et al. 's dignity model . The model was adapted with the themes and subthemes specific to patients suffering from breathlessness . RESULTS The findings of this study underscore the applicability of the conceptual model of dignity for patients with breathlessness . There were many similarities in themes and subthemes . 
Differences specifically relevant for patients suffering from severe breathlessness were as follows : 1 ) physical distress and psychological mechanisms are interlinked with the disability and dependence breathlessness causes , in the illness-related concerns , 2 ) stigma is an important component of the social dignity inventory , 3 ) conditions and perspectives need to be present to practice self-care in the dignity-conserving repertoire . CONCLUSION Dignity is an integrated concept and can be affected by influences from other areas such as illness-related concerns . The intervention shows that targeting the symptom holistically and equipping patients with the means for self-care realized the outcome of dignity",
"role": "user"
},
{
"content": "Conclusions : The Dynamic Reciprocity of Dignity model highlights the importance of adopting a systemic lens to address dignity-related needs and concerns at the end of life , while providing insights on how compassionate care and self-compassion can serve as the foundation of dignified care , which in turn serve as a buffer against patients ’ existential suffering as well as caregivers ’ burnout and fatigue .",
"role": "assistant"
}
] |
science.chemprot_re | science.chemprot_re.619 | [
{
"content": "Given an abstract from a research paper, your task is to identify and label relationships between chemical compounds and genes mentioned in the abstract.\n\nThe relationship type must be one of ['DOWNREGULATOR', 'SUBSTRATE', 'INDIRECT-REGULATOR', 'PART-OF', 'MODULATOR-ACTIVATOR', 'REGULATOR', 'INHIBITOR', 'COFACTOR', 'UPREGULATOR', 'ACTIVATOR', 'ANTAGONIST', 'NOT', 'INDIRECT-DOWNREGULATOR', 'SUBSTRATE_PRODUCT-OF', 'INDIRECT-UPREGULATOR', 'AGONIST', 'PRODUCT-OF', 'MODULATOR', 'DIRECT-REGULATOR', 'UNDEFINED', 'AGONIST-INHIBITOR', 'AGONIST-ACTIVATOR', 'MODULATOR-INHIBITOR'].\n\nPlease format your output as a JSON array. Each entry in the array should express a single relation, formatted as [\"<Entity_A>\", \"<RELATION_A_B>\", \"<Entity_B>\"]. If no relations can be found, please output an empty JSON array [].\n\nAbstract:\n\nPeyer 's patch-mediated intestinal immune system modulating activity of pectic-type polysaccharide from peel of Citrus unshiu. An intestinal immune system modulating polysaccharide (CUI-3IIb-3-2, 18kDa) was purified from Citrus unshiu peel. CUI-3IIb-3-2 mainly comprised GalA, GlcA, Ara, Gal and Rha, and it consisted of 4-linked GalA, terminal Araf, 4- or 5-linked/3,4- or 3,5-branched Ara, terminal Gal, and 2-linked/2,4-branched Rha. After CUI-3IIb-3-2 digestion by endo-α-d- (1→4) -polygalacturonase, its hydrolysate was fractionated into PG-1 and PG-2. Methylation analyses of PG-1 and PG-2 using base-catalysed β-elimination suggested that CUI-3IIb-3-2 be assumed as pectic-type polysaccharide. Since the activities of PG-1 and PG-2 were potently decreased, the whole polysaccharide structure of CUI-3IIb-3-2 would be essential to maintain the activity. Meanwhile, when CUI-3IIb was orally administered in mice, bone marrow cell proliferation and GM-CSF/IL-6 production from Peyer 's patch cell were significantly higher (1.76- and 2.03/2.51-fold, respectively) than a saline. Therefore, a pectic-type polysaccharide from citrus peel could stimulate Peyer 's patches and produce hematopoietic growth factors resulted in bone marrow cell proliferation.",
"role": "user"
},
{
"content": "[]",
"role": "assistant"
}
] |