Dataset schema:
  dataset: string, 45 distinct values (the source task name)
  id: string, 17-64 characters (a unique example identifier)
  messages: list of exactly 2 entries (one user turn, one assistant turn)
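
The rows below follow this schema. As a quick usage illustration, here is a minimal sketch of loading and inspecting a dataset laid out this way with the Hugging Face `datasets` library. This is a hedged example, not this dataset's documented loader: the repository id "org/dataset-name" is a placeholder, and only the three columns above are assumed.

```python
# Minimal sketch. Assumptions: "org/dataset-name" is a placeholder repo id,
# and each row carries the columns "dataset", "id", and a two-turn "messages" list.
from datasets import load_dataset

ds = load_dataset("org/dataset-name", split="train")

for row in ds.select(range(3)):
    # "messages" always holds exactly two turns: the user prompt, then the assistant answer.
    user_turn, assistant_turn = row["messages"]
    print(row["dataset"], row["id"])
    print("  prompt:  ", user_turn["content"][:80].replace("\n", " "))
    print("  response:", assistant_turn["content"][:80].replace("\n", " "))
```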
science.medmentions_ner
science.medmentions_ner.842
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Finding\", \"Bacterium\", \"ProfessionalOrOccupationalGroup\", \"InjuryOrPoisoning\", \"Eukaryote\", \"BodySystem\", \"Virus\", \"ResearchActivity\", \"BodySubstance\", \"PopulationGroup\", \"Chemical\", \"AnatomicalStructure\", \"SpatialConcept\", \"HealthCareActivity\", \"Organization\", \"BiomedicalOccupationOrDiscipline\", \"Food\", \"ClinicalAttribute\", \"MedicalDevice\", \"BiologicFunction\", \"IntellectualProduct\"].\n\nPlease return the output as a JSON object of the format: {\"Virus\": [\"HIV\", ...], \"Bacterium\": [\"MRSA\", ...], \"AnatomicalStructure\": [\"Lung\", ...], \"BodySystem\": [\"CNS\", ...], \"BodySubstance\": [\"Serum\", ...], \"Finding\": [\"Headache\", ...], \"InjuryOrPoisoning\": [\"Fracture\", ...], \"BiologicFunction\": [\"Death\", ...], \"HealthCareActivity\": [\"Biopsy\", ...], \"ResearchActivity\": [\"Clinical trial\", ...], \"MedicalDevice\": [\"Lenses\", ...], \"SpatialConcept\": [\"Camps\", ...], \"BiomedicalOccupationOrDiscipline\": [\"Forensic medicine\", ...], \"Organization\": [\"WHO\", ...], \"ProfessionalOrOccupationalGroup\": [\"Provider\", ...], \"PopulationGroup\": [\"Swimmers\", ...], \"Chemical\": [\"Gold\", ...], \"Food\": [\"Rice\", ...], \"IntellectualProduct\": [\"RPAM\", ...], \"ClinicalAttribute\": [\"Biomarker\", ...], \"Eukaryote\": [\"Dogs\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nTrisaccharide containing α2,3-linked sialic acid is a receptor for mumps virus Mumps virus (MuV) remains an important pathogen worldwide, causing epidemic parotitis, orchitis, meningitis, and encephalitis. Here we show that MuV preferentially uses a trisaccharide containing α2,3-linked sialic acid in unbranched sugar chains as a receptor. Crystal structures of the MuV attachment protein hemagglutinin-neuraminidase (MuV - HN) alone and in complex with the α2,3-sialylated trisaccharide revealed that in addition to the interaction between the MuV - HN active site residues and sialic acid, other residues, including an aromatic residue, stabilize the third sugar of the trisaccharide. The importance of the aromatic residue and the third sugar in the MuV - HN - receptor interaction was confirmed by computational energy calculations, isothermal titration calorimetry studies, and glycan-binding assays. Furthermore, MuV - HN was found to bind more efficiently to unbranched α2,3-sialylated sugar chains compared with branched ones. Important ly, the strategically located aromatic residue is conserved among the HN protein s of sialic acid -using paramyxoviruses, and alanine substitution compromised their ability to support cell-cell fusion. These results suggest that not only the terminal sialic acid but also the adjacent sugar moiety contribute to receptor function for mumps and these paramyxoviruses. The distribution of structurally different sialylated glycans in tissues and organs may explain in part MuV's distinct tropism to glandular tissues and the central nervous system. 
In the crystal structure, the epitopes for neutralizing antibodies are located around the α-helices of MuV - HN that are not well conserved in amino acid sequences among different genotypes of MuV. This may explain the fact that MuV reinfection sometimes occurs.", "role": "user" }, { "content": "{\"AnatomicalStructure\": [\"glandular\", \"organs\", \"tissues\"], \"Bacterium\": [], \"BiologicFunction\": [\"epidemic parotitis\", \"encephalitis\", \"meningitis\", \"mumps\", \"orchitis\", \"tropism\", \"bind\", \"substitution\", \"cell-cell fusion\"], \"BiomedicalOccupationOrDiscipline\": [], \"BodySubstance\": [], \"BodySystem\": [\"central nervous system\"], \"Chemical\": [\"HN\", \"aromatic\", \"sialylated glycans\", \"epitopes\", \"trisaccharide\", \"\\u03b12,3-sialylated\", \"complex\", \"protein\", \"sugar\", \"sialic acid\", \"hemagglutinin-neuraminidase\", \"neutralizing antibodies\", \"receptor\", \"alanine\", \"Trisaccharide\", \"Crystal structures\", \"\\u03b12,3-linked sialic acid\", \"crystal structure\"], \"ClinicalAttribute\": [], \"Eukaryote\": [], \"Finding\": [\"stabilize\"], \"Food\": [], \"HealthCareActivity\": [\"glycan-binding assays\"], \"InjuryOrPoisoning\": [], \"IntellectualProduct\": [], \"MedicalDevice\": [], \"Organization\": [], \"PopulationGroup\": [], \"ProfessionalOrOccupationalGroup\": [], \"ResearchActivity\": [\"studies\"], \"SpatialConcept\": [\"terminal\", \"\\u03b1-helices\", \"branched\", \"adjacent\", \"amino acid sequences\", \"structurally\"], \"Virus\": [\"mumps virus\", \"Mumps virus\", \"MuV\\u0027s\", \"MuV\", \"paramyxoviruses\"]}", "role": "assistant" } ]
science.pico_ner
science.pico_ner.1628
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Participant\", \"Outcome\", \"Intervention\"].\n\nPlease return the output as a JSON object of the format: {\"Participant\": [\"patients with COPD\", ...], \"Intervention\": [\"Atenolol\", ...], \"Outcome\": [\"blood pressure\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nDoes 3-day course of oral amoxycillin benefit children of non-severe pneumonia with wheeze : a multicentric randomised controlled trial . BACKGROUND WHO-defined pneumonias , treated with antibiotics , are responsible for a significant proportion of childhood morbidity and mortality in the developing countries . Since substantial proportion pneumonias have a viral etiology , where children are more likely to present with wheeze , there is a concern that currently antibiotics are being over-prescribed for it . Hence the current trial was conducted with the objective to show the therapeutic equivalence of two treatments ( placebo and amoxycillin ) for children presenting with non-severe pneumonia with wheeze , who have persistent fast breathing after nebulisation with salbutamol , and have normal chest radiograph . METHODOLOGY This multi-centric , randomised placebo controlled double blind clinical trial intended to investigate equivalent efficacy of placebo and amoxicillin and was conducted in ambulatory care settings in eight government hospitals in India . Participants were children aged 2-59 months of age , who received either oral amoxycillin ( 31-54 mg/Kg/day , in three divided doses for three days ) or placebo , and standard bronchodilator therapy . Primary outcome was clinical failure on or before day- 4 . PRINCIPAL FINDINGS We randomized 836 cases in placebo and 835 in amoxycillin group . Clinical failures occurred in 201 ( 24.0 % ) on placebo and 166 ( 19.9 % ) on amoxycillin ( risk difference 4.2 % in favour of antibiotic , 95 % CI : 0.2 to 8.1 ) . Adherence for both placebo and amoxycillin was > 96 % and 98.9 % subjects were followed up on day- 4 . Clinical failure was associated with ( i ) placebo treatment ( adjusted OR = 1.28 , 95 % CI : 1.01 to1.62 ) , ( ii ) excess respiratory rate of > 10 breaths per minute ( adjusted OR = 1.51 , 95 % CI : 1.19 , 1.92 ) , ( iii ) vomiting at enrolment ( adjusted OR = 1.49 , 95 % CI : 1.13 , 1.96 ) , ( iv ) history of use of broncho-dilators ( adjusted OR = 1.71 , 95 % CI : 1.30 , 2.24 ) and ( v ) non-adherence ( adjusted OR = 8.06 , 95 % CI : 4.36 , 14.92 ) . CONCLUSIONS Treating children with non-severe pneumonia and wheeze with a placebo is not equivalent to treatment with oral amoxycillin . 
TRIAL REGISTRATION ClinicalTrials.gov NCT00407394 .", "role": "user" }, { "content": "{\"Intervention\": [\"oral amoxycillin\", \"amoxycillin\", \"standard bronchodilator therapy\", \"placebo\", \"amoxicillin\"], \"Outcome\": [\"excess respiratory rate of \\u003e 10 breaths per minute\", \"history of use of broncho-dilators\", \"Adherence\", \"Clinical failures\", \"Clinical failure\", \"vomiting at enrolment\", \"non-adherence\", \"clinical failure on or before day- 4\"], \"Participant\": [\"835\", \"836\", \"2-59\", \"wheeze\", \"non-severe pneumonia\", \"children\"]}", "role": "assistant" } ]
science.mslr2022_cochrane_multidoc_summarization
science.mslr2022_cochrane_multidoc_summarization.490
[ { "content": "You will be presented with a collection of medical randomized controlled trial (RCT) reports addressing a shared clinical question -- for instance, the efficacy of a medical intervention on a disease or condition. Given the titles and abstracts of these reports, your task is to generate the conclusion of the systematic review article synthesizing the key findings. Your conclusion should be a few sentences long. In your response, include only the conclusion and no other text. The RCT reports are below.\n\n\n\nTitle 1:\nIndividual and group cognitive-behavioral treatment for work-related stress complaints and sickness absence: a randomized controlled trial.\nAbstract 1:\nWork-related stress is widespread and can lead to long-term absenteeism and work disability. Cognitive-behavioral treatment (CBT) has demonstrated effectiveness in treating psychopathology but has only rarely been tested in clinical samples with work-related stress. A randomized controlled trial was conducted to investigate the efficacy of CBT-based stress management training (SMT). Eighty-two patients on sickness leave with work-related stress were randomly assigned to (a) individual SMT, (b) group SMT, or (c) care as usual (CAU). The SMT comprised 12 sessions conducted by a psychologist. Complaints of burnout and distress were measured at baseline, and at 4, 7, and 10 months. Absenteeism was measured during the whole research period. Across treatment conditions, complaints and sickness absence reduced considerably between baseline and 4 months. Thereafter, complaints remained approximately stable, whereas sickness absence further reduced. Hardly any significant group difference emerged, and no consistent pattern could be discerned in favor of any treatment condition. In subgroups with low depressive complaints, though, individual SMT resulted in larger reductions of some complaints than CAU. In conclusion, this study adds to the evidence that CBT-based interventions as currently practiced are not successful in treating patients with clinical levels of work-related stress.\n Copyright (c) 2008 APA, all rights reserved.\n\n\nTitle 2:\nEffects of a stress management intervention on absenteeism and return to work--results from a randomized wait-list controlled trial.\nAbstract 2:\nHigh levels of work-related stress are associated with increased absenteeism from work and reduced work ability. In this study, we investigated the effects of a stress management intervention on absenteeism and return to work.\n We randomized 102 participants into either the intervention or wait-list control (WLC) group. The intervention group received the intervention in weeks 1-16 from baseline, and the WLC group received the intervention in weeks 17-32. Self-reported data on absenteeism (number of days full- or part-time absent from work within the previous three months) were obtained at 16, 32, and 48 weeks follow-up. Register-based data on long-term absence from work were drawn from the Danish public transfer payments (DREAM) database from baseline and 48 weeks onwards. The DREAM database contains weekly information on long-term sickness absence compensation. The threshold to enter DREAM is sick leave for two consecutive weeks.\n At follow-up in week 16, self-reported absenteeism in the intervention group [median 11 days (range 3-25)] was lower (P=0.02) than in the WLC group [median 45 days (range 19-60)], corresponding to a 29% [95% confidence interval (95% CI) 5-52] reduction. 
On register-based data (cumulated weeks in DREAM, weeks 1-16), the intervention group median [6 weeks (range 0-11)] was lower than that of the WLC group [median 12 weeks (range 8-16)], though not significantly (P=0.06), corresponding to a 21% (95% CI 0-42) reduction. For return to work, a hazard ratio of 1.58 (95% CI 0.89-2.81) favoring the intervention group was found (P=0.12).\n The intervention reduces self-reported absenteeism from work. A similar trend was found from register-based records. No conclusive evidence was found for return to work.\n\n\nTitle 3:\nGuideline-based care of common mental disorders by occupational physicians (CO-OP study): a randomized controlled trial.\nAbstract 3:\nTo evaluate the effectiveness of guideline-based care (GBC) of workers with mental health problems, which promotes counseling by the occupational physician (OP) facilitating return to work (RTW).\n In a randomized controlled trial with police workers on sick leave due to mental health problems (n = 240), trained OPs delivered GBC in the intervention group. Time to RTW and recurrences during 1-year follow-up, analyzed using Cox proportional hazards models, were compared with usual care (UC) with easy access to a psychologist.\n GBC by OPs did not result in earlier RTW than UC. Subgroup analysis showed a small effect in favor of GBC for workers with administrative functions and/or \"minor\" stress-related symptoms.\n GBC did not differ in RTW compared with UC, but may be beneficial for the majority of workers with minor stress-related disorders.\n\n\nTitle 4:\nReducing long term sickness absence by an activating intervention in adjustment disorders: a cluster randomised controlled design.\nAbstract 4:\nTo compare an innovative activating intervention with \"care as usual\" (control group) for the guidance of employees on sickness leave because of an adjustment disorder. It was hypothesised that the intervention would be more effective than care as usual in lowering the intensity of symptoms, increasing psychological resources, and decreasing sickness leave duration.\n A prospective, cluster randomised controlled trial was carried out with 192 patients on first sickness leave for an adjustment disorder. Symptom intensity, sickness duration, and return to work rates were measured at 3 months and 12 months. Analyses were performed on an intention to treat basis.\n At 3 months, significantly more patients in the intervention group had returned to work compared with the control group. At 12 months all patients had returned to work, but sickness leave was shorter in the intervention group than in the control group. The recurrence rate was lower in the intervention group. There were no differences between the two study groups with regard to the decrease of symptoms. 
At baseline, symptom intensity was higher in the patients than in a normal reference population, but decreased over time in a similar manner in both groups to approximately normal levels.\n The experimental intervention for adjustment disorders was successful in shortening sick leave duration, mainly by decreasing long term absenteeism.\n\n\nTitle 5:\nCognitively oriented behavioral rehabilitation in combination with Qigong for patients on long-term sick leave because of burnout: REST--a randomized clinical trial.\nAbstract 5:\nDespite an increase in the occurrence of burnout, there is no agreement on what kind of rehabilitation these patients should be offered.\n Primary aim of this study was to evaluate effects on psychological variables and sick leave rates by two different group rehabilitation programs for patients on long-term sick leave because of burnout. Rehabilitation program A (Cognitively oriented Behavioral Rehabilitation (CBR) and Qigong) was compared with rehabilitation program B (Qigong only).\n In a randomized clinical trial, 96 women and 40 men with a mean age of 41.6 +/- 7.4 years were allocated to one of the two rehabilitation programs.\n A per-protocol analysis showed no significant difference in treatment efficacy between the groups. Both groups improved significantly over time with reduced levels of burnout, self-rated stress behavior, fatigue, depression, anxiety, obsessive-compulsive symptoms, and sick leave rates. In an intention-to-treat analysis, patients in program A had fewer obsessive-compulsive symptoms and larger effect sizes in self-rated stress behavior and obsessive-compulsive symptoms compared to patients in program B.\n This study showed no differences in effect between CBR and Qigong compared with Qigong only in a per-protocol analysis. Both rehabilitation programs showed positive effect for patients with burnout.\n\n\nTitle 6:\nEffectiveness of an intervention to reduce sickness absence in patients with emotional distress or minor mental disorders: a randomized controlled effectiveness trial.\nAbstract 6:\nThe purpose of this study was to evaluate the effectiveness of an activating intervention designed to reduce sick leave duration in patients with emotional distress or minor mental disorders.\n In a 1.5-year randomized controlled trial, 194 patients with minor mental disorders received either an experimental intervention by social workers or general practitioners' usual care. The intervention focused on understanding causes, developing and implementing problem-solving strategies and promoting early work resumption. Outcome measures were sick leave duration, mental health and physical health (questionnaires included the Hospital Anxiety and Depression Scale, the Four-Dimensional Symptom Questionnaire and SF-36), all measured at baseline at and 3, 6 and 18 months later. Multilevel analyses were used to evaluate differences between groups.\n The groups did not differ significantly on any of the outcome measures, except that the experimental group reported higher satisfaction with treatment.\n Although the intervention has benefits, it was not successful at its primary goal (i.e., to reduce sick leave duration in patients with emotional distress or minor mental disorders). 
Programs aimed at the reduction of sick leave duration may yield better results if targeted at patients with more severe emotional problems than at those with exclusively emotional distress or minor mental disorders, or if delivered by caregivers who are closer to the work environment than are social workers, such as occupational physicians.\n\n\nTitle 7:\nA workplace intervention for sick-listed employees with distress: results of a randomised controlled trial.\nAbstract 7:\nTo evaluate the effectiveness of a participatory workplace intervention compared with usual care for sick-listed employees with distress, with regard to return to work (RTW) within the 12-month follow-up.\n Employees with distress and sick-listed for 2-8 weeks were randomised to a workplace intervention (n=73) or to usual care (n=72). The participatory workplace intervention is a stepwise process involving the sick-listed employee and their supervisor, aimed at reducing obstacles for RTW by reaching consensus about an action plan for RTW. Outcome variables were lasting RTW, cumulative sickness absence and stress-related symptoms.\n Overall, an HR of 0.99 (95% CI 0.70 to 1.39) indicated no effect of the workplace intervention on lasting RTW. However, the workplace intervention significantly reduced the time until lasting RTW for employees who at baseline intended to return to work despite symptoms with an HR of 2.05 (95% CI 1.22 to 3.45). Employees who intended to return to work despite symptoms returned to work after 55 days in the workplace intervention group and 120 days in the usual care group. No such effect of the intervention was found for employees without baseline intentions to return to work despite symptoms (HR=0.78, 95% CI 0.47 to 1.28).\n No overall effect of the participatory workplace intervention on lasting RTW was found. The workplace intervention appeared effective on lasting RTW for employees who at baseline intended to return to work despite symptoms. For employees who showed no baseline intention to return to work, the intervention did not have any effect. Other approaches are needed for this subgroup. This trial has been registered at the Dutch National Trial Register ISRCTN92307123.\n\n\nTitle 8:\nA cluster-randomised trial evaluating an intervention for patients with stress-related mental disorders and sick leave in primary care.\nAbstract 8:\nMental health problems often affect functioning to such an extent that they result in sick leave. The worldwide reported prevalence of mental health problems in the working population is 10%-18%. In developed countries, mental health problems are one of the main grounds for receiving disability benefits. In up to 90% of cases the cause is stress-related, and health-care utilisation is mainly restricted to primary care. The aim of this study was to assess the effectiveness of our Minimal Intervention for Stress-related mental disorders with Sick leave (MISS) in primary care, which is intended to reduce sick leave and prevent chronicity of symptoms.\n Cluster-randomised controlled educational trial.\n Primary health-care practices in the Amsterdam area, The Netherlands.\n A total of 433 patients (MISS n = 227, usual care [UC] n = 206) with sick leave and self-reported elevated level of distress.\n Forty-six primary care physicians were randomised to either receive training in the MISS or to provide UC. Eligible patients were screened by mail.\n The primary outcome measure was duration of sick leave until lasting full return to work. 
The secondary outcomes were levels of self-reported distress, depression, anxiety, and somatisation.\n No superior effect of the MISS was found on duration of sick leave (hazard ratio 1.06, 95% confidence interval 0.87-1.29) nor on severity of self-reported symptoms.\n We found no evidence that the MISS is more effective than UC in our study sample of distressed patients. Continuing research should focus on the potential beneficial effects of the MISS; we need to investigate which elements of the intervention might be useful and which elements should be adjusted to make the MISS effective.", "role": "user" }, { "content": "We found moderate-quality evidence that CBT did not significantly reduce time until partial RTW and low-quality evidence that it did not significantly reduce time to full RTW compared with no treatment. Moderate-quality evidence showed that PST significantly enhanced partial RTW at one-year follow-up compared to non-guideline based care but did not significantly enhance time to full RTW at one-year follow-up. An important limitation was the small number of studies included in the meta-analyses and the small number of participants, which lowered the power of the analyses.", "role": "assistant" } ]
science.bc7_litcovid_topic_classification
science.bc7_litcovid_topic_classification.7185
[ { "content": "You will be presented with a paragraph from a COVID-19 literature. Your task is to classify the topics of this piece of text by choosing from the following categories:\n\n- Epidemic Forecasting: The content relates to predictions about the spread of COVID-19.\n- Treatment: The paragraph discusses therapeutic methods or medication for COVID-19.\n- Prevention: The content covers measures to prevent the spread of COVID-19.\n- Mechanism: The paragraph explains the biological mechanisms of the virus.\n- Case Report: The content is a report of a COVID-19 case.\n- Transmission: The paragraph discusses how COVID-19 is transmitted.\n- Diagnosis: The content is about diagnosing COVID-19 in individuals.\n\nYour classification should be provided as an element within a JSON array. The array should be formatted with topics enclosed in double quotes and separated by commas, following the JSON format, for example: [\"Topic1\", \"Topic2\", ...]. Include all applicable categories within the array. \nYour output should be a valid JSON array, and no additional text should be included in your response.\n\nArticle:\n\nThe COVID-19 pandemic has forced an abrupt change in the delivery of clinical services, including for individuals with an eating disorder. We present this Virtual Issue as a resource for the eating disorder community to showcase research published in the International Journal of Eating Disorders that provides information on effective strategies to help address the challenges arising from COVID-19-related disruptions. Articles included describe original research or systematic reviews on obstacles to health services use and strategies to improve access to care; technological tools to provide or enhance interventions; patients' and clinicians' attitudes or perspectives on using digital tools for clinical care; factors influencing therapeutic alliance; and ideas for improving reach and uptake of digital interventions. We hope that readers will find ways to observe and record their own experiences during this global crisis; the experiences of people at risk for developing or exhibiting an eating disorder; and the experiences of those who care for people with an eating disorder. These lived experiences will be invaluable in formulating hypotheses for future studies in service of advancing the understanding of eating disorders and improving interventions and policies for reducing the burden of suffering attributable to eating disorders.", "role": "user" }, { "content": "[\"Prevention\"]", "role": "assistant" } ]
science.craftchem_ner
science.craftchem_ner.2352
[ { "content": "You will be presented with a sentence from biomedical research text. Extract all terms that correspond to chemical entities of biological interest (ChEBI).\nReturn your identified terms in a JSON array, formatted like this: [\"Entity1\", \"Entity2\", ...].\nIf no such entities are detected, respond by providing an empty JSON array [].\n\nBiomedical context:\nStriavasculariswithoutspiralligamentwasobtainedbymicrodissection.", "role": "user" }, { "content": "[]", "role": "assistant" } ]
science.healthver_entailment
science.healthver_entailment.2725
[ { "content": "You will be shown a claim about public health and the abstract of a biomedical research paper. Each sentence from the abstract will be on a separate line. Your task is to return a JSON object with two fields:\n\n- \"verdict\": The fact-checking verdict. If the information in the abstract supports the claim, write \"SUPPORT\". If the abstract contradicts the claim, write \"CONTRADICT\". If the abstract does not provide enough information to arrive at a verdict, write \"NEI\" (for \"not enough information\").\n- \"evidence\": An array of sentences providing evidence for the verdict. Please copy all relevant sentences verbatim from the abstract. If the verdict was \"NEI\", then return an empty array.\n\nFor instance, if the model were given the claim \"wearing masks can prevent the spread of COVID\", the output might be:\n{\n \"verdict\": \"SUPPORT\",\n \"evidence\": [\"Our findings indicate that mass mask-wearing reduces the transmission rate for COVID-19.\"]\n}\n\nClaim: Social distancing for COVID-19 means avoiding places or gatherings where you are likely to be exposed to respiratory droplets from others - directly or on surfaces\n\nAbstract:\nThe Australian and New Zealand governments both initiated strict social distancing measures in response to the COVID-19 pandemic in late March.\nIt remains difficult to quantify the impact this had in reducing the spread of the virus.\nBayesian structural time series model provide a model to quantify the scenario in which these government-level interventions were not placed.\nOur models predict these strict social distancing measures caused a 79% and 61% reduction in the daily cases of COVID-19 across Australia and New Zealand respectively.\nThis provides both evidence and impetus for governments considering similar measures in response to COVID-19 and other pandemics.", "role": "user" }, { "content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"Our models predict these strict social distancing measures caused a 79% and 61% reduction in the daily cases of COVID-19 across Australia and New Zealand respectively.\", \"This provides both evidence and impetus for governments considering similar measures in response to COVID-19 and other pandemics.\"]}", "role": "assistant" } ]
science.craftchem_ner
science.craftchem_ner.8359
[ { "content": "You will be presented with a sentence from biomedical research text. Extract all terms that correspond to chemical entities of biological interest (ChEBI).\nReturn your identified terms in a JSON array, formatted like this: [\"Entity1\", \"Entity2\", ...].\nIf no such entities are detected, respond by providing an empty JSON array [].\n\nBiomedical context:\nCox11behavesasapartialnoncompetitiveinhibitorofHKIbyaffectingtheVmaxofHKIforglucose(Figure2B).", "role": "user" }, { "content": "[\"inhibitor\", \"glucose\"]", "role": "assistant" } ]
science.data_reco_mcq_sc
science.data_reco_mcq_sc.2653
[ { "content": "Given a short query with keyphrases about a research idea in 2019 in the domain of artificial intelligence, recommend the relevant datasets to validate the research idea from the options provided. A short description of the datasets is also provided to help you.\n\nQuery: I want to train a model for object detection from images and texts.\n\nKeyphrases: object detection images text\n\nDataset description:\nCOCO: The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\nCOCO-Tasks: Comprises about 40,000 images where the most suitable objects for 14 tasks have been annotated.\nCOVERAGE: COVERAGE contains copymove forged (CMFD) images and their originals with similar but genuine objects (SGOs). COVERAGE is designed to highlight and address tamper detection ambiguity of popular methods, caused by self-similarity within natural images.\nSNIPS: The SNIPS Natural Language Understanding benchmark is a dataset of over 16,000 crowdsourced queries distributed among 7 user intents of various complexity: SearchCreativeWork (e.g. Find me the I, Robot television show), GetWeather (e.g.\n\nOptions: A: COCO B: COCO-Tasks C: COVERAGE D: SNIPS", "role": "user" }, { "content": "COCO", "role": "assistant" } ]
science.scicite_classification
science.scicite_classification.1383
[ { "content": "You will be presented with a citation sentence from a research paper. Classify the intent behind this citation by choosing from one of the following categories:\n\n- Method: Cites the paper for its methodology or procedure.\n- Background: Cites the paper to provide background information.\n- Result: Cites the paper for its findings or results.\n\nYour answer should be a single word from the following list of options: [\"Method\", \"Background\", \"Result\"]. Do not include any other text in your response.\n\nCitation sentence:\n\n, 2006) and Bombyx mori (Gui et al., 2006; Lee et al., 2009) and are involved in programmed cell death during metamorphosis.", "role": "user" }, { "content": "Background", "role": "assistant" } ]
science.chemprot_ner
science.chemprot_ner.742
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Chemical\", \"Protein\"].\n\nPlease return the output as a JSON object of the format: {\"Chemical\" : [\"Dexamethasone\", ...], \"Protein\" : [\"BRCA-1\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nVerapamil prevents torsade de pointes by reduction of transmural dispersion of repolarization and suppression of early afterdepolarizations in an intact heart model of LQT3.\nBACKGROUND: In long QT syndrome (LQTS), prolongation of the QT-interval is associated with sudden cardiac death resulting from potentially life-threatening polymorphic tachycardia of the torsade de pointes (TdP) type. Experimental as well as clinical reports support the hypothesis that calcium channel blockers such as verapamil may be an appropriate therapeutic approach in LQTS. We investigated the electrophysiologic mechanism by which verapamil suppresses TdP, in a recently developed intact heart model of LQT3. METHODS AND RESULTS: In 8 Langendorff-perfused rabbit hearts, veratridine (0.1 microM), an inhibitor of sodium channel inactivation, led to a marked increase in QT-interval and simultaneously recorded monophasic ventricular action potentials (MAPs) (p < 0.05) thereby mimicking LQT3. In bradycardic (AV-blocked) hearts, simultaneous recording of up to eight epi- and endocardial MAPs demonstrated a significant increase in total dispersion of repolarization (56%, p < 0.05) and reverse frequency-dependence. After lowering potassium concentration, veratridine reproducibly led to early afterdepolarizations (EADs) and TdP in 6 of 8 (75%) hearts. Additional infusion of verapamil (0.75 microM) suppressed EADs and consecutively TdP in all hearts. Verapamil significantly shortened endocardial but not epicardial MAPs which resulted in significant reduction of ventricular transmural dispersion of repolarization. CONCLUSIONS: Verapamil is highly effective in preventing TdP via shortening of endocardial MAPs, reduction of left ventricular transmural dispersion of repolarization and suppression of EADs in an intact heart model of LQT3. These data suggest a possible therapeutic role of verapamil in the treatment of LQT3 patients.", "role": "user" }, { "content": "{\"Chemical\": [\"potassium\", \"calcium\", \"sodium\", \"Verapamil\", \"veratridine\", \"verapamil\"], \"Protein\": [\"calcium channel\", \"sodium channel\"]}", "role": "assistant" } ]
science.multixscience_multidoc_summarization
science.multixscience_multidoc_summarization.14274
[ { "content": "You are provided with the abstract of a scientific paper and the abstracts of the papers it references. Your task is to write a Related Work section for the main paper. This section should concisely summarize and integrate the key points from the cited abstracts, reflecting how they relate to the main paper's contributions and context. In your summary, use numerical citation markers like [1], [2], etc., to refer to the corresponding papers.\n\nMain Abstract:\nRobots need to understand their environment to perform their task. If it is possible to pre-program a visual scene analysis process in closed environments, robots operating in an open environment would benefit from the ability to learn it through their interaction with their environment. This ability furthermore opens the way to the acquisition of affordances maps in which the action capabilities of the robot structure its visual scene understanding. We propose an approach to build such affordances maps by relying on an interactive perception approach and an online classification. In the proposed formalization of affordances, actions and effects are related to visual features, not objects, and they can be combined. We have tested the approach on three action primitives and on a real PR2 robot.\n\nCited Abstract(s):\n[1]: Recently, the aspect of visual perception has been explored in the context of Gibson's concept of affordances [1] in various ways. We focus in this work on the importance of developmental learning and the perceptual cueing for an agent's anticipation of opportunities for interaction, in extension to functional views on visual feature representations. The concept for the incremental learning of abstract from basic affordances is presented in relation to learning of complex affordance features. In addition, the work proposes that the originally defined representational concept for the perception of affordances - in terms of using either motion or 3D cues - should be generalized towards using arbitrary visual feature representations. We demonstrate the learning of causal relations between visual cues and associated anticipated interactions by reinforcement learning of predictive perceptual states. We pursue a recently presented framework for cueing and recognition of affordance-based visual entities that obviously plays an important role in robot control architectures, in analogy to human perception. We experimentally verify the concept within a real world robot scenario by learning predictive visual cues using reinforcement signals, proving that features were selected for their relevance in predicting opportunities for interaction.", "role": "user" }, { "content": "In a more developmental perspective, [1] proposed a framework to learn composite affordances by starting from low level affordances. Their approach is split into 3 steps: first, the robot explores its environment with a reactive behavior, like a grasp reflex, and collects visual data consisting of SIFT. Then, in a second step, basic affordances are learned with simple actions such as pushing or gripping. Finally, in the third step, the robot learns composite affordances based on a combination of the basic action used in the previous step. For instance, this combination of actions allows the robot to achieve stacking. They validate their framework with a mobile robot equipped with a stereo camera and a magnetized end-effector. 
In a real environment the robot tries to learn to identify objects that are liftable with its magnetized end-effector.", "role": "assistant" } ]
science.scientific_lay_summarisation_plos_single_doc_summ
science.scientific_lay_summarisation_plos_single_doc_summ.2958
[ { "content": "You will be presented with the abstract, introduction, and discussion section from a biomedical research article. Your task is to create a summary that a layperson can understand, capturing the essence of the research article. Keep essential scientific terms, but ensure that the language remains clear and the concepts are explained in an uncomplicated manner.\n\nTitle: Biphasic Synaptic Ca Influx Arising from Compartmentalized Electrical Signals in Dendritic Spines\nArticle:\nAbstract:\nExcitatory synapses on mammalian principal neurons are typically formed onto dendritic spines, which consist of a bulbous head separated from the parent dendrite by a thin neck. Although activation of voltage-gated channels in the spine and stimulus-evoked constriction of the spine neck can influence synaptic signals, the contribution of electrical filtering by the spine neck to basal synaptic transmission is largely unknown. Here we use spine and dendrite calcium( Ca) imaging combined with 2-photon laser photolysis of caged glutamate to assess the impact of electrical filtering imposed by the spine morphology on synaptic Ca transients. We find that in apical spines of CA1 hippocampal neurons, the spine neck creates a barrier to the propagation of current, which causes a voltage drop and results in spatially inhomogeneous activation of voltage-gated Ca channels( VGCCs) on a micron length scale. Furthermore, AMPA and NMDA-type glutamate receptors( AMPARs and NMDARs, respectively) that are colocalized on individual spine heads interact to produce two kinetically and mechanistically distinct phases of synaptically evoked Ca influx. Rapid depolarization of the spine triggers a brief and large Ca current whose amplitude is regulated in a graded manner by the number of open AMPARs and whose duration is terminated by the opening of small conductance Ca-activated potassium( SK) channels. A slower phase of Ca influx is independent of AMPAR opening and is determined by the number of open NMDARs and the post-stimulus potential in the spine. Biphasic synaptic Ca influx only occurs when AMPARs and NMDARs are coactive within an individual spine. These results demonstrate that the morphology of dendritic spines endows associated synapses with specialized modes of signaling and permits the graded and independent control of multiple phases of synaptic Ca influx.\nIntroduction:\nVarious theories have been proposed to explain why excitatory synapses of principal neurons are formed onto the heads of dendritic spines. These include proposals that spines increase the maximal density of contacts onto a linear dendrite[1],[2], that the long spine increases the repertoire of possible presynaptic partners for each dendrite[3], and that the thin spine neck diffusionally isolates the associated postsynaptic terminal in order to allow the synapse-specific induction and expression of plasticity[4]. A further proposal that has been controversial is that the thin neck creates an electrical resistance that, in conjunction with the membrane capacitance, significantly filters synaptic potentials[4]–[7]. For example, a high spine neck resistance would allow for the generation of large synaptic potentials that are limited to the active spine head and result in differential activation of voltage-sensitive ion channels on either side of the neck[8]–[10]. 
Based on its morphology and diffusional properties, the electrical resistance of the neck of most dendritic spines in their basal state has been estimated to be <100 MΩ, too low to create a large voltage drop across the neck in response to typical synaptic currents as measured at the soma[1],[2],[11]. These estimates depend on an approximation of the resistivity of the intracellular space within the neck, which given the thinness of the structure, its high protein content, and our poor understanding of its biochemistry, may differ substantially from that of other portions of the cell. Furthermore, biophysical constraints of charge flow through a narrow juxtamembrane space may not be revealed by studies of diffusion of uncharged molecules. Lastly, current flow out of the neck into the dendrite may be attenuated even if the neck resistance is low due to opening of ion channels or transporters located selectively in the spine neck. Several recent studies have suggested that electrical filtering by the spine neck may be substantial and may be regulated by activity. First, voltage-sensitive dye and second harmonic generation imaging have suggested shunting of synaptic currents at long-neck spines[5]. Similarly, glutamate uncaging at individual spines revealed that stimulation of spines with longer necks produces smaller potentials measured at the soma[5],[12],[13]. However, the lack of independent quantification of the number of AMPARs on these spines renders the interpretation of the data in these studies difficult. Second, diffusional equilibration across the spine neck is retarded following paired pre- and postsynaptic stimulation[14] or prolonged postsynaptic depolarization[9]. In such post-constriction spines the spine neck resistance may approach 1 GΩ, opening the possibility that activity-dependent regulation of spine/dendrite coupling is a mechanism for dynamic control of synaptic signals. Indeed, after apparent constriction of the spine neck by depolarization to 0 mV, the contribution of voltage-gated ion channels to synaptic Ca signaling is large; however, no analysis of synapses in their basal state was performed[9]. Lastly, a separate study of spines in their basal state found that VGCCs in the spine head are activated by unitary synaptic stimuli, suggesting the existence of a large synaptic depolarization within most spines. In particular, VGCCs blocked by SNX-482 are located in the spine head and open during synaptic stimuli, both contributing to synaptic Ca influx and leading to the activation of SK channels[14]. However, SNX-482-sensitive VGCCs are not found in the dendritic shaft, and therefore their selective activation in the spine head may reflect their inhomogeneous distribution and not compartmentalization of synaptic potentials. Here we examine if the segregation of excitatory synapses onto the heads of dendritic spines endows the synapse with specialized modes of regulation and signaling that would not be available to synapses formed directly onto the dendritic shaft. Since quantifying the spine neck resistance and the voltage drop across the neck depends on difficult-to-measure cellular parameters, we avoid performing these estimates and instead focus on uncovering functional evidence that synaptic signals are influenced by the electrical and morphological properties of the spine. 
We find that, for spines in their basal state, the voltage drop across the spine neck during synaptic stimulation is sufficient to allow for differential activation of the voltage-sensitive ion channels in the spine and dendrite. Furthermore, depolarization reached in the spine during synaptic activity creates a large and brief component of Ca influx that is substantially accelerated compared to the typical slow kinetics of NMDAR signaling. The rapid and slow components of synaptic Ca influx arise through distinct mechanisms and are differentially affected by modulation of AMPARs and SK channels. Lastly, the biphasic nature of Ca influx arises directly from the colocalization of AMPARs and NMDARs on the spine head and cannot be recapitulated by stimulation in the adjacent dendrite.\nDiscussion:\nWe have demonstrated that synaptic depolarization is sufficient to activate voltage-sensitive Ca sources located on the spine but that an EPSP-like dendritic depolarization is not. Previous studies have shown that synaptically evoked Ca influx is limited to the active spine but were performed in conditions in which the contribution of VGCCs to Ca influx was obscured[33],[34]. These studies had been designed to measure NMDAR-mediated Ca influx and either held the neuron at 0 mV to intentionally prevent stimulus-evoked VGCC activation, or imaged using a high affinity Ca buffer, which enhances the contributions of long-lived Ca sources such as the NMDAR relative to those of short-lived Ca sources such as VGCCs. Our study demonstrates that under conditions in which Ca influx through spine VGCCs can be measured, their activation only occurs when they are located on the active spine. This indicates that VGCCs and other voltage-gated channels in one spine are unlikely to be opened by synaptic activity at neighboring spines or subthreshold depolarizations in the dendrite. Since specializations of Ca handling prevent diffusion of Ca from active to inactive spines[26] and electrical filtering prevents the spread of VGCC-activating potentials, signaling cascades triggered by synaptic activation of VGCCs are limited to the active spine. Thus, synaptic cross-talk among neighboring spines is likely to be predominately biochemical in nature and downstream of Ca[35],[36]. It is important to note that these conclusions do not apply to stimuli that trigger local or back-propagating dendritic action potentials, which reliably invade active and inactive spines[17],[21],[34],[37],[38]. In addition, we find that the existence of a large depolarization within the active spine transiently couples AMPAR activation and synaptic Ca sources, generating a fast, spike-like phase of Ca influx. This rapid phase is temporally and mechanistically distinct from the prolonged phase of NMDAR-dependent Ca influx. Since the activation of biochemical events downstream of Ca depend on both the kinetics and amplitude of the Ca transients, independent modulation of these parameters may fine-tune the activation of Ca-dependent signaling pathways. Furthermore, differential regulation of each phase may permit subtle and nuanced regulation of synaptic signaling cascades, such as those underlying spike-timing dependent potentiation and depression. The current study was performed using combined 2PLSM and 2PLP, which allowed delivery of controlled stimuli to visually identified portions of the dendritic arbor[17],[35],[39],[40]. 
This approach also allows the examination of postsynaptic signaling in conditions, such as blockade of voltage-gated Na and Ca channels, which prevent action potential-evoked release of neurotransmitter from the presynaptic terminal. When using glutamate uncaging, an important parameter that must be carefully considered is the strength at which individual postsynaptic terminals are stimulated and how the amount of glutamate released compares to that contained in a single vesicle. Here we calibrate uncaging laser power delivered to each spine using a photo-bleaching strategy[15]. This approach delivers an amount of glutamate that, on average and under conditions of excellent space-clamp, produces a uncaged-evoked EPSCs of ∼10–13 pA in amplitude, similar to that previously reported for spontaneous miniature EPSCs in hippocampal pyramidal neurons[18],[41],[42]. In current clamped neurons, this same stimulus results in ∼0. 8–1 mV uEPSPs( see Figure 2)[15]. This amplitude is similar to that of unitary EPSPs in 2- to 4-wk-old rat tissue evoked by stimulation of individual synapses formed onto large dendritic spines similar to those studied here[16]. In contrast, a study of adult( 6–10 wk) rat hippocampal CA1 pyramidal neurons found that single vesicle EPSPs recorded from proximal dendrites and measured in Sr2+ or following hypertonic sucrose puffs averaged 0. 2 mV in amplitude and ranged from 0. 1 to 1. 5 mV while EPSCs ranged from a few to tens of picoamps[18]. The large amplitude of uEPSPs in our hands despite the relatively small size of uncaged-evoked EPSCs may result from the smaller size and hence larger input resistance of juvenile mouse CA1 pyramidal neurons. In addition, in this study we selected large-head spines, which have higher AMPA receptor content compared to the average of all synapses found on the apical dendrite and are thus expected to generate larger potentials[43],[44]. In order to ensure that our results are not due to overly strong stimulation, we analyzed the kinetics and NBQX sensitivity of spine head Ca transients evoked by smaller depolarizations( ∼0. 4 mV) evoked by uncaging for 300 µs. This smaller stimulus also produced biphasic calcium influx with a rapid phase that was blocked by antagonists of AMPARs( Figure S1). Taken together, these points suggest that while the uncaging stimulus is at the upper end of the physiologically relevant range, biphasic Ca influx occurs over a spectrum of relevant stimulus strengths. However, one cannot fully discard the possibility that the spatiotemporal pattern of glutamate release following uncaging enhances effects that contribute less robustly during release of glutamate from the presynaptic terminal. Our results predict that changes in the numbers of AMPARs at the synapse over the course of development or following the induction of many forms of plasticity will not only determine the amplitude of the synaptic potential but will also directly alter the synaptic Ca transient. Finally, the morphological features of the spine, specifically the dimensions of the spine neck, can restrict the movement of molecules[11],[14],[45] and attenuate electrical signals exchanged between the spine head and dendrite[5]. 
Thus, identification of molecules involved in regulating the length and, in particular, the diameter of the spine neck will be important future avenues of study.", "role": "user" }, { "content": "The vast majority of excitatory synapses in the mammalian central nervous system are made onto dendritic spines , small ( < 1 fL ) membranous structures stippled along the dendrite . The head of each spine is separated from its parent dendrite by a thin neck – a morphological feature that intuitively suggests it might function to limit the transmission of electrical and biochemical signals . Unfortunately , the extremely small size of spines has made direct measurements of their electrical properties difficult and , therefore , the functional implications of electrical compartmentalization have remained elusive . In this study , we use spatiotemporally controlled stimulation to generate calcium signals within the spine head and/or neighboring dendrite . By comparing these measurements we demonstrate that spines create specialized electrical signaling compartments , which has at least two functional consequences . First , synaptic stimulation , but not similar dendritic depolarization , can trigger the activation of voltage-gated calcium channels within the spine . Second , voltage changes in the spine head arising from compartmentalization shape the time course of synaptically evoked calcium influx such that it is biphasic . Thus , the electrical compartmentalization provided by spines allows for multiple modes of calcium signaling at excitatory synapses .", "role": "assistant" } ]
science.scientific_lay_summarisation_plos_single_doc_summ
science.scientific_lay_summarisation_plos_single_doc_summ.20599
[ { "content": "You will be presented with the abstract, introduction, and discussion section from a biomedical research article. Your task is to create a summary that a layperson can understand, capturing the essence of the research article. Keep essential scientific terms, but ensure that the language remains clear and the concepts are explained in an uncomplicated manner.\n\nTitle: Increased Brain Signal Variability Accompanies Lower Behavioral Variability in Development\nArticle:\nAbstract:\nAs the brain matures, its responses become optimized. Behavioral measures show this through improved accuracy and decreased trial-to-trial variability. The question remains whether the supporting brain dynamics show a similar decrease in variability. We examined the relation between variability in single trial evoked electrical activity of the brain( measured with EEG) and performance of a face memory task in children( 8–15 y) and young adults( 20–33 y). Behaviorally, children showed slower, more variable response times( RT), and less accurate recognition than adults. However, brain signal variability increased with age, and showed strong negative correlations with intrasubject RT variability and positive correlations with accuracy. Thus, maturation appears to lead to a brain with greater functional variability, which is indicative of enhanced neural complexity. This variability may reflect a broader repertoire of metastable brain states and more fluid transitions among them that enable optimum responses. Our results suggest that the moment-to-moment variability in brain activity may be a critical index of the cognitive capacity of the brain.\nIntroduction:\nDuring neurodevelopment, behavioural performance tends to improve in terms of speed and accuracy. This improvement usually entails a decrease in trial-to-trial variability as performance approaches ceiling e. g.[1],[2]. A fundamental question is whether the neuronal dynamics that mediate behaviour show a similar decrease in variability. There are two arguments that furnish opposite predictions. The first is that neurodevelopmental trajectories will converge on optimal structure and dynamics, suggesting that trial-to-trial variations in evoked neuronal responses will decrease with age. This argument is particularly attractive in view of known neural pruning that accompanies normal brain maturation[3]. The second argument is that, if the computations underlying performance rest on adaptive, metastable brain dynamics[4], there may be an age-related increase in trial-to-trial variability. Functional variability represents a greater repertoire of metastable brain states and the more facile state transitions[5]. In this work, we disambiguated between these competing hypotheses by relating trial-to-trial variability in behavior with brain electrical activity across a sample of children and adults. Neural systems can show a somewhat counterintuitive property, whereby optimal operations occur in the presence of a moderate amount of internal variability or noise[6]. For example, the phenomena of stochastic resonance describes how a simple nonlinear system can show an optimal signal-to-noise ratio with a moderate amount of noise, which enables the detection of weak periodic signals[7]–[9]. From cell channels to synapse to neural ensembles, noise seems to be an important parameter that shapes responsivity[10],[11]. 
It has been suggested that there may need to be a degree of physiological variability for the brain to adapt effectively to an uncertain environment[12]. While there are sophisticated computational models demonstrating the beneficial effects of noise for network dynamics[13],[14], the direct relation between neural variability and the behavior variability of the organism has been largely unexplored[15]–[17]. We sought to characterize the relation of neurophysiological variability and behavioral variability in maturation. Critically, our focus was not on variability of the signal across individuals within a group( interindividual), but rather the single-trial variability within an individual( intraindividual)[18]. Measures of single trial variability in electroencephalographic( EEG) signals within subjects were related to variability in response latency and accuracy. EEG signals were measured from children( ages 8–15 yrs, n = 55) and young adults( 20–33 yrs, n = 24) during the performance of a face recognition task[19].\nDiscussion:\nIn contrast to behavioral variability, brain variability increases with maturation. With maturation comes differentiation and specialization of brain regions, but at the same time there is increased integration between distributed neuronal populations and establishment of new functional connections[29]. The change in balance between differentiation and specialization would produce more variability in on-going activity as the number of simultaneous processes possible at any given moment increases. Mature and integrated nervous systems generally have more prolonged and complicated neural transients[30]. Such transients are characteristic of a system with high neural complexity[31]. With the maturational increase in brain signal variability there is an increase in behavioral stability. Across the sample we studied, subjects with higher signal variability showed less variability in response latency( measured with cvRT) and greater performance accuracy. When the measures were adjusted for the chronological age, the relationship between brain and behavioral variability weakened, suggesting that a large part of the relationship represented a maturational effect. It is noteworthy that mean reaction time, which also showed a maturational change, did not significantly correlate with brain variability. It may be that other physiological factors are more important for the response speed change during maturation. By contrast, strong correlations with behavioral consistency indicates that cvRT and accuracy are likely tapping into aspects of the behavioral tuning which are more tightly related to the changes in brain complexity/variability. The present results may seem at odds with the intuitive notion of behavior and brain variability, where one would expect that they go hand in hand. However, the results do make sense when the nonlinear dynamics of the nervous system are considered. Internal variability may be vital to enable the brain to parse weak and ambiguous incoming signals[10],[32],[33]. Variability can facilitate the exchange signals between neurons[34],[35], transitions in metastable systems[7], and the formation of functional networks[17],[36]. As the nervous system matures, physiological variability increases, which is captured by increases in complexity[37],[38], and the system can better adapt to its environment. Maturational changes that have been reported in children's evoked potentials can also be related to increased brain signal complexity. 
Compared to adults, the average evoked responses in children tends to show higher amplitude and longer latency on early responses, and less well-defined later responses[20],[21],[39]. Spectral power distribution also changes with maturation, with a gradual reduction in low frequencies and an increase in higher frequencies[22],[40]. The relative change in spectral power density presumably underlies the reduction in the latency of the evoked responses and its multicomponent nature, where the lower frequency bias in children would yield slow and broad evoked potentials. The emergence of higher frequencies with maturation would both decrease the evoked response latency and allow additional deflections to emerge( i. e., N100, P200, etc.). Both the spectral power and evoked response changes would be expected given the maturational increase in complexity. The emergence of higher frequencies would reflect the enhanced local processing( segregation), whereas the multicomponent evoked response is thought to reflect reentrant interactions[41], suggesting enhanced integration. In the age range of the children we studied the brain is in a state of structural and functional refinement[42],[43]. Myelination and neural pruning increase differentiation of information flow in the brain, enabling a shift from a system that responds in a slow and stimulus-locked manner, to one that responds more rapidly and where the internal variability reflects the parallel exploration of the functional repertoire before converging to an optimal response[16],[44]. In the case of normal development, the increased variability leads to a stabilization of behavior, increasing the cognitive repertoire of the system. One may postulate that internal variability would mature to some optimum level, based on both physiology and experience, but that further increases or decreases, coming from disease or damage would compromise behavioral stability. The suggestions derived from the present findings contribute to the growing evidence that internal dynamics are a key feature governing brain function[45]–[47].", "role": "user" }, { "content": "Intuitive notions of brain–behavior relationships would suggest that because children show more variability in behavior , their brains should also be more variable . We demonstrate that this is not the case . In measuring brain signal variability with EEG and behavior in a simple face recognition task , we found that brain signal variability increases in children from 8–15 y and is even higher in young adults . Importantly , we show that this increased brain variability correlates with reduced behavioral variability and more accurate performance . A brain that has more variability also has greater complexity and a greater capacity for information processing . The implication of our findings is that variability in brain signals , or what some would call noise , is actually a critical feature of brain function . For the brain to operate at an optimal level , a certain amount of internal noise is necessary . In a certain way it could be stated that a noisy brain is a healthy brain .", "role": "assistant" } ]
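Editor's note on the cvRT metric referenced in the record above: it is the intra-subject coefficient of variation of response time. The snippet below is a minimal illustrative sketch under the assumption that cvRT is the sample standard deviation divided by the mean of one subject's single-trial RTs; the function name and the trial values are hypothetical and are not taken from the study.

```python
import numpy as np

def cv_rt(rts_ms):
    # Coefficient of variation of one subject's response times:
    # trial-to-trial variability normalized by mean response speed.
    rts = np.asarray(rts_ms, dtype=float)
    return rts.std(ddof=1) / rts.mean()

print(cv_rt([412.0, 535.0, 388.0, 601.0, 457.0]))  # synthetic trial RTs in ms
```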
science.mslr2022_cochrane_multidoc_summarization
science.mslr2022_cochrane_multidoc_summarization.941
[ { "content": "You will be presented with a collection of medical randomized controlled trial (RCT) reports addressing a shared clinical question -- for instance, the efficacy of a medical intervention on a disease or condition. Given the titles and abstracts of these reports, your task is to generate the conclusion of the systematic review article synthesizing the key findings. Your conclusion should be a few sentences long. In your response, include only the conclusion and no other text. The RCT reports are below.\n\n\n\nTitle 1:\nOmega-3 free fatty acids for the maintenance of remission in Crohn disease: the EPIC Randomized Controlled Trials.\nAbstract 1:\nMaintenance therapy for Crohn disease features the use of immunosuppressive drugs, which are associated with an increased risk of infection. Identification of safe and effective maintenance strategies is a priority.\n To determine whether the oral administration of omega-3 free fatty acids is more effective than placebo for prevention of relapse of Crohn disease.\n Two randomized, double-blind, placebo-controlled studies (Epanova Program in Crohn's Study 1 [EPIC-1] and EPIC-2) conducted between January 2003 and February 2007 at 98 centers in Canada, Europe, Israel, and the United States. Data from 363 and 375 patients with quiescent Crohn disease were evaluated in EPIC-1 and EPIC-2, respectively.\n Patients with a Crohn's Disease Activity Index (CDAI) score of less than 150 were randomly assigned to receive either 4 g/d of omega-3 free fatty acids or placebo for up to 58 weeks. No other treatments for Crohn disease were permitted.\n Clinical relapse, as defined by a CDAI score of 150 points or greater and an increase of more than 70 points from the baseline value, or initiation of treatment for active Crohn disease.\n For EPIC-1, 188 patients were assigned to receive omega-3 free fatty acids and 186 patients to receive placebo. Corresponding numbers for EPIC-2 were 189 and 190 patients, respectively. The rate of relapse at 1 year in EPIC-1 was 31.6% in patients who received omega-3 free fatty acids and 35.7% in those who received placebo (hazard ratio, 0.82; 95% confidence interval, 0.51-1.19; P = .30). Corresponding values for EPIC-2 were 47.8% and 48.8% (hazard ratio, 0.90; 95% confidence interval, 0.67-1.21; P = .48). Serious adverse events were uncommon and mostly related to Crohn disease.\n In these trials, treatment with omega-3 free fatty acids was not effective for the prevention of relapse in Crohn disease.\n clinicaltrials.gov Identifiers: EPIC-1: NCT00613197, EPIC-2: NCT00074542.\n\n\nTitle 2:\nUsefulness of omega-3 fatty acid supplementation in addition to mesalazine in maintaining remission in pediatric Crohn's disease: a double-blind, randomized, placebo-controlled study.\nAbstract 2:\nTo assess the value of long-chain omega-3 fatty acids (FAs) supplementation in addition to amino-salicylic-acid (5-ASA) in pediatric patients with Crohn's disease (CD).\n Thirty-eight patients (20 males and 18 females, mean age 10.13 years, range 5-16 years) with CD in remission were randomized into two groups and treated for 12 mo. Group I (18 patients) received 5-ASA (50 mg/kg/d)+ omega-3 FAs as triglycerides in gastro-resistant capsules, 3 g/d (eicosapentanoic acid, EPA, 400 mg/g, docosahexaenoic acid, DHA, 200 mg/g). Group II (20 patients) received 5-ASA (50 mg/kg/d)+olive oil placebo capsules. 
Patients were evaluated for fatty acid incorporation in red blood cell membranes by gas chromatography at baseline 6 and 12 mo after the treatment.\n The number of patients who relapsed at 1 year was significantly lower in group I than in group II (P<0.001). Patients in group I had a significant increase in the incorporation of EPA and DHA (P<0.001) and a decrease in the presence of arachidonic acids.\n Enteric-coated omega-3 FAs in addition to treatment with 5-ASA are effective in maintaining remission of pediatric CD.\n\n\nTitle 3:\nEffect of an enteric-coated fish-oil preparation on relapses in Crohn's disease.\nAbstract 3:\nPatients with Crohn's disease may have periods of remission, interrupted by relapses. Because fish oil has antiinflammatory actions, it could reduce the frequency of relapses, but it is often poorly tolerated because of its unpleasant taste and gastrointestinal side effects.\n We performed a one-year, double-blind, placebo-controlled study to investigate the effects of a new fish-oil preparation in the maintenance of remission in 78 patients with Crohn's disease who had a high risk of relapse. The patients received either nine fish-oil capsules containing a total of 2.7 g of n-3 fatty acids or nine placebo capsules daily. A special coating protected the capsules against gastric acidity for at least 30 minutes.\n Among the 39 patients in the fish-oil group, 11 (28 percent) had relapses, 4 dropped out because of diarrhea, and 1 withdrew for other reasons. In contrast, among the 39 patients in the placebo group, 27 (69 percent) had relapses, 1 dropped out because of diarrhea, and 1 withdrew for other reasons (difference in relapse rate, 41 percentage points; 95 percent confidence interval, 21 to 61; P < 0.001). After one year, 23 patients (59 percent) in the fish-oil group remained in remission, as compared with 10 (26 percent) in the placebo group (P = 0.003). Logistic-regression analysis indicated that only fish oil and not sex, age, previous surgery, duration of disease, or smoking status affected the likelihood of relapse (odds ratio for the placebo group as compared with the fish-oil group, 4.2; 95 percent confidence interval, 1.6 to 10.7).\n In patients with Crohn's disease in remission, a novel enteric-coated fish-oil preparation is effective in reducing the rate of relapse.\n\n\nTitle 4:\nOmega-3 fatty acids and low carbohydrate diet for maintenance of remission in Crohn's disease. A randomized controlled multicenter trial. Study Group Members (German Crohn's Disease Study Group).\nAbstract 4:\nThere is no established therapy for maintaining remission in patients with Crohn's disease. Following different suggestions from the literature, two potential interventions for maintaining remission were tested against placebo, using either 5 g/day of a highly concentrated omega-3 fatty acid compound or a carbohydrate-reduced diet (84 g/day).\n A total of 204 patients were recruited after they had had an acute relapse. After remission (CDAI < or = 150) was attained with steroid therapy, patients were randomized to receive either omega-3 fatty acids (n = 70), placebo (n = 65), or diet (n = 69). Low-dose prednisolone was given to all patients for the first 8 weeks of intervention. 
CDAI and an acute-phase protein (CRP) were used as criteria for a relapse.\n The proportion of patients without relapse within a year were similar in the placebo and active treatment group (intention-to-treat analysis: placebo, 30%; active treatment, 30%; protocol-adhering patients, 29% versus 28%). Patients did gain benefit (53%; p = 0.023) for as long as they maintained the diet. However, intention-to-treat analysis (diet group, 40%) did not show a noticeable difference when compared with placebo.\n Omega-3 fatty acids did not show an effect on extending the remission in Crohn's disease. For the diet patients the question remains whether the noncompliant patients dropped out early because they sensed a relapse approaching or whether their condition deteriorated because they failed to comply with the diet.", "role": "user" }, { "content": "Omega 3 fatty acids are safe but probably ineffective for maintenance of remission in CD. The existing data do not support routine maintenance treatment of Crohn's disease with omega 3 fatty acids.", "role": "assistant" } ]
science.bc7_litcovid_topic_classification
science.bc7_litcovid_topic_classification.14721
[ { "content": "You will be presented with a paragraph from a COVID-19 literature. Your task is to classify the topics of this piece of text by choosing from the following categories:\n\n- Epidemic Forecasting: The content relates to predictions about the spread of COVID-19.\n- Treatment: The paragraph discusses therapeutic methods or medication for COVID-19.\n- Prevention: The content covers measures to prevent the spread of COVID-19.\n- Mechanism: The paragraph explains the biological mechanisms of the virus.\n- Case Report: The content is a report of a COVID-19 case.\n- Transmission: The paragraph discusses how COVID-19 is transmitted.\n- Diagnosis: The content is about diagnosing COVID-19 in individuals.\n\nYour classification should be provided as an element within a JSON array. The array should be formatted with topics enclosed in double quotes and separated by commas, following the JSON format, for example: [\"Topic1\", \"Topic2\", ...]. Include all applicable categories within the array. \nYour output should be a valid JSON array, and no additional text should be included in your response.\n\nArticle:\n\nBACKGROUND: Although there is some evidence that severe acute respiratory syndrome coronavirus 2 can invade the human placenta, limited data exist on the gestational age-dependent expression profile of the severe acute respiratory syndrome coronavirus 2 cell entry mediators, angiotensin-converting enzyme 2 and transmembrane protease serine 2, at the human maternal-fetal interface. There is also no information as to whether the expression of these mediators is altered in pregnancies complicated by preeclampsia or preterm birth. This is important because the expression of decidual and placental angiotensin-converting enzyme 2 and transmembrane protease serine 2 across gestation may affect the susceptibility of pregnancies to vertical transmission of severe acute respiratory syndrome coronavirus 2. OBJECTIVE: This study aimed to investigate the expression pattern of specific severe acute respiratory syndrome coronavirus 2 cell entry genes, angiotensin-converting enzyme 2 and transmembrane protease serine 2, in the placenta across human pregnancy and in paired samples of decidua and placenta in pregnancies complicated by preterm birth or preeclampsia compared with those in term uncomplicated pregnancies. STUDY DESIGN: In this study, 2 separate cohorts of patients, totaling 87 pregnancies, were included. The first cohort was composed of placentae from first- (7-9 weeks), second- (16-18 weeks), and third-trimester preterm (26-31 weeks) and third-trimester term (38-41 weeks) pregnancies (n=5/group), whereas the second independent cohort included matched decidua and placentae from pregnancies from term uncomplicated pregnancies (37-41 weeks' gestation; n=14) and pregnancies complicated by preterm birth (26-37 weeks' gestation; n=11) or preeclampsia (25-37 weeks' gestation; n=42). Samples were subjected to quantitative polymerase chain reaction and next-generation sequencing or RNA sequencing for next-generation RNA sequencing for angiotensin-converting enzyme 2 and transmembrane protease serine 2 mRNA expression quantification, respectively. 
RESULTS: In the first cohort, angiotensin-converting enzyme 2 and transmembrane protease serine 2, exhibited a gestational age-dependent expression profile, that is, angiotensin-converting enzyme 2 and transmembrane protease serine 2 mRNA was higher (P<.05) in the first-trimester placenta than in second-trimester, preterm birth, and term placentae (P<.05) and exhibited a negative correlation with gestational age (P<.05). In the second cohort, RNA sequencing demonstrated very low or undetectable expression levels of angiotensin-converting enzyme 2 in preterm birth, preeclampsia, and term decidua and in placentae from late gestation. In contrast, transmembrane protease serine 2 was expressed in both decidual and placental samples but did not change in pregnancies complicated by either preterm birth or preeclampsia. CONCLUSION: The increased expression of these severe acute respiratory syndrome coronavirus 2 cell entry-associated genes in the placenta in the first trimester of pregnancy compared with those in later stages of pregnancy suggests the possibility of differential susceptibility to placental entry to severe acute respiratory syndrome coronavirus 2 across pregnancy. Even though there is some evidence of increased rates of preterm birth associated with severe acute respiratory syndrome coronavirus 2 infection, we found no increase in mRNA expression of angiotensin-converting enzyme 2 or transmembrane protease serine 2 at the maternal-fetal interface.", "role": "user" }, { "content": "[\"Mechanism\"]", "role": "assistant" } ]
science.scireviewgen_multidoc_summarization
science.scireviewgen_multidoc_summarization.17692
[ { "content": "You are presented with multiple paper abstracts, a literature review title, and a specific chapter title. Your task is to write a focused literature review chapter that aligns with the given chapter title. The review should synthesize the content from the provided abstracts, draw connections between different works, highlight major findings, and identify any gaps in the existing research.\nLiterature Review Title: Fog Computing: A Taxonomy, Survey and Future Directions \nChapter Title: Cloudlets \n1. Abstract of Cited Paper (BIB001): Fog computing is rapidly changing the distributed computing landscape by extending the Cloud computing paradigm to include wide-spread resources located at the network edges. This diffused infrastructure is well suited for the implementation of data stream processing (DSP) applications, by possibly exploiting local computing resources. Storm is an open source, scalable, and fault-tolerant DSP system designed for locally distributed clusters. We made it suitable to operate in a geographically distributed and highly variable environment; to this end, we extended Storm with new components that allow to execute a distributed QoS-aware scheduler and give self-adaptation capabilities to the system. In this paper we provide a thorough experimental evaluation of the proposed solution using two sets of DSP applications: the former is characterized by a simple topology with different requirements; the latter comprises some well known applications (i.e., Word Count, Log Processing). The results show that the distributed QoS-aware scheduler outperforms the centralized default one, improving the application performance and enhancing the system with runtime adaptation capabilities. However, complex topologies involving many operators may cause some instability that can decrease the DSP application availability. \n2. Abstract of Cited Paper (BIB002): Application software execution requests, from mobile devices to cloud service providers, are often heterogeneous in terms of device, network, and application runtime contexts. These heterogeneous contexts include the remaining battery level of a mobile device, network signal strength it receives and quality-of-service QoS requirement of an application software submitted from that device. Scheduling such application software execution requests from many mobile devices on competent virtual machines to enhance user quality of experience QoE is a multi-constrained optimization problem. However, existing solutions in the literature either address utility maximization problem for service providers or optimize the application QoS levels, bypassing device-level and network-level contextual information. In this paper, a multi-objective nonlinear programming solution to the context-aware application software scheduling problem has been developed, namely, QoE and context-aware scheduling QCASH method, which minimizes the application execution times i.e., maximizes the QoE and maximizes the application execution success rate. To the best of our knowledge, QCASH is the first work in this domain that inscribes the optimal scheduling problem for mobile application software execution requests with three-dimensional context parameters. In QCASH, the context priority of each application is measured by applying min-max normalization and multiple linear regression models on three context parameters-battery level, network signal strength, and application QoS. 
Experimental results, found from simulation runs on CloudSim toolkit, demonstrate that the QCASH outperforms the state-of-the-art works well across the success rate, waiting time, and QoE. Copyright © 2016 John Wiley & Sons, Ltd.\n\nLiterature Review:", "role": "user" }, { "content": "Cloudlets are considered as micro-cloud and loacted at the middle layer of end device, cloudlet, and Cloud hierarchy. Basically cloudlets have been designed for extending Cloud based services towards mobile device users and can complement MCC BIB003 . In several works (Dsouza et al. BIB001 , Cardellini et al. BIB002 ), cloudlets are mentioned as Fog nodes. Cloudlet based Fog computing are highly virtualized and can deal with large number of end devices simultaneously. In some cases, due to structural constraints, cloudlets even after deploying at the edge act as centralized components. In this sense, the limitations of centralized computation still remain significant in Fog computing which resist to support IoT.", "role": "assistant" } ]
science.multixscience_multidoc_summarization
science.multixscience_multidoc_summarization.2370
[ { "content": "You are provided with the abstract of a scientific paper and the abstracts of the papers it references. Your task is to write a Related Work section for the main paper. This section should concisely summarize and integrate the key points from the cited abstracts, reflecting how they relate to the main paper's contributions and context. In your summary, use numerical citation markers like [1], [2], etc., to refer to the corresponding papers.\n\nMain Abstract:\nVisual Tracking is a complex problem due to unconstrained appearance variations and dynamic environment. Extraction of complementary information from the object environment via multiple features and adaption to the target's appearance variations are the key problems of this work. To this end, we propose a robust object tracking framework based on Unified Graph Fusion (UGF) of multi-cue to adapt to the object's appearance. The proposed cross-diffusion of sparse and dense features not only suppresses the individual feature deficiencies but also extracts the complementary information from multi-cue. This iterative process builds robust unified features which are invariant to object deformations, fast motion, and occlusion. Robustness of the unified feature also enables the random forest classifier to precisely distinguish the foreground from the background, adding resilience to background clutter. In addition, we present a novel kernel-based adaptation strategy using outlier detection and a transductive reliability metric.\n\nCited Abstract(s):\n[1]: Deep neural network-based methods have recently achieved excellent performance in visual tracking task. As very few training samples are available in visual tracking task, those approaches rely heavily on extremely large auxiliary dataset such as ImageNet to pretrain the model. In order to address the discrepancy between the source domain (the auxiliary data) and the target domain (the object being tracked), they need to be finetuned during the tracking process. However, those methods suffer from sensitivity to the hyper-parameters such as learning rate, maximum number of epochs, size of mini-batch, and so on. Thus, it is worthy to investigate whether pretraining and fine tuning through conventional back-prop is essential for visual tracking. In this paper, we shed light on this line of research by proposing convolutional random vector functional link (CRVFL) neural network, which can be regarded as a marriage of the convolutional neural network and random vector functional link network, to simplify the visual tracking system. The parameters in the convolutional layer are randomly initialized and kept fixed. Only the parameters in the fully connected layer need to be learned. We further propose an elegant approach to update the tracker. In the widely used visual tracking benchmark, without any auxiliary data, a single CRVFL model achieves 79.0 with a threshold of 20 pixels for the precision plot. Moreover, an ensemble of CRVFL yields comparatively the best result of 86.3 .\n\n[2]: \n\n[3]: \n\n[4]: Most existing tracking methods are direct trackers, which directly exploit foreground or and background information for object appearance modeling and decide whether an image patch is target object or not. As a result, these trackers cannot perform well when target appearance changes heavily and becomes different from its model. 
To deal with this issue, we propose a novel relative tracker, which can effectively exploit the relative relationship among image patches from both foreground and background for object appearance modeling. Different from direct trackers, the proposed relative tracker is robust to localize target object by use of the best image patch with the highest relative score to the target appearance model. To model relative relationship among large-scale image patch pairs, we propose a novel and effective deep relative learning algorithm through the convolutional neural network. We test the proposed approach on challenging sequences involving heavy occlusion, drastic illumination changes, and large pose variations. Experimental results show that our method consistently outperforms the state-of-the-art trackers due to the powerful capacity of the proposed deep relative model.\n\n[5]: Most existing part-based tracking methods are part-to-part trackers, which usually have two separated steps including the part matching and target localization. Different from existing methods, in this paper, we propose a novel part-to-target (P2T) tracker in a unified fashion by inferring target location from parts directly. To achieve this goal, we propose a novel deep regression model for P2T regression in an end-to-end framework via convolutional neural networks. The proposed model is designed not only to exploit the part context information to preserve object spatial layout structure, but also to learn part reliability to emphasize part importance for the robust P2T regression. We evaluate the proposed tracker on four challenging benchmark sequences, and extensive experimental results demonstrate that our method performs favorably against state-of-the-art trackers because of the powerful capacity of the proposed deep regression model.\n\n[6]: Convolutional Neural Networks (CNNs) have been applied to visual tracking with demonstrated success in recent years. Most CNN-based trackers utilize hierarchical features extracted from a certain layer to represent the target. However, features from a certain layer are not always effective for distinguishing the target object from the backgrounds especially in the presence of complicated interfering factors (e.g., heavy occlusion, background clutter, illumination variation, and shape deformation). In this work, we propose a CNN-based tracking algorithm which hedges deep features from different CNN layers to better distinguish target objects and background clutters. Correlation filters are applied to feature maps of each CNN layer to construct a weak tracker, and all weak trackers are hedged into a strong one. For robust visual tracking, we propose a hedge method to adaptively determine weights of weak classifiers by considering both the difference between the historical as well as instantaneous performance, and the difference among all weak trackers over time. In addition, we design a Siamese network to define the loss of each weak tracker for the proposed hedge method. Extensive experiments on large benchmark datasets demonstrate the effectiveness of the proposed algorithm against the state-of-the-art tracking methods.\n\n[7]: Motion models have been proved to be a crucial part in the visual tracking process. In recent trackers, particle filter and sliding windows-based motion models have been widely used. Treating motion models as a sequence prediction problem, we can estimate the motion of objects using their trajectories. 
Moreover, it is possible to transfer the learned knowledge from annotated trajectories to new objects. Inspired by recent advance in deep learning for visual feature extraction and sequence prediction, we propose a trajectory predictor to learn prior knowledge from annotated trajectories and transfer it to predict the motion of target objects. In this predictor, convolutional neural networks extract the visual features of target objects. Long short-term memory model leverages the annotated trajectory priors as well as sequential visual information, which includes the tracked features and center locations of the target object, to predict the motion. Furthermore, to extend this method to videos in which it is difficult to obtain annotated trajectories, a dynamic weighted motion model that combines the proposed trajectory predictor with a random sampler is proposed. To evaluate the transfer performance of the proposed trajectory predictor, we annotated a real-world vehicle dataset. Experiment results on both this real-world vehicle dataset and an online tracker benchmark dataset indicate that the proposed method outperforms several state-of-the-art trackers.", "role": "user" }, { "content": "Recently, many deep learning based trackers [7] [2] [3] were proposed pertaining to their favorable performance to model the target's appearance. In [7] , authors considered the problem of visual tracking as a trajectory estimation task using convolutional and recurrent units. A large training data is required to predict trajectories, which is often unavailable. In [5] , author exploited part context information and importance reliability to preserve target's spatial information using deep regression. In [6] , authors used a Siamese network with online adaptive hedge algorithm for similarity measure between the target and weak experts based on CNN layer. Similarly, Gao al [4] train a Siamese neural network to construct a structural relationship between image patches. Mostly, deep learning based visual trackers require pre-training of the neural network on a large auxiliary image dataset. To overcome pre-training, Zhang al [1] initialize random convolutional filters and only train the fully connected layer. Even then, deep-learning based trackers could not be used in real-time scenarios and employ computationally intensive fine-tuning.", "role": "assistant" } ]
science.covid_deepset_qa
science.covid_deepset_qa.124
[ { "content": "You will be shown a paragraph from a biomedical research paper, followed by a question about the paper. Please extract text from the paragraph to answer the question. Do not include any text in your response other than the answer.\n\nParagraph: Although there have been substantial decreases in overall child mortality and in pneumonia-specific mortality, pneumonia remains the major single cause of death in children outside the neonatal period, causing approximately 900,000 of the estimated 6.3 million child deaths in 2013 [1] . Substantial advances have occurred in the understanding of risk factors and etiology of pneumonia, in development of standardized case definitions, and in prevention with the production of improved vaccines and in treatment. Such advances have led to changes in the epidemiology, etiology and mortality from childhood pneumonia. However in many areas access to these interventions remains sub-optimal, with large inequities between and within countries and regions. In this paper we review the impact of recent preventative and management advances in pneumonia epidemiology, etiology, radiologic presentation and outcome in children. The overall burden of childhood pneumonia has been reduced substantially over the last decade, despite an increase in the global childhood population from 605 million in 2000 to 664 million in 2015 [2] . Recent data suggest that there has been a 25% decrease in the incidence of pneumonia, from 0.29 episodes per child year in low-and middle-income countries in 2000, to 0.22 episodes per child year in 2010 [3] . This is substantiated by a 58% decrease in pneumonia-associated disability-adjusted life years between 1990 and 2013, from 186 million to 78 million as estimated in the Global Burden of Disease study [1] . Pneumonia deaths decreased from 1.8 million in 2000 to 900,000 in 2013 [1] . These data do not reflect the full impact of increasingly widespread use of pneumococcal conjugate vaccine in low-and middle-income countries because the incidence of pneumonia and number of deaths are likely to decrease still further as a result of this widespread intervention [4] . Notwithstanding this progress, there remains a disproportionate burden of disease in low-and middle-income countries, where more than 90% of pneumonia cases and deaths occur. The incidence in high-income countries is estimated at 0.015 episodes per child year, compared to 0.22 episodes per child year in low-and middle-income countries [3] . On average, 1 in 66 children in high-income countries is affected by pneumonia per year, compared to 1 in 5 children in low-and middle-income countries. Even within low-and middleincome countries there are regional inequities and challenges with access to health care services: up to 81% of severe pneumonia deaths occur outside a hospital [5] . In addition to a higher incidence of pneumonia, the case fatality rate is estimated to be almost 10-fold higher in low-and middle-income countries as compared to high-income countries [3, 5] . Childhood pneumonia can also lead to significant morbidity and chronic disease. Early life pneumonia can impair longterm lung health by decreasing lung function [6] . Severe or recurrent pneumonia can have a worse effect on lung function; increasing evidence suggests that chronic obstructive pulmonary disease might be related to early childhood pneumonia [7, 8] . 
A meta-analysis of the risk of long-term outcomes after childhood pneumonia categorized chronic respiratory sequelae into major (restrictive lung disease, obstructive lung disease, bronchiectasis) and minor (chronic bronchitis, asthma, abnormal pulmonary function) groups [9] . The risk of developing at least one of the major sequelae was estimated as 6% after an ambulatory pneumonia event and 14% after an episode of hospitalized pneumonia. Because respiratory diseases affect almost 1 billion people globally and are a major cause of mortality and morbidity [10] , childhood pneumonia might contribute to substantial morbidity across the life course. Chest radiologic changes have been considered the gold standard for defining a pneumonia event [11] because clinical findings can be subjective and clinical definitions of pneumonia can be nonspecific. In 2005, to aid in defining outcomes of pneumococcal vaccine studies, the World Health Organization's (WHO) standardized chest radiograph description defined a group of children who were considered most likely to have pneumococcal pneumonia [12] . The term \"end-point consolidation\" was described as a dense or fluffy opacity that occupies a portion or whole of a lobe, or the entire lung. \"Other infiltrate\" included linear and patchy densities, peribronchial thickening, minor patchy infiltrates that are not of sufficient magnitude to constitute primary end-point consolidation, and small areas of atelectasis that in children can be difficult to distinguish from consolidation. \"Primary end-point pneumonia\" included either end-point consolidation or a pleural effusion associated with a pulmonary parenchymal infiltrate (including \"other\" infiltrate). Widespread use of pneumococcal conjugate vaccination and Haemophilus influenzae type B conjugate vaccination has decreased the incidence of radiologic pneumonia. In a review of four randomized controlled trials and two case-control studies of Haemophilus influenzae type B conjugate vaccination in high-burden communities, the vaccination was associated with an 18% decrease in radiologic pneumonia [13] . Introduction of pneumococcal conjugate vaccination was associated with a 26% decrease in radiologic pneumonia in California between 1995 and 1998 [14] . In vaccine efficacy trials in low-and middle-income countries, pneumococcal conjugate vaccination reduced radiologic pneumonia by 37% in the Gambia [15] , 25% in South Africa [16] and 26% in the Philippines [17] . The WHO radiologic case definition was not intended to distinguish bacterial from viral etiology but rather to define a sub-set of pneumonia cases in which pneumococcal infection was considered more likely and to provide a set of standardized definitions through which researchers could achieve broad agreement in reporting chest radiographs. However, despite widespread field utilization, there are concerns regarding inter-observer repeatability. There has been good consensus for the description of lobar consolidation but significant disagreement on the description of patchy and perihilar infiltrates [18, 19] . In addition, many children with clinically severe lung disease do not have primary end-point pneumonia: in one pre-pneumococcal conjugate vaccination study, only 34% of children hospitalized with pneumonia had primary end-point pneumonia [20] . 
A revised case definition of \"presumed bacterial pneumonia\" has been introduced, and this definition includes pneumonia cases with WHO-defined alveolar consolidation, as well as those with other abnormal chest radiograph infiltrates and a serum C-reactive protein of at least 40 mg/L [21, 22] . This definition has been shown to have greater sensitivity than the original WHO radiologic definition of primary end-point pneumonia for detecting the burden of pneumonia prevented by pneumococcal conjugate vaccination [23] . Using the revised definition, the 10-valent pneumococcal conjugate vaccine (pneumococcal conjugate vaccination-10), had a vaccine efficacy of 22% in preventing presumed bacterial pneumonia in young children in South America [22] , and pneumococcal conjugate vaccination-13 had a vaccine efficacy of 39% in preventing presumed bacterial pneumonia in children older than 16 weeks who were not infected with human immunodeficiency virus (HIV) in South Africa [21] . Thus there is convincing evidence that pneumococcal conjugate vaccination decreases the incidence of radiologic pneumonia; however there is no evidence to suggest that pneumococcal conjugate vaccination modifies the radiologic appearance of pneumococcal pneumonia. Empyema is a rare complication of pneumonia. An increased incidence of empyema in children was noted in some high-income countries following pneumococcal conjugate vaccination-7 introduction, and this was attributed to pneumococcal serotypes not included in pneumococcal conjugate vaccination-7, especially 3 and 19A [24] . In the United States, evidence from a national hospital database suggests that the incidence of empyema increased 1.9-fold between 1996 and 2008 [25] . In Australia, the incidence rate ratio increased by 1.4 times when comparing the pre-pneumococcal conjugate vaccination-7 period (1998 to 2004) to the post-pneumococcal conjugate vaccination-7 period (2005 to 2010) [26] . In Scotland, incidence of empyema in children rose from 6.5 per million between 1981 and 1998, to 66 per million in 2005 [27] . These trends have been reversed since the introduction of pneumococcal conjugate vaccination-13. Data from the United States suggest that empyema decreased by 50% in children younger than 5 years [28] ; similarly, data from the United Kingdom and Scotland showed substantial reduction in pediatric empyema following pneumococcal conjugate vaccination-13 introduction [29, 30] . Several national guidelines from high-income countries, as well as the WHO recommendations for low-and middleincome countries, recommend that chest radiography should not be routinely performed in children with ambulatory pneumonia [31] [32] [33] . Indications for chest radiography include hospitalization, severe hypoxemia or respiratory distress, failed initial antibiotic therapy, or suspicion for other diseases (tuberculosis, inhaled foreign body) or complications. However, point-of-care lung ultrasound is emerging as a promising modality for diagnosing childhood pneumonia [34] . In addition to the effect on radiologic pneumonia, pneumococcal conjugate vaccination reduces the risk of hospitalization from viral-associated pneumonia, probably by reducing bacterial-viral co-infections resulting in severe disease and hospitalization [35] . 
An analysis of ecological and observational studies of pneumonia incidence in different age groups soon after introduction of pneumococcal conjugate vaccination-7 in Canada, Italy, Australia, Poland and the United States showed decreases in all-cause pneumonia hospitalizations ranging from 15% to 65% [36] . In the United States after pneumococcal conjugate vaccination-13 replaced pneumococcal conjugate vaccination-7, there was a further 17% decrease in hospitalizations for pneumonia among children eligible for the vaccination, and a further 12% decrease among unvaccinated adults [28] . A systematic review of etiology studies prior to availability of new conjugate vaccines confirmed S. pneumoniae and H. influenzae type B as the most important bacterial causes of pneumonia, with Staphylococcus aureus and Klebsiella pneumoniae associated with some severe cases. Respiratory syncytial virus was the leading viral cause, identified in 15-40% of pneumonia cases, followed by influenza A and B, parainfluenza, human metapneumovirus and adenovirus [37] . More recent meta-analyses of etiology data suggest a changing pathogen profile, with increasing recognition that clinical pneumonia is caused by the sequential or concurrent interaction of more than one organism. Severe disease in particular is often caused by multiple pathogens. With high coverage of pneumococcal conjugate vaccination and Haemophilus influenzae type B conjugate vaccination, viral pathogens increasingly predominate [38] . In recent case-control studies, at least one virus was detected in 87% of clinical pneumonia cases in South Africa [39] , while viruses were detected in 81% of radiologic pneumonia cases in Sweden [40] . In a large multi-center study in the United States, viral pathogens were detected in 73% of children hospitalized with radiologic pneumonia, while bacteria were detected in only 15% of cases [41] . A meta-analysis of 23 case-control studies of viral etiology in radiologically confirmed pneumonia in children, completed up to 2014, reported good evidence of causal attribution for respiratory syncytial virus, influenza, metapneumovirus and parainfluenza virus [42] . However there was no consistent evidence that many other commonly described viruses, including rhinovirus, adenovirus, bocavirus and coronavirus, were more commonly isolated from cases than from controls. Further attribution of bacterial etiology is difficult because it is often not possible to distinguish colonizing from pathogenic bacteria when they are isolated from nasal specimens [43] . Another etiology is pertussis. In the last decade there has also been a resurgence in pertussis cases, especially in highincome countries [44] . Because pertussis immunity after acellular pertussis vaccination is less long-lasting than immunity after wild-type infection or whole-cell vaccination, many women of child-bearing age have waning pertussis antibody levels. Their infants might therefore be born with low transplacental anti-pertussis immunoglobulin G levels, making them susceptible to pertussis infection before completion of the primary vaccination series [45] . In 2014, more than 40,000 pertussis cases were reported to the Centers for Disease Control and Prevention in the United States; in some states, population-based incidence rates are higher than at any time in the last 70 years [44] . 
In contrast, most low-and middleincome countries use whole-cell pertussis vaccines and the numbers of pertussis cases in those countries were stable or decreasing until 2015 [46] . However recent evidence from South Africa (where the acellular vaccine is used) shows an appreciable incidence of pertussis among infants presenting with acute pneumonia: 2% of clinical pneumonia cases among infants enrolled in a birth cohort were caused by pertussis [39] , and 3.7% of infants and young children presenting to a tertiary academic hospital had evidence of pertussis infection [47] . Similarly, childhood tuberculosis is a major cause of morbidity and mortality in many low-and middle-income countries, and Mycobacterium tuberculosis has increasingly been recognized as a pathogen in acute pneumonia in children living in high tuberculosis-prevalence settings. Postmortem studies of children dying from acute respiratory illness have commonly reported M. tuberculosis [48, 49] . A recent systematic review of tuberculosis as a comorbidity of childhood pneumonia reported culture-confirmed disease in about 8% of cases [50] . Because intrathoracic tuberculosis disease is only culture-confirmed in a minority of cases, the true burden could be even higher; tuberculosis could therefore be an important contributor to childhood pneumonia incidence and mortality in high-prevalence areas\n\nQuestion: What is the effect of childhood tuberculosis in childhood pneumonia?", "role": "user" }, { "content": "A recent systematic review of tuberculosis as a comorbidity of childhood pneumonia reported culture-confirmed disease in about 8% of cases [50] . Because intrathoracic tuberculosis disease is only culture-confirmed in a minority of cases, the true burden could be even higher; tuberculosis could therefore be an important contributor to childhood pneumonia incidence and mortality in high-prevalence areas", "role": "assistant" } ]
science.bc7_litcovid_topic_classification
science.bc7_litcovid_topic_classification.11116
[ { "content": "You will be presented with a paragraph from a COVID-19 literature. Your task is to classify the topics of this piece of text by choosing from the following categories:\n\n- Epidemic Forecasting: The content relates to predictions about the spread of COVID-19.\n- Treatment: The paragraph discusses therapeutic methods or medication for COVID-19.\n- Prevention: The content covers measures to prevent the spread of COVID-19.\n- Mechanism: The paragraph explains the biological mechanisms of the virus.\n- Case Report: The content is a report of a COVID-19 case.\n- Transmission: The paragraph discusses how COVID-19 is transmitted.\n- Diagnosis: The content is about diagnosing COVID-19 in individuals.\n\nYour classification should be provided as an element within a JSON array. The array should be formatted with topics enclosed in double quotes and separated by commas, following the JSON format, for example: [\"Topic1\", \"Topic2\", ...]. Include all applicable categories within the array. \nYour output should be a valid JSON array, and no additional text should be included in your response.\n\nArticle:\n\nThe sobering reality of the COVID-19 pandemic is that it has brought people together at home at a time when we want them apart in the community. This will bring both benefits and challenges. It will affect people differently based upon their age, health status, resilience, family support structures, and socio-economic background. This article will assess the impact in high income countries like Australia, where the initial wave of infection placed the elderly at the greatest risk of death whilst the protective measures of physical distancing, self-isolation, increased awareness of hygiene practices, and school closures with distance learning has had considerable impact on children and families acutely and may have ramifications for years to come.", "role": "user" }, { "content": "[\"Prevention\"]", "role": "assistant" } ]
science.pubmedqa_qa
science.pubmedqa_qa.218
[ { "content": "You will be shown an abstract from a research paper and a corresponding research question. Determine the answer to the research question based on the context provided. Your answer should be one of: \"YES\", \"NO\", or \"MAYBE\".\n\nAbstract:\n(BACKGROUND) The objective of the current study is to determine to what extent the reduction of Chile's traffic fatalities and injuries during 2000-2012 was related to the police traffic enforcement increment registered after the introduction of its 2005 traffic law reform.\n(METHODS) A unique dataset with assembled information from public institutions and analyses based on ordinary least square and robust random effects models was carried out. Dependent variables were traffic fatality and severe injury rates per population and vehicle fleet. Independent variables were: (1) presence of new national traffic law; (2) police officers per population; (3) number of traffic tickets per police officer; and (4) interaction effect of number of traffic tickets per police officer with traffic law reform. Oil prices, alcohol consumption, proportion of male population 15-24 years old, unemployment, road infrastructure investment, years' effects and regions' effects represented control variables.\n(RESULTS) Empirical estimates from instrumental variables suggest that the enactment of the traffic law reform in interaction with number of traffic tickets per police officer is significantly associated with a decrease of 8% in traffic fatalities and 7% in severe injuries. Piecewise regression model results for the 2007-2012 period suggest that police traffic enforcement reduced traffic fatalities by 59% and severe injuries by 37%.\n\nQuestion:\nDid Chile's traffic law reform push police enforcement?\n\nAnswer:", "role": "user" }, { "content": "YES", "role": "assistant" } ]
science.qasper_extractive_qa
science.qasper_extractive_qa.432.0
[ { "content": "You will be shown sections from a scientific research paper, together with a question about the paper. This is an extractive question-answering task, where you must find and extract relevant text spans directly from the paper to answer the question. Your response should strictly be a `json` object with two fields:\n\n\"answer\": An array of strings extracted directly from the paper which, collectively, answer the question.\n\"evidence\": An array of strings. Each should be an excerpt from the paper, in which one or more of the extracted answers can be found.\n\nFor example, for the question \"What baselines did the authors compare against?\", a sample response might be:\n{\n \"answer\": [\"BERT\", \"RoBERTa\"],\n \"evidence\": [\"In our experiments, we compare the performance of our model against BERT and RoBERTa.\"]\n}\n\nDo not include any text in your response other than the json.\n\nIf the question is unanswerable given the provided excerpts, respond with the single word \"null\".\n\nPaper: Question Generation by Transformers\n\nA machine learning model was developed to automatically generate questions from Wikipedia passages using transformers, an attention-based model eschewing the paradigm of existing recurrent neural networks (RNNs). The model was trained on the inverted Stanford Question Answering Dataset (SQuAD), which is a reading comprehension dataset consisting of 100,000+ questions posed by crowdworkers on a set of Wikipedia articles. After training, the question generation model is able to generate simple questions relevant to unseen passages and answers containing an average of 8 words per question. The word error rate (WER) was used as a metric to compare the similarity between SQuAD questions and the model-generated questions. Although the high average WER suggests that the questions generated differ from the original SQuAD questions, the questions generated are mostly grammatically correct and plausible in their own right.\n\nIntroduction\nExisting question generating systems reported in the literature involve human-generated templates, including cloze type BIBREF0, rule-based BIBREF1, BIBREF2, or semi-automatic questions BIBREF3, BIBREF4, BIBREF5. On the other hand, machine learned models developed recently have used recurrent neural networks (RNNs) to perform sequence transduction, i.e. sequence-to-sequence BIBREF6, BIBREF7. In this work, we investigated an automatic question generation system based on a machine learning model that uses transformers instead of RNNs BIBREF8, BIBREF9. Our goal was to generate questions without templates and with minimal human involvement using machine learning transformers that have been demonstrated to train faster and better than RNNs. Such a system would benefit educators by saving time to generate quizzes and tests.\n\nBackground and Related Work\nA relatively simple method for question generation is the fill-in-the-blank approach, which is also known as cloze tasks. Such a method typically involves the sentence first being tokenized and tagged for part-of-speech with the named entity or noun part of the sentence masked out. These generated questions are an exact match to the one in the reading passage except for the missing word or phrase. Although fill-in-the-blank questions are often used for reading comprehension, answering such questions correctly may not necessarily indicate comprehension if it is too easy to match the question to the relevant sentence in the passage. 
To improve fill in the blank type questions, a prior study used a supervised machine learning model to generate fill-in-the-blank type questions. The model paraphrases the sentence from the passage with the missing word by anonymizing entity markers BIBREF0.\nSemi-automatic methods can also be use for question generation. Semi-automatic question generation involves human-generated templates in combination with querying the linked database repositories to complete the question BIBREF3, BIBREF4. The answer to the question is also extracted from the linked database. If the question is to be answered selecting from multiple choices, then distractors could also be selected from the database and randomly generated as incorrect choices for the answer. Another example of template-based question-and-answer generator using linked data is called Sherlock that has been shown to generate questions with varying levels of difficulty BIBREF5. However, designing a large set of high quality questions using semi-automatic question generation methods can be cognitively demanding and time-consuming. The types of questions created are also constrained to the templates. Generating a large dataset of questions is therefore cumbersome.\nOther automatic question generators require human-made rules for the model to follow BIBREF1, BIBREF2. Educators are recruited to define the rules that will convert declarative sentences into interrogative questions BIBREF10, BIBREF11, BIBREF12. The rules generated requires the educator to possess both linguistic knowledge and subject knowledge. As with the template-based methods described above, this rules-based method can also be time-consuming and cognitively demanding. Moreover, the quality of the questions is limited by the quality of the handcrafted rules, and rules-based approaches are not scalable beyond human capacity.\nPerhaps the most automated method reported thus far utilizes RNNs as sequence transduction (seq2seq) models to generate questions from sentences or passages BIBREF6, BIBREF7. In the most successful variant of RNNs, the Long Short-Term Memory (LSTM) networks, the model reads from left to right and includes an encoder and a decoder BIBREF13. The encoder takes the input and converts it to hidden vectors, while the decoder takes the vectors from the encoder and creates its own hidden vector to predict the next word based on the previous hidden vector BIBREF13. The hidden vector to the decoder stores all of the information about the context. The components in between the encoder and the decoder of the seq2seq model consists of attention, beam search, and bucketing. The attention mechanism takes the input to the decoder and allows the decoder to analyze the input sequence selectively. Beam search mechanism allows the decoder to select the highest probability word occurrence based on previous words. The bucketing mechanism allows the length of sequences to vary based on what we designate the bucket size to be. The decoder is then rewarded for correctly predicting the next word and penalized for incorrect predictions.\nIn this study, we developed a seq2seq model to automatically generate questions from Wikipedia passages. Our goal is to produce plausible questions with minimal human intervention that can assist educators in developing their quizzes and tests. Our model is based on transformers instead of RNNs. 
Transformers can train faster than RNNs because it is more parallelizable, working well with large and limited datasets BIBREF8, BIBREF9.\nTransformers can also achieve better performance at a fraction of the training cost. Like the RNN approach, transformers have an encoder and a decoder. Transformers also incorporate the beam search and bucketing mechanisms. Unlike RNNs, transformers adopt multiple attention heads without requiring any recurrence, though recurrence can be added. The self-attention mechanism used is the scaled dot-product attention according to\nwhere $d$ is the dimension (number of columns) of the input queries $Q$, keys $K$, and values $V$. By using self-attention, transformers can account for the whole sequence in its entirety and bidirectionally. For multi-head attention with $h$ heads that jointly attend to different representation subspaces at different positions given a sequence of length $m$ and the matrix $H\\in \\mathbf {R}^{m \\times d}$, the result is\nwhere the projections are learned parameter matrices $H^W_i,H^K_i,H^V_i\\in \\mathbf {R}^{(d \\times d)/h}$ and $W^O\\in \\mathbf {R}^{(d \\times d)}$.\nModels utilizing transformers have achieved state-of-the-art performance on many NLP tasks, including question answering BIBREF14, BIBREF15. It is therefore interesting to study how transformers might be used to generate questions by training on the inverted SQuAD.\n\nExperimental Methods ::: Data.\nIn this study, we used the Stanford Question Answering Dataset (SQuAD). SQuAD is a reading comprehension dataset consisting of 100,000+ questions posed by crowdworkers on a set of Wikipedia articles, where the answer to each question is a segment of text from the corresponding reading passage BIBREF16. To generate the data for SQuAD, the top 10,000 English Wikipedia articles were ranked by Project Nayuki's Wikipedia's internal PageRanks as high-quality. Paragraphs that are longer than 500 characters were then extracted from the articles and partitioned into a training set (80%), a development set (10%), and a test set (10%). Only the former two datasets are publicly available. Crowdworkers were then employed to generate the questions and then the answers to the questions based on the extracted paragraphs. Another subset of crowdworkers were then asked to answer the questions that were generated given the corresponding passage to compare the model's answer with human generated answers and provide a benchmark for machine learning models.\n\nExperimental Methods ::: Pre-processing.\nWe used the publicly available data from SQuAD to train our model to generate the questions. We used SQuAD's training and dev sets as our training and test sets, respectively. The reading passage, question, and answer data were pre-processed as described in the next section. For the test set, we provided the model with the pre-processed reading passages and answers that were never seen by the model. We inverted SQuAD by training a machine learning model to infer a question given a reading passage and an answer separated by a special token (i.e., `*') as input.\nFor pre-processing the reading passages, questions and answers, spaCy was used for named entity recognition and part-of-speech tagging BIBREF17, and WordPiece was used for tokenization BIBREF18. To ensure intelligible outputs, stop words are removed from the context passages and answers but not the questions. 
After lowercasing, tokenizing, and removing stop words, the named entities are replaced with their respective tags to better allow the model to generalize and learn patterns in the data. We address a variety of named and numeric entities, including companies, locations, organizations, and products (Table TABREF5). To account for multiple occurrences of a named entity type in the context passage, we also include an index after the named entity tag. As an example, consider the following context passage:\nSuper Bowl 50 was an American football game to determine the champion of the National Football League (NFL) for the 2015 season. The American Football Conference (AFC) champion Denver Broncos defeated the National Football Conference (NFC) champion Carolina Panthers 24–10 to earn their third Super Bowl title. The game was played on February 7, 2016, at Levi's Stadium in the San Francisco Bay Area at Santa Clara, California. As this was the 50th Super Bowl, the league emphasized the \"golden anniversary\" with various gold-themed initiatives, as well as temporarily suspending the tradition of naming each Super Bowl game with Roman numerals (under which the game would have been known as \"Super Bowl L\"), so that the logo could prominently feature the Arabic numerals 50.\nApplying the pre-processing described above, including the indexed named entity tag replacement but not yet removing stop words, would produce\nEVENT 0 DATE 0 was an NORP 0 football game to determine the champion of ORG 0 ( ORG 1 ) for DATE 1 . the NORP 0 football conference ( ORG 2 ) champion ORG 3 defeated ORG 4 ( ORG 5 ) champion ORG 6 24 – 10 to earn their ORDINAL 0 EVENT 0 title . the game was played on DATE 2 , at FAC 0 in FAC 1 at GPE 0 , GPE 1 . as this was the ORDINAL 1 EVENT 0 , the league emphasized the \" golden anniversary \" with various gold - themed initiatives , as well as temporarily suspend ##ing the tradition of naming each EVENT 0 game with LANGUAGE 0 nu ##meral ##s ( under which the game would have been known as \" EVENT 0 l \" ) , so that the logo could prominently feature the LANGUAGE 1 nu ##meral ##s DATE 0 .\nNote that the index is separated from the named entity tag by a space and is therefore interpreted by the model as a separate token, so that named entities of similar types can be associated and generalized from without sacrificing the ability to distinguish between different entities of the same type. This spacing is necessary since we do not employ character-level embedding.\nTo generate the sub-word embeddings, we used the pre-trained WordPiece model from BERT, which has a 30,000-token vocabulary. WordPiece is a statistical technique used to segment text into tokens at the sub-word level. The vocabulary is initialized with all individual characters and iteratively aggregates the most frequent and likely combinations of symbols into the vocabulary. The generation of WordPieces allows the model to capture the meaning of commonly occurring word segments, such as the root word suspend from suspending in the example context passage above. This dispenses with the need for the model to learn different variants or conjugations of a word.\nEach input to the model comprised a concatenation of the pre-processed answer and context passage. The most commonly agreed-upon answer was chosen from among the three plausible answers for each question in SQuAD.
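A minimal sketch of the indexed entity-tagging step is shown below. It is illustrative only: the spaCy model name is an assumption (the paper does not name one), and stop-word removal and WordPiece tokenization are omitted for brevity.

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # assumed model; the paper does not specify one

def tag_entities(text):
    """Replace each named entity with an indexed tag, e.g. 'Denver Broncos' -> 'ORG 3'."""
    doc = nlp(text)
    indices = {}  # (label, surface form) -> index, so repeated mentions share a tag
    counts = {}   # label -> next free index for that entity type
    pieces, last = [], 0
    for ent in doc.ents:
        key = (ent.label_, ent.text)
        if key not in indices:
            indices[key] = counts.get(ent.label_, 0)
            counts[ent.label_] = indices[key] + 1
        pieces.append(text[last:ent.start_char])
        pieces.append(f"{ent.label_} {indices[key]}")  # tag and index as two tokens
        last = ent.end_char
    pieces.append(text[last:])
    return "".join(pieces)

def build_input(answer, passage):
    # answer and passage joined by the special '*' separator described above
    return tag_entities(answer) + " * " + tag_entities(passage)
```

Applied to the Super Bowl passage above, repeated mentions such as 'Super Bowl' map to the same indexed tag (EVENT 0), while distinct organizations receive distinct indices (ORG 0, ORG 1, and so on).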
Unlike prior question generation studies that used SQuAD by isolating the sentence containing the answer, here we include the entire passage, because the answer can depend on context outside of the answer-containing sentence. Compared to the RNNs used in prior studies, transformers allow us to more conveniently train and perform inference on longer sequences. The model was developed with TensorFlow BIBREF19 and Tensor2Tensor BIBREF20, and then trained on an Nvidia Tesla T4 GPU for 1 million training steps.\nTo evaluate and analyze the results, the generated questions are post-processed by removing unnecessary spaces and consolidating the resulting WordPieces into single coherent words. In other words, the results were de-tokenized using BERT's pre-trained WordPiece model.\n\nFigure 2: First word frequency of the model-generated questions in descending order for our test set (SQuAD dev set). Results for all first words are shown.\n\nFigure 3: First word frequency of the SQuAD questions in descending order for the dev set. Only the 21 most frequent first words are shown. The fully capitalized first words, i.e., PERSON and ORG, are named entity tags assigned after preprocessing.\n\nTable 2: Select SQuAD and model-generated questions with various word error rates (WERs). The model-generated questions have been de-tokenized using BERT's pre-trained WordPiece model.\n\nFigure 5: Word count histogram of the SQuAD questions for our test set (SQuAD dev set).\n\nQuestion: What is the motivation behind the work? Why is question generation an important task?", "role": "user" }, { "content": "{\"answer\": [\"Such a system would benefit educators by saving time to generate quizzes and tests.\"], \"evidence\": [\"Existing question generating systems reported in the literature involve human-generated templates, including cloze type BIBREF0, rule-based BIBREF1, BIBREF2, or semi-automatic questions BIBREF3, BIBREF4, BIBREF5. On the other hand, machine learned models developed recently have used recurrent neural networks (RNNs) to perform sequence transduction, i.e. sequence-to-sequence BIBREF6, BIBREF7. In this work, we investigated an automatic question generation system based on a machine learning model that uses transformers instead of RNNs BIBREF8, BIBREF9. Our goal was to generate questions without templates and with minimal human involvement using machine learning transformers that have been demonstrated to train faster and better than RNNs. Such a system would benefit educators by saving time to generate quizzes and tests.\"]}", "role": "assistant" } ]
science.craftchem_ner
science.craftchem_ner.8254
[ { "content": "You will be presented with a sentence from biomedical research text. Extract all terms that correspond to chemical entities of biological interest (ChEBI).\nReturn your identified terms in a JSON array, formatted like this: [\"Entity1\", \"Entity2\", ...].\nIf no such entities are detected, respond by providing an empty JSON array [].\n\nBiomedical context:\nPerformingRMCEinMEFs.", "role": "user" }, { "content": "[]", "role": "assistant" } ]
science.anat_em_ner
science.anat_em_ner.169
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"AnatomicalEntity\"].\n\nPlease return the output as a JSON object of the format: {\"AnatomicalEntity\": [\"tumor cells\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be an empty JSON array \"[]\".\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\n1. Background\nThe United Republic of Tanzania, is among the many countries in sub-Saharan Africa facing a human resources crisis in its health sector, with a small and inequitably distributed health workforce [1] that shoulders a disproportionately high burden of disease[2]. Although all poor countries in the world face a severe human resource crisis in their health sectors [3,4], the problem is most acute in Sub-Saharan Africa, in which an estimated workforce of 750 000 health workers in the region serves 682 million people [2]. By comparison, the ratio is 10 to 15 times higher in developed countries. Moreover, this estimated workforce of doctors, nurses and allied health workers in Sub-Saharan Africa constitutes 1.3% of the world's health workforce, while Africa suffers from 25% of the world's burden of disease [2].\nA minimum level of a health workforce of 2.5 health workers per 1000 people is required to achieve the Millennium Development Goals [5]. Africa is far from this level with a health workforce density that only averages 0.8 worker per 1000 people, while the world median density of health personnel is 5 per 1000 people [5].\nThere is a positive correlation between health worker density and various health indices, most notably infant mortality rate, maternal mortality rates, and various disease specific mortality and morbidity rates [6,7]. An increase in the number of health workers per capita is associated with a notable decline in the rates mentioned above. As a consequence, it has been argued that health worker shortages have impeded the implementation of development goals in many poor countries [8].", "role": "user" }, { "content": "{\"AnatomicalEntity\": []}", "role": "assistant" } ]
science.bc7_litcovid_topic_classification
science.bc7_litcovid_topic_classification.6281
[ { "content": "You will be presented with a paragraph from a COVID-19 literature. Your task is to classify the topics of this piece of text by choosing from the following categories:\n\n- Epidemic Forecasting: The content relates to predictions about the spread of COVID-19.\n- Treatment: The paragraph discusses therapeutic methods or medication for COVID-19.\n- Prevention: The content covers measures to prevent the spread of COVID-19.\n- Mechanism: The paragraph explains the biological mechanisms of the virus.\n- Case Report: The content is a report of a COVID-19 case.\n- Transmission: The paragraph discusses how COVID-19 is transmitted.\n- Diagnosis: The content is about diagnosing COVID-19 in individuals.\n\nYour classification should be provided as an element within a JSON array. The array should be formatted with topics enclosed in double quotes and separated by commas, following the JSON format, for example: [\"Topic1\", \"Topic2\", ...]. Include all applicable categories within the array. \nYour output should be a valid JSON array, and no additional text should be included in your response.\n\nArticle:\n\nSevere acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is a novel coronavirus responsible for an ongoing human pandemic (COVID-19). There is a massive international effort underway to develop diagnostic reagents, vaccines, and antiviral drugs in a bid to slow down the spread of the disease and save lives. One part of that international effort involves the research community working with plants, bringing researchers from all over the world together with commercial enterprises to achieve the rapid supply of protein antigens and antibodies for diagnostic kits, and scalable production systems for the emergency manufacturing of vaccines and antiviral drugs. Here, we look at some of the ways in which plants can and are being used in the fight against COVID-19.", "role": "user" }, { "content": "[\"Diagnosis\", \"Treatment\"]", "role": "assistant" } ]
science.bioasq_yesno_qa
science.bioasq_yesno_qa.192
[ { "content": "You have to answer a biomedical question in binary format, i.e., only yes and no are acceptable answers. A list of paragraphs is provided as context to help you answer the question.\n\nQuestion: Is there an association between FGFR3 mutation and plagiocephaly?\n\nContext: Series of neurosurgical interventions were carried out, principally for acrocephaly and posterior plagiocephaly. The most common achondroplasia mutation, a p.Gly380Arg in the fibroblast growth factor receptor 3 (FGFR3) gene, was detected.\nThe most common mutation for achondroplasia (FGFR3 Gly380Arg, resulting in 1138G>A) was identified. Imaging studies disclosed complex craniosynostosis and neurosurgical intervention was carried out, particularly for posterior plagiocephaly.\nFGFR mutations and plagiocephaly.\nFGFR genes have important effects on bone development, and mutations in 4 \"hot spot\" exons of FGFR1-3 are found in many patients with craniosynostosis and some with synostotic plagiocephaly.\nMutation analyses in the FGFR3 gene revealed nucleotide alterations located in the mutational hot spot at amino acid residue 250 (g.C749).\nRESULTS: In our cohort of 159 patients with various craniosynostosis syndromes, mutations were found in 100% of patients with Apert syndrome, 83.3% with Pfeiffer syndrome, 72.7% with Crouzon syndrome, 50.0% with Saethre-Chotzen syndrome, 27.7% with plagiocephaly, 31.8% with brachicephaly, 20% of complex cases, and 6.9% of mixed cases.\nThe genetic alterations that could cause unilateral coronal synostosis are more elusive.\nMutations were found in eight of 47 patients: two patients with different single-amino-acid changes in FGFR2, three patients with FGFR3 Pro250Arg, and three patients with TWIST mutations.\nOther abnormalities in the craniofacial region and extremities were clues to a particular mutation in FGFR2, FGFR3, TWIST, or the X-linked mutation.\nTo determine whether the autosomal dominant fibroblast growth factor receptor 3 (FGFR3) Pro250Arg mutation causes anterior plagiocephaly, patients with either apparently sporadic unicoronal synostosis (N = 37) or other forms of anterior plagiocephaly (N = 10) were studied for this mutation.\nThe occurrence of the FGFR3 mutation among patients with unicoronal synostosis provides evidence for a genetic basis of certain forms of plagiocephaly.\nNone of the 6 patients with nonsynostotic plagiocephaly and none of the 4 patients with additional suture synostosis had the FGFR3 mutation.\nBetween January and December of 1996, patients with a diagnosis of plagiocephaly at the Children's Hospital of Philadelphia were evaluated for the FGFR3 mutation.\nBetween January and December of 1996, patients with a diagnosis of plagiocephaly at the Children&apos;s Hospital of Philadelphia were evaluated for the FGFR3 mutation\nFGFR genes have important effects on bone development, and mutations in 4 &quot;hot spot&quot; exons of FGFR1-3 are found in many patients with craniosynostosis and some with synostotic plagiocephaly\nTo determine whether the autosomal dominant fibroblast growth factor receptor 3 (FGFR3) Pro250Arg mutation causes anterior plagiocephaly, patients with either apparently sporadic unicoronal synostosis (N = 37) or other forms of anterior plagiocephaly (N = 10) were studied for this mutation\nNone of the 6 patients with nonsynostotic plagiocephaly and none of the 4 patients with additional suture synostosis had the FGFR3 mutation\nThe occurrence of the FGFR3 mutation among patients with unicoronal synostosis provides 
evidence for a genetic basis of certain forms of plagiocephaly\nFGFR mutations and plagiocephaly\nBetween January and December of 1996, patients with a diagnosis of plagiocephaly at the Children's Hospital of Philadelphia were evaluated for the FGFR3 mutation.\nNone of the 6 patients with nonsynostotic plagiocephaly and none of the 4 patients with additional suture synostosis had the FGFR3 mutation.\nThe occurrence of the FGFR3 mutation among patients with unicoronal synostosis provides evidence for a genetic basis of certain forms of plagiocephaly.\nTo determine whether the autosomal dominant fibroblast growth factor receptor 3 (FGFR3) Pro250Arg mutation causes anterior plagiocephaly, patients with either apparently sporadic unicoronal synostosis (N = 37) or other forms of anterior plagiocephaly (N = 10) were studied for this mutation. Of 37 patients with unicoronal synostosis, 4 tested positive for the Pro250Arg mutation in FGFR3, and 33 were negative for this mutation.\nTo determine whether the autosomal dominant fibroblast growth factor receptor 3 (FGFR3) Pro250Arg mutation causes anterior plagiocephaly, patients with either apparently sporadic unicoronal synostosis (N = 37) or other forms of anterior plagiocephaly (N = 10) were studied for this mutation.\nIn a girl with seemingly isolated plagiocephaly we identified a P250L (749C-->T) mutation in FGFR3.\nFGFR mutations and plagiocephaly.\nNone of the 6 patients with nonsynostotic plagiocephaly and none of the 4 patients with additional suture synostosis had the FGFR3 mutation.\nThe occurrence of the FGFR3 mutation among patients with unicoronal synostosis provides evidence for a genetic basis of certain forms of plagiocephaly.", "role": "user" }, { "content": "yes", "role": "assistant" } ]
science.chemsum_single_document_summarization
science.chemsum_single_document_summarization.80035
[ { "content": "You will be shown the title of a chemistry paper, together with sections excerpted from the body of the paper. The paper sections may include HTML tags like `<p>` and `</p>`; please ignore these. Your task is to write an abstract for the paper. Your response should include the abstract and no other text.\n\nTitle:\nLaser-Induced Graphene-Based Electrochemical Sensor for 4-Nitrophenol from Polyimide and Polyethersulfone Precursors\n\nPaper sections:\nIntroduction:\n<p>4-Nitrophenol(4-NP) is a toxic organic compound used extensively as a precursor and intermediate material for the manufacturing of various products, including pesticides, insecticides, dyes, leather, explosives, pharmaceuticals, polymers etc. 1,2 . Hence it is found in the effluents from these industries and introduced into surface waters when insufficiently monitored and untreated. The environmental and health impacts of 4-NP include carcinogenic, mutagenic and teratogenic effects on aquatic life, plant and human beings 1 . The United States Environmental Protection Agency has included 4-NP as one of the priority pollutants 3 .</p><p>Many conventional instrumentation methods exist for the detection of 4-NP. These include gas chromatography (GC), high-performance liquid chromatography (HPLC), mass spectrometry, fluorescence, electrophoresis, etc. 4 . However, these methods are time-consuming, require costly instrumentation, complex sample preparation methods, and skilled personnel for operation. The major disadvantage of these techniques lies in the fact that they are not applicable for the on-site detection of pollutants. Electrochemical techniques have received a lot of attention for in-situ detection of pollutants such as phenols, pesticides, heavy metals, etc. [5][6][7] . The sensors based on electrochemical mechanisms offer the merits of high sensitivity and selectivity, rapid response, simple operation, and low cost.</p><p>The detection of 4-Nitrophenol is critical to ensure its removal from the industrial wastewaters and consequent contamination of natural streams. Earlier efforts for electrochemical detection of 4-NP using graphene-based material as electrode have been made in combination with different nanomaterials.</p><p>Metal nanoparticles such as gold and silver have been used in combination with reduced graphene oxide Graphene is a 2D nanostructured allotrope of carbon that has excellent electrochemical properties. It also has high electrical conductivity, good chemical stability, and a large surface area. These properties, along with its low cost compared to other materials, make it a viable option for electrochemical sensors.</p><p>However, conventional fabrication techniques such as mechanical exfoliation, chemical vapor deposition, reduction of graphene oxide, etc., are time-consuming and require chemically intensive steps. Manufacturing large-scale sensors require a simple, fast, and industry scalable option while still offering high sensitivity. Laser-induced graphene (LIG) has garnered increasing interest due to its advantage of simple mask-free and chemical-free fabrication of graphene electrodes 14 . One of the major advantages of LIG lies in the fact that it can be produced from a multitude of substrates ranging from commercial polymers 15,16 to waste organic materials such as cloth, paper, food,etc 17 . 
This flexibility offers the potential to explore recyclable and cheaper materials for flexible electronics and other applications 18,19 . Furthermore, porous graphene-based sensors have shown great promise in the development of electrochemical sensors. The porous structure creates efficient multi-dimensional pathways for electron and mass transfer while also creating a higher surface area for the electrochemical interactions between the electrolyte and the electrode surface 20 . The advantage of high electrochemical performance and a scalable fabrication process has allowed the detection of many pollutants of environmental significance with improved performance compared to conventional graphene-based sensors [21][22][23][24][25] . To the best of the authors' knowledge, a LIG-based sensor for the detection of 4-Nitrophenol hasn't been proposed yet.</p><p>This work demonstrated a novel electrochemical sensor for 4-Nitrophenol using LIG as the electrode material. The performances of two different precursor materials for LIG fabrication, viz. polyimide (PI) and polyethersulfone (PES), have been compared. The laser settings were optimized for each of the materials to obtain LIG that gave the best electrochemical performance in the detection of 4-Nitrophenol. The sensor demonstrated a high sensitivity of 3025 µAmM -1 cm -2 for the LIG-PI sensor and 3793 µAmM -1 cm -2 for the LIG-PES sensor, with a limit of detection of 0.1 mM using the cyclic voltammetry technique.</p>\n\nMaterials:\n<p>Polyimide (PI) sheets of 127 µm thickness were purchased from Dali Electronics, India.</p><p>Polyethersulfone (PES), dichloromethane, 4-Nitrophenol (4-NP), potassium ferricyanide (K3Fe(CN)6), and potassium chloride (KCl) were purchased from LOBA Chemie, India. All chemicals used were of AR/ACS grade.</p>\n\nPES sheet fabrication:\n<p>PES sheets were fabricated by dissolving 2.65 g of PES powder in 22 ml of dichloromethane.</p><p>The resulting mixture was poured into a glass Petri plate and allowed to consolidate by limited exposure to the ambient atmosphere. PES sheets as prepared, and polyimide sheets as obtained from the manufacturer, were used for laser scribing after cleaning the surface with ethanol.</p>\n\nLIG fabrication using PES and PI sheets:\n<p>The PES and PI substrates were scribed with a 10.6 µm CO2 laser on a VLS 3.50 laser platform equipped with a 50 W, 2.0 inch lens kit (Universal Laser Systems; distributor: Caliber, Tel Aviv, Israel). The laser settings were adjusted to optimize the conductivity of the LIG and the stability of the electrodes during electrochemical operation. The LIG electrodes were designed to have a working surface area of 3 mm diameter and a connecting length of 10 mm. The connecting area of the LIG was passivated using insulating adhesive tape. The material characterization of the fabricated LIG electrodes was done by scanning electron microscopy (SEM). Electrochemical characterization of the electrodes was performed using cyclic voltammetry.</p><p>Optimal laser settings are required to produce the highest quality of LIG on either of the polymer substrates, viz. PES and PI. In order to get the best output of LIG fabrication, the power settings and scan speeds were adjusted to different levels, and the resistance of the resulting electrode was measured.</p><p>Squares of 1 cm × 1 cm were laser scribed onto the PES sheet using 20% power and 25% scan speed. The settings were varied with a 1% step increase in power and a subsequent decrease in speed of the same magnitude.
The increase in power presumably leads to deeper penetration into the polymer thickness, and hence more graphene is produced. A similar effect is possible at reduced speed, as the energy of a slower-moving laser can accumulate and penetrate deeper into the polymer. The optimal settings for LIG formation on the PES sheet were found to be 20 % power and 25 % speed. The optimization procedure for LIG formation was similarly performed on the PI sheet.</p><p>The settings at which optimal LIG formation was observed on PI were 17 % power and 25 % speed.</p><p>Increasing the power of the laser is known to cause more penetration into the polymer sheet, thus generating more graphene. However, if the power is exceedingly high, the graphene formation is unstable and collapses easily with minor deformation. Further increase in power results in complete burning of the substrate, causing complete dismantling of the electrode structure 20 .</p>\n\nElectrochemical setup:\n<p>Cyclic voltammetry analysis was carried out using a Multi Autolab 204 electrochemical analyzer (Metrohm, India). A three-electrode setup was used for the electrochemical characterization as well as the sensor performance studies. Electrochemical measurements were taken at room temperature. LIG was used as the working electrode, a saturated calomel electrode (SCE) as the reference electrode, and a platinum coil as the counter electrode. The electrochemical cell had an electrolyte solution of 80 ml volume. Activation of the LIG electrodes was carried out prior to performing the electrochemical procedures. Cyclic voltammetry runs at 100 mV/s were performed for 30 cycles to activate the LIG electrodes and remove any organic binder impurities.</p>\n\nMaterial Characterization:\n<p>SEM images of the surface modifications show the formation of a porous structure on the polyethersulfone surface (Fig. 1a and 1b). This observed structure is in line with previously reported LIG-based work using PES 26,27 . The porous nanofibre-like structure allows for increased interaction of the electrolyte with the electrode surface. In the SEM image, the porous structure on the surface, with abundant edge-plane sites, is visible. The focused laser beam acting on the polymer surface produces a temperature of more than 2500 °C, resulting in burning off some of the carbon in the presence of ambient oxygen and moisture. The burning produces CO2 and other gases, which results in the porous nature of the graphene surface while still maintaining considerable structural stability, depending on the optimization of the laser parameters 2 . A stable 3D morphological structure facilitates a high electron transfer rate, and the porous nature increases the electrochemically active surface area.</p>\n\nElectrochemical characterization:\n<p>The electrochemical characterization of the fabricated LIG electrodes is typically carried out using a model redox probe of potassium ferricyanide in potassium chloride 29 . Figure 2 shows the electrochemical characterization of LIG-PES by cyclic voltammetry. 5 mM potassium ferricyanide (K3Fe(CN)6) was used along with 0.1 M KCl to perform the experiment. With increasing scan rate, the oxidation peak shifts to more positive potentials, whereas the reduction peak shifts to more negative potentials, as expected 30 . The electrocatalytic properties of LIG were evaluated using the cyclic voltammetry technique; the responses are shown in Figures 4a and 4b for the PES- and PI-based LIG electrodes, respectively.
The CV peaks in the case of LIG-PI are found to be visually more prominent compared to the peaks obtained using LIG-PES.</p><p>However, from the ratio of response current to input concentration, the sensitivity of LIG-PES at a 0.1 mM concentration of 4-NP is 3793 µAmM -1 cm -2 , which is significantly higher than the 3025 µAmM -1 cm -2 for LIG-PI. The higher sensitivity of LIG-PES toward 4-NP may be due to the presence of sulfur groups on the graphene surface of LIG-PES. From the graph, the sensor's limit of detection (LoD) is found to be 100 µM. The PES-based LIG electrode sensor demonstrated a slightly wider linear range, up to 1 mM, as against the PI-based LIG electrode with an upper range of 0.9 mM. The LoD was estimated from the calibration curves, which are shown in the insets of Figures 4a and 4b. These analytical curves can be obtained by plotting the peak currents for different concentrations of 4-NP under optimized experimental conditions 31 . The cathodic peak current is proportional to the concentration in the range of 0.1-1 mM for LIG-PES and 0.1-0.9 mM for LIG-PI. The linear regression equation for LIG-PES can be expressed as IPC (µA) = -21.249 C (mM) - 24.991, with a correlation coefficient (R 2 ) of 0.9977.</p><p>The linear regression equation for LIG-PI can be expressed as IPC (µA) = -32.101 C (mM) - 17.736, with a correlation coefficient (R 2 ) of 0.9962. The efficient performance of the proposed LIG-based sensors can be attributed to the electrocatalytic properties and adsorption ability of graphene in porous form. LIG possesses a high electrochemical response due to its porous structure, which offers more sites for electrochemical redox reactions compared to conventional planar graphene 32 . The porous structure also offers more edge defects than basal defects, which can be related to the enhanced electrochemical activity.</p>", "role": "user" }, { "content": "Electrochemical sensors provide an excellent alternative for the in-situ detection of pollutants in water. 4-Nitrophenol is a critical pollutant owing to its acute toxicity and adverse health effects on humans and other living organisms. It is known to have carcinogenic, mutagenic, and teratogenic effects on aquatic life, plants, and human beings at very low concentrations. In this work, a facile electrochemical sensor for 4-Nitrophenol detection is proposed by laser-induced graphene (LIG) printed on polyethersulfone (PES) and polyimide (PI) films, respectively. The laser irradiation of polymeric films results in 3D porous graphene structure formation that increases the electron transfer rate as well as the electrochemically active surface area (EASA). This fabrication approach by laser-scribing provides a simple, fast, chemical-free, mask-free, and scalable solution to produce graphene-based electrochemical sensors for 4-Nitrophenol. Cyclic voltammetry is used as the electrochemical technique for the highly sensitive detection of 4-Nitrophenol. PES-based LIG sensors exhibit a higher sensitivity of 3793 µAmM -1 cm -2 as compared to that of PI-based LIG sensors with a sensitivity of 3025 µAmM -1 cm -2 .", "role": "assistant" } ]
science.bioasq_yesno_qa
science.bioasq_yesno_qa.1198
[ { "content": "You have to answer a biomedical question in binary format, i.e., only yes and no are acceptable answers. A list of paragraphs is provided as context to help you answer the question.\n\nQuestion: Should perampanel be used for amyotrophic lateral sclerosis?\n\nContext: RESULTS: Six participants were enrolled. All had adverse events, mostly behavioral. Two completed the trial and the other four withdrew due to adverse events. All participants reported resolution of these events after discontinuation of the drug. The trial was halted due to the large number of adverse events.DISCUSSION: The use of perampanel in this study of ALS was limited by its poor tolerability\nCONCLUSIONS: Perampanel was associated with a significant decline in ALSFRS-R score and was linked to worsening of the bulbar subscore in the 8 mg group.\nDISCUSSION: The use of perampanel in this study of ALS was limited by its poor tolerabilit", "role": "user" }, { "content": "no", "role": "assistant" } ]
science.craftchem_ner
science.craftchem_ner.6107
[ { "content": "You will be presented with a sentence from biomedical research text. Extract all terms that correspond to chemical entities of biological interest (ChEBI).\nReturn your identified terms in a JSON array, formatted like this: [\"Entity1\", \"Entity2\", ...].\nIf no such entities are detected, respond by providing an empty JSON array [].\n\nBiomedical context:\nThisdisordercanresultinsuddeninfantdeath.", "role": "user" }, { "content": "[]", "role": "assistant" } ]
science.bc7_litcovid_topic_classification
science.bc7_litcovid_topic_classification.13411
[ { "content": "You will be presented with a paragraph from a COVID-19 literature. Your task is to classify the topics of this piece of text by choosing from the following categories:\n\n- Epidemic Forecasting: The content relates to predictions about the spread of COVID-19.\n- Treatment: The paragraph discusses therapeutic methods or medication for COVID-19.\n- Prevention: The content covers measures to prevent the spread of COVID-19.\n- Mechanism: The paragraph explains the biological mechanisms of the virus.\n- Case Report: The content is a report of a COVID-19 case.\n- Transmission: The paragraph discusses how COVID-19 is transmitted.\n- Diagnosis: The content is about diagnosing COVID-19 in individuals.\n\nYour classification should be provided as an element within a JSON array. The array should be formatted with topics enclosed in double quotes and separated by commas, following the JSON format, for example: [\"Topic1\", \"Topic2\", ...]. Include all applicable categories within the array. \nYour output should be a valid JSON array, and no additional text should be included in your response.\n\nArticle:\n\nThe novel coronavirus disease 2019 (COVID-19) pandemic is a major global threat affecting millions of lives throughout the world physically and psychologically. With the asymptomatic presentation of COVID-19 in many patients and the similarity of its symptoms with the common cold and influenza, the need for accurate information on the disease is very important for its identification and proper management. Accurate information on the disease, its prevention and treatment can be disseminated through drug information centers (DICs). DICs are usually staffed by pharmacists and/or clinical pharmacists/pharmacologists. DICs are a reliable source of current and unbiased information on COVID-19 and its associated complications, including management options for healthcare professionals and the public. In addition to health and drug information, pharmacists working in the DICs can be involved in the management of the patients' health by providing information on home care and safety, medication management of patients with chronic comorbid illnesses, and psychological advice. This article explores the possible additional roles DICs can play, besides providing drug information within the hospital or in the community.", "role": "user" }, { "content": "[\"Prevention\"]", "role": "assistant" } ]
science.data_reco_mcq_sc
science.data_reco_mcq_sc.2764
[ { "content": "Given a short query with keyphrases about a research idea in 2018 in the domain of artificial intelligence, recommend the relevant datasets to validate the research idea from the options provided. A short description of the datasets is also provided to help you.\n\nQuery: In this paper, we present a method for Visual Question Answering. Our method first detects all objects\n\nKeyphrases: visual question answering images\n\nDataset description:\nGuessWhat?!: GuessWhat?! is a large-scale dataset consisting of 150K human-played games with a total of 800K visual question-answer pairs on 66K images.\nCLEVR: CLEVR (Compositional Language and Elementary Visual Reasoning) is a synthetic Visual Question Answering dataset. It contains images of 3D-rendered objects; each image comes with a number of highly compositional questions that fall into different categories.\nVideoNavQA: The VideoNavQA dataset contains pairs of questions and videos generated in the House3D environment. The goal of this dataset is to assess question-answering performance from nearly-ideal navigation paths, while considering a much more complete variety of questions than current instantiations of the Embodied Question Answering (EQA) task.\nGQA: The GQA dataset is a large-scale visual question answering dataset with real images from the Visual Genome dataset and balanced question-answer pairs. Each training and validation image is also associated with scene graph annotations describing the classes and attributes of those objects in the scene, and their pairwise relations.\n\nOptions: A: GuessWhat?! B: CLEVR C: VideoNavQA D: GQA", "role": "user" }, { "content": "CLEVR", "role": "assistant" } ]
science.scientific_papers_summarization_single_doc_arxiv
science.scientific_papers_summarization_single_doc_arxiv.202661
[ { "content": "Write an abstract for the research paper below. Your response should only include the abstract and no additional text.\n\nArticle text:\nthe lowest order chiral lagrangian supplemented by the chiral symmetry breaking terms @xmath4 of o(@xmath5 ) and @xmath6 of o(@xmath7 ) ( which model the mass - dependence of the chiral slope for the meson decay constants ) is given by@xcite : .\n@xmath8 where @xmath9 we use explicit formulas for the pseudoscalar masses , and pseudoscalar and axial vector decay constants , consistent through order @xmath10 ( for details see @xcite ) .\nthe coefficients @xmath11 follow the notation of gasser and leutwyler @xcite .\nthe expression for the pseudoscalar mass squared up to first order in the hairpin mass , @xmath12 and @xmath13 is : @xmath14 + \\frac{1}{f^2}8 l_5 \\chi_{ij } \\delta \\tilde{j}_{ij } \\ }\n\\end{aligned}\\ ] ] where @xmath15 is a slope parameter , @xmath16 , @xmath17 and @xmath18 . here\n@xmath19 and @xmath20 .\nthe loop integrals @xmath21 and @xmath22 are defined in @xcite . for the pseudoscalar decay constants : @xmath23 \\nonumber \\\\ & & ~-~\\frac{4}{f^2}l_{5}\\delta \\chi_{ij } [ ( \\frac{\\tilde{i}_{ij}}{2 } + i_{ij } ) - ( \\tilde{j}_{ii}+\\tilde{j}_{jj})]\\ } \\end{aligned}\\ ] ] while for the axial vector decay constants : @xmath24 = 0.45 = 0.45 = 0.45 = 0.45 = 0.45 = 0.45 we study 350 configurations on a @xmath25 lattice with @xmath26 .\nclover fermions ( @xmath27 ) and the mqa procedure@xcite was used .\nwe consider six @xmath28 values ( @xmath29 ) with @xmath30 .\nthe fits are shown in figures [ fig : diag]-[fig : ratio ] .\nin the quenched theory the disconnected part of the eta prime correlator ( hairpin term ) has a double pole @xmath31 whose coefficient determines @xmath32 .\nthe correlator for @xmath33 and a double pole fit(@xmath34 and @xmath35 ) is shown in figure [ fig : etap ] .\nthe @xmath2 parameter is given by : @xmath36 = 0.42 as the quenched chiral limit is approached the isovector scalar correlator shows negative norm behaviour .\nthe correlators for the lightest and heaviest @xmath28 values are shown in figure [ fig : scalar ] .\nproperly accounting for the effects of the hairpin - pion bubble@xcite allows a good fit of the isovector scalar correlator for all quark masses .\none output is the ( @xmath37 ) mass .\nusing the mqa technique , meson properties ( masses and decay constants ) can be extracted with sufficient accuracy to allow a fit of higher order chiral parameters , @xmath4 and @xmath6 .\nthe physical results for @xmath38 ( @xmath39 gev @xmath40 ) are compared with our @xmath0 ( @xmath41\ngev @xmath42 ) results@xcite .\nthe physical values of @xmath32 ( in gev ) extracted from the hairpin analysis ( at @xmath38 ) are @xmath43 , @xmath44 , @xmath45 , @xmath46 , @xmath47 and @xmath48 for @xmath49 and @xmath50 gev respectively . extrapolating to @xmath51 we obtain @xmath52 . the corresponding value for @xmath0 is @xmath53 .\nextracting @xmath37 and @xmath54 masses ( at @xmath55 ) from scalar and axial vector propagators gives @xmath56 and @xmath57 gev and @xmath58 and @xmath59 gev for @xmath38 and @xmath60 respectively .\nthe value for @xmath61 gev suggests that the observed @xmath62 resonance is a @xmath63 `` molecule '' and not an ordinary @xmath3 meson .\n16 w. bardeen , a. duncan , e. eichten , and h. thacker , phys .\nd62 ( 2000 ) 114505 .\nj. gasser and h. leutwyler , nucl .\nb250 ( 1985 ) 465 .\nw. bardeen , a. duncan , e. eichten , and h. 
thacker , phys .\nd57 ( 1998 ) 1633 ; phys .\nd59 ( 1999 ) 014507 .\nw. bardeen , a. duncan , e. eichten , n. isgur and h. thacker , phys . rev .\nd * 65 * ( 2002 ) 014509 .", "role": "user" }, { "content": "we present results of a high - statistics study of scalar and pseudoscalar meson propagators in quenched qcd at two values of lattice spacing , @xmath0 and 5.9 , with clover - improved wilson fermions . \n the study of the chiral limit is facilitated by the pole - shifting ansatz of the modified quenched approximation . \n pseudoscalar masses and decay constants are determined as a function of quark mass and quenched chiral log effects are estimated . \n a study of the flavor singlet @xmath1 hairpin diagram yields a precise determination of the @xmath1 mass insertion . the corresponding value of the quenched chiral log parameter @xmath2 is compared with the observed qcl effects . \n removal of qcl effects from the scalar propagator allows a determination of the mass of the lowest lying isovector scalar @xmath3 meson .", "role": "assistant" } ]
science.chemdner_ner
science.chemdner_ner.1820
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Chemical\"].\n\nPlease return the output as a JSON object of the format: {\"Chemical\": [\"Arsenic\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nInterfacing quantum dots and graphitic surfaces with chlorine atomic ligands. The performance of devices based on semiconductor nanocrystals (NCs) improves both with stronger interface interactions among NCs and between NCs and solid electrode surfaces. The combination of X-ray photoelectron spectroscopy (XPS) and solid (31)P CP/MAS NMR (cross-polarization/magic angle spinning nuclear magnetic resonance) shows that the selective substitution of long organic chains by chlorine atomic ligands during the colloidal synthesis by the hot injection method promotes the adsorption of CdSe NCs to carbon sp(2) surfaces, leading to the formation of well-ordered NC monolayers on graphitic materials.", "role": "user" }, { "content": "{\"Chemical\": [\"carbon\", \"chlorine\", \"CdSe\", \"(31)P\"]}", "role": "assistant" } ]
science.bioasq_factoid_qa
science.bioasq_factoid_qa.1290
[ { "content": "You will be shown biomedical passages and your task is to answer the provided question. Provide a concise and accurate response and do not include any additional text in the response other than the answer.\n\nQuestion: Which glands in the bee secretes royal jelly?\n\nContext: The hypopharyngeal glands (HGs) of honey bee nurse workers secrete the major protein fraction of jelly, a protein and lipid rich substance fed to developing larvae, other worker bees, and queens.\nThe first MRJP was identified in royal jelly, a secretion of the bees' hypopharyngeal glands that is used by young worker bees, called nurses, to feed developing larvae\nThe nurse hypopharyngeal glands produce the protein fraction of the worker and royal jelly that is fed to developing larvae and queens.\nThe hypopharyngeal glands (HPGs) of worker honeybees undergo physiological changes along with the age-dependent role change from nursing to foraging: nurse bee HPGs secrete mainly major royal jelly proteins, whereas forager HPGs secrete mainly α-glucosidase III, which converts the sucrose in the nectar into glucose and fructose.", "role": "user" }, { "content": "hypopharyngeal glands", "role": "assistant" } ]
science.scireviewgen_multidoc_summarization
science.scireviewgen_multidoc_summarization.70897
[ { "content": "You are presented with multiple paper abstracts, a literature review title, and a specific chapter title. Your task is to write a focused literature review chapter that aligns with the given chapter title. The review should synthesize the content from the provided abstracts, draw connections between different works, highlight major findings, and identify any gaps in the existing research.\nLiterature Review Title: A Review of Convolutional Neural Networks for Inverse Problems in Imaging \nChapter Title: V. THEORY \n1. Abstract of Cited Paper (BIB001): In Sparse Coding (SC), input vectors are reconstructed using a sparse linear combination of basis vectors. SC has become a popular method for extracting features from data. For a given input, SC minimizes a quadratic reconstruction error with an L1 penalty term on the code. The process is often too slow for applications such as real-time pattern recognition. We proposed two versions of a very fast algorithm that produces approximate estimates of the sparse code that can be used to compute good visual features, or to initialize exact iterative algorithms. The main idea is to train a non-linear, feed-forward predictor with a specific architecture and a fixed depth to produce the best possible approximation of the sparse code. A version of the method, which can be seen as a trainable version of Li and Osher's coordinate descent method, is shown to produce approximate solutions with 10 times less computation than Li and Os-her's for the same approximation error. Unlike previous proposals for sparse code predictors, the system allows a kind of approximate \"explaining away\" to take place during inference. The resulting predictor is differentiable and can be included into globally-trained recognition systems. \n2. Abstract of Cited Paper (BIB002): In this paper, we propose a novel deep convolutional neural network (CNN)-based algorithm for solving ill-posed inverse problems. Regularized iterative algorithms have emerged as the standard approach to ill-posed inverse problems in the past few decades. These methods produce excellent results, but can be challenging to deploy in practice due to factors including the high computational cost of the forward and adjoint operators and the difficulty of hyperparameter selection. The starting point of this paper is the observation that unrolled iterative methods have the form of a CNN (filtering followed by pointwise non-linearity) when the normal operator (<inline-formula> <tex-math notation=\"LaTeX\">$H^{*}H$ </tex-math></inline-formula>, where <inline-formula> <tex-math notation=\"LaTeX\">$H^{*}$ </tex-math></inline-formula> is the adjoint of the forward imaging operator, <inline-formula> <tex-math notation=\"LaTeX\">$H$ </tex-math></inline-formula>) of the forward model is a convolution. Based on this observation, we propose using direct inversion followed by a CNN to solve normal-convolutional inverse problems. The direct inversion encapsulates the physical model of the system, but leads to artifacts when the problem is ill posed; the CNN combines multiresolution decomposition and residual learning in order to learn to remove these artifacts while preserving image structure. We demonstrate the performance of the proposed network in sparse-view reconstruction (down to 50 views) on parallel beam X-ray computed tomography in synthetic phantoms as well as in real experimental sinograms. 
The proposed network outperforms total variation-regularized iterative reconstruction for the more realistic phantoms and requires less than a second to reconstruct a <inline-formula> <tex-math notation=\"LaTeX\">$512\\times 512$ </tex-math></inline-formula> image on the GPU. \n3. Abstract of Cited Paper (BIB003): Deep convolutional networks provide state-of-the-art classifications and regressions results over many high-dimensional problems. We review their architecture, which scatters data with a cascade of linear filter weights and nonlinearities. A mathematical framework is introduced to analyse their properties. Computations of invariants involve multiscale contractions with wavelets, the linearization of hierarchical symmetries and sparse separations. Applications are discussed.\n\nLiterature Review:", "role": "user" }, { "content": "The excellent performance of CNNs for various applications is undisputed, but the question of why remains mostly unanswered. Here, we bring together a few different theoretical perspectives that begin to explain why CNNs are a good fit for solving inverse problems in imaging. 1) Universal approximation: We know that neural networks are universal approximators. More specifically, a fullyconnected neural network with one hidden layer can approximate any continuous function arbitrarily well provided that its hidden layer is large enough BIB001 . The result does not directly apply to CNNs because they are not fully connected, but, if we consider the network patch by patch, we see that each input patch is mapped to the corresponding output patch by a fully connected network. Thus, CNNs are universal approximators for shift-invariant functions. From this perspective, statements such as \"CNNs work well because they generalize X algorithm\" are vacuously true because CNNs generalize all shiftinvariant algorithms. On the other hand, the notion of universal approximation tells us what the network can learn, not what it does learn, and comparison to established algorithms can help guide our understanding of CNNs in practice. 2) Unrolling: The most concrete perspective on CNNs as generalizations of established algorithms comes from the idea of unrolling, which we discussed in Section IV-B. The idea originated in BIB002 , where the authors unrolled the ISTA algorithm for sparse coding into a neural network. This network is not a typical CNN because it includes recurrent connections, but it does share the alternating linear/nonlinear motif. A more general perspective is that nearly all state-of-the-art iterative reconstruction algorithms alternate between linear steps and pointwise nonlinear steps, so it follows that CNNs should be able to perform similarly well given appropriate training. One refinement of this idea comes from BIB003 , which establishes conditions on the forward model, H, that ensure that the linear step of the iterative method is a convolution. All of the inverse problems surveyed here meet these conditions, but the theory predicts that certain inverse problems, e.g. structured illumination microscopy, should not be amenable to reconstruction via CNNs. Another refinement concerns the popular rectified linear unit (ReLU) employed as the nonlinearity by most CNNs: results from spline theory can be adapted to show that combinations of ReLUs can approximate any continuous function. This suggests that the combinations of ReLUs usually employed in CNNs are able to closely approximate the proximal operators used in traditional iterative methods. 
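To make the unrolling picture concrete, the following is a minimal sketch of ISTA for sparse coding written as a stack of identical layers, each a fixed linear map followed by a pointwise nonlinearity. This is our simplified illustration of the general idea, not the trainable predictor of BIB002; in that work the linear maps and thresholds become learned parameters.

```python
import numpy as np

def soft_threshold(x, t):
    # pointwise nonlinearity (proximal operator of the L1 penalty)
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def unrolled_ista(y, D, lam=0.1, n_layers=10):
    """ISTA for min_z ||y - D z||^2 + lam ||z||_1, unrolled into n_layers steps.

    Each step is a fixed linear map followed by a pointwise nonlinearity,
    the same motif as a feed-forward network layer.
    """
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    W = np.eye(D.shape[1]) - D.T @ D / L   # linear part, shared across layers
    b = D.T @ y / L
    z = np.zeros(D.shape[1])
    for _ in range(n_layers):
        z = soft_threshold(W @ z + b, lam / L)
    return z
```

Replacing W, b, and the threshold with parameters trained by backpropagation turns this loop into the learned ISTA (LISTA) network, which is precisely the sense in which unrolled iterative methods have the form of a deep network.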
3) Invariance: Another perspective comes from work on scattering transforms, which are cascades of linear operations (convolutions with wavelets) and nonlinearities (absolute value) BIB004 with no combinations formed between the different channels. This simplified model shows invariance to translation and, more importantly, to small deformations of the input (diffeomorphisms). CNNs generalize the scattering transform, giving the potential for additional invariances, e.g., to rigid transformations, frequency shifts, etc. Such invariances are attractive for image classification, but more work is needed to connect these results to inverse problems.", "role": "assistant" } ]
science.healthver_entailment
science.healthver_entailment.1724
[ { "content": "You will be shown a claim about public health and the abstract of a biomedical research paper. Each sentence from the abstract will be on a separate line. Your task is to return a JSON object with two fields:\n\n- \"verdict\": The fact-checking verdict. If the information in the abstract supports the claim, write \"SUPPORT\". If the abstract contradicts the claim, write \"CONTRADICT\". If the abstract does not provide enough information to arrive at a verdict, write \"NEI\" (for \"not enough information\").\n- \"evidence\": An array of sentences providing evidence for the verdict. Please copy all relevant sentences verbatim from the abstract. If the verdict was \"NEI\", then return an empty array.\n\nFor instance, if the model were given the claim \"wearing masks can prevent the spread of COVID\", the output might be:\n{\n \"verdict\": \"SUPPORT\",\n \"evidence\": [\"Our findings indicate that mass mask-wearing reduces the transmission rate for COVID-19.\"]\n}\n\nClaim: If diabetes is well-managed and blood sugar under control, the risk of having severe illness with a coronavirus infection is about \"what most of the normal population would have\n\nAbstract:\nBACKGOUND: To figure out whether diabetes is a risk factor influencing the progression and prognosis of 2019 novel coronavirus disease (COVID-19).\nMETHODS: A total of 174 consecutive patients confirmed with COVID-19 were studied.\nDemographic data, medical history, symptoms and signs, laboratory findings, chest computed tomography (CT) as well the treatment measures were collected and analysed.\nRESULTS: We found that COVID-19 patients without other comorbidities but with diabetes (n = 24) were at higher risk of severe pneumonia, release of tissue injury-related enzymes, excessive uncontrolled inflammation responses and hypercoagulable state associated with dysregulation of glucose metabolism.\nFurthermore, serum levels of inflammation-related biomarkers such as IL-6, C-reactive protein, serum ferritin and coagulation index, D-dimer, were significantly higher (P < .01) in diabetic patients compared with those without, suggesting that patients with diabetes are more susceptible to an inflammatory storm eventually leading to rapid deterioration of COVID-19.\nCONCLUSIONS: Our data support the notion that diabetes should be considered as a risk factor for a rapid progression and bad prognosis of COVID-19.\nMore intensive attention should be paid to patients with diabetes, in case of rapid deterioration.", "role": "user" }, { "content": "{\"verdict\": \"CONTRADICT\", \"evidence\": [\"CONCLUSIONS: Our data support the notion that diabetes should be considered as a risk factor for a rapid progression and bad prognosis of COVID-19.\"]}", "role": "assistant" } ]
science.pico_ner
science.pico_ner.4499
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Participant\", \"Outcome\", \"Intervention\"].\n\nPlease return the output as a JSON object of the format: {\"Participant\": [\"patients with COPD\", ...], \"Intervention\": [\"Atenolol\", ...], \"Outcome\": [\"blood pressure\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nDifferential effects of angiotensin converting enzyme inhibitors on the vasodepressor and prostacyclin responses to bradykinin . Angiotensin converting enzyme ( ACE ) inhibitors block degradation of bradykinin and bradykinin stimulates prostacyclin production . ACE inhibitors are reported to increase prostaglandins . Therefore , we set out to determine 1 ) the contribution of prostacyclin to the bradykinin-mediated vasodepressor effects of ACE inhibitors , 2 ) whether ACE inhibitors alter the effect of bradykinin on prostacyclin , and 3 ) whether the effects of ACE inhibitors on bradykinin and prostaglandins are class effects or dependent on ACE inhibitor structure . To address these questions , we compared the effects of captopril , quinapril and placebo on blood pressure , urinary excretion of 2,3-dinor-6-keto-PGF1 alpha , and the vasodepressor response to i.v . bradykinin in 21 salt-replete normal-to-high renin hypertensive patients . Captopril and quinapril doses were titrated to lower pressure similarly . Captopril , but not quinapril , increased excretion of prostacyclin metabolite ( 217 +/- 50 vs. 135 +/- 21 pg/mg Cr base line , P < .05 ) . Both ACE inhibitors dramatically , equally potentiated the vasodepressor response to bradykinin ; the bradykinin dose required to decrease mean arterial pressure 15 mm Hg or increase pulse 20 bpm was 50-fold lower in ACEI-treated than in placebo-treated subjects ( 10 +/- 0 and 12.1 +/- 2.1 ng/kg/min in captopril and quinapril groups vs. 567 +/- 109 ng/kg/min in the placebo group ; P < .005 ) . ACE inhibition significantly attenuated the prostacyclin response to bradykinin at any given level of hypotensive response . Indomethacin abolished the prostacyclin response to bradykinin but did not alter the vasodepressor response . These data demonstrate that ACE inhibitors potentiate bradykinin-mediated vasodepression through a prostaglandin-independent mechanism . They suggest that although ACE inhibitors increase prostaglandins by increasing bradykinin , ACE inhibitors may attenuate prostaglandin production through a second bradykinin-independent mechanism .", "role": "user" }, { "content": "{\"Intervention\": [\"ACE\", \"captopril\", \"quinapril\", \"angiotensin converting enzyme inhibitors\", \"placebo-treated\", \"captopril , quinapril\", \"ACE inhibitors\", \"Captopril and quinapril\", \"ACEI-treated\", \"Angiotensin converting enzyme ( ACE )\", \"bradykinin .\", \"Indomethacin\", \"bradykinin\", \"Captopril\", \"placebo\"], \"Outcome\": [\"prostacyclin response\", \"mean arterial pressure\", \"vasodepressor response\", \"prostacyclin metabolite\", \"vasodepressor response .\", \"prostacyclin response to bradykinin\"], \"Participant\": []}", "role": "assistant" } ]
science.scicite_classification
science.scicite_classification.961
[ { "content": "You will be presented with a citation sentence from a research paper. Classify the intent behind this citation by choosing from one of the following categories:\n\n- Method: Cites the paper for its methodology or procedure.\n- Background: Cites the paper to provide background information.\n- Result: Cites the paper for its findings or results.\n\nYour answer should be a single word from the following list of options: [\"Method\", \"Background\", \"Result\"]. Do not include any other text in your response.\n\nCitation sentence:\n\nwere noted to influence pain perception in previous studies (Nafiu, Shanks, Abdo, Taylo, & Tremper, 2013; Tüfekci et al., 2009) were obtained by a research assistant from the medical records.", "role": "user" }, { "content": "Method", "role": "assistant" } ]
science.scientific_lay_summarisation_elife_single_doc_summ
science.scientific_lay_summarisation_elife_single_doc_summ.3680
[ { "content": "You will be presented with the abstract, introduction, and discussion section from a biomedical research article. Your task is to create a summary that a layperson can understand, capturing the essence of the research article. Keep essential scientific terms, but ensure that the language remains clear and the concepts are explained in an uncomplicated manner.\n\nTitle: A new motor synergy that serves the needs of oculomotor and eye lid systems while keeping the downtime of vision minimal\nArticle:\nAbstract:\nThe purpose of blinks is to keep the eyes hydrated and to protect them. Blinks are rarely noticed by the subject as blink-induced alterations of visual input are blanked out without jeopardizing the perception of visual continuity, features blinks share with saccades. Although not perceived, the blink-induced disconnection from the visual environment leads to a loss of information. Therefore there is critical need to minimize it. Here we demonstrate evidence for a new type of eye movement serving a distinct oculomotor demand, namely the resetting of eye torsion, likewise inevitably causing a loss of visual information. By integrating this eye movement into blinks, the inevitable down times of vision associated with each of the two behaviors are synchronized and the overall downtime minimized.\nIntroduction:\nWe normally blink around 15 to 20 times per minute to hydrate the eyes and to protect them from potentially harmful agents( Evinger, 1995; Evinger et al., 1991). Although normal visual input is blanked out for ~300 milliseconds during blinks, we rarely perceive these gaps due to what is called blink suppression( Bristow et al., 2005; Volkmann et al., 1980; Volkmann, 1986; Manning et al., 1983), the attenuation of visual sensitivity during blinks. This mechanism is useful to eliminate the visual consequences of eye and lid movements during blinks( Ridder and Tomlinson, 1997; Volkmann et al., 1968). But it comes with the cost of losing the connection to the visual world for the duration of the blink( VanderWerf et al., 2003; Bour et al., 2000). Without compensation this loss would lead to a discontinuous visual perception. However, previous experiments in humans have shown that the visual system is able to ensure an illusion of continuity by bridging the perceptual gaps( Bristow et al., 2005; Shultz et al., 2011). Moreover, in cases quick behavioral responses may be demanded, the blink frequency may be temporarily reduced in order to provide more time for the processing of behaviorally relevant visual information. For example during reading, we blink only 4–5 instead of 15–20 times per minute( Shultz et al., 2011; Bentivoglio et al., 1997). Another mechanism, reducing the down time of the visual system is the synchronization of blinks with saccades, the latter likewise blanking out vision due to saccadic suppression( Fogarty and Stern, 1989; Zee et al., 1983; Evinger et al., 1991, 1984). Not only voluntary saccades interfere with vision but also the fast resetting movements that are part of oculomotor reflexes that stabilize the visual background. One of these reflexes is the torsional optokinetic nystagmus( tOKN)( Farooq et al., 2004). It consists of a slow phase in which the eyes pursue the rotation of the visual stimulus in order to reduce the slip of its retinal image, followed by a torsional fast phase in the opposite direction. 
The fast phase, generally considered to rely on the brainstem premotor circuitry for saccades( Tozzi et al., 2007), ensures that the eyes stay within the mechanical limits of the plant. In view of the tight interaction of voluntary saccades and blinks we expected that also the involuntary fast phases of the tOKN might be synchronized with blinks in order to minimize the overall down time of the visual system. Here we report that the overarching goal to optimize visual perception is actually met in a surprisingly different manner. Rather than combining fast phases with blinks, the oculomotor system deploys a novel type of resetting movement which is qualitatively different from fast phases.\nDiscussion:\nIn this study we have demonstrated the existence of a distinct, hitherto unknown type of eye movement. It helps to avoid the accumulation of excessive eye torsion, building up in response to torsional optic flow by time and again resetting the torsional deviation. While this resetting eye movement serves a similar purpose as the well-known torsional fast phase component of the torsional optokinetic nystagmus( tOKN), it is different from the latter for several reasons: 1. This new eye movement is a key element of a motor synergy in which it is yoked with eye blinks, which is why we refer to it as blink-associated resetting eye movement( BARM). 2. The horizontal and vertical movement components complementing the torsional resetting response are qualitatively different from those of the fast phases of the tOKN. 3. The amplitude of BARMs is correlated with the pre-blink torsional eye position but not with the amplitude of the preceding slow phase component of the tOKN, a relationship, which is exactly opposite to the fast phases of the OKN as also reported before( Watanabe et al., 1994; Waddington and Harris, 2012). 4. BARMs differ in the main sequence relationship from tOKN fast phases. 5. BARMs are also observed in the absence of an optokinetic stimulus in case significant amounts of eye torsion may have arisen spontaneously. 6. Similar blink-associated resetting eye movements are deployed to reset eye deviations from straight ahead in the vertical and horizontal dimension. In sum, we suggest that BARMs are a distinct type of eye movement that is a ubiquitous partner of eye blinks. As said before, BARMs were found independent of the specific condition that had led to eye torsion, consistently across subjects and independent of the two eye movement recording methods we used, video eye tracking and search coil recordings. Video eye tracking has the advantage of avoiding any interaction with the eye and the eye lids. On the other hand, it does not provide information for the time the eyes are covered by the eye lids. Complementary search coil recordings allowed us to fill this gap and to document a sequence of two movement components. Whereas the first one does not account for the properties of BARMs, the second one does. Perhaps one might object that the second component is only an artifact of the search coil recording technique, e. g. due to the need to anesthetize the eye( Bergamin et al., 2002, 2004). Yet, the congruency of the overall amount of resetting documented with video eye tracking and search coil recordings clearly speaks against this concern. There are two additional aspects of BARM control which need to be considered. Firstly, although BARMs reset eye torsion, this resetting is not complete if torsion is induced by torsional optic flow. 
This stands in contrast to torsional deviations observed in the absence of torsional optic flow where we observed more BARMs leading the eyes back to zero. A possible explanation of the incomplete resetting of optic flow induced torsion might be the lack of naturalness of the condition. Continuous torsional optic flow in one direction, present for 270 s as in our experiments, might result from being spun in a leisure park attraction but hardly from ego motion in a natural environment. Actually it is tempting to speculate that the building up of a torsion plateau in reaction to the exposure to steady state torsional optic flow might reflect an adaptive shift of the subjective vertical. Interpreted this way, BARMs would not be driven by the deviation of the eyes from a baseline zero but by the deviation from the adapted torsional zero( Thilo and Gresty, 2002; Ibbotson et al., 2005). The second aspect is that the influence of the level of pre-blink eye torsion on the amplitude of BARMs depends on context and history. This is indicated by slight, yet significant differences of the BARM-amplitude vs. pre-blink eye torsion regressions. The influence was strongest- as expressed by the steepest regression line slopes as well as the largest coefficients of correlation- for experimental phases which led to higher initial levels of torsion due to a build-up of torsion as in the ‘optic flow’ phase I and II. This modulation of the slope is actually quite useful as it helps to counteract the accumulation of eye torsion in the presence of strong torsional optic flow. Are BARMs simply inevitable passive biomechanical consequences of blinks? While blinks may have an impact on eyes that deviate from straight ahead, the rotation symmetry of the eyes suggests a mechanical reaction that should be independent of eye torsion. Yet, BARMs are always compensatory, which requires that they consider information on the initial torsional position in order to select an appropriate amplitude and velocity as well as a relationship between the two that is reminiscent of a saccade-like main sequence characteristic. Hence, the evidence available clearly argues for active control of BARMs rather than a biomechanical artifact. It is an active process linked to blinks rather than being elicited by an interruption of vision as indicated by control experiments in which we simulated the blink-associated interruption of the visual input. The second blink-associated eye movement component seen in search coil recordings which is largely rotational, lacking significant accompanying horizontal or vertical eye movements corresponds to BARMs as seen in video eye tracker recordings. The first component corresponds to the classical blink-associated eye movement which is directed nasally, downward and extorsional- a reflex that has been suggested to support the hydration and protection of the eye( Evinger et al., 1991; Evinger, 1995). The independence of the two components is documented by the consistent direction of the first independent of the resetting direction of torsion, and the absence of the consistent amplitude vs. velocity relationship that characterizes the second component, an independence that argues that the latter is a rigid blink-specific eye movement that is not related to other types of oculomotor behavior serving vision. Although previous work on search coil recordings of blink-associated eye movements had noted the occurrence of a second component, it did not address its function( Bergamin et al., 2002). 
Is the second component underlying the BARM a slowed down saccade, spread out over the whole period of the visual gap resulting from the eye blink? Reducing saccadic speed as much as possible would satisfy demands of energy efficiency while not further burden vision. Changes in saccade velocity have actually been observed in response to a wide variety of conditions( Prsa et al., 2010; Shadmehr, 2010). Most importantly, it is well-known that saccades which coincide with blinks have latencies and kinematics that differ from those of non-blink-associated saccades( Goossens and Van Opstal, 2000; Powers et al., 2013; Rottach et al., 1998; Rambold et al., 2002). On the other hand reports on patients suffering from pathology of the brainstem premotor circuits for saccades may actually suggest a saccade-independent nature of BARMs. Unlike healthy subjects, these patients exhibit slow saccades which may be faster if they are associated with a blink. Actually in extreme cases only blink-associated saccades may be left, while others are lacking completely( Zee et al., 1983). Rather than assuming a facilitatory influence of blink-related signals on a run-down saccade generator, a more parsimonious interpretation could be that the putative blink-associated saccades in these clinical case are actually BARMs, still possible because the underlying circuitry may have been spared by the pathology. On the other hand, the blink-associated oscillations that have been observed in patients suffering from neurodegenerative brainstem disease with normal saccades( Hain et al., 1986) might be a manifestation of a degenerated BARM generator sparing the saccade generator. In other words, the speculative conclusion would be the assumption of a far-reaching dissociation of the generators of BARMs and saccades. The practical implication of this scenario could be that patients suffering from a loss of saccades might actually train to deploy BARMs more frequently and efficiently. In summary, blink-associated resetting eye movements represent a distinct type of eye movement intimately linked to eye blinks, helping the observer to keep the deviation from a baseline torsional eye position small, thereby supporting stable spatial vision. While we explored the properties of these eye movements primarily in the torsional domain, preliminary evidence suggests that they may not be restricted to torsion but may help to keep also horizontal and vertical deviations at bay. Yoking these eye movements with the closure of the eyelids is an ecologically useful synergy that helps to minimize the downtime of vision that is inevitably associated with both. Considering that we blink on average 18 times per minute over 16 hr of wakefulness per day and furthermore that each BARM makes a resetting saccade of 50 ms length waivable, the gain in uncompromised vision is substantial, amounting to 15 mins every day.", "role": "user" }, { "content": "Blinking lubricates the surface of the eye and prevents it from becoming too dry . People blink around 20 times per minute , but we rarely notice the gaps in our vision because the brain becomes less sensitive to visual input during each blink . Eye movements can also hinder our vision . For example , our vision could easily become blurred when we redirect our gaze to a new target . However , like during blinking , the brain blocks visual input while these movements – known as saccades – take place . 
Because saccades and blinking both temporarily disrupt vision , the brain synchronizes the two , which keeps this downtime to a minimum . Involuntary eye movements can also disrupt our vision . We perceive an object most clearly when its image falls on the very center of the back of the eye . If the image drifts away from this central position , our eyes make small involuntary ‘resetting’ movements to counteract the drifting . Given that these movements would compromise our vision much like saccades do , Khazali et al . wondered whether the brain might also synchronize them with blinks as well . Khazali et al . tracked the eye movements of people staring straight ahead at a red dot in an otherwise dark room . This revealed that correcting eye movements did indeed occur at the same time as blinks , but unexpectedly the movements were different from those that had been recorded in previous studies . Further experiments then confirmed that the blinks themselves triggered this new type of corrective movement , rather than the loss of vision that resulted from the blinks . Khazali et al . propose that by synchronizing the resetting movements with blinks , the brain ensures that the eyes are damp wiped and restored to their optimal positions while minimizing visual disturbance . The next challenge is to identify the circuit of neurons in the brain that underlies these eye movements and to clarify how this circuit interacts with the one that controls blinking .", "role": "assistant" } ]
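The discussion in the entry above repeatedly invokes the saccadic "main sequence", the stereotyped relationship between movement amplitude and peak velocity, as evidence that BARMs are actively controlled. The sketch below shows how such a power-law relationship is typically quantified; the data are synthetic and the gain and exponent are placeholders, not values reported in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
amplitude = rng.uniform(0.5, 10.0, 200)  # deg; hypothetical movement amplitudes
# Synthetic peak velocities following V = k * A**b with multiplicative noise.
peak_velocity = 40.0 * amplitude**0.6 * rng.lognormal(0.0, 0.1, 200)

# A power-law "main sequence" fit is linear in log-log space.
b, log_k = np.polyfit(np.log(amplitude), np.log(peak_velocity), 1)
print(f"V ~= {np.exp(log_k):.1f} * A^{b:.2f}  (deg/s for A in deg)")
```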
science.scitldr_aic
science.scitldr_aic.1023
[ { "content": "Below are the Abstract, Introduction, and Conclusion sections of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary sentence and no additional text.\n\nPaper text:\n\nSymbolic logic allows practitioners to build systems that perform rule-based reasoning which is interpretable and which can easily be augmented with prior knowledge.\nHowever, such systems are traditionally difficult to apply to problems involving natural language due to the large linguistic variability of language.\nCurrently, most work in natural language processing focuses on neural networks which learn distributed representations of words and their composition, thereby performing well in the presence of large linguistic variability.\nWe propose to reap the benefits of both approaches by applying a combination of neural networks and logic programming to natural language question answering.\nWe propose to employ an external, non-differentiable Prolog prover which utilizes a similarity function over pretrained sentence encoders.\nWe fine-tune these representations via Evolution Strategies with the goal of multi-hop reasoning on natural language. \nThis allows us to create a system that can apply rule-based reasoning to natural language and induce domain-specific natural language rules from training data.\nWe evaluate the proposed system on two different question answering tasks, showing that it complements two very strong baselines – BIDAF (Seo et al., 2016a) and FASTQA (Weissenborn et al.,2017) – and outperforms both when used in an ensemble.\nWe consider the problem of multi-hop reasoning on natural language input.\nFor instance, consider the statements Socrates was born in Athens and Athens belongs to Greece, together with the question Where was Socrates born?\nThere are two obvious answers following from the given statements: Athens and Greece.\nWhile Athens follows directly from the single statement Socrates was born in Athens, deducing Greece requires a reader to combine both provided statements using the knowledge that a person that was born in a city, which is part of a country, was also born in the respective country.Most recent work that addresses such challenges leverages deep learning based methods BID41 BID29 BID38 BID30 BID18 BID21 BID17 BID8 , capable of dealing with the linguistic variability and ambiguity of natural language text.\nHowever, the black-box nature of neural networks makes it hard to interpret the exact reasoning steps leading to a prediction (local interpretation), as well as the induced model (global interpretation).Logic\nprogramming languages like Prolog BID46 , on the other hand, are built on the idea of using symbolic rules to reason about entities, which makes them highly interpretable both locally and globally. The capability\nto use user-defined logic rules allows users to incorporate external knowledge in a straightforward manner. 
Unfortunately,\nbecause of their reliance on symbolic logic, systems built on logic programming need extensive preprocessing to account for the linguistic variability that comes with natural language BID23 . We introduce NLPROLOG\n, a system which combines a symbolic reasoner and a rule-learning method with pretrained sentence representations to perform rule-based multi-hop reasoning on natural language input.1 Like inductive logic\nprogramming methods, it facilitates both global as well as local interpretation, and allows for straightforward integration of prior knowledge. Similarly to deep learning\nbased approaches, it can be applied to natural language text without the need to transform it into formal logic. At the core of the proposed method is an external non-differentiable theorem prover which can take similarities between symbols into account. Specifically, we modify a\nProlog interpreter to support weak-unification as proposed by BID39 . To obtain similarities between\nsymbols, we utilize sentence encoders initialized with pretrained sentence embeddings BID28 and then fine-tune these for a downstream question answering task via gradient-based optimization methods. Since the resulting system contains\nnon-differentiable components, we propose using Evolution Strategies (ES) BID9 as a gradient estimator BID47 for training the system, enabling us to fine-tune the sentence encoders and to learn domain-specific logic rules (e.g. that the relation is in is transitive) from natural language training data. This results in a system where training\ncan be trivially parallelized, and which allows one to change the logic formalism by simply exchanging the external prover without the need for an intricate re-implementation as an end-to-end differentiable function. In summary, our main contributions are: a) we show how Prolog-like reasoning can\nbe applied to natural language input by employing a combination of pretrained sentence embeddings, an external logic prover, and fine-tuning using Evolution Strategies, b) we extend a Prolog interpreter with weak\nunification based on distributed representations, c) we present Gradual Rule Learning (GRL),\na training algorithm that allows the proposed system to learn First-Order Logic (FOL) rules from entailment, and d) we evaluate the proposed system on two\ndifferent Question Answering (QA) datasets and demonstrate that its performance is on par with state-of-the-art neural QA models in many cases, while having different failure modes. This allows us to build an ensemble of NLPROLOG\nand a neural QA model that outperforms all individual models.\nWe have developed NLPROLOG, a system that is able to perform rule-based reasoning on natural language input, and can learn domain-specific natural language rules from training data.\nTo this end, we have proposed to combine a symbolic prover with pretrained sentence embeddings and to train the resulting system with Evolution Strategies.\nFigure 3 : Example proof trees generated by NLPROLOG.\nEach of the two trees shows an application of a transitive rule, the first for the predicate developer and the second for the predicate country.\nThe rule templates are displayed with the most similar predicate.\nNote the noise introduced by the Open IE process, e.g. QUANT_0_1, and that entities and predicates do not need to match exactly.
We have evaluated NLPROLOG on two different QA tasks, showing that it can learn domain-specific rules and produce predictions which complement those of the two strong baselines BIDAF and FASTQA.\nThis allows us to build an ensemble of a baseline and NLPROLOG which outperforms all single models. While we have focused on a subset of First Order Logic in this work, the expressiveness of NLPROLOG could be extended by incorporating a different symbolic prover.\nFor instance, a prover for temporal logic BID27 would allow us to model temporal dynamics in natural language and enable us to evaluate NLPROLOG on the full set of BABI tasks.\nWe are also interested in incorporating future improvements of symbolic provers, Open IE systems and pretrained sentence representations to further enhance the performance of NLPROLOG.\nTo study the performance of the proposed method without the noise introduced by the Open IE step, it would be useful to evaluate it on tasks like knowledge graph reasoning.\nAdditionally, it would be interesting to study the behavior of NLPROLOG in the presence of multiple WIKIHOP query predicates.\nAlgorithm 1 ( the weak unification algorithm in Spyrolog, without occurs check ):\nfun unify(x, y, θ, S)\n ...\n else if x is f(x_1, ..., x_n), y is f′(y_1, ..., y_n), and f ∼ f′ ≥ λ then\n S := S ∧ (f ∼ f′)\n return unify(x_1 :: ... :: x_n, y_1 :: ... :: y_n, θ, S)\n end\n else if x is p(x_1, ..., x_n), y is p′(y_1, ..., y_n), and p ∼ p′ ≥ λ then\n S := S ∧ (p ∼ p′)\n return unify(x_1 :: ... :: x_n, y_1 :: ... :: y_n, θ, S)\n end\n else if x is x_1 :: ... :: x_n and y is y_1 :: ... :: y_n then\n (θ′, S′) := unify(x_1, y_1, θ, S)\n return unify(x_2 :: ... :: x_n, y_2 :: ... :: y_n, θ′, S′)\n end\n else if x is the empty list and y is the empty list then return (θ, S)\n else return (failure, 0)\nfun unify_var(v, o, θ, S)\n if {v/val} ∈ θ then return unify(val, o, θ, S)\n else if {o/val} ∈ θ then return unify(v, val, θ, S)\n else return ({v/o} + θ, S)\nA.2 RUNTIME OF PROOF SEARCH\nThe worst case complexity of vanilla logic programming is exponential in the depth of the proof BID34 .\nHowever, in our case this is a particular problem because weak unification requires the prover to attempt unification between all entity/predicate symbols. To keep things tractable, NLPROLOG only attempts to unify symbols with a similarity greater than some user-defined threshold λ.\nFurthermore, in the search step for one statement q, for the rest of the search, λ is set to λ := max(λ, S) whenever a proof for q with success score S is found.\nDue to the monotonicity of the employed aggregation functions, this allows the search tree to be pruned without losing the guarantee of finding the proof yielding the maximum success score.\nWe found this optimization to be crucial to make the proof search scale for the studied wikihop predicates.", "role": "user" }, { "content": "We introduce NLProlog, a system that performs rule-based reasoning on natural language by leveraging pretrained sentence embeddings and fine-tuning with Evolution Strategies, and apply it to two multi-hop Question Answering tasks.", "role": "assistant" } ]
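The NLProlog excerpt above hinges on weak unification: two symbols unify when the similarity of their embeddings clears a user-defined threshold λ, and the proof score aggregates the similarities encountered along the proof. A minimal sketch of that gate follows; cosine similarity and min-aggregation are assumptions on my part (the excerpt leaves the encoder and the aggregation function abstract):

```python
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def weak_unify(sym_x: str, sym_y: str, emb: dict, score: float, lam: float = 0.5):
    """Unify two symbols if their embedding similarity clears the threshold lam.

    Returns the updated proof score, or None when unification fails.
    Aggregating with min mirrors a Goedel t-norm; other choices are possible.
    """
    if sym_x == sym_y:          # exact match: score unchanged
        return score
    sim = cosine(emb[sym_x], emb[sym_y])
    if sim < lam:               # below lambda: prune, as in the runtime appendix
        return None
    return min(score, sim)

# Hypothetical embeddings; in NLProlog these come from sentence encoders.
emb = {"is_in": np.array([0.9, 0.1]), "located_in": np.array([0.85, 0.2])}
print(weak_unify("is_in", "located_in", emb, score=1.0))
```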
science.scitldr_aic
science.scitldr_aic.1492
[ { "content": "Below are the Abstract, Introduction, and Conclusion sections of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary sentence and no additional text.\n\nPaper text:\n\nIn this paper, we propose a nonlinear unsupervised metric learning framework to boost of the performance of clustering algorithms.\nUnder our framework, nonlinear distance metric learning and manifold embedding are integrated and conducted simultaneously to increase the natural separations among data samples.\nThe metric learning component is implemented through feature space transformations, regulated by a nonlinear deformable model called Coherent Point Drifting (CPD).\nDriven by CPD, data points can get to a higher level of linear separability, which is subsequently picked up by the manifold embedding component to generate well-separable sample projections for clustering.\nExperimental results on synthetic and benchmark datasets show the effectiveness of our proposed approach over the state-of-the-art solutions in unsupervised metric learning.\n\nCluster analysis has broad applications in various disciplines.\nGrouping data samples into categories with similar features is an efficient way to summarize the data for further processing.\nIn measuring the similarities among data samples, the Euclidean distance is the most common choice in clustering algorithms.\nUnder Euclidean distance, feature components are assigned with the same weight, which essentially assumes all features are equally important across the entire data space.\nIn practice, such setup is often not optimal.\nLearning a customized metric function from the data samples can usually boost the performance of various machine learning algorithms BID1 .\nWhile metric learning has been extensively researched under supervised BID19 BID18 BID17 BID14 and semi-supervised settings BID15 BID3 BID23 BID13 , unsupervised metric learning (UML) remains a challenge, in part due to the absence of ground-truth label information to define a learning optimality.\nIn this paper, we focus on the problem of UML for clustering.As the goal of clustering is to capture the natural separations among data samples, one common practice in the existing UML solutions is to increase the data separability and make the separations more identifiable for the ensuing clustering algorithm.\nSuch separability gain can be achieved by projecting data samples onto a carefully chosen low-dimensional manifold, where geometric relationships, such as the pairwise distances, are preserved.\nThe projections can be carried out linearly, as through the Principle Component Analysis, or nonlinearly, as via manifold learning solutions.\nUnder the dimension-reduced space, clustering algorithms, such as K-means, can then be applied.Recent years have seen the developments of UML solutions exploring different setups for the lowdimensional manifolds.\nFME ) relies on the learning of an optimum linear regression function to specify the target low-dimensional space.\nBID0 model local sample densities of the data to estimate a new metric space, and use the learned metric as the basis to construct graphs for manifold learning.\nApplication-specific manifolds, such as Grassmann space BID6 and Wasserstein geometry BID16 , have also been studied.\nWhen utilized as a separate preprocessing step, dimensionality reduction UML solutions are commonly designed without considering the ensuing clustering algorithm and 
therefore cannot be fine-tuned accordingly.AML takes a different approach, performing clustering and distance metric learning simultaneously.\nThe joint learning under AML is formulated as a trace maximization problem, and numerically solved through an EM-like iterative procedure, where each iteration consists of a data projection step, followed by a clustering step via kernel K-means.\nThe projection is parameterized by an orthogonal, dimension-reducing matrix.\nA kernelized extension of AML was proposed in BID2 .\nAs the projection models are built on linear transformations, their capabilities to deal with complex nonlinear structures are limited.\nUML solutions performing under the original input space have also been proposed.\nSSO BID7 learns a global similarity metric through a diffusion procedure that propagates smooth metrics through the data space.\nCPCM BID4 relies on the ratio of within cluster variance over the total data variance to obtain a linear transformation, aiming to improved data separability.\nAs the original spaces are usually high-dimensional, UML solutions in this category tend to suffer from the local minima problem.In light of the aforementioned limitations and drawbacks, we propose a new nonlinear UML framework in this paper.\nOur solution integrates nonlinear feature transformation and manifold embedding together to improve the data separability for K-means clustering.\nOur model can be regarded as a fully nonlinear generalization of AML, in which the transformation model is upgraded to a geometric model called Coherent Point Drifting (CPD) BID8 .\nData points are driven by CPD to reach a higher level of linear separability, which will be subsequently picked up by the manifold embedding component to generate well-separable sample projections.\nAt the end, K-means is applied on the transformed, dimension-reduced embeddings to produce label predictions.\nThe choice of CPD is with the consideration of its capability of generating high-order yet smooth transformations.\nThe main contributions of this paper include the following.•\nOur proposed fully nonlinear UML solution enhances data separability through the combination of CPD-driven deformation and spectral embeddings.•\nTo the best of our knowledge, this is the first work that utilizes dense, spatial varying deformations in unsupervised metric learning.•\nThe CPD optimization has a closed-form solution, therefore can be efficiently computed.•\nOur model outperforms state-of-the-art UML methods on six benchmark databases, indicating promising performance in many real-world applications.The rest of this paper is organized as follows. Section\n2 describes our proposed method in detail. It includes\nthe description of CPD model, formulation of our CPD based UML, optimization strategy and the approach to kernelize our model. Experimental\nresults are presented in Section 3 to validate our solutions with both synthetic and real-world datasets. 
Section 4 concludes\nthis paper.\nThe proposed CPD-UML model learns a nonlinear metric and the clusters for the given data simultaneously.\nThe nonlinear metric is achieved by a globally smooth nonlinear transformation, which improves the separability of given data during clustering.\nCPD is used as the transformation model because of its capability in deforming feature space in sophisticated yet smooth manner.\nEvaluations on synthetic and benchmark datasets demonstrate the effectiveness of our approach.\nApplying the proposed approach to other computer vision and machine learning problems are in the direction of our future research.", "role": "user" }, { "content": "a nonlinear unsupervised metric learning framework to boost the performance of clustering algorithms.", "role": "assistant" } ]
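The CPD-regulated transformation in the metric-learning entry above amounts to a Gaussian-kernel displacement field applied to the data. A sketch of applying such a transform, assuming the usual coherent-point-drift parameterization T(X) = X + GW; here W stands in for the coefficients the UML objective would learn, and beta is an illustrative smoothness hyperparameter:

```python
import numpy as np

def cpd_transform(X: np.ndarray, W: np.ndarray, beta: float = 2.0) -> np.ndarray:
    """Smooth displacement field T(X) = X + G @ W (coherent-point-drift form).

    X: (n, d) data points; W: (n, d) displacement coefficients, the quantity a
    UML objective would optimize; beta sets the width of the Gaussian kernel,
    so a larger beta yields a more globally coherent (smoother) deformation.
    """
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    G = np.exp(-sq_dists / (2.0 * beta ** 2))  # Gaussian kernel matrix
    return X + G @ W

# Tiny demo with zero coefficients standing in for learned ones.
X = np.random.default_rng(1).normal(size=(5, 2))
W = np.zeros_like(X)  # zero displacement: T(X) == X
assert np.allclose(cpd_transform(X, W), X)
```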
science.chia_ner
science.chia_ner.526
[ { "content": "You will be shown inclusion or exclusion criteria for a clinical trial. Given these criteria, your task is to extract all unique entities of the following types: [\"Person\", \"Condition\", \"Value\", \"ReferencePoint\", \"Temporal\", \"Qualifier\", \"Observation\", \"Visit\", \"Measurement\", \"Mood\", \"Device\", \"Scope\", \"Negation\", \"Drug\", \"Procedure\", \"Multiplier\"].\n\nPlease return the output as a JSON object of the format: {\"Condition\": [\"breast cancer\", ...], \"Device\": [\"ports\", ...], \"Drug\": [\"Atenolol\", ...], \"Measurement\": [\"blood pressure\", ...], \"Mood\": [\"suspected\", ...], \"Multiplier\": [\"more than two\", ...], \"Negation\": [\"no\", ...], \"Observation\": [\"history\", ...], \"Person\": [\"male\", ...], \"Procedure\": [\"surgery\", ...], \"Qualifier\": [\"severe\", ...], \"ReferencePoint\": [\"enrollment\", ...], \"Scope\": [\"infection\", ...], \"Temporal\": [\"one day\", ...], \"Value\": [\"50%\", ...], \"Visit\": [\"ICU\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:", "role": "user" }, { "content": "{\"Condition\": [], \"Device\": [], \"Drug\": [], \"Measurement\": [], \"Mood\": [], \"Multiplier\": [], \"Negation\": [], \"Observation\": [], \"Person\": [], \"Procedure\": [], \"Qualifier\": [], \"ReferencePoint\": [], \"Scope\": [], \"Temporal\": [], \"Value\": [], \"Visit\": []}", "role": "assistant" } ]
science.qasper_extractive_qa
science.qasper_extractive_qa.861.0
[ { "content": "You will be shown sections from a scientific research paper, together with a question about the paper. This is an extractive question-answering task, where you must find and extract relevant text spans directly from the paper to answer the question. Your response should strictly be a `json` object with two fields:\n\n\"answer\": An array of strings extracted directly from the paper which, collectively, answer the question.\n\"evidence\": An array of strings. Each should be an excerpt from the paper, in which one or more of the extracted answers can be found.\n\nFor example, for the question \"What baselines did the authors compare against?\", a sample response might be:\n{\n \"answer\": [\"BERT\", \"RoBERTa\"],\n \"evidence\": [\"In our experiments, we compare the performance of our model against BERT and RoBERTa.\"]\n}\n\nDo not include any text in your response other than the json.\n\nIf the question is unanswerable given the provided excerpts, respond with the single word \"null\".\n\nPaper: Question Answering via Integer Programming over Semi-Structured Knowledge\n\nAnswering science questions posed in natural language is an important AI challenge. Answering such questions often requires non-trivial inference and knowledge that goes beyond factoid retrieval. Yet, most systems for this task are based on relatively shallow Information Retrieval (IR) and statistical correlation techniques operating on large unstructured corpora. We propose a structured inference system for this task, formulated as an Integer Linear Program (ILP), that answers natural language questions using a semi-structured knowledge base derived from text, including questions requiring multi-step inference and a combination of multiple facts. On a dataset of real, unseen science questions, our system significantly outperforms (+14%) the best previous attempt at structured reasoning for this task, which used Markov Logic Networks (MLNs). It also improves upon a previous ILP formulation by 17.7%. When combined with unstructured inference methods, the ILP system significantly boosts overall performance (+10%). Finally, we show our approach is substantially more robust to a simple answer perturbation compared to statistical correlation methods.\n\nIntroduction\nAnswering questions posed in natural language is a fundamental AI task, with a large number of impressive QA systems built over the years. Today's Internet search engines, for instance, can successfully retrieve factoid style answers to many natural language queries by efficiently searching the Web. Information Retrieval (IR) systems work under the assumption that answers to many questions of interest are often explicitly stated somewhere BIBREF0 , and all one needs, in principle, is access to a sufficiently large corpus. Similarly, statistical correlation based methods, such as those using Pointwise Mutual Information or PMI BIBREF1 , work under the assumption that many questions can be answered by looking for words that tend to co-occur with the question words in a large corpus.\nWhile both of these approaches help identify correct answers, they are not suitable for questions requiring reasoning, such as chaining together multiple facts in order to arrive at a conclusion. Arguably, such reasoning is a cornerstone of human intelligence, and is a key ability evaluated by standardized science exams given to students. 
For example, consider a question from the NY Regents 4th Grade Science Test:\nWe would like a QA system that, even if the answer is not explicitly stated in a document, can combine basic scientific and geographic facts to answer the question, e.g., New York is in the north hemisphere; the longest day occurs during the summer solstice; and the summer solstice in the north hemisphere occurs in June (hence the answer is June). Figure 1 illustrates how our system approaches this, with the highlighted support graph representing its line of reasoning.\nFurther, we would like the system to be robust under simple perturbations, such as changing New York to New Zealand (in the southern hemisphere) or changing an incorrect answer option to an irrelevant word such as “last” that happens to have high co-occurrence with the question text.\nTo this end, we propose a structured reasoning system, called TableILP, that operates over a semi-structured knowledge base derived from text and answers questions by chaining multiple pieces of information and combining parallel evidence. The knowledge base consists of tables, each of which is a collection of instances of an $n$ -ary relation defined over natural language phrases. E.g., as illustrated in Figure 1 , a simple table with schema (country, hemisphere) might contain the instance (United States, Northern) while a ternary table with schema (hemisphere, orbital event, month) might contain (North, Summer Solstice, June). TableILP treats lexical constituents of the question $Q$ , as well as cells of potentially relevant tables $T$ , as nodes in a large graph $\\mathcal {G}_{Q,T}$ , and attempts to find a subgraph $G$ of $\\mathcal {G}_{Q,T}$ that “best” supports an answer option. The notion of best support is captured via a number of structural and semantic constraints and preferences, which are conveniently expressed in the Integer Linear Programming (ILP) formalism. We then use an off-the-shelf ILP optimization engine called SCIP BIBREF3 to determine the best supported answer for $Q$ .\nFollowing a recently proposed AI challenge BIBREF4 , we evaluate TableILP on unseen elementary-school science questions from standardized tests. Specifically, we consider a challenge set BIBREF2 consisting of all non-diagram multiple choice questions from 6 years of NY Regents 4th grade science exams. In contrast to a state-of-the-art structured inference method BIBREF5 for this task, which used Markov Logic Networks (MLNs) BIBREF6 , TableILP achieves a significantly (+14% absolute) higher test score. This suggests that a combination of a rich and fine-grained constraint language, namely ILP, even with a publicly available solver is more effective in practice than various MLN formulations of the task. Further, while the scalability of the MLN formulations was limited to very few (typically one or two) selected science rules at a time, our approach easily scales to hundreds of relevant scientific facts. It also complements the kind of questions amenable to IR and PMI techniques, as is evidenced by the fact that a combination (trained using simple Logistic Regression BIBREF2 ) of TableILP with IR and PMI results in a significant (+10% absolute) boost in the score compared to IR alone.\nOur ablation study suggests that combining facts from multiple tables or multiple rows within a table plays an important role in TableILP's performance. 
We also show that TableILP benefits from the table structure, by comparing it with an IR system using the same knowledge (the table rows) but expressed as simple sentences; TableILP scores significantly (+10%) higher. Finally, we demonstrate that our approach is robust to a simple perturbation of incorrect answer options: while the simple perturbation results in a relative drop of 20% and 33% in the performance of IR and PMI methods, respectively, it affects TableILP's performance by only 12%.\n\nRelated Work\n BIBREF2 aristo2016:combining proposed an ensemble approach for the science QA task, demonstrating the effectiveness of a combination of information retrieval, statistical association, rule-based reasoning, and an ILP solver operating on semi-structured knowledge. Our ILP system extends their model with additional constraints and preferences (e.g., semantic relation matching), substantially improving QA performance.\nA number of systems have been developed for answering factoid questions with short answers (e.g., “What is the capital of France?”) using document collections or databases (e.g., Freebase BIBREF7 , NELL BIBREF8 ), for example BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 . However, many science questions have answers that are not explicitly stated in text, and instead require combining information together. Conversely, while there are AI systems for formal scientific reasoning (e.g., BIBREF16 , BIBREF17 ), they require questions to be posed in logic or restricted English. Our goal here is a system that operates between these two extremes, able to combine information while still operating with natural language.\nThe task of Recognizing Textual Entailment (RTE) BIBREF18 , BIBREF19 is also closely related, as QA can be cast as entailment (Does corpus entail question+answer? BIBREF20 ). However, RTE has primarily focused on the task of linguistic equivalence, and has not addressed questions where some form of scientific reasoning is required. Recent work on Natural Logic BIBREF21 , BIBREF22 has extended RTE to account for the logical structure within language. Our work can be seen as going one step further, to add a layer of structured reasoning on top of this; in fact, we use an RTE engine as a basic subroutine for comparing individual table cells in our ILP formulation.\nILP based discrete optimization has been successful in several NLP tasks BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , BIBREF27 . While our ILP formulation also operates on natural language text, our focus is on the use of a specific semi-structured table representation for QA. BIBREF28 cohen2000:joins studied tables with natural language text requiring soft matching, with a focus on efficiently computing the top few candidates given a database query. In contrast, our system, given a natural language question, (implicitly) seeks to generate a query that produces the most supported answer.\n\nQA as Subgraph Optimization\nWe begin with our knowledge representation formalism, followed by our treatment of QA as an optimal subgraph selection problem over such knowledge, and then briefly describe our ILP model for subgraph selection.\n\nSemi-Structured Knowledge as Tables\nWe use semi-structured knowledge represented in the form of $n$ -ary predicates over natural language text BIBREF2 . Formally, a $k$ -column table in the knowledge base is a predicate $r(x_1, x_2, \\ldots , x_k)$ over strings, where each string is a (typically short) natural language phrase. 
The column headers capture the table schema, akin to a relational database. Each row in the table corresponds to an instance of this predicate. For example, a simple country-hemisphere table represents the binary predicate $r_{\\text{ctry-hems}}(c,h)$ with instances such as (Australia, Southern) and (Canada, Northern). Since table content is specified in natural language, the same entity is often represented differently in different tables, posing an additional inference challenge.\nAlthough techniques for constructing this knowledge base are outside the scope of this paper, we briefly mention them. Tables were constructed using a mixture of manual and semi-automatic techniques. First, the table schemas were manually defined based on the syllabus, study guides, and training questions. Tables were then populated both manually and semi-automatically using IKE BIBREF29 , a table-building tool that performs interactive, bootstrapped relation extraction over a corpus of science text. In addition, to augment these tables with the broad knowledge present in study guides that doesn't always fit the manually defined table schemas, we ran an Open IE BIBREF30 pattern-based subject-verb-object (SVO) extractor from BIBREF31 clark2014:akbc over several science texts to populate three-column Open IE tables. Methods for further automating table construction are under development.\n\nConclusion\nAnswering real science questions is a challenging task because they are posed in natural language, require extensive domain knowledge, and often require combining multiple facts together. We presented TableILP, a system that can answer such questions, using a semi-structured knowledge base. We treat QA as a subgraph selection problem and then formulate this as an ILP optimization. Most importantly, this formulation allows multiple, semi-formally expressed facts to be combined to answer questions, a capability outside the scope of IR-based QA systems. In our experiments, this approach significantly outperforms both the previous best attempt at structured reasoning for this task, and an IR engine provided with the same knowledge. It also significantly boosts performance when combined with unstructured methods (IR and PMI). These results suggest that the approach is both viable and promising for natural language question answering.\n\nTable 4: Solver combination results\n\nTable 5: TableILP statistics averaged across questions\n\nQuestion: How is the semi-structured knowledge base created?", "role": "user" }, { "content": "{\"answer\": [\"using a mixture of manual and semi-automatic techniques\"], \"evidence\": [\"Although techniques for constructing this knowledge base are outside the scope of this paper, we briefly mention them. Tables were constructed using a mixture of manual and semi-automatic techniques. First, the table schemas were manually defined based on the syllabus, study guides, and training questions. Tables were then populated both manually and semi-automatically using IKE BIBREF29 , a table-building tool that performs interactive, bootstrapped relation extraction over a corpus of science text. In addition, to augment these tables with the broad knowledge present in study guides that doesn\\u0027t always fit the manually defined table schemas, we ran an Open IE BIBREF30 pattern-based subject-verb-object (SVO) extractor from BIBREF31 clark2014:akbc over several science texts to populate three-column Open IE tables. 
Methods for further automating table construction are under development.\"]}", "role": "assistant" } ]
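TableILP's "best support" search described above is an integer linear program over links between question constituents, table cells, and answer options. A toy version of that selection, sketched with PuLP; the options, cells, scores, and constraints are invented for illustration (the real system uses a much richer constraint language and the SCIP solver):

```python
from pulp import LpBinary, LpMaximize, LpProblem, LpVariable, lpSum, value

# Made-up similarity scores between answer options and table cells, standing
# in for the entailment/lexical-match scores described in the paper.
options = ["June", "March"]
cells = ["(North, Summer Solstice, June)", "(United States, Northern)"]
score = {("June", cells[0]): 0.9, ("June", cells[1]): 0.2,
         ("March", cells[0]): 0.1, ("March", cells[1]): 0.1}

pick = {o: LpVariable(f"pick_{o}", cat=LpBinary) for o in options}
use = {(o, c): LpVariable(f"use_{i}_{j}", cat=LpBinary)
       for i, o in enumerate(options) for j, c in enumerate(cells)}

prob = LpProblem("support_graph", LpMaximize)
prob += lpSum(score[o, c] * use[o, c] for o in options for c in cells)
prob += lpSum(pick.values()) == 1                 # exactly one answer option
for o in options:
    for c in cells:
        prob += use[o, c] <= pick[o]              # edges only to the chosen option

prob.solve()
print([o for o in options if value(pick[o]) == 1])  # expected: ['June']
```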
science.cdr_ner
science.cdr_ner.297
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Chemical\", \"Disease\"].\n\nPlease return the output as a JSON object of the format: {\"Chemical\": [\"Naloxone\", ...], \"Disease\": [\"Hypotensive\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nTremor side effects of salbutamol, quantified by a laser pointer technique. OBJECTIVE: To study tremor side effects of salbutamol an easily applicable, quick and low-priced method is needed. A new method using a commercially available, pen-shaped laser pointer was developed. Aim of the study was to determine sensitivity, reproducibility, reference values and the agreement with a questionnaire. METHODS: Tremor was measured using a laser pointer technique. To determine sensitivity we assessed tremor in 44 patients with obstructive lung disease after administration of cumulative doses of salbutamol. Subjects were asked to aim at the centre of a target, subdivided in concentric circles, from 5 m distance. The circle in which the participant succeeded to aim was recorded in millimetres radius. In another series of measurements, reproducibility and reference values of the tremor was assessed in 65 healthy subjects in three sessions, at 9 a.m., 4 p.m. and 9 a.m., respectively, 1 week later. Postural tremor was measured with the arm horizontally outstretched rest tremor with the arm supported by an armrest and finally tremor was measured after holding a 2-kg weight until exhaustion. Inter-observer variability was measured in a series of 10 healthy subjects. Tremor was measured simultaneously by two independent observers. RESULTS: Salbutamol significantly increased tremor severity in patients in a dose-dependent way. Within healthy adults no age-dependency could be found (b = 0.262 mm/year; P = 0.72). There was no agreement between the questionnaire and tremor severity (r = 0.093; P = 0.53). Postural tremor showed no significant difference between the first and third session (P = 0.07). Support of the arm decreased tremor severity, exhaustion increased tremor severity significantly. A good agreement was found between two independent observers (interclass correlation coefficient 0.72). DISCUSSION: Quantifying tremor by using an inexpensive laser pointer is, with the exception of children (<12 years) a sensitive and reproducible method.", "role": "user" }, { "content": "{\"Chemical\": [\"Salbutamol\", \"salbutamol\"], \"Disease\": [\"Tremor\", \"obstructive lung disease\", \"tremor\"]}", "role": "assistant" } ]
science.ddi_ner
science.ddi_ner.415
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"UnapprovedDrug\", \"Brand\", \"DrugGroup\", \"Drug\"].\n\nPlease return the output as a JSON object of the format: {\"Brand\": [\"Aspirin\", ...], \"DrugGroup\": [\"Salicylate\", ...], \"Drug\": [\"Phenobarbital\", ...], \"UnapprovedDrug\": [\"Heroin\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nMay interact with thyroid medication (e.g., levothyroxine), iodine-containing products, antacids, H2-antagonists (e.g., famotidine, ranitidine), and proton pump inhibitors (e.g., lansoprazole, omeprazole). This product can affect the results of certain lab tests.", "role": "user" }, { "content": "{\"Brand\": [], \"Drug\": [\"omeprazole\", \"lansoprazole\", \"famotidine\", \"iodine\", \"ranitidine\", \"levothyroxine\"], \"DrugGroup\": [\"proton pump inhibitors\", \"H2-antagonists\", \"antacids\"], \"UnapprovedDrug\": []}", "role": "assistant" } ]
science.chemprot_ner
science.chemprot_ner.19
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Chemical\", \"Protein\"].\n\nPlease return the output as a JSON object of the format: {\"Chemical\" : [\"Dexamethasone\", ...], \"Protein\" : [\"BRCA-1\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nBeta1 adrenergic receptor-mediated enhancement of hippocampal CA3 network activity.\nNorepinephrine is an endogenous neurotransmitter distributed throughout the mammalian brain. In higher cortical structures such as the hippocampus, norepinephrine, via beta adrenergic receptor (AR) activation, has been shown to reinforce the cognitive processes of attention and memory. In this study, we investigated the effect of beta1AR activation on hippocampal cornu ammonis 3 (CA3) network activity. AR expression was first determined using immunocytochemistry with antibodies against beta1ARs, which were found to be exceptionally dense in hippocampal CA3 pyramidal neurons. CA3 network activity was then examined in vitro using field potential recordings in rat brain slices. The selective betaAR agonist isoproterenol caused an enhancement of hippocampal CA3 network activity, as measured by an increase in frequency of spontaneous burst discharges recorded in the CA3 region. In the presence of alphaAR blockade, concentration-response curves for isoproterenol, norepinephrine, and epinephrine suggested that a beta1AR was involved in this response, and the rank order of potency was isoproterenol > norepinephrine = epinephrine. Finally, equilibrium dissociation constants (pK(b)) of subtype-selective betaAR antagonists were functionally determined to characterize the AR subtype modulating hippocampal CA3 activity. The selective beta1AR antagonists atenolol and metoprolol blocked isoproterenol-induced enhancement, with apparent K(b) values of 85 +/- 36 and 3.9 +/- 1.7 nM, respectively. In contrast, the selective beta2AR antagonists ICI-118,551 and butoxamine inhibited isoproterenol-mediated enhancement with apparent low affinities (K(b) of 222 +/- 61 and 9268 +/- 512 nM, respectively). Together, this pharmacological profile of subtype-selective betaAR antagonists indicates that in this model, beta1AR activation is responsible for the enhanced hippocampal CA3 network activity initiated by isoproterenol.", "role": "user" }, { "content": "{\"Chemical\": [\"isoproterenol\", \"ICI-118,551\", \"epinephrine\", \"metoprolol\", \"norepinephrine\", \"butoxamine\", \"Norepinephrine\", \"atenolol\"], \"Protein\": [\"alphaAR\", \"AR\", \"beta1ARs\", \"beta2AR\", \"Beta1 adrenergic receptor\", \"beta adrenergic receptor\", \"betaAR\", \"beta1AR\"]}", "role": "assistant" } ]
science.mslr2022_ms2_multidoc_summarization
science.mslr2022_ms2_multidoc_summarization.13597
[ { "content": "You will be presented with a collection of medical randomized controlled trial (RCT) reports addressing a shared clinical question -- for instance, the efficacy of a medical intervention on a disease or condition. Given the titles and abstracts of these reports, your task is to generate the conclusion of the systematic review article synthesizing the key findings. Your conclusion should be a few sentences long. In your response, include only the conclusion and no other text. The RCT reports are below.\n\n\n\nTitle 1:\nExperience corps: Design of an intergenerational program to boost social capital and promote the health of an aging society\nAbstract 1:\nPopulation aging portends a crisis of re sources and values . Desired solutions could include intergenerational strategies to harness the untapped potential of older adultsto address societal needs and to generate health improvements for older adults . Despite the desire of many older adults to remain socially engaged and productive , the creation of productive roles has lagged . This report describes the conceptual framework and major design features of a new model of health promotion for older adults called Experience Corps ® . Experience Corps operates at , and leads to benefits , across multiple levels , including individuals , schools , and the larger community . At the individual level , we propose a model based on Erikson ’s concept of generativity to explain bow and why experience Corps works . At the level of schools , we propose a parallel model based on social capital . Experience Corps is a volunteer service program design ed to improve the lives of urban childre and to yield health improvement for older persons . It illustrates how population aging creates new opportunities to address difficult social problems . This article explores how the linkage of concepts at multiple levels motivates a potentially cost-effective , feasible , and high-impact program\n\n\nTitle 2:\nA social model for health promotion for an aging population: Initial evidence on the experience corps model\nAbstract 2:\nThis report evaluates whether a program for older volunteers , design ed for both benerativity and health promotion , leads to short-term improvements inmultiple behavioral risk factors and positive effects on intermediary risk factors for disability and other morbidities . The Experience Corps ® places older volunteers in public elementary schools in roles design ed to meet schools ’ needs and increase the social , physical , and cognitive activity of the volunteers . This article reports on a pilot r and omized trial in Baltimore , Maryl and . The 128 volunteers were 60–86 years old ; 95 % were African American . At follow-up of 4–8 months , physical activity , strength , people one could turn to for help , and cognitive activity increased significantly , and walking speed decreased significantly less , in participants compared to controls . 
In this pilot trial , physical , cognitive , and social activity increased , suggesting the potential for the Experience Corps to improve health for an aging population and simultaneously improve educational outcomes for children\n\n\nTitle 3:\nVolunteer transitions among older adults: the role of human, social, and cultural capital in later life.\nAbstract 3:\nOBJECTIVES We aim to underst and how human , social , and cultural capitals are associated with the volunteer process , that is , engagement ( starting ) , intensity ( number of hours ) , and cessation ( stopping ) , among older adults . METHOD Data from the 2000 through 2008 Health and Retirement Study and the 2001 through 2009 Consumption and Activity Mail Survey provide a sample of 4,526 respondents . R and om-effects pooled time series analyses incorporate not only the presence of various types of capital but also the quality of that capital . RESULTS Human and cultural capitals were positively associated with increased volunteer involvement . Effects of social capital ( relationships in the family , employment status , and the community ) depended on the quality of the relationships , not necessarily on their presence alone . DISCUSSION Results suggest that bolstering older adults ' capitals , particularly among lower socioeconomic status groups , can increase volunteer engagement and intensity and reduce cessation . Additionally , a variety of organizational policies including respite programs for caregivers and employer policies allowing employees to reduce their work hours might indirectly affect participation rates and commitment . Potential pools of volunteers exist in families , workplaces , and religious organizations , but more research is necessary to identify how to recruit and retain individuals in social networks where volunteer participatory rates are low\n\n\nTitle 4:\nEffects of a peer-led senior health education program.\nAbstract 4:\nThe effect of a health education course guided by peers aged 55 and over was evaluated . The aim of the course was to empower older adults to participate in society and to promote their wellbeing . Evaluation included determining the effect on attitude toward aging , self-efficacy , perception of the societal opinion regarding the place of the elderly in society ( social influence ) , social participation , perceived social support , and wellbeing of the participants aged 55 to 79 years . A quasi-experimental approach was used . The effect on the experimental group of course participants was studied compared to a control group of older adults on the waiting list . The respondents filled out postal question naires at three time points ( before starting the course ( t0 ) , immediately after termination ( t1 ) and three months later ( t2 ) ) . Using a multivariate analysis procedure , a significant interaction effect between time of measurement and group membership was found with respect to the outcome of social influence . At t1 an effect was absent , but at t2 , the current idea that elderly occupy a marginal position in society , found less favour with the experimental group than the control group . Moreover perceived social support and subjective health improved significantly at t1 and t2 among the course members , when compared to the control group . 
No effect was found on attitude , self-efficacy , social participation and wellbeing in the short time span of a three months follow-up\n\n\nTitle 5:\nExploring the effects of an \"everyday\" activity program on executive function and memory in older adults: Experience Corps.\nAbstract 5:\nPURPOSE There is little empirical translation of multimodal cognitive activity programs in \" real-world \" community-based setting s. This study sought to demonstrate in a short-term pilot r and omized trial that such an activity program improves components of cognition critical to independent function among sedentary older adults at greatest risk . DESIGN AND METHODS We r and omized 149 older adults to Experience Corps ( EC ) or a wait-list control arm . Participants r and omized to EC trained in teams to help elementary school children with reading achievement , library support , and classroom behavior for 15 hr/week during an academic year . We compared baseline and follow-up assessment s of memory , executive function ( EF ) , and psychomotor speed at 4 to 8 months by intervention arm , adjusting for exposure duration . We observed a range of EF abilities at baseline and stratified analyses according to the presence of baseline impairment using established norms . RESULTS Overall , EC participants tended to show improvements in EF and memory relative to matched controls ( ps < .10 ) . EC participants with impaired baseline EF showed the greatest improvements , between 44 % and 51 % in EF and memory at follow-up , compared to declines among impaired-EF controls ( ps < .05 ) . IMPLICATION S Short-term participation in this community-based program design ed to increase cognitive and physical activity in a social , real-world setting may train memory and , particularly , executive functions important to functional independence . This community-based program represents one potentially effective model to bring high doses of sustainable cognitive exercise to the greatest proportion of older adults , particularly those sedentary individuals at elevated risk for health disparities", "role": "user" }, { "content": "Interventions providing explicit roles and using supportive group structures were somewhat effective in improving one or more of the following : life satisfaction , social support and activity , physical health and activity , functional health , and cognition .\nCONCLUSIONS Social role interventions may improve health and well-being for people in retirement transition .", "role": "assistant" } ]
science.scientific_papers_summarization_single_doc_pubmed
science.scientific_papers_summarization_single_doc_pubmed.61563
[ { "content": "You will be presented with the full text of a biomedical research paper. Please write an abstract for this paper. Your response should only include the abstract and no additional text.\n\nPaper text:\nbut when asked for its general distinctive features , we would start a complicated enumeration , doomed to be incomplete .\nconsequently , there is to date not a single agreed - on definition of cellular life . instead , many contemporary researchers , being generally more humble than the pioneers of their disciplines , consider this question ill - posed and prefer not to address it .\nthis is understandable as intellectual honesty , but it is rather unsatisfactory in a more general perspective . leaving the central object of study undefined neglects its fundamental relevance and does not express sufficiently what biology is all about .\nif we do not want to adhere to the persistent metaphysical notion of a spirit that fundamentally distinguishes living from nonliving systems , we will have to deliver an answer to the question of when and how a transition between these two regimes actually occurs in order arrive at a basic definition of a cell .\nwe may remember the famous whler experiment , demonstrating that a molecule like urea , previously considered to belong to the sacred sphere of animate matter , could be synthesized in a flask ( whler , 1828 ) .\nthus , there certainly is nt any fundamental distinction to be made on the level of molecules . but regarding cells\n, we still do not have a strategy to escape the circular dictum of 19th century cell theory attributed to rudolf virchow that every cell derives from a cell ( omnis cellula e cellula ) .\npresumably , there was nt a cell right after the big bang , so where did the first one really come from ?\nwhat did the molecules on earth ( or anywhere else in the universe ) look like before life made its first appearance ? how did they self - assemble and self - organize into the first cell - like entity ? but do we even have to know this ? 
instead of taking a tedious historical approach to the origin of life , aiming to reconstruct the actual conditions and components of an early earth , we may instead try and derive its preconditions and driving forces from physical and chemical laws ( pross , 2012 ) .\nerwin schrdinger , the eminent physicist , made a very intriguing step in this direction ( schrdinger , 1944 ) by characterizing life as the generation of ordered structures far from equilibrium , using the continuous flow of ( light or chemical ) energy through them .\nshortly after , alan turing formulated the mathematical conditions for biological self - organization and morphogenesis , involving remarkably few molecular species and parameters ( turing , 1952 ) .\nso even if molecules and conditions have dramatically changed over the last billions of years , the same physical laws apply today , and may allow us to reconstitute another origin of life under laboratory conditions .\nin other words , a synthetic whler - like approach , not to biological molecules but to the cell as a whole .\nthis , if at all possible , will clearly require a huge cross - disciplinary effort , as major insights will have to come from chemistry as well as physics .\nthere is one modern branch of biology that raises some hopes that the old fundamental questions may be formulated in a new and more pragmatic way , interfacing better with the quantitative as well as technological disciplines .\nthis is the concept of synthetic biology , with its distinctive idea of dissecting biological systems into modular functional units , or \nbio - bricks , that can ideally be recombined to assemble existing , but also wholly new biological systems ( fig .\n1 ) . functional units , and in particular minimal functional units , may be much better defined than unknown ancient molecules whose exact composition could only be deduced from a postulated abundance of certain elements .\nin such a modular approach , the question would no longer be what the first cell and its environment actually looked like , but rather what elementary set of functional modules would have to be combined for the system to start showing features of life .\ntop - down versus bottom - up : strategies to arrive at the potential minimal living unit .\nwhile the top - down approach eliminates redundant genes from already living organisms , the bottom - up approach aims to jump - start cellular life from a fundamental system of functional modules , e.g. , the basic ingredients of ganti s chemoton .\n in fact , synthetic biology follows at least two different approaches toward such a minimal living unit .\nthe more prominent one , often referred to as the minimal genome approach , is motivated by the biotechnological quest for an ultimately efficient production organism . 
compared with known microorganisms\n, this idealized cell would contain the absolute minimum number of genes required for metabolism and replication in an optimal environment ( fig .\nthe underlying idea is that the genomes of even the smallest known organisms contain a substantial degree of redundancy , or other features imposed on them by coevolution with competitors , that could in principle be eliminated by genome engineering without compromising their principle viability .\nthe minimal genome has become a prominent entry in synthetic biology literature , after the venter group first formulated what the minimal genome of a free - living organism would have to look like ( although much smaller genomes of symbionts have been identified ) and termed it mycoplasma laboratorium .\nthey then fully synthesized and transplanted another small ( but not quite minimal ) mycoplasma mycoides genome into another mycoplasma species that actually thrived and replicated ( gibson et al . , 2010 ) .\nrecently , a similar transplantation was accomplished with a full yeast chromosome ( annaluru et al . , 2014 ) .\nthe minimal genome approach is arguably the best testable way to explore what functional modules can be taken away from an existing system for it to still remain alive .\nhowever , it is more than questionable whether , using this strategy , one will ever find out what the smallest possible configuration of a living system looks like .\nsay , a car and removing a module of recent innovation , e.g. , the car s electrical system .\nthe car may immediately stop its operation , even though it still contains the part essential for the working of a car , i.e. , a combustion engine .\nthus , a more conceptually rewarding , but also obviously more risky , synthetic biology approach would be to try and assemble a living system from scratch , combining its necessary and sufficient modules from the bottom - up ( fig . 1 ; szostak et al .\nif successful , this strategy would indeed result in the very transition that supposedly occurred for the first time 3.5 billion years ago : the transition from nonliving to living .\nhowever , in contrast to the minimal genome concept , which relies on systems that nature has already successfully established , this strategy ca nt rely on living or fossil examples , but instead requires a detailed master plan , or at least a convincing hypothesis about the minimal set of functional modules to \nalthough this short article is not the place for a detailed discussion of the definitions of life , we first need to determine what particular aspects and features need to be addressed first in a bottom - up reconstitution approach .\nwhere to start ? out of the many more or less sophisticated theories and models , i would like to introduce a particularly elegant one to be revisited in interdisciplinary enterprises , taking the modern physical , chemical , and biological insights into account .\n theory of the hungarian biochemist tibor gnti ( gnti , 2003 ) . in short ,\nganti described the cell as a chemical automaton characterized by three main features : ( 1 ) a self - reproducing chemical motor ( e.g. , metabolism ) , ( 2 ) a chemical information system ( e.g. , dna / rna ) , and ( 3 ) a chemical boundary system ( e.g. , membrane ) .\nthe first two elementary features of metabolism and information replication , and their interdependence , have been subjects of heated chicken - or - egg discussions on the origin of life in the genomic era . 
the third pillar of gnti s chemoton , the boundary system ,\napart from physical chemists and soft matter physicists , who have always been fascinated by the rich phenomenology of bilayers composed of amphiphiles in aqueous environments , the biological interest in membranes , the paradigmatic cell boundaries , arose mainly from their role of harboring channels , receptors , and energy converting machinery .\nprotocell , that the replicating membrane vesicle has entered the focus of fundamental considerations of how to emulate the necessary self - replication of boundaries , and how to couple this to information replication and evolution ( chen et al . , 2004 ) . in fact , simple fatty acid vesicles can be easily grown and transformed , and even spontaneously divide as a result of surface or volume growth , although their suitability as reliable containers for genetic material is limited . regarding the more biologically suitable phospholipid membranes , it has primarily been the technical advances of the last decade in creating useful models , such as giant unilamellar vesicles ( guvs)cell - sized , free - standing membranes perfectly adapted to observation by light microscopy that moved the fundamental task of compartment generation and replication forward on the agenda of bottom - up synthetic biology ( rasmussen et al . , 2008 ) .\nas outlined in the previous sections , defining a minimal compartment to be divided and replicated along with the genetic material contained within seems to be a relatively well - defined but crucial task toward the synthesis of a cell . in this last section , i will shortly illustrate my own recent work aiming for exactly that : a minimal model system to reconstitute controlled division of a cell - like compartment from the bottom - up .\ntrue to the synthetic biology concept of identifying modules of biological functionality , we have been searching for relatively simple but well - described mechanisms that could be mimicked in a first approach , and then reduced and conceptually further simplified after recognition of their most fundamental features .\nwe chose the cell division system of e. coli , an arguably simple but by no means minimal organism , that , however , supports the molecular biology and biochemistry for reconstitution experiments .\ne. coli is rod - shaped , grows mainly along its long axis , and divides symmetrically by assembling a contractile protein ring at its central plane .\nthe positioning of this ring is accomplished by a peculiar oscillation of regulatory proteins mincde between the cell poles ( raskin and de boer , 1999 ) .\nmost remarkably , this min protein system can be demonstrated to represent the smallest possible set of biochemical agents for initiating self - organization and pattern formation ( loose et al . , 2008 ) , requiring two proteins , atp as an energy source , and a membrane as a template to yield wave - like dynamic protein gradients on flat extended membranes .\nintroduced into cell - shaped compartments , the cellular oscillations could be reconstituted ( zieske and schwille , 2013 ) . 
moreover ,\nthe bottom - up experiments clearly demonstrated how time - averaged gradients act as spatial cues to direct downstream processes ( zieske and schwille , 2014 ) .\nspecifically , proto - ring filaments of ftsz , the primary constituent of the contractile z ring , could be exclusively positioned in the middle of the synthetic compartment ( fig .\nalthough this is only the first , and by no means the crucial step toward reconstituting division of a membrane compartment based on protein self - assembly and self - organization , this shows the advantage of the approach of dissecting a phenomenon into simple functional modules , which can in principle be derived from any organism or even designed from scratch . toward cell division in vitro . shown is a model of reconstitution of protein self - organization and gradient formation in artificial cell - shaped containers by minimal functional elements of the e. coli cell division machinery .\nreaction / diffusion induced oscillations of mincde proteins position ftsz protofilaments to the center of the compartment , mimicking the first step in z ring assembly . adapted from zieske and schwille ( 2014 ) .\nmodern cell biology has , despite its huge advances in the past 50 years , not yet arrived at a fundamental understanding of our main topic of research : the cell . specifically , we are still unable to pinpoint the line of division between the living and the nonliving\n. even less are we able to reproduce the transition between these domains , which are thus still considered separate .\nhowever , we know so much more than any generation before us about the exact features and functions of living systems .\nspeaking and thinking in terms of functional modules , as synthetic biology has lately taught us to do , the possibility of reconstituting and bringing together known and biologically established functionalities allows us to build with a cellular lego toolkit with unprecedented efficiency .\nthus , the time may be just right to once again consider the possibility of constructing not only with and within biological systems , but to extend the synthetic approach to the very construction of biological systems from nonbiological ones , as accomplished at least once by nature , billions of years ago .\nit goes without saying that this enterprise is not a genuinely biological one , because biology is only what results from it .\nbut it will require the specific expertise and insights of biologists , and particularly cell biologists , into the many essential functional elements that are at the core of life as we know it .", "role": "user" }, { "content": "what is life and how could it originate ? \n this question lies at the core of understanding the cell as the smallest living unit . \n although we are witnessing a golden era of the life sciences , we are ironically still far from giving a convincing answer to this question . in this short article , \n i argue why synthetic biology in conjunction with the quantitative sciences may provide us with new concepts and tools to address it .", "role": "assistant" } ]
science.covid_deepset_qa
science.covid_deepset_qa.4
[ { "content": "You will be shown a paragraph from a biomedical research paper, followed by a question about the paper. Please extract text from the paragraph to answer the question. Do not include any text in your response other than the answer.\n\nParagraph: A better understanding of the mechanisms acting at the maternal-fetal interface is crucial for the design of alternative interventions to antiretroviral therapy for transmission prevention. Dendritic cell-specific ICAM-grabbing non-integrin-related (DC-SIGNR, also known as CD209L or liver/lymph node-specific ICAM-grabbing non-integrin (L-SIGN)) can interact with a plethora of pathogens including HIV-1 and is expressed in placental capillary endothelial cells [2] . DC-SIGNR is organized in three distinct domains, an N-terminal cytoplasmic tail, a repeat region containing seven repeat of 23 amino acids and a C-terminal domain implicated in pathogen binding. Alternative splicing of DC-SIGNR gene leads to the production of a highly diversify isoforms repertoire which includes membrane-bound and soluble isoforms [3] . It has been proposed that interaction between DC-SIGNR and HIV-1 might enhance viral transfer to other susceptible cell types [2] but DC-SIGNR can also internalize and mediate proteasome-dependant degradation of viruses [4] that may differently affect the outcome of infection. Given the presence of DC-SIGNR at the maternal-fetal interface and its interaction with HIV-1, we hypothesized that it could influence MTCT of HIV-1. To investigate the potential role of DC-SIGNR in MTCT of HIV-1, we carried out a genetic association study of DC-SIGNR in a well-characterized cohort of HIV-infected mothers and their infants recruited in Zimbabwe, and identified specific DC-SIGNR variants associated with increased risks of HIV transmission. We further characterized the functional impact of these genetic variants on DC-SIGNR expression and show that they affect both the level and type of DC-SIGNR transcripts produced in the placenta. Samples consisted of stored DNA extracts obtained from 197 mother-child pairs co-enrolled immediately postpartum in the ZVITAMBO Vitamin A supplementation trial (Harare, Zimbabwe) and followed at 6 weeks, and 3-monthly intervals up to 24 months. The ZVITAMBO project was a randomized placebocontrolled clinical trial that enrolled 14,110 mother-child pairs, between November 1997 and January 2000, with the main objective of investigating the impact of immediate postpartum vitamin A supplementation on MTCT of HIV-1. The samples used in the present study were from mother-child pairs randomly assigned to the placebo group of the ZVITAMBO project. Antiretroviral prophylaxis for HIV-1-positive antenatal women was not available in the Harare public-sector during ZVITAMBO patient recruitment. The samples were consecutively drawn from two groups: 97 HIV-1-positive mother/HIV-1-positive child pairs and 100 HIV-1-positive mother/HIV-negative child pairs. Mother's serological status was determined by ELISA and confirmed by Western Blot. Infants were considered to be infected if they were HIV-1 seropositive at 18 months or older and had two or more positive HIV-1-DNA polymerase chain reaction (PCR) results at earlier ages. 100 infants were considered to be uninfected as they were ELISA negative at 18 months or older and had two DNA PCR negative results from samples collected at a younger age. 
Of the 97 HIV-1-infected infants, 57 were infected IU, 11 were infected IP, and 17 were infected PP as determined by PCR analyses of blood samples collected at birth, 6 weeks, 3 and 6 months of age and according to the following definitions adapted from Bryson and colleagues [5] . Briefly, infants who were DNA PCR positive at birth were infected IU. Infants with negative PCR results from sample obtained at birth but who become positive by 6 weeks of age were infected IP. Infants with negative PCR results at birth and 6 weeks of age but who subsequently became DNA PCR positive were considered to be infected during the PP period. In the analysis comparing the 3 different modes of MTCT, 12 HIV-1-infected infants were excluded because the PCR results were not available at 6 weeks of age. Full methods for recruitment, baseline characteristics collection, laboratory procedures have been described elsewhere [6] . The nucleotide sequence variation of the entire promoter, coding and part of 39-UTR regions of DC-SIGNR gene in the study population was determined previously [7] . Haplotype reconstruction was performed using Bayesian statistical method implemented in PHASE [8] , version 2.1.1, using single nucleotide polymorphism (SNP) with a minimum allele frequency (MAF) of 2%. We applied the algorithm five times, using different randomly generated seeds, and consistent results were obtained across runs ( Figure 1 ). Fifteen haplotype-tagged SNPs (htSNPs) were identified by the HaploBlockFinder software [9] with a MAF $5%. These htSNPs were genotyped in the 197 infants by direct PCR sequencing analysis as we have described previously [7] . The DC-SIGNR exon 4 repeat region genotype was determined by PCR amplification followed by migration in 1.5% agarose gels [10] . DNA sequences in the promoter region were analysed with the TESS interface (http//:www.cbil.upenn.edu/tess) for putative transcription factors binding sites using the TRANSFAC database. Luciferase reporter assays using pGL2-Basic vector were performed in order to investigate the functional effect of mutations on DC-SIGNR promoter activity. Genomic DNA from subjects homozygous for the promoter variants and WT was amplified from nucleotide position 2715 to 21 and cloned between the BglII and HindIII multiple cloning sites in the pGL2-Basic vector which harbours a reporter firefly luciferase gene downstream (Invitrogen Canada inc, Burlington, Canada). All recombinants clones were verified by DNA sequencing. The firefly luciferase test reporter vector was co-transfected at a ratio of 10:1 with the constitutive expressor of Renilla luciferase, phRL-CMV (Promega, Madison, WI, USA). We cultured HeLa cells in 6 wells plates (2610 5 cells) and transfected them the following day using lipofectamine (Invitrogen) according to the manufacturer. Cells were lysed and luciferase assays were performed using 20 mg of protein extract according to the manufacturer (Promega) at 44 h post-transfection. Firefly luciferase activity was normalized to Renilla luciferase activity. 0 mg, 0,5 mg or 1 mg CMV-Tat vector was transfected with LTR-Luc as a positive control in these experiments. We carried out lucierase assays in triplicate in three independent experiments. Results are expressed as mean6 standard error of the mean (S.E.M). First-term placental tissues were obtained from abortions following voluntary interruption of pregnancy at CHUM Hôpital Saint-Luc (Montreal, Canada). 
Tissues from 3 H1 (associated with MTCT of HIV-1) and 3 H15 (wild-type) homozygous haplotypes were used to analyse possible differences in isoform expression. Total placental RNAs were extracted by MasterPure DNA and RNA Extraction Kit (Epicentre Biotechnologies, Madison, WI, USA) according to the manufacturer. Fragments corresponding to the DC-SIGNR coding region were reversed transcribed (RT) and then amplified by nested PCR with the following primers; RT primers RR, first PCR RF and RR and second PCR RcF and RcR according to Liu and colleagues [11] . 1 mg of total RNA was reverse transcribed with Expand RT (Roche Applied Science, Indianapolis, IN, USA) according to the manufacturer and were PCR-amplified with DNA Platinum Taq Polymerase (Invitrogen). Major PCR products from the second PCR reaction were gel extracted with the Qiagen Gel Extraction Kit (Qiagen Canada inc, Mississauga, ON, Canada) and cloned using the TOPO TA Cloning Kit for sequencing (Invitrogen). For each placenta, 15 different clones were randomly selected and amplified with M13 primers and sequenced with ABI PRISM 3100 capillary automated sequencer (Applied Biosystems, Foster City, CA, USA). Sequences were analysed and aligned with GeneBank reference sequence NM_014257 using Lasergene software (DNA Stars, Madison, WI, USA). Quantitative expression of DC-SIGNR isoforms 1,5 mg of placental RNA was reverse transcribed using 2.5 mM of Oligo dT 20 and Expand RT in 20 ml volume according to the manufacturer (Roche Applied Science). 15 ng of total cDNA in a final volume of 20 ml was used to perform quantitative real-time PCR using Universal Express SYBR GreenER qPCR Supermix (Invitrogen) on a Rotor Gene Realtime Rotary Analyser (Corbett Life Science, Sydney, Australia). Samples from 2 subjects in each group were used because RNA quality of others was not suitable for a qRT-PCR analysis. Amplification of all DC-SIGNR isoforms was performed using an exon 5 specific primer pair (Table S1 ). Membrane-bound isoforms were amplified using primers specific for exon 3, corresponding to the common trans-membrane domain of DC-SIGNR. Primers were targeted to the exon-exon junction and RNA extracts were treated with DNase (Fermantas International inc, Burlington, ON, Canada) to avoid amplification of contaminant DNA. Standard curves (50-500 000 copies per reaction) were generated using serial dilution of a full-length DC-SIGNR or commercial GAPDH (Invitrogen) plasmid DNA. All qPCR reactions had efficiencies ranging from 99% to 100%, even in the presence of 20 ng of non-specific nucleic acids, and therefore could be compared. The copy number of unknown samples was estimated by placing the measured PCR cycle number (crossing threshold) on the standard curve. To correct for differences in both RNA quality and quantity between samples, the expression levels of transcripts were normalised to the reference GAPDH gene transcripts. GAPDH primer sequences were kindly provided by A. Mes-Masson at the CHUM. The results are presented as target gene copy number per 10 5 copies of GAPDH. The ratio of membrane-bound isoforms was calculated as E3/E5. Soluble isoforms were calculated by subtracting membrane-bound from total isoforms. We carried out qPCR assays in triplicate in three independent experiments. Results are expressed as mean6S.E.M. Statistical analysis was performed using the GraphPad PRISM 5.0 for Windows (GraphPad Software inc, San Diego, CA, USA). 
Differences in baseline characteristics and genotypic frequencies of haplotypes or htSNPs were compared between groups using the x 2 analysis or Fisher's exact test. Logistic regression analysis was used to estimate odds ratios (OR) for each genotype and baseline risk factors. Multiple logistic regression was used to define independent predictors identified as significant in the crude analysis. ORs and 95% confidence interval were calculated with the exact method. Comparisons of continuous variables between groups were assessed with the unpaired two-tailed Student's t test when variables were normally distributed and with the Mann-Whitney U test when otherwise. Differences were considered significant at P,0.05. Written informed consent was obtained from all mothers who participated in the study and the ZVITAMBO trial and the investigation reported in this paper were approved by The \n\nWe carried out an association study of DC-SIGNR polymorphism in 197 infants born to untreated HIV-1-infected mothers recruited in Harare, Zimbabwe. Among them, 97 infants were HIV-1-infected and 100 infants remained uninfected. Of the 97 HIV-1-infected infants, 57 were infected IU, 11 were infected IP, and 17 were infected PP. Timing of infection was not determined for 12 HIV-1-infected infants. Baseline characteristics of mothers and infants are presented in Table 1 . Maternal age and CD4 cell count, child sex, mode of delivery, duration of membrane rupture and gestational age were similar among all groups. However, maternal viral load .29 000 copies/ml was associated with increased risk in both IU and PP with odds ratios (OR) of 3.64 (95% CI = 1.82-7.31, P = 0.0002) and 4.45 (95% CI = 1.50-13.2, P = 0.0045) for HIV-1 transmission, respectively. Fifteen haplotype-tagged SNPs (htSNPs) corresponding to the 15 major DC-SIGNR haplotypes ( Figure 1 ) described among Zimbabweans [7] were genotyped in our study samples (Tables S2 and S3 ). H1 (31%) and H3 (11%) were the most frequent haplotypes observed (Figure 1 ). Being homozygous for the H1 haplotype was associated with increased risk of both IU (OR: 4.42, P = 0.022) and PP (OR: 7.31, P = 0.016) HIV-1 transmission ( Table 2) . Infants harbouring two copy combinations of H1 and/ or H3 haplotypes (H1-H1, H1-H3 or H3-H3) had increased risk of IU (OR: 3.42, P = 0.007) and IP (OR: 5.71, P = 0.025) but not PP (P = 0.098) HIV-1 infection compared to infant noncarriers ( Table 2 ). The latter associations remained significant after adjustment was made for the maternal viral load for both IU (OR: 3.57, 95% CI = 1.30-9.82, P = 0.013) and IP (OR: 5.71, 95% CI = 1.40-23.3, P = 0.025) HIV-1 transmission. The H1 and H3 haplotypes share a cluster of mutations (p-198A, int2-391C, int2-180A, ex4RPT, int5+7C) ( Figure 1 ). Of these, the p-198A and int2-180A variants were significantly associated with MTCT of HIV-1 (Table S2 ).\n\nQuestion: What is DC-GENR and where is it expressed?", "role": "user" }, { "content": "Dendritic cell-specific ICAM-grabbing non-integrin-related (DC-SIGNR, also known as CD209L or liver/lymph node-specific ICAM-grabbing non-integrin (L-SIGN)) can interact with a plethora of pathogens including HIV-1 and is expressed in placental capillary endothelial cells", "role": "assistant" } ]
science.qasa_abstractive_qa
science.qasa_abstractive_qa.1186
[ { "content": "You will be shown an excerpt from a computer science research paper, followed by a question about the paper. Please answer the question given the context. Do not include any text in your response other than the answer.\n\nContext: Finally, for graph property prediction, there are various ways to aggregate node-level representations into a graph representation, such as by taking the average or sum. Alternatively, one can use the embedding of a virtual [CLS] node (Jain et al., 2021) that is attached to the input graph without any connectivity to other nodes. We compare these approaches in Section 5.\n\nQuestion: Does graph property prediction task create one representation corresponding to the graph?", "role": "user" }, { "content": "Yes, in general, a graph representation is created by aggregating node representations to predict graph property. The reason is that many existing works for graph property prediction aggregate node representations into a graph representation.", "role": "assistant" } ]
science.chemdner_ner
science.chemdner_ner.1411
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Chemical\"].\n\nPlease return the output as a JSON object of the format: {\"Chemical\": [\"Arsenic\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nTwo new diterpenoids from Excoecaria acerifolia. A new tigliane diterpenoid, acerifolin A (1), and a new isopimarane diterpenoid, acerifolin B (2), together with two known compounds, were isolated from Excoecaria acerifolia. Their structures were elucidated on the basis of their spectroscopic methods, including 1D and 2D NMR techniques. All of the compounds were evaluated for cytotoxicity against five human cancer cell lines with cisplantin as a positive control.", "role": "user" }, { "content": "{\"Chemical\": [\"tigliane\", \"acerifolin B\", \"diterpenoid\", \"diterpenoids\", \"isopimarane\", \"acerifolin A\", \"cisplantin\"]}", "role": "assistant" } ]
science.drug_combo_extraction_re
science.drug_combo_extraction_re.720
[ { "content": "You will be shown a paragraph from a biomedical research article. Your task is to extract all drug combinations appearing in a specific sentence within the paragraph. Each drug combination should be assigned to one of the following three categories:\n\n- POS: Indicates a positive interaction between the drugs in the combination; for instance, drugs that have an additive or synergistic effect.\n- NEG: Indicates a negative interaction between the drugs in the combination; for instance, combining these drugs might result in adverse side effects, or might reduce their efficacy.\n- COMB: Indicates that the drugs are given in combination, but the type of interaction between the drugs cannot be determined based on the provided context.\n\nPlease format your answer as a JSON array. Each entry in the array should itself be an array formatted as [<drug_1>, <drug_2>, ..., <drug_n>, <category>] where <category> is one of the three categories listed above and <drug_1>, ..., <drug_n> are drug names copied verbatim from the input sentence. Different combinations may involve different numbers of drugs; in other words, the `n` in <drug_n> may vary.\n\nAs an example, the following output describes two extracted drug combinations; one between two drugs, and one between three drugs:\n\n[[\"artesunate\", \"artemether\", \"COMB\"], [\"Nifedipine\", \"Labetalol\", \"Prazosin\", \"POS\"]]\n\nThe paragraph and specific sentence are below. You should only extract relationships expressed in the specific sentence; the paragraph is provided for context.\n\nIn your output, return only the JSON array and no other text. If no drug combinations appear, return an empty array `[]`.\n\nparagraph: Combined therapy : what does acamprosate and naltrexone combination tell us ? Relapse prevention treatment with both acamprosate and naltrexone has been shown to be efficacious in the treatment of alcoholism. Whereas both compounds act pharmacologically differently, there is up to now only limited evidence as to whether combined treatment is efficacious and pharmacologically safe. It remains to be answered whether data justify the combination of both drugs in clinical practice. ### methods Review of the three pre-clinical and four clinical studies that have been published to date on either combined tolerability or efficacy. ### results Data available up to now show no occurrence of severe adverse events during combined treatment. Diarrhoea and nausea were shown to be the most significant side-effects. Whereas pre-clinical studies regarding efficacy of combined treatment are not yet conclusive, clinical data show the superiority of combined treatment compared with both placebo and acamprosate monotherapy. The synergistic effect of combined treatment remained after 12 weeks of drug-free follow-up. ### conclusions The combination of acamprosate with naltrexone in a clinical sample seems to be efficacious and safe. Numerous alcohol dependent patients could benefit, particularly those that responded insufficiently on monotherapeutic treatment with either acamprosate or naltrexone.\n\nsentence: Combined therapy : what does acamprosate and naltrexone combination tell us ?", "role": "user" }, { "content": "[[\"acamprosate\", \"naltrexone\", \"POS\"]]", "role": "assistant" } ]
science.bioasq_factoid_qa
science.bioasq_factoid_qa.1062
[ { "content": "You will be shown biomedical passages and your task is to answer the provided question. Provide a concise and accurate response and do not include any additional text in the response other than the answer.\n\nQuestion: Which is the effect of the HP1a protein on chromatin?\n\nContext: Here we show that PIWI, an ARGONAUTE/PIWI protein family member that binds to Piwi-interacting RNAs (piRNAs), strongly and specifically interacts with heterochromatin protein 1a (HP1a), a central player in heterochromatic gene silencing.\nHeterochromatin protein 1 (HP1a) positively regulates euchromatic gene expression through RNA transcript association and interaction with hnRNPs in Drosophila.\nHeterochromatin Protein 1 (HP1a) is a well-known conserved protein involved in heterochromatin formation and gene silencing in different species including humans.\nHP1a is associated with transcripts of more than one hundred euchromatic genes.\nHeterochromatin protein 1 (HP1) proteins, recognized readers of the heterochromatin mark methylation of histone H3 lysine 9 (H3K9me), are important regulators of heterochromatin-mediated gene silencing and chromosome structure.\nheterochromatin protein 1a (HP1a).\nHP1a targets the Drosophila KDM4A demethylase to a subset of heterochromatic genes to regulate H3K36me3 levels.\nEnrichment of HP1a on Drosophila chromosome 4 genes creates an alternate chromatin structure critical for regulation in this heterochromatic domain.\nHP1a: a structural chromosomal protein regulating transcription.\nHeterochromatin protein 1 (HP1a in Drosophila) is a conserved eukaryotic chromosomal protein that is prominently associated with pericentric heterochromatin and mediates the concomitant gene silencing.\nHeterochromatin-associated interactions of Drosophila HP1a with dADD1, HIPP1, and repetitive RNAs.\nHeterochromatin protein 1 (HP1a) has conserved roles in gene silencing and heterochromatin and is also implicated in transcription, DNA replication, and repair.\nMaintenance of Heterochromatin by the Large Subunit of the CAF-1 Replication-Coupled Histone Chaperone Requires Its Interaction with HP1a Through a Conserved Motif.\nThe chromatin assembly factor 1 (CAF-1) is involved in the assembly of H3-H4 histone dimers on newly synthesized DNA and in the maintenance of a higher order structure, the heterochromatin, through an interaction of its large subunit with the heterochromatin protein HP1a\nAs expected, active gene promoters are mostly not present in LADs, HP1a and Pc domains.\nFinally, we find that HP1a also specifies initiation sites of nuclear envelope reassembly on undamaged chromatin.", "role": "user" }, { "content": "heterochromatic gene silencing", "role": "assistant" } ]
science.scientific_papers_summarization_single_doc_arxiv
science.scientific_papers_summarization_single_doc_arxiv.92791
[ { "content": "Write an abstract for the research paper below. Your response should only include the abstract and no additional text.\n\nArticle text:\nplanets that transit their stars offer us the opportunity to study the physics of planetary atmospheres and interiors , which may help constrain theories of planet formation . from the photometric light curve\n, we can measure the planetary radius and also the orbital inclination , which when combined with radial velocity ( rv ) observations , allows us to measure the mass and density of the planet .\nall of the known transiting planets orbit so close to their parent stars that the stellar flux plays a major role in heating these planets . in contrast , the detection of transiting planets with longer periods ( @xmath2 ) and consequently lower equilibrium temperatures would allow us to probe a completely different regime of stellar insolation , one more like that of jupiter and saturn whose energy budgets are dominated by their internal heat . because of its long mission lifetime ( @xmath3 ) , continuous observations , and large number of target stars ( @xmath4 ) , the _ kepler _ satellite ( borucki et al .\n2004 ; basri et al .\n2005 ) has a unique opportunity to discover long - period transiting systems .\nnot only will _ kepler _ observe multiple transits of planets with periods up to the mission lifetime , but it is also likely to observe single transits of planets with periods longer than the mission lifetime . as the period of a system increases beyond @xmath5 , the probability of observing more than one transit decreases until , for periods longer than @xmath6 , only one transit will ever be observed . for periods longer than @xmath6 , the probability of seeing a single transit diminishes as @xmath7 .\neven with only a single transit observation from _ kepler _ , these long - period planets are invaluable .\nfigure [ f1 ] gives the total number of systems that _ kepler _ should observe that exhibit exactly one and exactly two transits during the mission lifetime of 3.5 years .\nthe probability of observing a transit during the mission has been convolved with the observed fraction of planets as a function of semimajor axis for @xmath8 and multiplied by the number of stars _ kepler _ will observe ( 100,000 ) to give the number of single transit systems and two transit systems that can be expected . above @xmath9 the observed sample of planets as a function of semi - major axis\nis expected to be incomplete .\nwe have extrapolated the number of expected single transit systems to larger semi - major axes by convolving the single transit probability distribution with the extrapolation of the fraction of planets as a function of semi - major axis by cumming et al .\n( 2008 ) that is constant in dn / dloga .\nthe total expected numbers are 4.0 one transit events ( 5.7 with the extrapolation ) and 5.6 two transit events .\n, width=384 ] is the time of transit center , @xmath10 is the duration of the full - width half - maximum depth of the transit , @xmath11 is duration of ingress or egress , and @xmath12 is the fractional depth of the transit .\n@xmath13@xmath14 are the points of contact .\n@xmath15 is the out - of - transit flux , and @xmath16 is the total duration of the observations .\n[ f2],width=384 ]\nwe derive exact expressions for the uncertainties in @xmath17 , @xmath10 , @xmath11 , and @xmath12 , by applying the fisher matrix formalism to the simple light curve model shown in fig .\n[ f2 ] ( gould 2003 ) . 
for the long - period transiting planets observed by _ kepler _ , @xmath18 ( the total duration of the observations ) ,\nso the largest uncertainty is for @xmath11 : @xmath19 here @xmath20 is approximately equal to the total signal - to - noise ratio of the transit , @xmath21 where @xmath22 is the photon collection rate . for a jupiter - sized planet with period equal to the mission lifetime and\ntransiting a @xmath23 solar - type star , @xmath24 , whereas for a neptune - sized planet , @xmath25 . assuming the planet is in a circular orbit , and the stellar density , @xmath26 , is known from spectroscopy , seager & malln - ornelas ( 2003 ) showed that the planet s period , @xmath27 , can be derived from a single observed transit , @xmath28 where @xmath29 is the velocity of the planet at the time of transit .\nfigure [ f6 ] shows contours of constant fractional uncertainty in the period as a function of @xmath27 and the planet radius , @xmath30 .\nthe model assumes a solar - type star with @xmath31 , an impact parameter b=0.2 , and a 10% uncertainty in @xmath26 .\nthe dashed line indicates how the fractional uncertainty in @xmath27 changes with @xmath32 ; it shows the 0.20 contour for @xmath33 and @xmath34 .\nthe diagonal dotted line shows the boundary below which the assumptions of the fisher matrix break down ( the ingress / egress time @xmath11 is roughly equal to the sampling rate of @xmath35 1 observation/30 mins ) .\nthe vertical dotted line shows the mission lifetime of _ kepler _ ( @xmath363.5 yrs ) .\nsolar system planets are indicated .\n: @xmath37 . *\nright * : contours of constant fractional uncertainty in @xmath38 : @xmath39 .\n[ f6],title=\"fig:\",width=254 ] : @xmath37 . *\nright * : contours of constant fractional uncertainty in @xmath38 : @xmath39 .\n[ f6],title=\"fig:\",width=254 ] where @xmath41 , and @xmath42 is the stellar radial velocity semi - amplitude and we assume @xmath43 .\nthe mass of the planet , @xmath38 , can be determined from the observables , @xmath44 , @xmath45 , @xmath10 , @xmath11 , and @xmath12 : figure [ f6 ] shows contours of constant uncertainty in @xmath38 as a function of @xmath38 and @xmath27 .\nthe model assumes a 10% uncertainty in @xmath44 .\nthe solid lines show the result for 20 radial velocity measurements with precision of 10 m s@xmath47 taken over a period of 3 months after the transit .\nthe dashed line shows the 0.50 contour for 40 observations taken over 6 months .\nthe dotted line indicates the mission lifetime of 3.5 years .\nthe positions of jupiter and saturn are indicated .\nif the eccentricity @xmath48 is non - zero , there is not a unique solution for @xmath27 and @xmath38 , however , we can still constrain them by adopting priors on @xmath48 and the argument of periastron @xmath49 .\nusing @xmath50 gives a range of inferred periods of 0.42.5 relative to the assumption of a circular orbit .\nthe range of inferred masses is 0.51.7 , which is _ smaller _ than the contribution from the uncertainty in @xmath45 . in this paper\n, we demonstrated that it will be possible to detect and characterize long - period planets using observations of single transits by the _ kepler _\nsatellite , combined with precise radial velocity measurements taken immediately after the transit .\nthese results can also be generalized to any transiting planet survey , and it may be particularly interesting to apply them to the _ convection , rotation & planetary transits ( corot ) _ mission ( baglin 2003 ) . 
additionally , estimates of the period of a transiting planet could allow the _ kepler _ mission to make targeted increases in the sampling rate at the time of a second transit occurring during the mission . , w. , koch , d. , boss , a. , dunham , e. , dupree , a. , geary , j. , gilliland , r. , howell , s. , jenkins , j. , kondo , y. , latham , d. , lissauer , j. , & reitsema , h. 2004 , in esa special publication , vol .\n538 , stellar structure and habitable planet finding , ed . f. favata , s. aigrain , & a. wilson , 177182", "role": "user" }, { "content": "_ kepler _ will monitor enough stars that it is likely to detect single transits of planets with periods longer than the mission lifetime . \n we show that by combining the _ kepler _ photometry of such transits with precise radial velocity ( rv ) observations taken over @xmath03 months , and assuming circular orbits , it is possible to estimate the periods of these transiting planets to better than 20% ( for planets with radii greater than that of neptune ) and the masses to within a factor of 2 ( for planet masses @xmath1 ) . \n we also explore the effects of eccentricity on our estimates of these uncertainties .", "role": "assistant" } ]
science.medmentions_ner
science.medmentions_ner.1455
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Finding\", \"Bacterium\", \"ProfessionalOrOccupationalGroup\", \"InjuryOrPoisoning\", \"Eukaryote\", \"BodySystem\", \"Virus\", \"ResearchActivity\", \"BodySubstance\", \"PopulationGroup\", \"Chemical\", \"AnatomicalStructure\", \"SpatialConcept\", \"HealthCareActivity\", \"Organization\", \"BiomedicalOccupationOrDiscipline\", \"Food\", \"ClinicalAttribute\", \"MedicalDevice\", \"BiologicFunction\", \"IntellectualProduct\"].\n\nPlease return the output as a JSON object of the format: {\"Virus\": [\"HIV\", ...], \"Bacterium\": [\"MRSA\", ...], \"AnatomicalStructure\": [\"Lung\", ...], \"BodySystem\": [\"CNS\", ...], \"BodySubstance\": [\"Serum\", ...], \"Finding\": [\"Headache\", ...], \"InjuryOrPoisoning\": [\"Fracture\", ...], \"BiologicFunction\": [\"Death\", ...], \"HealthCareActivity\": [\"Biopsy\", ...], \"ResearchActivity\": [\"Clinical trial\", ...], \"MedicalDevice\": [\"Lenses\", ...], \"SpatialConcept\": [\"Camps\", ...], \"BiomedicalOccupationOrDiscipline\": [\"Forensic medicine\", ...], \"Organization\": [\"WHO\", ...], \"ProfessionalOrOccupationalGroup\": [\"Provider\", ...], \"PopulationGroup\": [\"Swimmers\", ...], \"Chemical\": [\"Gold\", ...], \"Food\": [\"Rice\", ...], \"IntellectualProduct\": [\"RPAM\", ...], \"ClinicalAttribute\": [\"Biomarker\", ...], \"Eukaryote\": [\"Dogs\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nClinical Effectiveness of Different Polishing Systems and Self-Etch Adhesives in Class V Composite Resin Restorations: Two- Year Randomized Controlled Clinical Trial The aim of this randomized controlled clinical trial was to compare the clinical effectiveness of different polishing systems and self-etch adhesives in class V composite resin restorations. A total of 164 noncarious cervical lesions (NCCLs) from 35 patients were randomly allocated to one of four experimental groups, each of which used a combination of polishing systems and adhesives. The two polishing systems used were Sof-Lex XT (Sof), a multistep abrasive disc, and Enhance/Pogo (EP), a simplified abrasive-impregnated rubber instrument. The adhesive systems were Clearfil SE bond (CS), a two-step self-etch adhesive, and Xeno V (XE), a one-step self-etch adhesive. All NCCLs were restored with light-cured microhybrid resin composites (Z250). Restorations were evaluated at baseline and at 6, 12, 18, and 24 months by two blinded independent examiners using modified FDI criteria. The Fisher exact test and generalized estimating equation analysis considering repeated measurements were performed to compare the outcomes between the polishing systems and adhesives. Three restorations were dislodged: two in CS / Sof and one in CS / EP. None of the restorations required any repair or retreatment except those showing retention loss. Sof was superior to EP with regard to surface luster, staining, and marginal adaptation (p<0.05). CS and XE did not show differences in any criteria (p>0.05). Sof is clinically superior to EP for polishing performance in class V composite resin restoration. 
XE demonstrates clinically equivalent bonding performance to CS.", "role": "user" }, { "content": "{\"AnatomicalStructure\": [], \"Bacterium\": [], \"BiologicFunction\": [\"noncarious cervical lesions\", \"NCCLs\"], \"BiomedicalOccupationOrDiscipline\": [], \"BodySubstance\": [], \"BodySystem\": [], \"Chemical\": [\"Polishing Systems\", \"EP\", \"simplified abrasive-impregnated rubber instrument\", \"Enhance/Pogo\", \"Clearfil SE bond\", \"CS\", \"Xeno V\", \"XE\", \"Sof\", \"multistep abrasive disc\", \"polishing systems\", \"Sof-Lex XT\"], \"ClinicalAttribute\": [\"retention\"], \"Eukaryote\": [], \"Finding\": [\"staining\"], \"Food\": [], \"HealthCareActivity\": [\"evaluated\", \"Restorations\", \"polishing\", \"bonding\", \"Composite Resin Restorations\", \"restorations\", \"marginal adaptation\", \"retreatment\", \"composite resin restoration\", \"repair\", \"composite resin restorations\"], \"InjuryOrPoisoning\": [], \"IntellectualProduct\": [\"Class V\", \"class V\", \"Fisher exact test\"], \"MedicalDevice\": [\"Z250\", \"light-cured microhybrid resin composites\"], \"Organization\": [], \"PopulationGroup\": [], \"ProfessionalOrOccupationalGroup\": [\"independent examiners\"], \"ResearchActivity\": [\"generalized estimating equation analysis\", \"experimental\", \"Randomized Controlled Clinical Trial\", \"blinded\", \"randomized controlled clinical trial\"], \"SpatialConcept\": [], \"Virus\": []}", "role": "assistant" } ]
science.scicite_classification
science.scicite_classification.245
[ { "content": "You will be presented with a citation sentence from a research paper. Classify the intent behind this citation by choosing from one of the following categories:\n\n- Method: Cites the paper for its methodology or procedure.\n- Background: Cites the paper to provide background information.\n- Result: Cites the paper for its findings or results.\n\nYour answer should be a single word from the following list of options: [\"Method\", \"Background\", \"Result\"]. Do not include any other text in your response.\n\nCitation sentence:\n\nThe results presented here are in general agreement with results previously reported for [3H]DHA (1, 9) and [14C]artemisinin (25) uptake.", "role": "user" }, { "content": "Result", "role": "assistant" } ]
science.healthver_entailment
science.healthver_entailment.3107
[ { "content": "You will be shown a claim about public health and the abstract of a biomedical research paper. Each sentence from the abstract will be on a separate line. Your task is to return a JSON object with two fields:\n\n- \"verdict\": The fact-checking verdict. If the information in the abstract supports the claim, write \"SUPPORT\". If the abstract contradicts the claim, write \"CONTRADICT\". If the abstract does not provide enough information to arrive at a verdict, write \"NEI\" (for \"not enough information\").\n- \"evidence\": An array of sentences providing evidence for the verdict. Please copy all relevant sentences verbatim from the abstract. If the verdict was \"NEI\", then return an empty array.\n\nFor instance, if the model were given the claim \"wearing masks can prevent the spread of COVID\", the output might be:\n{\n \"verdict\": \"SUPPORT\",\n \"evidence\": [\"Our findings indicate that mass mask-wearing reduces the transmission rate for COVID-19.\"]\n}\n\nClaim: The coronavirus that causes COVID-19 mainly spreads from person to person. \n\nAbstract:\nThe evolution of new and reemerging historic virulent strains of respiratory viruses from animal reservoirs is a significant threat to human health.\nInefficient human-to-human transmission of zoonotic strains may initially limit the spread of transmission, but an infection may be contracted by touching contaminated surfaces.\nEnveloped viruses are often susceptible to environmental stresses, but the human coronaviruses responsible for severe acute respiratory syndrome (SARS) and Middle East respiratory syndrome (MERS) have recently caused increasing concern of contact transmission during outbreaks.\nWe report here that pathogenic human coronavirus 229E remained infectious in a human lung cell culture model following at least 5 days of persistence on a range of common nonbiocidal surface materials, including polytetrafluoroethylene (Teflon; PTFE), polyvinyl chloride (PVC), ceramic tiles, glass, silicone rubber, and stainless steel.\nWe have shown previously that noroviruses are destroyed on copper alloy surfaces.\nIn this new study, human coronavirus 229E was rapidly inactivated on a range of copper alloys (within a few minutes for simulated fingertip contamination) and Cu/Zn brasses were very effective at lower copper concentration.\nExposure to copper destroyed the viral genomes and irreversibly affected virus morphology, including disintegration of envelope and dispersal of surface spikes.\nCu(I) and Cu(II) moieties were responsible for the inactivation, which was enhanced by reactive oxygen species generation on alloy surfaces, resulting in even faster inactivation than was seen with nonenveloped viruses on copper.\nConsequently, copper alloy surfaces could be employed in communal areas and at any mass gatherings to help reduce transmission of respiratory viruses from contaminated surfaces and protect the public health.", "role": "user" }, { "content": "{\"verdict\": \"CONTRADICT\", \"evidence\": [\"Inefficient human-to-human transmission of zoonotic strains may initially limit the spread of transmission, but an infection may be contracted by touching contaminated surfaces.\"]}", "role": "assistant" } ]
science.covidfact_entailment
science.covidfact_entailment.451
[ { "content": "You will be shown a claim related to the COVID-19 pandemic, and the abstract of a biomedical research paper. Each sentence from the abstract will be on a separate line. Your task is to return a JSON object with two fields:\n\n- \"verdict\": The fact-checking verdict. If the information in the abstract supports the claim, write \"SUPPORT\". If the abstract contradicts the claim, write \"CONTRADICT\".\n- \"evidence\": An array of sentences providing evidence for the verdict. Please copy all relevant sentences verbatim from the abstract.\n\nFor instance, if the model were given the claim \"the COVID vaccine is safe for healthy adults\", the output might be:\n{\n \"verdict\": \"SUPPORT\",\n \"evidence\": [\"Our findings indicate that the risks of severe COVID vaccine side effects are low for healthy adults.\"]\n}\n\nClaim: Cdc study found geographic origin of sars-covid-19 in new york is from europe rather than china .\n\nAbstract:\nTo limit introduction of SARS-CoV-2, the virus that causes coronavirus disease 2019 (COVID-19), the United States restricted travel from China on February 2, 2020, and from Europe on March 13.\nTo determine whether local transmission of SARS-CoV-2 could be detected, the New York City (NYC) Department of Health and Mental Hygiene (DOHMH) conducted deidentified sentinel surveillance at six NYC hospital emergency departments (EDs) during March 1-20.\nOn March 8, while testing availability for SARS-CoV-2 was still limited, DOHMH announced sustained community transmission of SARS-CoV-2 (1).\nAt this time, twenty-six NYC residents had confirmed COVID-19, and ED visits for influenza-like illness* increased, despite decreased influenza virus circulation.† The following week, on March 15, when only seven of the 56 (13%) patients with known exposure histories had exposure outside of NYC, the level of community SARS-CoV-2 transmission status was elevated from sustained community transmission to widespread community transmission (2).\nThrough sentinel surveillance during March 1-20, DOHMH collected 544 specimens from patients with influenza-like symptoms (ILS)§ who had negative test results for influenza and, in some instances, other respiratory pathogens.¶ All 544 specimens were tested for SARS-CoV-2 at CDC; 36 (6.6%) tested positive.\nUsing genetic sequencing, CDC determined that the sequences of most SARS-CoV-2-positive specimens resembled those circulating in Europe, suggesting probable introductions of SARS-CoV-2 from Europe, from other U.S. locations, and local introductions from within New York.\nThese findings demonstrate that partnering with health care facilities and developing the systems needed for rapid implementation of sentinel surveillance, coupled with capacity for genetic sequencing before an outbreak, can help inform timely containment and mitigation strategies.", "role": "user" }, { "content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"Using genetic sequencing, CDC determined that the sequences of most SARS-CoV-2-positive specimens resembled those circulating in Europe, suggesting probable introductions of SARS-CoV-2 from Europe, from other U.S. locations, and local introductions from within New York.\"]}", "role": "assistant" } ]
science.chemdner_ner
science.chemdner_ner.3313
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Chemical\"].\n\nPlease return the output as a JSON object of the format: {\"Chemical\": [\"Arsenic\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nHigh-Throughput Analytical Model to Evaluate Materials for Temperature Swing Adsorption Processes. In order for any material to be considered in a post-combustion carbon capture technology, it must have high working capacities of CO2 from flue gas and be regenerable using as little energy as possible. Shown here is an easy to use method to calculate both working capacities and regeneration energies and thereby predict optimal desorption conditions for any material.", "role": "user" }, { "content": "{\"Chemical\": []}", "role": "assistant" } ]
science.acl_arc_intent_classification
science.acl_arc_intent_classification.948
[ { "content": "You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:\n- Background: provides context or foundational information related to the topic.\n- Extends: builds upon the cited work.\n- Uses: applies the methods or findings of the cited work.\n- Motivation: cites the work as inspiration or rationale for the research.\n- CompareOrContrast: compares or contrasts the cited work with others.\n- FutureWork: cites the work as a direction for future research.\n\nYour answer should be a single word from the following list of options: [\"Background\", \"Extends\", \"Uses\", \"Motivation\", \"CompareOrContrast\", \"FutureWork\"]. Do not include any other text in your response.\n\nSection Title:\nNone\n\nContext before the citation:\nThe problem of handling ill-formed input has been studied by Carbonell and Hayes (1983), Granger (1983), Jensen et al. (1983), Kwasny and Sondheimer (1981), Riesbeck and Schank (1976), Thompson (1980), Weischedel and Black (1980), and Weischedel and Sondheimer (1983). A detailed description of the kinds of expectation mechanisms appearing in these systems appears in Fink (1983). While some of these systems did exhibit expectation capabilities at the sentence level, none acquired dialogues of the kind described here for the sake of dialogue level expectation and error correction.\n\nCitation Sentence:\nThe problem of handling ill-formed input has been studied by Carbonell and Hayes ( 1983 ) , Granger ( 1983 ) , Jensen et al. ( 1983 ) , Kwasny and Sondheimer ( 1981 ) , Riesbeck and Schank ( 1976 ) , Thompson ( 1980 ) , Weischedel and Black ( 1980 ) , and Weischedel and Sondheimer ( 1983 ) .\n\nContext after the citation:\nA wide variety of techniques have been developed for addressing problems at the word, phrase, sentence, and in some cases, dialogue level. However, these methodologies have not used historical information at the dialogue level as described here. In most cases, the goal of these systems is to characterize the ill-formed input into classes of errors and to correct on that basis. The work described here makes no attempt to classify the errors, but treats them as random events that occur at any point in a sentence.", "role": "user" }, { "content": "CompareOrContrast", "role": "assistant" } ]
science.acl_arc_intent_classification
science.acl_arc_intent_classification.1624
[ { "content": "You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:\n- Background: provides context or foundational information related to the topic.\n- Extends: builds upon the cited work.\n- Uses: applies the methods or findings of the cited work.\n- Motivation: cites the work as inspiration or rationale for the research.\n- CompareOrContrast: compares or contrasts the cited work with others.\n- FutureWork: cites the work as a direction for future research.\n\nYour answer should be a single word from the following list of options: [\"Background\", \"Extends\", \"Uses\", \"Motivation\", \"CompareOrContrast\", \"FutureWork\"]. Do not include any other text in your response.\n\nSection Title:\nNone\n\nContext before the citation:\nFor MT the most commonly used heuristic is called grow diagonal final (Och and Ney 2003). alignment. Typically this is achieved by a symmetrization heuristic that takes two directional alignments and produces a single\n\nCitation Sentence:\nFor MT the most commonly used heuristic is called grow diagonal final ( Och and Ney 2003 ) .\n\nContext after the citation:\nThis starts with the intersection of the sets of aligned points and adds points around the diagonal that are in the union of the two sets of aligned points. The alignment produced has high recall relative to the intersection and only slightly lower recall than the union. In syntax transfer the intersection heuristic is normally used, because one wants to have high precision links to transfer knowledge between languages. One pitfall of these symmetrization heuristics is that they can obfuscate the link between the original alignment and the ones used for a specific task, making errors more difficult to analyze.", "role": "user" }, { "content": "CompareOrContrast", "role": "assistant" } ]
science.covidfact_entailment
science.covidfact_entailment.699
[ { "content": "You will be shown a claim related to the COVID-19 pandemic, and the abstract of a biomedical research paper. Each sentence from the abstract will be on a separate line. Your task is to return a JSON object with two fields:\n\n- \"verdict\": The fact-checking verdict. If the information in the abstract supports the claim, write \"SUPPORT\". If the abstract contradicts the claim, write \"CONTRADICT\".\n- \"evidence\": An array of sentences providing evidence for the verdict. Please copy all relevant sentences verbatim from the abstract.\n\nFor instance, if the model were given the claim \"the COVID vaccine is safe for healthy adults\", the output might be:\n{\n \"verdict\": \"SUPPORT\",\n \"evidence\": [\"Our findings indicate that the risks of severe COVID vaccine side effects are low for healthy adults.\"]\n}\n\nClaim: Structure-guided multivalent nanobodies block sars-cov-2 infection and suppress mutational escape.\n\nAbstract:\nWithout an effective prophylactic solution, infections from SARS-CoV-2 continue to rise worldwide with devastating health and economic costs.\nSARS-CoV-2 gains entry into host cells via an interaction between its Spike protein and the host cell receptor angiotensin converting enzyme 2 (ACE2).\nDisruption of this interaction confers potent neutralization of viral entry, providing an avenue for vaccine design and for therapeutic antibodies.\nHere, we develop single-domain antibodies (nanobodies) that potently disrupt the interaction between the SARS-CoV-2 Spike and ACE2.\nBy screening a yeast surface-displayed library of synthetic nanobody sequences, we identified a panel of nanobodies that bind to multiple epitopes on Spike and block ACE2 interaction via two distinct mechanisms.\nCryogenic electron microscopy (cryo-EM) revealed that one exceptionally stable nanobody, Nb6, binds Spike in a fully inactive conformation with its receptor binding domains (RBDs) locked into their inaccessible down-state, incapable of binding ACE2.\nAffinity maturation and structure-guided design of multivalency yielded a trivalent nanobody, mNb6-tri, with femtomolar affinity for SARS-CoV-2 Spike and picomolar neutralization of SARS-CoV-2 infection.\nmNb6-tri retains stability and function after aerosolization, lyophilization, and heat treatment.\nThese properties may enable aerosol-mediated delivery of this potent neutralizer directly to the airway epithelia, promising to yield a widely deployable, patient-friendly prophylactic and/or early infection therapeutic agent to stem the worst pandemic in a century.", "role": "user" }, { "content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"Affinity maturation and structure-guided design of multivalency yielded a trivalent nanobody, mNb6-tri, with femtomolar affinity for SARS-CoV-2 Spike and picomolar neutralization of SARS-CoV-2 infection.\"]}", "role": "assistant" } ]
science.chia_ner
science.chia_ner.1313
[ { "content": "You will be shown inclusion or exclusion criteria for a clinical trial. Given these criteria, your task is to extract all unique entities of the following types: [\"Person\", \"Condition\", \"Value\", \"ReferencePoint\", \"Temporal\", \"Qualifier\", \"Observation\", \"Visit\", \"Measurement\", \"Mood\", \"Device\", \"Scope\", \"Negation\", \"Drug\", \"Procedure\", \"Multiplier\"].\n\nPlease return the output as a JSON object of the format: {\"Condition\": [\"breast cancer\", ...], \"Device\": [\"ports\", ...], \"Drug\": [\"Atenolol\", ...], \"Measurement\": [\"blood pressure\", ...], \"Mood\": [\"suspected\", ...], \"Multiplier\": [\"more than two\", ...], \"Negation\": [\"no\", ...], \"Observation\": [\"history\", ...], \"Person\": [\"male\", ...], \"Procedure\": [\"surgery\", ...], \"Qualifier\": [\"severe\", ...], \"ReferencePoint\": [\"enrollment\", ...], \"Scope\": [\"infection\", ...], \"Temporal\": [\"one day\", ...], \"Value\": [\"50%\", ...], \"Visit\": [\"ICU\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nEnglish speaking\nbetween 18 and 75 years old\nAmerican Society of Anesthesiologists (ASA) 1-3 patients undergoing primary total hip arthroplasty", "role": "user" }, { "content": "{\"Condition\": [], \"Device\": [], \"Drug\": [], \"Measurement\": [\"American Society of Anesthesiologists\", \"ASA\"], \"Mood\": [], \"Multiplier\": [], \"Negation\": [], \"Observation\": [], \"Person\": [\"old\"], \"Procedure\": [\"primary total hip arthroplasty\"], \"Qualifier\": [], \"ReferencePoint\": [], \"Scope\": [], \"Temporal\": [], \"Value\": [\"between 18 and 75 years\", \"1-3\"], \"Visit\": []}", "role": "assistant" } ]
science.scitldr_aic
science.scitldr_aic.1372
[ { "content": "Below are the Abstract, Introduction, and Conclusion sections of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary sentence and no additional text.\n\nPaper text:\n\nSequence-to-sequence (seq2seq) neural models have been actively investigated for abstractive summarization.\nNevertheless, existing neural abstractive systems frequently generate factually incorrect summaries and are vulnerable to adversarial information, suggesting a crucial lack of semantic understanding.\nIn this paper, we propose a novel semantic-aware neural abstractive summarization model that learns to generate high quality summaries through semantic interpretation over salient content.\nA novel evaluation scheme with adversarial samples is introduced to measure how well a model identifies off-topic information, where our model yields significantly better performance than the popular pointer-generator summarizer.\nHuman evaluation also confirms that our system summaries are uniformly more informative and faithful as well as less redundant than the seq2seq model.\nAutomatic text summarization holds the promise of alleviating the information overload problem BID13 .\nConsiderable progress has been made over decades, but existing summarization systems are still largely extractive-important sentences or phrases are identified from the original text for inclusion in the output BID22 .\nExtractive summaries thus unavoidably suffer from redundancy and incoherence, leading to the need for abstractive summarization methods.\nBuilt on the success of sequence-to-sequence (seq2seq) learning models BID35 , there has been a growing interest in utilizing a neural framework for abstractive summarization BID28 BID20 BID41 BID36 BID5 .Although\ncurrent state-of-the-art neural models naturally excel at generating grammatically correct sentences, the model structure and learning objectives have intrinsic difficulty in acquiring semantic interpretation of the input text, which is crucial for summarization. Importantly\n, the lack of semantic understanding causes existing systems to produce unfaithful generations. BID3 report\nthat about 30% of the summaries generated from a seq2seq model contain fabricated or nonsensical information.Furthermore, current neural summarization systems can be easily fooled by off-topic information. For instance\n, FIG0 shows one example where irrelevant sentences are added into an article about \"David Collenette's resignation\". Both the seq2seq\nattentional model BID20 and the popular pointer-generator model BID31 are particularly susceptible to unfaithful generation, partially because these models tend to rely on sentences at the beginning of the articles for summarization while being ignorant about their content. Therefore, we design\na novel adversarial evaluation metric to measure the robustness of each summarizer against small amounts of randomly inserted topic-irrelevant information. The intuition is that\nif a summarization system truly understands the salient entities and events, it would ignore unrelated content.32nd Conference on Neural Information Processing Systems (NIPS 2018), Montréal, Canada.Article Snippet: For years Joe DiMaggio was always introduced at Yankee Stadium as \"baseball's greatest living player.\" But with his memory joining\nthose of Babe Ruth, Lou Gehrig, Mickey Mantle and Miller Huggins. 
Canada's Minister of Defense\nresigned today, a day after an army official testified that top military officials had altered documents to cover up responsibility for the beating death of a Somali teen-ager at the hands of Canadian peacekeeping troops in 1993. Defense minister David Collenette\ninsisted that his resignation had nothing to do with the Somalia scandal. Ted Williams was the first name to\ncome to mind, and he's the greatest living hitter. ... Seq2seq: George Vecsey sports\nof\nThe Times column on New York State's naming of late baseball legend Joe DiMaggio as \"baseball's greatest living player,\" but with his memory joining those of Babe Ruth, Lou Gehrig, Mickey Mantle and Miller dens. Pointer-generator: Joe DiMaggio\nis first name to come to mind, and Ted Williams is first name to come to mind, and he's greatest living hitter; he will be replaced by human resources minister, Doug Young, and will keep his Parliament seat for governing Liberal Party. Our Model: Former Canadian Defense\nMin David Collenette resigns day after army official testifies that top military officials altered documents to cover up responsibility for beating death of Somali teen-ager at hands of Canadian peacekeeping troops in 1993. To address the above issues, we propose\na novel semantic-aware abstractive summarization model, inspired by the human process of writing summaries-important events and entities are first identified, and then used for summary construction. Concretely, taking an article as input,\nour model first generates a set of summary-worthy semantic structures consisting of predicates and corresponding arguments (as in semantic parsing), then constructs a fluent summary reflecting the semantic information. Both tasks are learned under an encoder-decoder\narchitecture with new learning objectives. A dual attention mechanism for summary decoding\nis designed to consider information from both the input article and the generated predicate-argument structures. We further present a novel decoder with a segment-based\nreranking strategy to produce diverse hypotheses and reduce redundancy under the guidance of generated semantic information.Evaluation against adversarial samples shows that while performance by the seq2seq attentional model and the pointer-generator model is impacted severely by even a small addition of topic-irrelevant information to the input, our model is significantly more robust and consistently produces more on-topic summaries (i.e. higher ROUGE and METEOR scores for standard automatic evaluation). Our model also achieves significantly better ROUGE and\nMETEOR scores than both models on the benchmark dataset CNN/Daily Mail BID11 . Specifically, our model's summaries use substantially\nfewer and shorter extractive fragments than the comparisons and have less redundancy, alleviating another common problem for the seq2seq framework. 
Human evaluation demonstrates that our model generates\nmore informative and faithful summaries than the seq2seq model.\nUsage of Semantic Roles in Summaries.\nWe examine the utility of the generated semantic roles.\nAcross all models, approximately 44% of the generated predicates are part of the reference summary, indicating the adequacy of our semantic decoder.\nFurthermore, across all models, approximately 65% of the generated predicates are reused by the generated summary, and approximately 53% of the SRL structures are reused by the system using a strict matching constraint, in which the predicate and head words for all arguments must match in the summary.\nWhen gold-standard semantic roles are used for dual attention in place of our system generations, ROUGE scores increase by about half a point, indicating that improving semantic decoder in future work will further enhance the summaries.Coverage.\nWe also conduct experiments using a coverage mechanism similar to the one used in BID31 .\nWe apply our coverage in two places: (1) over the input to handle redundancy, and (2) over the generated semantics to promote its reuse in the summary.\nHowever, no significant difference is observed.\nOur proposed reranker handles both issues in a more explicit way, and does not require the additional training time used to learn coverage parameters.Alternative Semantic Representation.\nOur summarization model can be trained with other types of semantic information.\nFor example, in addition to using the salient semantic roles from the input article, we also explore using SRL parses of the reference abstracts as training signals, but the higher level of abstraction required for semantic generation hurts performance by two ROUGE points for almost all models, indicating the type of semantic structure matters greatly for the ultimate summarization task.For future work, other semantic representation along with novel model architecture will be explored.\nFor instance, other forms of semantic representation can be considered, such as frame semantics BID1 or Abstract Meaning Representation (AMR) BID2 .\nAlthough previous work by has shown that seq2seq models are able to successfully generate linearized tree structures, we may also consider generating semantic roles with a hierarchical semantic decoder BID34 .\nWe presented a novel semantic-aware neural abstractive summarization model that jointly learns summarization and semantic parsing.\nA novel dual attention mechanism was designed to better capture the semantic information for summarization.\nA reranking-based decoder was proposed to promote the content coverage.\nOur proposed adversarial evaluation demonstrated that our model was more adept at handling irrelevant information compared to popular neural summarization models.\nExperiments on two large-scale news corpora showed that our model yielded significantly more informative, less redundant, and less extractive summaries.\nHuman evaluation further confirmed that our summaries were more informative and faithful than comparisons.", "role": "user" }, { "content": "We propose a semantic-aware neural abstractive summarization model and a novel automatic summarization evaluation scheme that measures how well a model identifies off-topic information from adversarial samples.", "role": "assistant" } ]
science.data_reco_mcq_mc
science.data_reco_mcq_mc.2262
[ { "content": "You are provided with a research question, keyphrases about the question, a description of candidate datasets and dataset options. Read the description of popular datasets provided below and select the ones that can be used to validate the following research question. Use your knowledge of machine learning datasets to make the best judgement.\nYour response should be formatted as a `json` array. For instance, for the query \"Semi supervised image classification\", a sample response might be: [\"CIFAR-10\", \"CIFAR-100\"]. Do not include any extra text in the response other than the answer array.\n\nQuery: I want to implement a network to predict 3D semantic instance segmentation\n\nKeyphrases: 3d semantic instance segmentation rgb-d\n\nDataset description:\nSBD: The Semantic Boundaries Dataset (SBD) is a dataset for predicting pixels on the boundary of the object (as opposed to the inside of the object with semantic segmentation). The dataset consists of 11318 images from the trainval set of the PASCAL VOC2011 challenge, divided into 8498 training and 2820 test images.\nScanNet: ScanNet is an instance-level indoor RGB-D dataset that includes both 2D and 3D data. It is a collection of labeled voxels rather than points or objects.\nSUNCG: SUNCG is a large-scale dataset of synthetic 3D scenes with dense volumetric annotations. The dataset is currently not available.\nSemanticPOSS: The SemanticPOSS dataset for 3D semantic segmentation contains 2988 various and complicated LiDAR scans with large quantity of dynamic instances. The data is collected in Peking University and uses the same data format as SemanticKITTI.\nA2D2: Audi Autonomous Driving Dataset (A2D2) consists of simultaneously recorded images and 3D point clouds, together with 3D bounding boxes, semantic segmentation, instance segmentation, and data extracted from the automotive bus.\n\nOptions:- A: SBD B: ScanNet C: SUNCG D: SemanticPOSS E: A2D2", "role": "user" }, { "content": "[\"ScanNet\", \"SUNCG\"]", "role": "assistant" } ]
science.healthver_entailment
science.healthver_entailment.148
[ { "content": "You will be shown a claim about public health and the abstract of a biomedical research paper. Each sentence from the abstract will be on a separate line. Your task is to return a JSON object with two fields:\n\n- \"verdict\": The fact-checking verdict. If the information in the abstract supports the claim, write \"SUPPORT\". If the abstract contradicts the claim, write \"CONTRADICT\". If the abstract does not provide enough information to arrive at a verdict, write \"NEI\" (for \"not enough information\").\n- \"evidence\": An array of sentences providing evidence for the verdict. Please copy all relevant sentences verbatim from the abstract. If the verdict was \"NEI\", then return an empty array.\n\nFor instance, if the model were given the claim \"wearing masks can prevent the spread of COVID\", the output might be:\n{\n \"verdict\": \"SUPPORT\",\n \"evidence\": [\"Our findings indicate that mass mask-wearing reduces the transmission rate for COVID-19.\"]\n}\n\nClaim: A more specialized mask, known as an N95 respirator, can protect against the new coronavirus, also called SARS-CoV-2.\n\nAbstract:\nThe COVID‐19 pandemic caused by the novel coronavirus SARS‐CoV‐2 has claimed many lives worldwide.\nWearing medical masks or N95 masks (namely N95 respirators) can slow the virus spread and reduce the infection risk.\nReuse of these masks can minimize waste, protect the environment, and help to solve the current imminent shortage of masks.\nDisinfection of used masks is needed for reuse of them with safety, but improper decontamination can damage the blocking structure of masks.\nIn this study, we demonstrated, using avian coronavirus of infectious bronchitis virus to mimic SARS‐CoV‐2, that medical masks and N95 masks remained their blocking efficacy after being steamed on boiling water even for 2 hours.\nWe also demonstrated that three brands of medical masks blocked over 99% viruses in aerosols.\nThe avian coronavirus was completely inactivated after being steamed for 5 minutes.\nTogether, this study suggested that medical masks are adequate for use on most social occasions, and both medical masks and N95 masks can be reused for a few days with steam decontamination between use.\nThis article is protected by copyright.\nAll rights reserved.", "role": "user" }, { "content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"Wearing medical masks or N95 masks (namely N95 respirators) can slow the virus spread and reduce the infection risk.\", \"We also demonstrated that three brands of medical masks blocked over 99% viruses in aerosols.\", \"The avian coronavirus was completely inactivated after being steamed for 5 minutes.\", \"Together, this study suggested that medical masks are adequate for use on most social occasions, and both medical masks and N95 masks can be reused for a few days with steam decontamination between use.\"]}", "role": "assistant" } ]
science.scierc_re
science.scierc_re.100
[ { "content": "You will be given an abstract from a computer science research paper. Given the abstract, your task is to extract all unique relationships between entities of the following, and ONLY the following, types:\n\n- \"USED-FOR\": For instance, \"Our method models user proficiency\" -> [\"method\", \"user proficiency\", \"USED-FOR\"].\n- \"EVALUATE-FOR\": For instance, \"We evaluate our method using F1-measure\" -> [\"F1-measure\", \"method\", \"EVALUATE-FOR\"].\n- \"FEATURE-OF\": For instance, \"prior knowledge of the model\" -> [\"prior knowledge\", \"model\", \"FEATURE-OF\"].\n- \"HYPONYM-OF\": For instance, \"TUIT is a software library\" -> [\"TUIT\", \"software library\", \"HYPONYM-OF\"].\n- \"PART-OF\": For instance, \"We incorporate NLU module into the system\" -> [\"NLU module\", \"system\", \"PART-OF\"].\n- \"COMPARE\": For instance, \"Unlike the quantitative prior, the qualitative prior is often ignored\" -> [\"quantitative prior\", \"qualitative prior\", \"COMPARE\"].\n- \"CONJUNCTION\": For instance, \"obtained from human expert or knowledge base\" -> [\"human expert\", \"knowledge base\", \"CONJUNCTION\"].\n\nFormat your output as a JSON array. Each entry in the array should express a single relation, formatted as [\"<Entity_A>\", \"<Entity_B>\", \"<RELATION_A_B>\"]. An example output might look like:\n\n[[\"neural networks\", \"classification\", \"USED-FOR\"], [\"neuron\", \"neural network\", \"PART-OF\"]].\n\nIf no relations are found, please return an empty JSON array \"[]\".\n\nAbstract:\n\nWe consider the problem of question-focused sentence retrieval from complex news articles describing multi-event stories published over time. Annotators generated a list of questions central to understanding each story in our corpus. Because of the dynamic nature of the stories, many questions are time-sensitive (e.g. 'How many victims have been found?'). Judges found sentences providing an answer to each question. To address the sentence retrieval problem, we apply a stochastic, graph-based method for comparing the relative importance of the textual units, which was previously used successfully for generic summarization. Currently, we present a topic-sensitive version of our method and hypothesize that it can outperform a competitive baseline, which compares the similarity of each sentence to the input question via IDF-weighted word overlap. In our experiments, the method achieves a TRDR score that is significantly higher than that of the baseline.", "role": "user" }, { "content": "[[\"multi-event stories\", \"news articles\", \"FEATURE-OF\"], [\"stochastic, graph-based method\", \"baseline\", \"COMPARE\"], [\"TRDR score\", \"baseline\", \"EVALUATE-FOR\"], [\"TRDR score\", \"stochastic, graph-based method\", \"EVALUATE-FOR\"], [\"news articles\", \"question-focused sentence retrieval\", \"USED-FOR\"], [\"stochastic, graph-based method\", \"generic summarization\", \"USED-FOR\"], [\"stochastic, graph-based method\", \"question-focused sentence retrieval\", \"USED-FOR\"], [\"baseline\", \"stochastic, graph-based method\", \"COMPARE\"]]", "role": "assistant" } ]
science.medmentions_ner
science.medmentions_ner.1782
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Finding\", \"Bacterium\", \"ProfessionalOrOccupationalGroup\", \"InjuryOrPoisoning\", \"Eukaryote\", \"BodySystem\", \"Virus\", \"ResearchActivity\", \"BodySubstance\", \"PopulationGroup\", \"Chemical\", \"AnatomicalStructure\", \"SpatialConcept\", \"HealthCareActivity\", \"Organization\", \"BiomedicalOccupationOrDiscipline\", \"Food\", \"ClinicalAttribute\", \"MedicalDevice\", \"BiologicFunction\", \"IntellectualProduct\"].\n\nPlease return the output as a JSON object of the format: {\"Virus\": [\"HIV\", ...], \"Bacterium\": [\"MRSA\", ...], \"AnatomicalStructure\": [\"Lung\", ...], \"BodySystem\": [\"CNS\", ...], \"BodySubstance\": [\"Serum\", ...], \"Finding\": [\"Headache\", ...], \"InjuryOrPoisoning\": [\"Fracture\", ...], \"BiologicFunction\": [\"Death\", ...], \"HealthCareActivity\": [\"Biopsy\", ...], \"ResearchActivity\": [\"Clinical trial\", ...], \"MedicalDevice\": [\"Lenses\", ...], \"SpatialConcept\": [\"Camps\", ...], \"BiomedicalOccupationOrDiscipline\": [\"Forensic medicine\", ...], \"Organization\": [\"WHO\", ...], \"ProfessionalOrOccupationalGroup\": [\"Provider\", ...], \"PopulationGroup\": [\"Swimmers\", ...], \"Chemical\": [\"Gold\", ...], \"Food\": [\"Rice\", ...], \"IntellectualProduct\": [\"RPAM\", ...], \"ClinicalAttribute\": [\"Biomarker\", ...], \"Eukaryote\": [\"Dogs\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nOsteomyelitis of Humerus and Intramuscular Abscess Due to Melioidosis Melioidosis is a clinically diverse disease caused by gram negative bacterium Burkholderia pseudomallei. It is a potential bioterrorism agent. The high risk group includes the agricultural and construction workers whose contact with contaminated soil and water may expose them to bacteria. The clinical manifestations varies from asymptomatic infection to overwhelming sepsis. To diagnose melioidosis a high index of suspicion along with isolation and identification of the organism from the clinical samples is needed. Early diagnosis and treatment is essential for better outcome. We are reporting a case of melioidosis which presented as osteomyelitis of humerus with intramuscular abscess.", "role": "user" }, { "content": "{\"AnatomicalStructure\": [], \"Bacterium\": [\"bacteria\", \"gram negative bacterium\", \"Burkholderia pseudomallei\"], \"BiologicFunction\": [\"disease\", \"asymptomatic infection\", \"Abscess\", \"abscess\", \"sepsis\", \"Melioidosis\", \"melioidosis\", \"suspicion\"], \"BiomedicalOccupationOrDiscipline\": [], \"BodySubstance\": [], \"BodySystem\": [], \"Chemical\": [\"bioterrorism agent\"], \"ClinicalAttribute\": [], \"Eukaryote\": [], \"Finding\": [\"diagnose\"], \"Food\": [], \"HealthCareActivity\": [\"Early diagnosis\", \"reporting\", \"treatment\"], \"InjuryOrPoisoning\": [], \"IntellectualProduct\": [], \"MedicalDevice\": [], \"Organization\": [], \"PopulationGroup\": [], \"ProfessionalOrOccupationalGroup\": [\"construction workers\"], \"ResearchActivity\": [], \"SpatialConcept\": [\"intramuscular\", \"Intramuscular\"], \"Virus\": []}", "role": "assistant" } ]
science.annotated_materials_syntheses_events
science.annotated_materials_syntheses_events.120
[ { "content": "You will be shown an abstract from a materials science research paper. Your task is to extract all synthesis operations described in the abstract into a structured JSON format. Your output should be a JSON array. Each entry in the array should be a JSON object describing a single synthesis operation. Your output JSON object should have the following two fields:\n\n- \"trigger\": A verbal predicate indicating a synthesis operation. This should be a string extracted directly from the abstract.\n- \"arguments\": An array of arguments specifying aspects of the synthesis operation. Each entry in the array should be a length-2 array formatted as [<argument_text>, <argument_type>]. The `argument_text` should be a string extracted directly from the abstract. The `argument_type` should be one of the following types: [\"Recipe_Precursor\", \"Solvent_Material\", \"Atmospheric_Material\", \"Participant_Material\", \"Recipe_Target\"].\n\nFor instance, suppose an excerpt from the abstract reads:\n\nTetraethoxysilane (TEOS, 98%) and cetyltrimethylammonium bromide (CTAB, 98%) were purchased from Fluka. The MTTZ-siloxy derivative (triethoxysilyl-propyl-5-mercapto-1-methyl-tetrazole) was synthesized as previously described [38].\n\nThis excerpt includes two operations, which could be annotated as follows:\n\n[{\"arguments\": [[\"Tetraethoxysilane\", \"Recipe_Precursor\"], [\"cetyltrimethylammonium bromide\", \"Recipe_Precursor\"]], \"trigger\": \"purchased\"}, {\"arguments\": [[\"MTTZ-siloxy derivative\", \"Recipe_Target\"]], \"trigger\": \"synthesized\"}]\n\nIn your response, please include the JSON and no other text.\n\nAbstract:\n10.1016/j.cattod.2013.04.013\nEthanol steam reforming over NiLaZr and NiCuLaZr mixed metal oxide catalysts\n\n\nNickel nitrate Ni(NO3)2*6H2O, copper nitrate (Cu(NO3)2*2.5H2O, lanthanum nitrate (La(NO3)3*5H2O, 70% (w/w) zirconium propoxide Zr(OCH2CH2CH3)4 in 1-propanol, and oxalic acid (HO2CCO2H) were purchased from Sigma-Aldrich. Ethanol was purchased from Biopack. All chemicals were reagent-grade and were used as received.\n\nNiLaZr and NiCuLaZr catalysts were prepared by co-precipitation following the addition of oxalic acid to an ethanolic solution of Cu(NO3)2, Ni(NO3)2, La(NO3)3 and Zr(OCH2CH2CH3)4. The precipitates were washed with ethanol, dried at 80 degC, and calcined for 2 h at different temperatures: 700 degC, 850 degC, 900 degC and 950 degC. The resulting NiLaZr catalysts contained 5% of Ni, and the NiCuLaZr catalysts contained 4% of Ni and 1% of Cu, on a weight basis. 
Where required, the final calcination temperature (degC) is indicated in the catalyst denotation.", "role": "user" }, { "content": "[{\"arguments\": [[\"lanthanum nitrate\", \"Recipe_Precursor\"], [\"copper nitrate\", \"Recipe_Precursor\"], [\"1-propanol\", \"Solvent_Material\"], [\"Nickel nitrate\", \"Recipe_Precursor\"], [\"oxalic acid\", \"Participant_Material\"], [\"zirconium propoxide\", \"Recipe_Precursor\"]], \"trigger\": \"purchased\"}, {\"arguments\": [[\"Ethanol\", \"Solvent_Material\"]], \"trigger\": \"purchased\"}, {\"arguments\": [[\"NiLaZr\", \"Recipe_Target\"], [\"NiCuLaZr\", \"Recipe_Target\"]], \"trigger\": \"prepared\"}, {\"arguments\": [[\"La(NO3)3\", \"Recipe_Precursor\"], [\"Ni(NO3)2\", \"Recipe_Precursor\"], [\"oxalic acid\", \"Participant_Material\"], [\"Cu(NO3)2\", \"Recipe_Precursor\"], [\"Zr(OCH2CH2CH3)4\", \"Recipe_Precursor\"]], \"trigger\": \"addition\"}, {\"arguments\": [[\"ethanol\", \"Solvent_Material\"], [\"precipitates\", \"Participant_Material\"]], \"trigger\": \"washed\"}]", "role": "assistant" } ]
science.data_reco_mcq_mc
science.data_reco_mcq_mc.2006
[ { "content": "You are provided with a research question, keyphrases about the question, a description of candidate datasets and dataset options. Read the description of popular datasets provided below and select the ones that can be used to validate the following research question. Use your knowledge of machine learning datasets to make the best judgement.\nYour response should be formatted as a `json` array. For instance, for the query \"Semi supervised image classification\", a sample response might be: [\"CIFAR-10\", \"CIFAR-100\"]. Do not include any extra text in the response other than the answer array.\n\nQuery: I want to train a supervised model for image classification with 8-bit (\n\nKeyphrases: 8-bit training images\n\nDataset description:\nConvAI2: The ConvAI2 NeurIPS competition aimed at finding approaches to creating high-quality dialogue agents capable of meaningful open domain conversation. The ConvAI2 dataset for training models is based on the PERSONA-CHAT dataset.\nICB: A carefully chosen set of high-resolution high-precision natural images suited for compression algorithm evaluation. The images historically used for compression research (lena, barbra, pepper etc...) have outlived their useful life and its about time they become a part of history only.\nSNIPS: The SNIPS Natural Language Understanding benchmark is a dataset of over 16,000 crowdsourced queries distributed among 7 user intents of various complexity: SearchCreativeWork (e.g. Find me the I, Robot television show), GetWeather (e.g.\nBreakHis: The Breast Cancer Histopathological Image Classification (BreakHis) is composed of 9,109 microscopic images of breast tumor tissue collected from 82 patients using different magnifying factors (40X, 100X, 200X, and 400X). It contains 2,480 benign and 5,429 malignant samples (700X460 pixels, 3-channel RGB, 8-bit depth in each channel, PNG format).\nCOCO: The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\nImageNet: The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\n\nOptions:- A: ConvAI2 B: ICB C: SNIPS D: BreakHis E: COCO F: ImageNet", "role": "user" }, { "content": "[\"COCO\", \"ImageNet\"]", "role": "assistant" } ]
science.genia_ner
science.genia_ner.1373
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"RNA\", \"CellType\", \"CellLine\", \"Protein\", \"DNA\"].\n\nPlease return the output as a JSON object of the format: {\"DNA\" : [\"TAL1\", ...], \"RNA\": [\"mRNAs\", ...], \"CellLine\": [\"T cells\", ...], \"CellType\": [\"monocytes\", ...], \"Protein\": [\"IL-2\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nDifferential RNA display identifies novel genes associated with decreased vitamin D receptor expression. To characterize further the function of the intracellular vitamin D receptor (VDR), we have developed stable transfectant variants of a vitamin D-responsive cell line (U937) which express either decreased or increased numbers of VDR. In this study we have analyzed changes in gene expression associated with this variable VDR expression. Initial experiments indicated that a 50% decrease in VDR levels was associated with a 2-fold increase in cell proliferation and a similar rise in c-myc mRNA expression. Further studies were carried out using differential RNA display (DD). Sequence analysis of DD products revealed two cDNAs with identity to known gene products: the catalytic sub-unit of DNA-protein kinase (DNA-PK (CS)), and the peroxisomal enzyme 17beta-hydroxysteroid dehydrogenase type IV (17beta-HSD IV). Northern analysis confirmed that expression of both mRNAs was reduced in cells with decreased numbers of VDR. Down-regulation of 17beta-HSD IV mRNA expression was associated with enhanced estradiol inactivation by U937 cells, suggesting a link between estrogenic pathways and cell proliferation. Further Northern analyses indicated that there was no significant change in 17beta-HSD IV or DNA-PK (CS) mRNA levels following treatment with 1, 25 (OH) 2D3, although expression of both genes varied with changes in cell proliferation. These data suggest that, in addition to its established role as a hormone-dependent trans-activator, VDR may influence gene expression by ligand-independent mechanisms.", "role": "user" }, { "content": "{\"CellLine\": [\"stable transfectant variants\", \"vitamin D - responsive cell line\", \"U937 cells\", \"U937\"], \"CellType\": [], \"DNA\": [\"cDNAs\", \"novel genes\"], \"Protein\": [\"DD products\", \"vitamin D receptor\", \"DNA - PK (CS)\", \"VDR\", \"17beta - HSD IV\", \"catalytic sub - unit\", \"DNA - protein kinase\", \"peroxisomal enzyme 17beta - hydroxysteroid dehydrogenase type IV\", \"hormone - dependent trans - activator\", \"intracellular vitamin D receptor\", \"known gene products\", \"catalytic sub - unit of DNA - protein kinase\"], \"RNA\": [\"mRNAs\", \"DNA - PK (CS) mRNA\", \"c - myc mRNA\"]}", "role": "assistant" } ]
science.medmentions_ner
science.medmentions_ner.778
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Finding\", \"Bacterium\", \"ProfessionalOrOccupationalGroup\", \"InjuryOrPoisoning\", \"Eukaryote\", \"BodySystem\", \"Virus\", \"ResearchActivity\", \"BodySubstance\", \"PopulationGroup\", \"Chemical\", \"AnatomicalStructure\", \"SpatialConcept\", \"HealthCareActivity\", \"Organization\", \"BiomedicalOccupationOrDiscipline\", \"Food\", \"ClinicalAttribute\", \"MedicalDevice\", \"BiologicFunction\", \"IntellectualProduct\"].\n\nPlease return the output as a JSON object of the format: {\"Virus\": [\"HIV\", ...], \"Bacterium\": [\"MRSA\", ...], \"AnatomicalStructure\": [\"Lung\", ...], \"BodySystem\": [\"CNS\", ...], \"BodySubstance\": [\"Serum\", ...], \"Finding\": [\"Headache\", ...], \"InjuryOrPoisoning\": [\"Fracture\", ...], \"BiologicFunction\": [\"Death\", ...], \"HealthCareActivity\": [\"Biopsy\", ...], \"ResearchActivity\": [\"Clinical trial\", ...], \"MedicalDevice\": [\"Lenses\", ...], \"SpatialConcept\": [\"Camps\", ...], \"BiomedicalOccupationOrDiscipline\": [\"Forensic medicine\", ...], \"Organization\": [\"WHO\", ...], \"ProfessionalOrOccupationalGroup\": [\"Provider\", ...], \"PopulationGroup\": [\"Swimmers\", ...], \"Chemical\": [\"Gold\", ...], \"Food\": [\"Rice\", ...], \"IntellectualProduct\": [\"RPAM\", ...], \"ClinicalAttribute\": [\"Biomarker\", ...], \"Eukaryote\": [\"Dogs\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nGlycosylation of inositol phosphorylceramide sphingolipids is required for normal growth and reproduction in Arabidopsis Sphingolipids are a major component of plant plasma membranes and endomembranes, and mediate a diverse range of biological processes. Study of the highly glycosylated glycosyl inositol phosphorylceramide (GIPC) sphingolipids has been slow as a result of challenges associated with the extractability of GIPCs, and their functions in the plant remain poorly characterized. We recently discovered an Arabidopsis GIPC glucuronosyltransferase, INOSITOL PHOSPHORYLCERAMIDE GLUCURONOSYLTRANSFERASE 1 (IPUT1), which is the first enzyme in the GIPC glycosylation pathway. Plants homozygous for the iput1 loss-of-function mutation were unobtainable, and so the developmental effects of reduced GIPC glucuronosylation could not be analyzed in planta. Using a pollen -specific rescue construct, we have here isolated homozygous iput1 mutants. The iput1 mutants show severe dwarfism, compromised pollen tube guidance, and constitutive activation of salicyclic acid -mediated defense pathways. The mutants also possess reduced GIPC s, increased ceramides, and an increased incorporation of short-chain fatty acids and dihydroxylated bases into inositol phosphorylceramides and GIPCs. The assignment of a direct role for GIPC glycan head groups in the impaired processes in iput1 mutants is complicated by the vast compensatory changes in the sphingolipidome; however, our results reveal that the glycosylation steps of GIPC biosynthesis are important regulated components of sphingolipid metabolism. 
This study corroborates previously suggested roles for GIPC glycans in plant growth and defense, suggests important roles for them in reproduction and demonstrates that the entire sphingolipidome is sensitive to their status.", "role": "user" }, { "content": "{\"AnatomicalStructure\": [\"dwarfism\", \"mutants\", \"plasma membranes\", \"iput1\", \"iput1 mutants\", \"endomembranes\"], \"Bacterium\": [], \"BiologicFunction\": [\"pathway\", \"biological processes\", \"glycosylation\", \"sphingolipid metabolism\", \"pollen tube guidance\", \"glucuronosylation\", \"loss-of-function mutation\", \"pathways\", \"reproduction\", \"constitutive activation\", \"impaired processes\", \"growth\", \"biosynthesis\", \"plant growth\", \"Glycosylation\"], \"BiomedicalOccupationOrDiscipline\": [], \"BodySubstance\": [], \"BodySystem\": [], \"Chemical\": [\"sphingolipids\", \"GIPCs\", \"dihydroxylated bases\", \"sphingolipidome\", \"salicyclic acid\", \"ceramides\", \"inositol phosphorylceramide\", \"glycosylated glycosyl inositol phosphorylceramide\", \"glucuronosyltransferase\", \"INOSITOL PHOSPHORYLCERAMIDE GLUCURONOSYLTRANSFERASE 1\", \"IPUT1\", \"short-chain fatty acids\", \"rescue construct\", \"glycan\", \"Sphingolipids\", \"GIPC\", \"inositol phosphorylceramides\", \"enzyme\"], \"ClinicalAttribute\": [], \"Eukaryote\": [\"planta\", \"Plants\", \"Arabidopsis\", \"plant\", \"pollen\"], \"Finding\": [], \"Food\": [], \"HealthCareActivity\": [], \"InjuryOrPoisoning\": [], \"IntellectualProduct\": [], \"MedicalDevice\": [], \"Organization\": [], \"PopulationGroup\": [], \"ProfessionalOrOccupationalGroup\": [], \"ResearchActivity\": [\"study\", \"analyzed\", \"Study\"], \"SpatialConcept\": [], \"Virus\": []}", "role": "assistant" } ]
science.qasper_extractive_qa
science.qasper_extractive_qa.566.0
[ { "content": "You will be shown sections from a scientific research paper, together with a question about the paper. This is an extractive question-answering task, where you must find and extract relevant text spans directly from the paper to answer the question. Your response should strictly be a `json` object with two fields:\n\n\"answer\": An array of strings extracted directly from the paper which, collectively, answer the question.\n\"evidence\": An array of strings. Each should be an excerpt from the paper, in which one or more of the extracted answers can be found.\n\nFor example, for the question \"What baselines did the authors compare against?\", a sample response might be:\n{\n \"answer\": [\"BERT\", \"RoBERTa\"],\n \"evidence\": [\"In our experiments, we compare the performance of our model against BERT and RoBERTa.\"]\n}\n\nDo not include any text in your response other than the json.\n\nIf the question is unanswerable given the provided excerpts, respond with the single word \"null\".\n\nPaper: Deep Learning for Automatic Tracking of Tongue Surface in Real-time Ultrasound Videos, Landmarks instead of Contours\n\nOne usage of medical ultrasound imaging is to visualize and characterize human tongue shape and motion during a real-time speech to study healthy or impaired speech production. Due to the low-contrast characteristic and noisy nature of ultrasound images, it might require expertise for non-expert users to recognize tongue gestures in applications such as visual training of a second language. Moreover, quantitative analysis of tongue motion needs the tongue dorsum contour to be extracted, tracked, and visualized. Manual tongue contour extraction is a cumbersome, subjective, and error-prone task. Furthermore, it is not a feasible solution for real-time applications. The growth of deep learning has been vigorously exploited in various computer vision tasks, including ultrasound tongue contour tracking. In the current methods, the process of tongue contour extraction comprises two steps of image segmentation and post-processing. This paper presents a new novel approach of automatic and real-time tongue contour tracking using deep neural networks. In the proposed method, instead of the two-step procedure, landmarks of the tongue surface are tracked. This novel idea enables researchers in this filed to benefits from available previously annotated databases to achieve high accuracy results. Our experiment disclosed the outstanding performances of the proposed technique in terms of generalization, performance, and accuracy.\n\nIntroduction\nUltrasound technology is a widespread technology in speech research for studying tongue movement and speech articulation BIBREF0 due to its attractive characteristics, such as imaging at a reasonably rapid frame rate, which empowers researchers to envision subtle and swift gestures of the tongue in real-time. Besides, ultrasound technology is portable, relatively affordable, and clinically safe and non-invasive BIBREF1. The mid-sagittal view is regularly adapted in ultrasound data as it displays relative backness, height, and the slope of various areas of the tongue. Quantitative analysis of tongue motion needs the tongue contour to be extracted, tracked, and visualized.\nManual frame-by-frame tongue contour extraction is a cumbersome, subjective, and error-prone task. Moreover, it is not a feasible solution for real-time applications. 
In conventional techniques, for extracting ultrasound tongue contours, a discrete set of vertices are first annotated near the lower part of the tongue dorsum defining initial deformable tongue contour BIBREF2. Then, through an iterative minimization process using features of the image, the annotated contour is regulated toward the tongue dorsum region. For instance, in active contour models technique (e.g., EdgeTrak software) BIBREF3, BIBREF4, two internal and external energy functions are minimized over the image gradient. The requirement of feature extraction for each image and accurate initialization are two main drawbacks for those classical techniques. Another alternative scenario is to use semi-supervised machine learning models for automatic segmentation of tongue contour regions. Then, tongue contours are extracted automatically using post-processing stages.\nSemi-supervised machine learning-based methods BIBREF5 are first utilized for ultrasound tongue contour segmentation in an study by BIBREF6, BIBREF7 while deep learning models emerge in this field through studies by BIBREF0, BIBREF8. They fine-tuned one pre-trained decoder part of a Deep Belief Network (DBN) model to infer tongue contours from new instances. End-to-end fashion supervised deep learning techniques, outperformed previous techniques in recent years. For example, U-net BIBREF9 has been used for automatic ultrasound tongue extraction BIBREF10, BIBREF11. After successful results of deep learning methods, the focus of advanced techniques for tongue contour extraction is more on generalization and real-time performance BIBREF12, BIBREF13, BIBREF14.\nAlthough deep learning methods have been utilized successfully in many studies, manual annotation of ultrasound tongue databases is still cumbersome, and the performance of supervised methods mostly depends on the accuracy of the annotated database as well as the number of available samples. Available databases in this field are annotated by linguistics experts for many years employing landmark points on the tongue contours. In this work, we proposed a new direction for the problem of ultrasound tongue contour extraction using a deep learning technique where instead of tracking the tongue surface, landmarks on the tongue are tracked. In this way, researchers can use previously available linguistics ultrasound tongue databases. Moreover, the whole process of tongue contour extraction is performed in one step, where it increases the performance speed without comprising accuracy or generalization ability of the previous techniques.\n\nMethodology\nSimilar to facial landmark detection methods BIBREF15, we considered the problem of tongue contour extraction as a simple landmark detection and tracking. For this reason, we first developed a customized annotator software that can extract equally separated and randomized markers from segmented tongue contours in different databases. Meanwhile, the same software could fit B-spline curves on the extracted markers to revert the process for evaluation purposes.\nTo track landmarks on the tongue surface, we designed a light-version deep convolutional neural network named TongueNet. Figure FIGREF1 shows TongueNet architecture. In each layer, convolutional operations followed by ReLU activations as a non-linearity function as well as Batch normalization layers to improve the regularization, convergence, and accuracy. 
For the sake of better generalization ability of the model, in the last two layers, fully connected layers are equipped with drop-out layers of $50\\%$. To find the optimum number of required points in the output layer, we varied the number of points ($\\#$ in Figure FIGREF1) from 5 to 100 (see Figure FIGREF2 for samples of this experiment with 5, 10, 15, 20, 25, and 30 points as the output).\n\nExperimental Results and Discussion\nThere is usually a trade-off between the number of training samples and the number of trainable parameters in a deep network model BIBREF16. In general, the more data we have, the better results are generated by supervised deep learning methods. Data augmentation helps to increase the amount of training data, but a bigger dataset needs a better and most likely bigger network architecture in terms of generalization. Otherwise, the model might be over-fitted or under-fitted on the training data. Using our annotation software, we automatically extracted landmarks of 2000 images from the UOttawa database BIBREF14, which had been annotated for image segmentation tasks. The database was randomly divided into three sets: 90$\\%$ training, 5$\\%$ validation, and 5$\\%$ testing datasets. For testing, we also applied TongueNet on the UBC database BIBREF14 without any training to see the generalization ability of the model. During the training process of TongueNet, we employed online data augmentation, including rotation (-25 to 25 degrees), translation (-30 to 30 points in two directions), scaling (from 0.5 to 2 times), horizontal flipping, and combinations of these transformations; annotation point locations were also transformed correspondingly. From our extensive random search hyper-parameter tuning, the learning rate, number of iterations, mini-batch size, and number of epochs were determined as 0.0005, 1000, 30, and 10, respectively. We deployed our experiments using Keras with TensorFlow as the backend on a Windows PC with a Core i7 at 4.2 GHz, one NVIDIA 1080 GPU unit, and 32 GB of RAM. Adam optimization with a fixed momentum value of 0.9 was utilized for training.\nWe trained and tested TongueNet with different numbers of points as the output size. We first evaluated the scenario of equally spaced landmarks on the tongue surface. In this method, we captured all the points at equal distances with respect to their neighbors. Our results, from selecting five points up to the number of pixels in the horizontal axis of the image (image width), revealed that equally spaced selected points do not provide significant results. Figure FIGREF3 shows randomly selected frames from real-time tracking of TongueNet using ultrasound tongue landmarks. As can be seen from the figure, automatic selection of points as annotation, which is used in many previous studies (see BIBREF17 for an example), cannot provide accurate results. In a separate study, we extracted annotation points from the same database using a randomly spaced selection method on tongue contours. We added the restriction that points should be at a minimum distance from each other, and omitted outliers from the database. We saw that the optimum number of points was ten for our experiments. Figure FIGREF4 shows some randomly selected results from training TongueNet on the randomly selected point database. From the figure, the better accuracy of TongueNet can be seen qualitatively.
Note that we didn't apply any image enhancement or cropping to the databases.\nTo show the performance ability of TongueNet quantitatively, we first fitted B-spline curves on the instances of TongueNet using the OpenCV library. Then, we compared the value of the mean sum of distance (MSD) BIBREF8 for the TongueNet, sUNET BIBREF11, UNET BIBREF18, BowNet BIBREF13, and IrisNet BIBREF14 deep learning models. From table TABREF6, it can be seen that TongueNet could reach results similar to those of the state-of-the-art deep learning models in this field. Note that there are some approximation errors from the curve-fitting procedure of TongueNet and the skeletonization process for extracting tongue contours from the segmentation results of the other deep learning models.\nWe also tested TongueNet on a new database from the University of British Columbia (UBC, the same database used in BIBREF14) to evaluate the generalization ability of the landmark tracking technique. Although TongueNet had not been trained on that database, it could predict favorable instances for novel video frames with different data distributions. This shows the capability of TongueNet for managing over-fitting. From Table TABREF6, although the difference in MSD values between models is not significant, IrisNet could find a better MSD value. However, in terms of speed, TongueNet outperforms the other deep learning models, while post-processing time was not included in the calculation of their frame rates.\n\nConclusion and Discussion\nIn this paper, we presented TongueNet, a simple deep learning architecture, with a novel training approach for the problem of tongue contour extraction and tracking. Unlike similar studies, we tracked several points on the tongue surface instead of the whole tongue contour region. In recent tongue contour tracking for quantitative studies, a two-phase procedure is performed, including image segmentation and post-processing for extraction of the tongue contour, which increases computational costs. Furthermore, available previously annotated tongue databases in articulation studies could not be utilized for deep learning techniques until now. Using TongueNet, we provided a new tool for this literature, while old databases can now be used for developing a better tongue contour tracking technique.\nAll landmark locations in this study were annotated automatically with two different approaches, and we experienced that data augmentation has a significant role in the accuracy of TongueNet. From our experimental results, we can anticipate that if an extensive manually annotated database, possibly a combination of several databases, is employed for training of a deep learning model such as TongueNet, the accuracy of the model would be boosted considerably.
The materials of this study will help researchers in different fields, such as linguistics, to study tongue gestures in real time more easily, more accessibly, and with higher accuracy than previous methods. The current nascent TongueNet technique needs to be developed, trained, and extended as a fast, accurate, real-time, automatic method applicable to available ultrasound tongue databases.\n\nFig. 1. An overview of the network architecture. The output of the network is a vector comprising the spatial locations of individual points on the tongue surface, where # indicates the number of points in the output\n\nFig. 2. Sample frames from the experiment testing different numbers of points in the output of TongueNet.\n\nFig. 3. Sample results from testing images that were annotated by points equally spaced on the tongue surface through the width of the image. In this image we separated the space by 10 equally dispersed vertical lines.\n\nFig. 4. Instances generated by TongueNet using automatically and randomly selected landmarks on the tongue surface.\n\nFig. 5. Randomly selected frames of applying TongueNet on a new database without training on it.\n\nTable 1. Results of a comparison study for several deep learning models on the same database. Except for TongueNet, in order to calculate MSD values, tongue contours were extracted from segmented instances using a post-processing method.\n\nQuestion: How big are datasets used in experiments?", "role": "user" }, { "content": "{\"answer\": [\"2000 images\"], \"evidence\": [\"There is usually a trade-off between the number of training samples and the number of trainable parameters in a deep network model BIBREF16. In general, the more data we have, the better results are generated by supervised deep learning methods. Data augmentation helps to increase the amount of training data, but a bigger dataset needs a better and most likely bigger network architecture in terms of generalization. Otherwise, the model might be over-fitted or under-fitted on the training data. Using our annotation software, we automatically extracted landmarks of 2000 images from the UOttawa database BIBREF14, which had been annotated for image segmentation tasks. The database was randomly divided into three sets: 90$\\\\%$ training, 5$\\\\%$ validation, and 5$\\\\%$ testing datasets. For testing, we also applied TongueNet on the UBC database BIBREF14 without any training to see the generalization ability of the model. During the training process of TongueNet, we employed online data augmentation, including rotation (-25 to 25 degrees), translation (-30 to 30 points in two directions), scaling (from 0.5 to 2 times), horizontal flipping, and combinations of these transformations; annotation point locations were also transformed correspondingly. From our extensive random search hyper-parameter tuning, the learning rate, number of iterations, mini-batch size, and number of epochs were determined as 0.0005, 1000, 30, and 10, respectively. We deployed our experiments using Keras with TensorFlow as the backend on a Windows PC with a Core i7 at 4.2 GHz, one NVIDIA 1080 GPU unit, and 32 GB of RAM. Adam optimization with a fixed momentum value of 0.9 was utilized for training.\"]}", "role": "assistant" } ]
science.bc7_litcovid_topic_classification
science.bc7_litcovid_topic_classification.9992
[ { "content": "You will be presented with a paragraph from a COVID-19 literature. Your task is to classify the topics of this piece of text by choosing from the following categories:\n\n- Epidemic Forecasting: The content relates to predictions about the spread of COVID-19.\n- Treatment: The paragraph discusses therapeutic methods or medication for COVID-19.\n- Prevention: The content covers measures to prevent the spread of COVID-19.\n- Mechanism: The paragraph explains the biological mechanisms of the virus.\n- Case Report: The content is a report of a COVID-19 case.\n- Transmission: The paragraph discusses how COVID-19 is transmitted.\n- Diagnosis: The content is about diagnosing COVID-19 in individuals.\n\nYour classification should be provided as an element within a JSON array. The array should be formatted with topics enclosed in double quotes and separated by commas, following the JSON format, for example: [\"Topic1\", \"Topic2\", ...]. Include all applicable categories within the array. \nYour output should be a valid JSON array, and no additional text should be included in your response.\n\nArticle:\n\nThe ongoing pandemic spread of a new human coronavirus, SARS-CoV-2, which is associated with severe pneumonia/disease (COVID-19), has resulted in the generation of tens of thousands of virus genome sequences. The rate of genome generation is unprecedented, yet there is currently no coherent nor accepted scheme for naming the expanding phylogenetic diversity of SARS-CoV-2. Here, we present a rational and dynamic virus nomenclature that uses a phylogenetic framework to identify those lineages that contribute most to active spread. Our system is made tractable by constraining the number and depth of hierarchical lineage labels and by flagging and delabelling virus lineages that become unobserved and hence are probably inactive. By focusing on active virus lineages and those spreading to new locations, this nomenclature will assist in tracking and understanding the patterns and determinants of the global spread of SARS-CoV-2.", "role": "user" }, { "content": "[\"Mechanism\"]", "role": "assistant" } ]
science.scicite_classification
science.scicite_classification.1580
[ { "content": "You will be presented with a citation sentence from a research paper. Classify the intent behind this citation by choosing from one of the following categories:\n\n- Method: Cites the paper for its methodology or procedure.\n- Background: Cites the paper to provide background information.\n- Result: Cites the paper for its findings or results.\n\nYour answer should be a single word from the following list of options: [\"Method\", \"Background\", \"Result\"]. Do not include any other text in your response.\n\nCitation sentence:\n\nThe lower level of enrichment of Mrc1 may be partly due to instability of Mrc1 protein (41).", "role": "user" }, { "content": "Background", "role": "assistant" } ]
science.mslr2022_ms2_multidoc_summarization
science.mslr2022_ms2_multidoc_summarization.11603
[ { "content": "You will be presented with a collection of medical randomized controlled trial (RCT) reports addressing a shared clinical question -- for instance, the efficacy of a medical intervention on a disease or condition. Given the titles and abstracts of these reports, your task is to generate the conclusion of the systematic review article synthesizing the key findings. Your conclusion should be a few sentences long. In your response, include only the conclusion and no other text. The RCT reports are below.\n\n\n\nTitle 1:\nA demonstrated positive effect of a hearing conservation program in the Swedish armed forces\nAbstract 1:\nAbstract Objective : A revised hearing conservation program ( HCP ) was implemented in the Swedish Armed Forces in 2002 . The aim of this study was to evaluate the incidence of significant threshold shifts ( STS ) in male conscripts heavily exposed to noise after the implementation of the new HCP , comparing the results to those of an earlier study from 1999/2000 . Design : The study was prospect i ve and longitudinal , covering the period from reporting to military service to discharge . The outcome measure was the incidence of STS . Statistics from the military insurance system was analysed . Study Sample : A total of 395 conscripts were included in the study ( mean age 19 years ) . The control group ( n : 839 ) consisted of men of the same age . Results : In 2004/2005 the incidence rate of STS was 2.3 % compared to 7.9 % in 1999/2000 and compared to 3.7 % among the controls . The number of cases of auditory complications reported from conscripts to the insurance system has decreased , from 16 to 5/100 000 days of military training , during the last decade . Conclusions : The new HCP apparently reduced the incidence rate of STS to one third compared to before the program was introduced and leveled it to the incidence rate in the control-group not exposed to military noise\n\n\nTitle 2:\nHearing conservation program for agricultural students: short-term outcomes from a cluster-randomized trial with planned long-term follow-up.\nAbstract 2:\nOBJECTIVES ( 1 ) To conduct a contemporary analysis of historical data on short-term efficacy of a 3-year hearing conservation program conducted from 1992 to 1996 in Wisconsin , USA , with 753 high school students actively involved in farm work ; ( 2 ) to establish procedures for assessment of hearing loss for use in a recently funded follow-up of this same hearing conservation program cohort . METHODS We analyzed a pragmatic cluster-r and omized controlled trial , with schools as the unit of r and omization . Thirty-four rural schools were recruited and r and omized to intervention or control . The intervention included classroom instruction , distribution of hearing protection devices , direct mailings , noise level assessment s , and yearly audiometric testing . The control group received the audiometric testing . RESULTS Students exposed to the hearing conservation program reported more frequent use of hearing protection devices , but there was no evidence of reduced levels of noise-induced hearing loss ( NIHL ) . CONCLUSION Our analysis suggests that , since NIHL is cumulative , a 3-year study was likely not long enough to evaluate the efficacy of this intervention . While improvements in reported use of hearing protection devices were noted , the lasting impact of these behaviors is unknown and the finding merits corroboration by longer term objective hearing tests . 
A follow-up study of the cohort has recently been started\n\n\nTitle 3:\nTraining in using earplugs or using earplugs with a higher than necessary noise reduction rating? A randomized clinical trial.\nAbstract 3:\nBACKGROUND Noise-induced hearing loss ( NIHL ) is one of the most common occupational diseases and the second most common cause of workers ' cl aims for occupational injuries . OBJECTIVE Due to high prevalence of NIHL and several reports of improper use of hearing protective devices ( HPDs ) , we conducted this study to compare the effect of face-to-face training in effective use of earplugs with appropriate NRR to overprotection of workers by using earplugs with higher than necessary noise reduction rating ( NRR ) . METHODS In a r and omized clinical trial , 150 workers referred to occupational medicine clinic were r and omly allocated to three arms --- a group wearing earplugs with an NRR of 25 with no training in appropriate use of the device ; a group wearing earplugs with an NRR of 25 with training ; another group wearing earplugs with an NRR of 30 , with no training . Hearing threshold was measured in the study groups by real ear attenuation at threshold ( REAT ) method . This trial is registered with Australian New Zeal and clinical trials Registry , number ACTRN00363175 . RESULTS The mean ± SD age of the participants was 28 ± 5 ( range : 19 - 39 ) years . 42 % of participants were female . The mean noise attenuation in the group with training was 13.88 dB , significantly higher than those observed in other groups . The highest attenuation was observed in high frequencies ( 4 , 6 , and 8 kHz ) in the group with training . CONCLUSION Training in appropriate use of earplugs significantly affects the efficacy of earplugs --- even more than using an earplug with higher NRR\n\n\nTitle 4:\nNoise attenuation and proper insertion of earplugs into ear canals.\nAbstract 4:\nOBJECTIVES The study was done to determine whether the noise attenuation attained with the use of earplugs can be improved by teaching the proper insertion of earplugs to users . METHODS Fifty-four r and omly selected male subjects were divided into an untrained group ( 25 persons ) and a trained group ( 29 persons ) . The trained group was given a lecture on earplug insertion and allowed to practise the insertion procedure , whereas the untrained group acted as controls . The untrained group received this training afterwards . The success of the training was measured by the MIRE ( microphone in real ear ) and REAT ( real ear at threshold ) methods , visual evaluation and an analysis of the properties of the subjects ' ear canals . RESULTS According to the MIRE method , the averaged A-weighted noise attenuation was 21 dB for the untrained group and 31 dB for the trained group . With the REAT method the attenuation at 1000 Hz was 24 dB for the untrained group and 30 dB for the trained group . The visual evaluation of the earplug fit was 1.9 for the untrained group and 2.6 for the trained group ( scales 0 - 3 ) . CONCLUSION The results indicate that training in earplug insertion is important for good attenuation and for diminishing poor attenuation to a minimum\n\n\nTitle 5:\nEarly prognosis of noise-induced hearing loss\nAbstract 5:\nObjective Occupationally acquired noise-induced hearing loss ( NIHL ) is the most prevalent occupational disease in Austria and among the most common in many other countries . 
Because of the wide variation in hearing loss after equivalent exposures it has long been assumed that some individuals are more vulnerable to occupational NIHL than others . Earlier attempts to define predictors of NIHL before starting occupational noise exposure have largely failed . We present results of a prospect i ve study evaluating the potential of temporary threshold shift ( TTS ) after a test exposure to predict NIHL . Methods Between 1982 and 1989 , overall 311 apprentices were included into a prospect i ve study during their initial health screening visit . At this occasion , a st and ardised noise exposure was applied ( 20 min , 200–500 Hz , 100 dBA ) and the TTS at 4 kHz was determined during at least 10 min after exposure . Hearing loss was monitored at follow-up visits every 3–5 years . Follow-up was 13 years on average . Results Permanent threshold shift was predicted by duration of noise exposure , frequency of wearing noise protectors and especially by the initial TTS at 4 kHz . Using 14 dB TTS as a cut-off had 82 % sensitivity and 53 % specificity to predict 20 dB or higher levels of NIHL . Conclusions The TTS model can be successfully applied as a method to detect individuals at greater risk of occupational NIHL . It is recommended to routinely include such a procedure into initial workers ’ examinations for suitability to work under occupational noise conditions and for counselling on the use of hearing protectors\n\n\nTitle 6:\nThe feasibility of creating a checklist for the assessment of the methodological quality both of randomised and non-randomised studies of health care interventions.\nAbstract 6:\nOBJECTIVE : To test the feasibility of creating a valid and reliable checklist with the following features : appropriate for assessing both r and omised and non-r and omised studies ; provision of both an overall score for study quality and a profile of scores not only for the quality of reporting , internal validity ( bias and confounding ) and power , but also for external validity . DESIGN : A pilot version was first developed , based on epidemiological principles , review s , and existing checklists for r and omised studies . Face and content validity were assessed by three experienced review ers and reliability was determined using two raters assessing 10 r and omised and 10 non-r and omised studies . Using different raters , the checklist was revised and tested for internal consistency ( Kuder-Richardson 20 ) , test-retest and inter-rater reliability ( Spearman correlation coefficient and sign rank test ; kappa statistics ) , criterion validity , and respondent burden . MAIN RESULTS : The performance of the checklist improved considerably after revision of a pilot version . The Quality Index had high internal consistency ( KR-20 : 0.89 ) as did the subscales apart from external validity ( KR-20 : 0.54 ) . Test-retest ( r 0.88 ) and inter-rater ( r 0.75 ) reliability of the Quality Index were good . Reliability of the subscales varied from good ( bias ) to poor ( external validity ) . The Quality Index correlated highly with an existing , established instrument for assessing r and omised studies ( r 0.90 ) . There was little difference between its performance with non-r and omised and with r and omised studies . Raters took about 20 minutes to assess each paper ( range 10 to 45 minutes ) . 
CONCLUSIONS : This study has shown that it is feasible to develop a checklist that can be used to assess the method ological quality not only of r and omised controlled trials but also non-r and omised studies . It has also shown that it is possible to produce a checklist that provides a profile of the paper , alerting review ers to its particular method ological strengths and weaknesses . Further work is required to improve the checklist and the training of raters in the assessment of external validity\n\n\nTitle 7:\nPrevention of noise-induced hearing loss.\nAbstract 7:\nOBJECTIVES To review the evidence suggesting that occupational hearing conservation programs prevent noise-induced hearing loss ; to describe the features , prevalence , and h and icap associated with noise-induced hearing loss ; and to describe the otolaryngologist 's role in prevention of noise-induced hearing loss . DATA SOURCES Recent statements from the American College of Occupational Medicine , the National Institutes of Health , and the International Organization for St and ardization are supplemented by published articles known to the author ( no formal literature search was done ) . STUDY SELECTION Articles that purported to measure the effects of occupational hearing conservation programs on preventing noise-induced hearing loss . DATA EXTRACTION Each article was separately critiqued without a priori assessment criteria . DATA SYNTHESIS Several studies suggest that occupational hearing conservation programs prevent noise-induced hearing loss , but none of these are conclusive . No r and omized clinical trial has been reported , of which I am aware , and previously reported data suffer from one or more of the following shortcomings : failure to match treatment and control groups for age , nonoccupational noise exposure , and /or prior hearing loss ; failure to control for audiometric learning effects ; and inclusion of workers who had already worked long careers without hearing protection ( and thus had little risk of additional noise-induced loss ) . CONCLUSION Although noise reduction for individuals obviously can prevent noise-induced hearing loss , to my knowledge , no single study offers convincing evidence of the efficacy of occupational hearing conservation programs , primarily due to method ologic flaws\n\n\nTitle 8:\nA study of hearing changes among military conscripts in the Swedish Army\nAbstract 8:\nThe aim of the study was to investigate the incidence and the relative risk of significant threshold shift ( STS , ≥15 dB deterioration at any ear and audiometric frequency ) during primary military service ( 7–9 months ) , and to investigate whether subjects with an initial slight hearing loss ( thresholds ≥ 25 dB HL at any audiometric frequency and ear ) were under increased risk . The investigation was made as a prospect i ve audiometric study and included 747 men . An age-matched group of 138 individuals served as an unexposed control group , whose incidence of STS was 2.9 % . In the exposed group the incidence was 7.9 % and the relative risk 2.7 risk ratio ( RR ) . In the subgroup of 95 persons , who already at reporting-for-training had a mild hearing loss , the incidence was 17 % . The relative risk for STS in this group compared to the control group was 6.8 ( RR ) , and compared to those with normal hearing at reporting was 3.1 ( RR ) . In spite of hearing conservation efforts , hearing deterioration still occurs , above all in the artillery . 
Those who already at reporting-for-training had a mild hearing loss were at higher risk for STS compared to those with initially normal hearing\n\n\nTitle 9:\nWhen are randomised trials unnecessary? Picking signal from noise\nAbstract 9:\nAlthough r and omised trials are widely accepted as the ideal way of obtaining unbiased estimates of treatment effects , some treatments have dramatic effects that are highly unlikely to reflect inadequately controlled biases . We compiled a list of historical examples of such effects and identified the features of convincing inferences about treatment effects from sources other than r and omised trials . A unifying principle is the size of the treatment effect ( signal ) relative to the expected prognosis ( noise ) of the condition . A treatment effect is inferred most confidently when the signal to noise ratio is large and its timing is rapid compared with the natural course of the condition . For the examples we considered in detail the rate ratio often exceeds 10 and thus is highly unlikely to reflect bias or factors other than a treatment effect . This model may help to reduce controversy about evidence for treatments whose effects are so dramatic that r and omised trials are unnecessary . The relation between a treatment and its effect is sometimes so dramatic that bias can be ruled out as an explanation . Paul Glasziou and colleagues suggest how to determine when observations speak for", "role": "user" }, { "content": "Earmuffs might perform better than earplugs in high noise levels but worse in low noise levels ( very low- quality evidence ) .\nBetter use of HPDs might reduce hearing loss but other components not ( very low- quality evidence ) .\nHearing loss prevention and interventions modestly reduce noise exposure and hearing loss .", "role": "assistant" } ]
science.multixscience_multidoc_summarization
science.multixscience_multidoc_summarization.28068
[ { "content": "You are provided with the abstract of a scientific paper and the abstracts of the papers it references. Your task is to write a Related Work section for the main paper. This section should concisely summarize and integrate the key points from the cited abstracts, reflecting how they relate to the main paper's contributions and context. In your summary, use numerical citation markers like [1], [2], etc., to refer to the corresponding papers.\n\nMain Abstract:\nBy exploiting multipath fading channels as a source of common randomness, physical layer (PHY) based key generation protocols allow two terminals with correlated observations to generate secret keys with information-theoretical security. The state of the art, however, still suffers from major limitations, e.g., low key generation rate, lower entropy of key bits and a high reliance on node mobility. In this paper, a novel cooperative key generation protocol is developed to facilitate high-rate key generation in narrowband fading channels, where two keying nodes extract the phase randomness of the fading channel with the aid of relay node(s). For the first time, we explicitly consider the effect of estimation methods on the extraction of secret key bits from the underlying fading channels and focus on a popular statistical method--maximum likelihood estimation (MLE). The performance of the cooperative key generation scheme is extensively evaluated theoretically. We successfully establish both a theoretical upper bound on the maximum secret key rate from mutual information of correlated random sources and a more practical upper bound from Cramer-Rao bound (CRB) in estimation theory. Numerical examples and simulation studies are also presented to demonstrate the performance of the cooperative key generation system. The results show that the key rate can be improved by a couple of orders of magnitude compared to the existing approaches.\n\nCited Abstract(s):\n[1]: All information-theoretically secure key agreement protocols (e.g. based on quantum cryptography or on noisy channels) described in the literature are secure only against passive adversaries in the sense that they assume the existence of an authenticated public channel. The goal of this paper is to investigate information-theoretic security even against active adversaries with complete control over the communication channel connecting the two parties who want to agree on a secret key. Several impossibility results are proved and some scenarios are characterized in which secret-key agreement secure against active adversaries is possible. In particular, when each of the parties, including the adversary, can observe a sequence of random variables that are correlated between the parties, the rate at which key agreement against active adversaries is possible is characterized completely: it is either 0 or equal to the rate achievable against passive adversaries, and the condition for distinguishing between the two cases is given.\n\n[2]: This is the first part of a three-part paper on secret-key agreement secure against active adversaries. In all three parts, we address the question whether two parties, knowing some correlated pieces of information X and Y, respectively, can generate a string S about which an adversary, knowing some information Z and having read and write access to the communication channel used by the legitimate partners, is almost completely ignorant. 
Whether such key agreement is possible, and if yes at which rate, is an inherent property of the joint probability distribution PXYZ. In this part, we first prove a number of general impossibility results. We then consider the important special case where the legitimate partners as well as the adversary have access to the outcomes of many independent repetitions of a fixed tripartite random experiment. In this case, the result characterizing the possibility of secret-key agreement secure against active adversaries is of all-or-nothing nature: either a secret key can be generated at the same rate as in the (well-studied) passive-adversary case, or such secret-key agreement is completely impossible. The exact condition characterizing the two cases is presented.\n\n[3]: Secure wireless communications is a challenging problem due to the shared nature of the wireless medium. Most existing security protocols apply cryptographic techniques for bit scrambling at the application layer by exploiting a shared secret key between pairs of communicating nodes. However, more recent research argues that multipath propagation - a salient feature of wireless channels - provides a physical resource for secure communications. In this context, we propose a protocol that exploits the inherent randomness in multipath wireless channels for generating secret keys through channel estimation and quantization. Our approach is particularly attractive in wideband channels which exhibit a large number of statistically independent degrees of freedom (DoF), thereby enabling the generation of large, more-secure, keys. We show that the resulting keys are distinct for distinct pairwise links with a probability that increases exponentially with the key-size channel DoF. We also characterize the probability that the two users sharing a common link generate the same key. This characterization is used to analyze the energy consumption in successful acquisition of a secret key by the two users. For a given key size, our results show that there is an optimum transmit power, and an optimum quantization strategy, that minimizes the energy consumption. The proposed approach to secret key generation through channel quantization also obviates the problem of key pre-distribution inherent to many existing cryptographic approaches.\n\n[4]: Abstract Hassan, A. A., Stark, W. E., Hershey, J. E., and Chennakeshu, S., Cryptographic Key Agreement for Mobile Radio, Digital Signal Processing 6 (1996), 207–212. The problem of establishing a mutually held secret cryptographic key using a radio channel is addressed. The performance of a particular key distribution system is evaluated for a practical mobile radio communications system. The performance measure taken is probabilistic, and different from the Shannon measure of perfect secrecy. In particular, it is shown that by using a channel decoder, the probability of two users establishing a secret key is close to one, while the probability of an adversary generating the same key is close to zero. The number of possible keys is large enough that exhaustive search is impractical.\n\n[5]: THE problems of cryptography and secrecy systems furnish an interesting application of communication theory.1 In this paper a theory of secrecy systems is developed. The approach is on a theoretical level and is intended to complement the treatment found in standard works on cryptography.2 There, a detailed study is made of the many standard types of codes and ciphers, and of the ways of breaking them. 
We will be more concerned with the general mathematical structure and properties of secrecy systems.\n\n[6]: As the first part of a study of problems involving common randomness at distant locations, information-theoretic models of secret sharing (generating a common random key at two terminals, without letting an eavesdropper obtain information about this key) are considered. The concept of key-capacity is defined. Single-letter formulas of key-capacity are obtained for several models, and bounds to key-capacity are derived for other models.", "role": "user" }, { "content": "The PHY based key generation can be traced back to the original information-theoretic formulation of secure communication due to [5] . Building on information theory, [1] [2] [6] characterized the fundamental bounds and showed the feasibility of generating keys using external random source-channel impulse response. To the best of our knowledge, the first key generation scheme suitable for wireless networks was proposed in [4] . In [4] , the differential phase between two frequency tones is encoded for key generation. Error control coding techniques are used for enhancing the reliability of key generation. Similar to [4] , a technique of using random phase for extracting secret keys in an OFDM system through channel estimation and quantization was recently proposed in [3] . This paper characterized the probability of generating the same bit vector between two nodes as a function of signal-to-interference-and-noise ratio (SINR) and quantization levels.", "role": "assistant" } ]
science.scientific_papers_summarization_single_doc_arxiv
science.scientific_papers_summarization_single_doc_arxiv.104269
[ { "content": "Write an abstract for the research paper below. Your response should only include the abstract and no additional text.\n\nArticle text:\nrecent proposals for symanzik improvement of the staggered fermion action have produced encouraging results , particularly in an improved flavor and rotational symmetry @xcite .\nadding a few terms to the conventional kogut - susskind action , namely three - link , five - link and seven - link staples and a third - neighbor coupling , removes all tree - level @xmath1 errors @xcite .\nthe computational price for using this action is quark mass dependent , and is roughly a factor of 2.5 for the lightest masses we are using .\nhere we extend previous work with our preferred `` asqtad '' action , studying scaling in the quenched approximation , and exploring the effects of dynamical quarks on the light hadron spectrum and the heavy - quark potential .\nwe have accumulated a library of gauge configurations , both quenched ( one - loop symanzik improved gauge action ) and dynamical .\nour [ cols=\"<,<,^,<,<\",options=\"header \" , ] @xmath2 sample is currently being enlarged .\ndynamical fermion lattices are generated in the presence of two lighter flavors ( @xmath3 , @xmath4 ) and one heavier flavor ( @xmath5 ) .\nfor the dynamical fermion sample the gauge coupling is tuned so that the lattice spacing remains the same as the quark mass is varied , allowing an exploration of quark mass effects independent of scale .\nthe current parameter set is shown in the table above .\nto measure the degree of residual flavor symmetry breaking , we compute the mass of the local non - goldstone pion ( @xmath6 ) as a function of lattice spacing at a fixed value of the goldstone pion mass ( @xmath7 ) on a series of quenched lattices @xcite . in particular , we fix @xmath8 ( goldstone @xmath9 ) with the scale @xmath10 set by the force @xmath11 between static quarks @xmath12 , a variant of sommer s scale@xcite .\nthe value of @xmath13 depends on the quark masses , varying from 1.376(2 ) for the quenched theory to 1.44(1 ) for physical quark masses .\nthe effect of improving the fermion action is shown in fig .\n[ fig : pi05_vs_m_2 ] .\nit is expected that the curves for all actions extrapolate to the goldstone pion mass .\nscaling of the @xmath14 and nucleon masses is shown in figs .\n[ fig : nuc_vs_m_2 ] and [ fig : rho_vs_m_2 ] . results for variants of the wilson - clover action are also shown @xcite .\nfor these masses , the asqtad action gives the best result .\nnonetheless , it is encouraging that staggered fermion and wilson - clover simulations agree in the continuum limit .\nto determine the effect of dynamical quarks on the light hadron spectrum , we ran a series of quenched and @xmath15 dynamical fermion simulations with gauge couplings tuned to fix the lattice spacing @xmath16 fm , using @xmath17 lattices for both the quenched and dynamical runs .\nthe plotted points are from the runs described in table i with light quark masses down to @xmath18 and from the quenched run at @xmath19 . 
the resulting edinburgh plot in fig .\n[ fig : edin_both ] shows no discernible change when dynamical quarks are introduced .\nthis @xmath15 flavor @xmath0 fm result appears to be in conflict with our previous claims , based on the conventional action with two sea quark flavors in the @xmath20 limit @xcite .\nwe are currently investigating whether the difference comes from the number of flavors , the continuum extrapolation , or some other source of systematic error .\ndynamical quark loops modify the heavy quark potential @xcite , _ e.g. _ , by opening the two - meson decay channel and by altering the running of the coulomb coupling . in fig .\n[ fig : potmatch_phys.b800b685l20 ] we plot the heavy quark potential with and without @xmath21 flavors of sea quarks of mass @xmath22 .\nthe vertical scale is adjusted to give agreement at @xmath10 , and since the force at @xmath10 is used to set the scale , the two potentials have the same slope there . to the extent the potential changes shape under the influence of sea quarks , setting the lattice scale from the heavy quark potential clearly has limitations .\nthis point is illustrated in fig .\n[ fig : r1_sq_sigma ] , where we plot the ratio of two possible heavy - quark - potential - inspired scales as a function of the ratio of the pion to rho mass .\nas the pion mass drops to zero with decreasing quark mass , one expects to see evidence for the decay of heavier mesons , such as @xmath23 and @xmath24 .\nthe mixing between a resonant state and a two - meson decay state should result in avoided level crossing in the spectrum of the resonant state .\nwhile such decays are natural when dynamical quarks are present , quenched simulations may also show hints of decay in some cases , through `` hairpin '' diagrams , although at different rates .\nprevious attempts to observe effects of meson decay in the @xmath14 spectrum were unsuccessful @xcite . here\nwe report results of a preliminary attempt to observe effects of the decay @xmath25 .\nthis threshold is easier to approach than the @xmath26 threshold because the final pseudoscalars do nt have to carry away angular momentum , so they can both be at zero momentum .\nhowever , it is difficult , because with staggered quarks the @xmath27 appears in the same propagators as the pions , so we must extract the alternating exponential from a propagator containing a much larger simple exponential .\nwe measure the @xmath27 mass ( _ i.e. _ lowest energy in the @xmath27 ( @xmath28 ) channel ) as a function of quark mass .\nthis is done on quenched and dynamical fermion configurations .\npreliminary results , shown in fig . 
[ fig : a0_decay_clean ] , suggest that the lowest @xmath28 mass drops more rapidly with decreasing quark mass in the dynamical fermion simulation .\nthe drop appears to follow the threshold energy of the decay channel , also shown .\nsuch a result would be expected from avoided level crossing .\nin fact , for the lighter quark masses it is very difficult to extract masses for the quenched @xmath27 .\nthis may be a signal of the non - unitarized ( because of the quenching ) couplings to two particle states .\nwe continue studies using a recently proposed improved staggered fermion action .\nwe find that the light hadron spectrum in our currently preferred asqtad action scales very well .\nwe have begun a systematic study of quark loop effects with @xmath15 flavors .\nour present sample of gauge configurations has been constructed so that the lattice spacing is kept constant as the quark masses are varied .\nwe see no quark loop effects in the light hadron spectrum with @xmath15 flavors at a lattice spacing of @xmath0 fm . however , sea quarks evidently modify the heavy quark potential both at short and long distance , suggesting caution in using this potential to set a precision lattice scale at unphysical quark masses .\npreliminary determinations of the mass of the @xmath27 as a function of quark mass suggest evidence for the decay to @xmath29 .\nlage and d.k .\nsinclair , nucl .\n( proc . suppl . )\n* 63 * ( 1998 ) 892 ; phys .\nd * 59 * ( 1999 ) 014511 .\nk. orginos and d. toussaint , phys .\nd * 59 * ( 1999 ) 014501 ; nucl .\nb ( proc . suppl . )\n* 73 * ( 1999 ) 909 .", "role": "user" }, { "content": "we have been studying effects of dynamical quarks on various hadronic observables , using our recently formulated improvement for staggered fermions . to illustrate improvement , we show that the light hadron spectrum in the quenched approximation gives remarkably good scaling . \n we highlight three new results : ( 1 ) we find no apparent quark loop effects in the edinburgh plot with 2 + 1 flavors of dynamical quarks at @xmath0 fm . \n ( 2 ) we show that dynamical quarks modify the shape of the heavy quark potential . \n ( 3 ) we present results hinting at meson decay effects in light hadron spectroscopy .", "role": "assistant" } ]
science.drug_combo_extraction_re
science.drug_combo_extraction_re.1356
[ { "content": "You will be shown a paragraph from a biomedical research article. Your task is to extract all drug combinations appearing in a specific sentence within the paragraph. Each drug combination should be assigned to one of the following three categories:\n\n- POS: Indicates a positive interaction between the drugs in the combination; for instance, drugs that have an additive or synergistic effect.\n- NEG: Indicates a negative interaction between the drugs in the combination; for instance, combining these drugs might result in adverse side effects, or might reduce their efficacy.\n- COMB: Indicates that the drugs are given in combination, but the type of interaction between the drugs cannot be determined based on the provided context.\n\nPlease format your answer as a JSON array. Each entry in the array should itself be an array formatted as [<drug_1>, <drug_2>, ..., <drug_n>, <category>] where <category> is one of the three categories listed above and <drug_1>, ..., <drug_n> are drug names copied verbatim from the input sentence. Different combinations may involve different numbers of drugs; in other words, the `n` in <drug_n> may vary.\n\nAs an example, the following output describes two extracted drug combinations; one between two drugs, and one between three drugs:\n\n[[\"artesunate\", \"artemether\", \"COMB\"], [\"Nifedipine\", \"Labetalol\", \"Prazosin\", \"POS\"]]\n\nThe paragraph and specific sentence are below. You should only extract relationships expressed in the specific sentence; the paragraph is provided for context.\n\nIn your output, return only the JSON array and no other text. If no drug combinations appear, return an empty array `[]`.\n\nparagraph: Effects of fenofibrate and its combination with lovastatin on the expression of genes involved in skeletal muscle atrophy , including FoxO1 and its targets . Fibrates and statins have been widely used to reduce triglyceride and cholesterol levels, respectively. Besides its lipid-lowering effect, the side effect of muscle atrophy after fibrate administration to humans has been demonstrated in some studies. Combination therapy with fibrates and statins also increases the risk of rhabdomyolysis. FoxO1, a member of the FoxO forkhead type transcription factor family, is markedly upregulated in skeletal muscle in energy-deprived states and induces muscle atrophy via the expression of E3-ubiquitine ligases. In this study, we investigated the changes in FoxO1 and its targets in murine skeletal muscle with fenofibrate treatment. High doses of fenofibrate (greater than 0.5% (wt/wt)) over one week increased the expression of FoxO1 and its targets in the skeletal muscles of mice and decreased skeletal muscle weight. These fenofibrate-induced changes were diminished in the PPARα knockout mice. When the effect of combination treatment with fenofibrate and lovastatin was investigated, a significant increase in FoxO1 protein levels was observed despite the lack of deterioration of muscle atrophy. Collectively, our findings suggest that a high dose of fenofibrate over one week causes skeletal muscle atrophy via enhancement of FoxO1, and combination treatment with fenofibrate and lovastatin may further increase FoxO1 protein level.\n\nsentence: Effects of fenofibrate and its combination with lovastatin on the expression of genes involved in skeletal muscle atrophy , including FoxO1 and its targets .", "role": "user" }, { "content": "[[\"fenofibrate\", \"lovastatin\", \"NEG\"]]", "role": "assistant" } ]
science.genia_ner
science.genia_ner.12
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"RNA\", \"CellType\", \"CellLine\", \"Protein\", \"DNA\"].\n\nPlease return the output as a JSON object of the format: {\"DNA\" : [\"TAL1\", ...], \"RNA\": [\"mRNAs\", ...], \"CellLine\": [\"T cells\", ...], \"CellType\": [\"monocytes\", ...], \"Protein\": [\"IL-2\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nPositive and negative regulation of granulocyte-macrophage colony-stimulating factor promoter activity by AML1-related transcription factor, PEBP2. The granulocyte-macrophage colony-stimulating factor (GM-CSF) gene promoter contains a consensus sequence for the polyomavirus enhancer binding-protein 2 (PEBP2) transcription factor, which consists of alpha and beta subunits. There are at least two genes, alpha A and alpha B, encoding the alpha subunit. alpha B is the mouse homologue of human AML1 gene detected at the breakpoints of t (8; 21) and t (3; 21) myeloid leukemias. We examined alpha A1 (an alpha A-gene product) and alpha B1 and alpha B2 (two alpha B-encoded isomers) for their effects on the GM-CSF promoter. PEBP2 alpha A1, alpha B1, and alpha B2 proteins bound the PEBP2 site within the mouse GM-CSF promoter. PEBP2 alpha A1 and alpha B1 enhanced the expression of the GM-CSF promoter-driven reporter plasmid in unstimulated and 12-O-tetradecanoylphorbol 13-acetate / phytohemagglutinin-stimulated human Jurkat T cells. In contrast, the promoter activity was suppressed by alpha B2. Coexpression of alpha B1 and alpha B2 showed that the promoter activity could be determined by the alpha B1 / alpha B2 ratio. Jurkat cell extract contained PEBP2 site-binding protein (s) that cross-reacted with antimouse alpha A1 antibodies. Northern blot analysis indicated the expression of human PEBP2 alpha A, alpha B (AML1), and beta genes in Jurkat cells. 
Although further studies are required to determine the precise role of PEBP2 in the GM-CSF promoter activity, the present findings suggested the importance of the relative ratio of different PEBP2 isoforms in regulating the levels of the promoter activity.", "role": "user" }, { "content": "{\"CellLine\": [\"unstimulated and 12 - O - tetradecanoylphorbol 13 - acetate / phytohemagglutinin - stimulated human Jurkat T cells\", \"Jurkat cells\"], \"CellType\": [], \"DNA\": [\"mouse homologue\", \"human AML1 gene\", \"mouse GM - CSF promoter\", \"GM - CSF promoter\", \"alpha A\", \"granulocyte - macrophage colony - stimulating factor (GM - CSF) gene promoter\", \"GM - CSF promoter - driven reporter plasmid\", \"beta genes\", \"AML1\", \"PEBP2 site\", \"alpha B\"], \"Protein\": [\"PEBP2 alpha A1\", \"alpha subunit\", \"polyomavirus enhancer binding - protein 2 (PEBP2) transcription factor\", \"alpha B - encoded isomers\", \"PEBP2 alpha A\", \"polyomavirus enhancer binding - protein 2\", \"AML1 - related transcription factor\", \"alpha B2\", \"PEBP2 site - binding protein\", \"alpha B2 proteins\", \"/ alpha B2\", \"alpha B1\", \"GM - CSF\", \"alpha and beta subunits\", \"antimouse alpha A1 antibodies\", \"granulocyte - macrophage colony - stimulating factor\", \"alpha A - gene product\", \"PEBP2 isoforms\", \"PEBP2\", \"alpha A1\", \"alpha B\"], \"RNA\": []}", "role": "assistant" } ]
science.chia_ner
science.chia_ner.1038
[ { "content": "You will be shown inclusion or exclusion criteria for a clinical trial. Given these criteria, your task is to extract all unique entities of the following types: [\"Person\", \"Condition\", \"Value\", \"ReferencePoint\", \"Temporal\", \"Qualifier\", \"Observation\", \"Visit\", \"Measurement\", \"Mood\", \"Device\", \"Scope\", \"Negation\", \"Drug\", \"Procedure\", \"Multiplier\"].\n\nPlease return the output as a JSON object of the format: {\"Condition\": [\"breast cancer\", ...], \"Device\": [\"ports\", ...], \"Drug\": [\"Atenolol\", ...], \"Measurement\": [\"blood pressure\", ...], \"Mood\": [\"suspected\", ...], \"Multiplier\": [\"more than two\", ...], \"Negation\": [\"no\", ...], \"Observation\": [\"history\", ...], \"Person\": [\"male\", ...], \"Procedure\": [\"surgery\", ...], \"Qualifier\": [\"severe\", ...], \"ReferencePoint\": [\"enrollment\", ...], \"Scope\": [\"infection\", ...], \"Temporal\": [\"one day\", ...], \"Value\": [\"50%\", ...], \"Visit\": [\"ICU\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nExisting sacral pressure ulcer, undergoing a cardiac procedure, or inability to provide informed consent.", "role": "user" }, { "content": "{\"Condition\": [\"inability to provide informed consent\", \"sacral pressure ulcer\"], \"Device\": [], \"Drug\": [], \"Measurement\": [], \"Mood\": [], \"Multiplier\": [], \"Negation\": [], \"Observation\": [], \"Person\": [], \"Procedure\": [\"cardiac procedure\"], \"Qualifier\": [], \"ReferencePoint\": [], \"Scope\": [], \"Temporal\": [], \"Value\": [], \"Visit\": []}", "role": "assistant" } ]
science.scientific_papers_summarization_single_doc_pubmed
science.scientific_papers_summarization_single_doc_pubmed.110948
[ { "content": "You will be presented with the full text of a biomedical research paper. Please write an abstract for this paper. Your response should only include the abstract and no additional text.\n\nPaper text:\nectodermal dysplasias ( ed ) as defined by the national foundation for ed are a group of genetic disorders in which there are congenital birth defects involving two or more ectodermal structures .\neds are mainly divided into two groups : the hidrotic form known as clouston syndrome and anhidrotic or hypohidrotic form known as christ \nsiemens touraine syndrome . the diagnosis of christ siemens touraine syndrome is mainly clinical , manifesting as absent or reduced sweating , sparse body hair , absent to abnormally shaped tooth along with specific facial features . here\n, we report a case of christ siemens touraine syndrome with palmoplantar keratoderma , which is more common in hidrotic form of ed .\na 22-year - old male presented with a history of generalized dryness , absence of sweating with episodes of hyperpyrexia since early childhood .\nthe patient also complained of loss of body hair , dental abnormalities , nonprogressive hoarseness of voice , and thickness of the palms and soles .\nthere was a family history of similar illness in his younger brother [ figure 1 ] .\ndepressed nasal bridge , increased intercanthal distance , periorificial hyperpigmentation and wrinkled skin , flat and wide nose , thick everted lips , malar hypoplasia , a bulging forehead , and pointed ears [ figure 2 ] .\nthese findings were consistent with the diagnosis of anhidrotic ed . photograph showing characteristic facial features including frontal bossing , sparse scalp and eyelash hair , prominent supraorbital ridge , depressed nasal bridge , and periorbital hyperpigmentation photograph showing abnormally shaped conical teeth photograph showing palmoplantar keratoderma photograph showing pedigree analysis\nthe eds are a group of inherited disorders that share in common developmental defects involving at least two of the major ectodermal structures ; hair , teeth , nails , or sweat glands . the first case of ed\nwas reported by thurman in the year 1848 , but the term ed was later coined by weeck in 1929 .\ndifferent subgroups are created according to the presence or absence of the four primary ed defects : ed1 : trichodysplasia ; ed2 : dental dysplasia ; ed3 : onychodysplasia ; ed4 : dyshidrosis . based on the above , the 192 different types of eds are categorized into one of the following subgroups made up from the primary ed defects .\nclinically , eds are broadly divided into hypohidrotic or anhidrotic form , termed as christ \nsiemens touraine syndrome , subgroup 1 - 2 - 3 - 4 , which is x - linked recessive and characterized by the classical triad of hypodontia , hypotrichosis , and hypohidrosis . 
the other hidrotic form described by clouston\n, subgroup 1 - 2 - 3 , usually spares the sweat glands and can affect teeth , nails and hair .\ndiagnosis of eds is based mainly on the clinical grounds and other modalities include : sweat pore counts , pilocarpine iontophoresis ; radiographic evaluation of various anomalies ; prenatal diagnosis using the genetic mutation analysis ; and biopsy of the mucus membranes and skin .\nanhidrotic ed has been mapped to xq12-q13 , causing mutation in a novel transmembrane protein , ectodysplasin a , mainly involving ectodermal structures such as epidermis and its annexes ( hair and nails ) , although nonectodermal tissue may also become involved .\nthe most common manifestation is the reduction or complete absence of eccrine sweat glands resulting in decreased or absent sweating , heat intolerance , and recurrent hyperpyrexia .\nscalp hair is sparse , fine , lightly pigmented , and grows slowly with scant or absent eyebrows .\nteeth abnormalities range from complete absence of teeth ( anodontia ) to sparse ; abnormally shaped teeth .\ncraniofacial features are square forehead with frontal bossing , prominent supraorbital ridge , concave midface , saddle nose , everted lips , and periorbital hyperpigmentation and fine wrinkling around the eyes . in our case\nbesides these , palmoplantar keratoderma was also present in our case , which is usually observed in hidrotic forms of ed . to the best of our knowledge\n, there are only two case reports of association of anhidrotic ed and palmoplantar keratoderma .\nneonates may demonstrate marked scaling or peeling simulating a collodion baby and in later age , eczema and xerosis are common especially over the flexures having a chronic course .\nother rare systemic manifestations include atrophic rhinitis , recurrent sinusitis and upper and lower respiratory tract infections , infantile feeding problems , hoarseness of voice , xerostomia , and impacted cerumen .\nthese manifestations are due to diminished or absent mucous glands of the respective areas . however , no such manifestations were observed in our case .\na multidisciplinary approach is usually required for early diagnosis and prevention and management of several complications , some of which are fatal .\nthe presence of palmoplantar keratoderma in a case of ed supports the diagnosis of hidrotic eds .\nbut , in our case of anhidrotic ed , palmoplantar keratoderma was present , which is an unusual association .\ntill date , there are only two case reports of this association making our case an interesting one .\nthe authors certify that they have obtained all appropriate patient consent forms . in the form the patient(s )\nhas / have given his / her / their consent for his / her / their images and other clinical information to be reported in the journal .\nthe patients understand that their names and initials will not be published and due efforts will be made to conceal their identity , but anonymity can not be guaranteed .\nthe authors certify that they have obtained all appropriate patient consent forms . 
in the form the patient(s )\nhas / have given his / her / their consent for his / her / their images and other clinical information to be reported in the journal .\nthe patients understand that their names and initials will not be published and due efforts will be made to conceal their identity , but anonymity can not be guaranteed .", "role": "user" }, { "content": "christ siemens touraine syndrome is a form of anhidrotic ectodermal dysplasia ( ed ) characterized by triad of hypodontia , hypotrichosis , and hypohidrosis . \n palmoplantar keratoderma is a characteristic feature of hidrotic forms of ed . till date , \n only two cases have been reported of christ siemens touraine syndrome with palmoplantar keratoderma ; here we report a similar case emphasizing this rare association .", "role": "assistant" } ]
science.scicite_classification
science.scicite_classification.6979
[ { "content": "You will be presented with a citation sentence from a research paper. Classify the intent behind this citation by choosing from one of the following categories:\n\n- Method: Cites the paper for its methodology or procedure.\n- Background: Cites the paper to provide background information.\n- Result: Cites the paper for its findings or results.\n\nYour answer should be a single word from the following list of options: [\"Method\", \"Background\", \"Result\"]. Do not include any other text in your response.\n\nCitation sentence:\n\nThis beneficial outcome of Nigella Sativa came in accordance with the results of Hildrum et al. ( 2007); Najmi et al. (2008) and Parhizkar et al. (2011).", "role": "user" }, { "content": "Result", "role": "assistant" } ]
science.chemprot_ner
science.chemprot_ner.876
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Chemical\", \"Protein\"].\n\nPlease return the output as a JSON object of the format: {\"Chemical\" : [\"Dexamethasone\", ...], \"Protein\" : [\"BRCA-1\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nThioredoxin-mimetic peptides (TXM) reverse auranofin induced apoptosis and restore insulin secretion in insulinoma cells.\nThe thioredoxin reductase/thioredoxin system (TrxR/Trx1) plays a major role in protecting cells from oxidative stress. Disruption of the TrxR-Trx1 system keeps Trx1 in the oxidized state leading to cell death through activation of the ASK1-Trx1 apoptotic pathway. The potential mechanism and ability of tri- and tetra-oligopeptides derived from the canonical -CxxC- motif of the Trx1-active site to mimic and enhance Trx1 cellular activity was examined. The Trx mimetics peptides (TXM) protected insulinoma INS 832/13 cells from oxidative stress induced by selectively inhibiting TrxR with auranofin (AuF). TXM reversed the AuF-effects preventing apoptosis, and increasing cell-viability. The TXM peptides were effective in inhibiting AuF-induced MAPK, JNK and p38(MAPK) phosphorylation, in correlation with preventing caspase-3 cleavage and thereby PARP-1 dissociation. The ability to form a disulfide-bridge-like conformation was estimated from molecular dynamics simulations. The TXM peptides restored insulin secretion and displayed Trx1 denitrosylase activity. Their potency was 10-100-fold higher than redox reagents like NAC, AD4, or ascorbic acid. Unable to reverse ERK1/2 phosphorylation, TXM-CB3 (NAc-Cys-Pro-Cys amide) appeared to function in part, through inhibiting ASK1-Trx dissociation. These highly effective anti-apoptotic effects of Trx1 mimetic peptides exhibited in INS 832/13 cells could become valuable in treating adverse oxidative-stress related disorders such as diabetes.", "role": "user" }, { "content": "{\"Chemical\": [\"disulfide\", \"auranofin\", \"AD4\", \"NAC\", \"NAc-Cys-Pro-Cys amide\", \"ascorbic acid\", \"AuF\"], \"Protein\": [\"ASK1\", \"p38\", \"caspase-3\", \"CxxC- motif\", \"PARP-1\", \"ERK1/2\", \"JNK\", \"thioredoxin reductase\", \"thioredoxin\", \"insulin\", \"Trx\", \"denitrosylase\", \"MAPK\", \"TrxR\", \"Trx1\", \"Thioredoxin\"]}", "role": "assistant" } ]
science.multixscience_multidoc_summarization
science.multixscience_multidoc_summarization.16819
[ { "content": "You are provided with the abstract of a scientific paper and the abstracts of the papers it references. Your task is to write a Related Work section for the main paper. This section should concisely summarize and integrate the key points from the cited abstracts, reflecting how they relate to the main paper's contributions and context. In your summary, use numerical citation markers like [1], [2], etc., to refer to the corresponding papers.\n\nMain Abstract:\nMachine learning (ML) over relational data is a booming area of the database industry and academia. While several projects aim to build scalable and fast ML systems, little work has addressed the pains of sourcing data and features for ML tasks. Real-world relational databases typically have many tables (often, dozens) and data scientists often struggle to even obtain and join all possible tables that provide features for ML. In this context, showed recently that key-foreign key dependencies (KFKDs) between tables often lets us avoid such joins without significantly affecting prediction accuracy--an idea they called avoiding joins safely. While initially controversial, this idea has since been used by multiple companies to reduce the burden of data sourcing for ML. But their work applied only to linear classifiers. In this work, we verify if their results hold for three popular complex classifiers: decision trees, SVMs, and ANNs. We conduct an extensive experimental study using both real-world datasets and simulations to analyze the effects of avoiding KFK joins on such models. Our results show that these high-capacity classifiers are surprisingly and counter-intuitively more robust to avoiding KFK joins compared to linear classifiers, refuting an intuition from the prior work's analysis. We explain this behavior intuitively and identify open questions at the intersection of data management and ML theoretical research. All of our code and datasets are available for download from this http URL.\n\nCited Abstract(s):\n[1]: \n\n[2]: \n\n[3]: Since the emergence of extensive multimedia data, feature fusion has been more and more important for image and video retrieval, indexing and annotation. Existing feature fusion techniques simply concatenate a pair of different features or use canonical correlation analysis based methods for joint dimensionality reduction in the feature space. However, how to fuse multiple features in a generalized way is still an open problem. In this paper, we reformulate the multiple feature fusion as a general subspace learning problem. The objective of the framework is to find a general linear subspace in which the cumulative pairwise canonical correlation between every pair of feature sets is maximized after the dimension normalization and subspace projection. The learned subspace couples dimensionality reduction and feature fusion together, which can be applied to both unsupervised and supervised learning cases. In the supervised case, the pairwise canonical correlations of feature sets within the same classes are also counted in the objective function for maximization. To better model the high-order feature structure and overcome the computational difficulty, the features extracted from the same pattern source are represented by a single 2D tensor. The tensor-based dimensionality reduction methods are used to further extract low-dimensional discriminative features from the fused feature ensemble. 
Extensive experiments on visual data classification demonstrate the effectiveness and robustness of the proposed methods.\n\n[4]: \n\n[5]: The Big Data era is upon us: data is being generated, collected and analyzed at an unprecedented scale, and data-driven decision making is sweeping through all aspects of society. Since the value of data explodes when it can be linked and fused with other data, addressing the big data integration (BDI) challenge is critical to realizing the promise of Big Data. BDI differs from traditional data integration in many dimensions: (i) the number of data sources, even for a single domain, has grown to be in the tens of thousands, (ii) many of the data sources are very dynamic, as a huge amount of newly collected data are continuously made available, (iii) the data sources are extremely heterogeneous in their structure, with considerable variety even for substantially similar entities, and (iv) the data sources are of widely differing qualities, with significant differences in the coverage, accuracy and timeliness of data provided. This seminar explores the progress that has been made by the data integration community on the topics of schema mapping, record linkage and data fusion in addressing these novel challenges faced by big data integration, and identifies a range of open problems for the community.\n\n[6]: We demonstrate that, with the availability of distributed computation platforms such as Amazon Web Services and open-source tools, it is possible for a small engineering team to build, launch and maintain a cost-effective, large-scale visual search system with widely available tools. We also demonstrate, through a comprehensive set of live experiments at Pinterest, that content recommendation powered by visual search improve user engagement. By sharing our implementation details and the experiences learned from launching a commercial visual search engines from scratch, we hope visual search are more widely incorporated into today's commercial applications.\n\n[7]: There has been an ever-increasing interest in multi-disciplinary research on multisensor data fusion technology, driven by its versatility and diverse areas of application. Therefore, there seems to be a real need for an analytical review of recent developments in the data fusion domain. This paper proposes a comprehensive review of the data fusion state of the art, exploring its conceptualizations, benefits, and challenging aspects, as well as existing methodologies. In addition, several future directions of research in the data fusion community are highlighted and described.\n\n[8]: How do you approach answering queries when your data is stored in multiple databases that were designed independently by different people? This is first comprehensive book on data integration and is written by three of the most respected experts in the field. This book provides an extensive introduction to the theory and concepts underlying today's data integration techniques, with detailed, instruction for their application using concrete examples throughout to explain the concepts. Data integration is the problem of answering queries that span multiple data sources (e.g., databases, web pages). Data integration problems surface in multiple contexts, including enterprise information integration, query processing on the Web, coordination between government agencies and collaboration between scientists. In some cases, data integration is the key bottleneck to making progress in a field. 
The authors provide a working knowledge of data integration concepts and techniques, giving you the tools you need to develop a complete and concise package of algorithms and applications. *Offers a range of data integration solutions enabling you to focus on what is most relevant to the problem at hand. *Enables you to build your own algorithms and implement your own data integration applications *Companion website with numerous project-based exercises and solutions and slides. Links to commercially available software allowing readers to build their own algorithms and implement their own data integration applications. Facebook page for reader input during and after publication.", "role": "user" }, { "content": "* Data Integration Integrating data and features from different sources for ML and data mining algorithms often requires applying and adapting techniques from the data integration literature [2] [8] . These include integrating features from different data types in recommendation systems [6] , sensor fusion [7] , dimensionality reduction during feature fusion [3] , and techniques to control data quality during data fusion [5] . Avoiding joins safely can be seen as one schema-based mechanism to reduce the integration burden by predicting a priori if a source table is unlikely to improve ML accuracy. It is a major open challenge to devise similar mechanisms can be devised for other types of data sources, say, using other forms of schema constraints, ontology information, and sampling. There is also a growing interest in making data discovery and other forms of metadata management easier [1] [4] . Our work can be seen as a mechanism to verify the potential utility of some of the discovered data sources using their metadata. We hope our work spurs more research in this direction of exploiting ideas from data integration and data discovery to reduce the data sourcing burden for ML tasks.", "role": "assistant" } ]
science.medmentions_ner
science.medmentions_ner.1166
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Finding\", \"Bacterium\", \"ProfessionalOrOccupationalGroup\", \"InjuryOrPoisoning\", \"Eukaryote\", \"BodySystem\", \"Virus\", \"ResearchActivity\", \"BodySubstance\", \"PopulationGroup\", \"Chemical\", \"AnatomicalStructure\", \"SpatialConcept\", \"HealthCareActivity\", \"Organization\", \"BiomedicalOccupationOrDiscipline\", \"Food\", \"ClinicalAttribute\", \"MedicalDevice\", \"BiologicFunction\", \"IntellectualProduct\"].\n\nPlease return the output as a JSON object of the format: {\"Virus\": [\"HIV\", ...], \"Bacterium\": [\"MRSA\", ...], \"AnatomicalStructure\": [\"Lung\", ...], \"BodySystem\": [\"CNS\", ...], \"BodySubstance\": [\"Serum\", ...], \"Finding\": [\"Headache\", ...], \"InjuryOrPoisoning\": [\"Fracture\", ...], \"BiologicFunction\": [\"Death\", ...], \"HealthCareActivity\": [\"Biopsy\", ...], \"ResearchActivity\": [\"Clinical trial\", ...], \"MedicalDevice\": [\"Lenses\", ...], \"SpatialConcept\": [\"Camps\", ...], \"BiomedicalOccupationOrDiscipline\": [\"Forensic medicine\", ...], \"Organization\": [\"WHO\", ...], \"ProfessionalOrOccupationalGroup\": [\"Provider\", ...], \"PopulationGroup\": [\"Swimmers\", ...], \"Chemical\": [\"Gold\", ...], \"Food\": [\"Rice\", ...], \"IntellectualProduct\": [\"RPAM\", ...], \"ClinicalAttribute\": [\"Biomarker\", ...], \"Eukaryote\": [\"Dogs\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nKetogenic diets improve behaviors associated with autism spectrum disorder in a sex - specific manner in the EL mouse The core symptoms of autism spectrum disorder are poorly treated with current medications. Symptoms of autism spectrum disorder are frequently comorbid with a diagnosis of epilepsy and vice versa. Medically - supervised ketogenic diets are remarkably effective nonpharmacological treatments for epilepsy, even in drug -refractory cases. There is accumulating evidence that supports the efficacy of ketogenic diets in treating the core symptoms of autism spectrum disorders in animal models as well as limited reports of benefits in patients. This study tests the behavioral effects of ketogenic diet feeding in the EL mouse, a model with behavioral characteristics of autism spectrum disorder and comorbid epilepsy. Male and female EL mice were fed control diet or one of two ketogenic diet formulas ad libitum starting at 5weeks of age. Beginning at 8weeks of age, diet protocols continued and performance of each group on tests of sociability and repetitive behavior was assessed. A ketogenic diet improved behavioral characteristics of autism spectrum disorder in a sex - and test - specific manner; ketogenic diet never worsened relevant behaviors. Ketogenic diet feeding improved multiple measures of sociability and reduced repetitive behavior in female mice, with limited effects in males. Additional experiments in female mice showed that a less strict, more clinically - relevant diet formula was equally effective in improving sociability and reducing repetitive behavior. 
Taken together these results add to the growing number of studies suggesting that ketogenic and related diets may provide significant relief from the core symptoms of autism spectrum disorder, and suggest that in some cases there may be increased efficacy in females.", "role": "user" }, { "content": "{\"AnatomicalStructure\": [], \"Bacterium\": [], \"BiologicFunction\": [\"epilepsy\", \"autism spectrum disorder\", \"autism spectrum disorders\"], \"BiomedicalOccupationOrDiscipline\": [], \"BodySubstance\": [], \"BodySystem\": [], \"Chemical\": [\"drug\", \"medications\"], \"ClinicalAttribute\": [], \"Eukaryote\": [\"females\", \"EL mouse\", \"EL mice\", \"model\", \"female mice\", \"males\", \"animal models\"], \"Finding\": [\"Ketogenic diets\", \"Symptoms\", \"ketogenic diet\", \"ketogenic diets\", \"improved\", \"worsened\", \"repetitive behavior\", \"ketogenic\", \"symptoms\", \"Ketogenic diet\"], \"Food\": [\"diet\"], \"HealthCareActivity\": [\"diagnosis\", \"treatments\", \"formulas\", \"related diets\", \"diet formula\", \"reports\", \"treated with\"], \"InjuryOrPoisoning\": [], \"IntellectualProduct\": [\"protocols\"], \"MedicalDevice\": [], \"Organization\": [], \"PopulationGroup\": [], \"ProfessionalOrOccupationalGroup\": [], \"ResearchActivity\": [\"study\", \"studies\", \"experiments\"], \"SpatialConcept\": [\"manner\"], \"Virus\": []}", "role": "assistant" } ]
science.multixscience_multidoc_summarization
science.multixscience_multidoc_summarization.419
[ { "content": "You are provided with the abstract of a scientific paper and the abstracts of the papers it references. Your task is to write a Related Work section for the main paper. This section should concisely summarize and integrate the key points from the cited abstracts, reflecting how they relate to the main paper's contributions and context. In your summary, use numerical citation markers like [1], [2], etc., to refer to the corresponding papers.\n\nMain Abstract:\nGames are often designed to shape player behavior in a desired way; however, it can be unclear how design decisions affect the space of behaviors in a game. Designers usually explore this space through human playtesting, which can be time-consuming and of limited effectiveness in exhausting the space of possible behaviors. In this paper, we propose the use of automated planning agents to simulate humans of varying skill levels to generate game playthroughs. Metrics can then be gathered from these playthroughs to evaluate the current game design and identify its potential flaws. We demonstrate this technique in two games: the popular word game Scrabble and a collectible card game of our own design named Cardonomicon. Using these case studies, we show how using simulated agents to model humans of varying skill levels allows us to extract metrics to describe game balance (in the case of Scrabble) and highlight potential design flaws (in the case of Cardonomicon).\n\nCited Abstract(s):\n[1]: \n\n[2]: Monte Carlo Tree Search (MCTS) is applied to control the player character in a clone of the popular platform game Super Mario Bros. Standard MCTS is applied through search in state space with the goal of moving the furthest to the right as quickly as possible. Despite parameter tuning, only moderate success is reached. Several modifications to the algorithm are then introduced specifically to deal with the behavioural pathologies that were observed. Two of the modifications are to our best knowledge novel. A combination of these modifications is found to lead to almost perfect play on linear levels. Furthermore, when adding noise to the benchmark, MCTS outperforms the best known algorithm for these levels. The analysis and algorithmic innovations in this paper are likely to be useful when applying MCTS to other video games.\n\n[3]: \n\n[4]: A general game playing system is one that can accept a formal description of a game and play the game effectively without human intervention. Unlike specialized game players, such as Deep Blue, general game players do not rely on algorithms designed in advance for specific games; and, unlike Deep Blue, they are able to play different kinds of games. In order to promote work in this area, the AAAI is sponsoring an open competition at this summer's Twentieth National Conference on Artificial Intelligence. This article is an overview of the technical issues and logistics associated with this summer's competition, as well as the relevance of general game playing to the long range-goals of artificial intelligence.\n\n[5]: In this paper, we examine the use of Monte Carlo tree search (MCTS) for a variant of one of the most popular and profitable games in the world: the card game Magic: The Gathering (M:TG). The game tree for M:TG has a range of distinctive features, which we discuss here; it has incomplete information through the opponent's hidden cards and randomness through card drawing from a shuffled deck. 
We investigate a wide range of approaches that use determinization, where all hidden and random information is assumed known to all players, alongside MCTS. We consider a number of variations to the rollout strategy using a range of levels of sophistication and expert knowledge, and decaying reward to encourage play urgency. We examine the effect of utilizing various pruning strategies in order to increase the information gained from each determinization, alongside methods that increase the relevance of random choices. Additionally, we deconstruct the move generation procedure into a binary yes no decision tree and apply MCTS to this finer grained decision process. We compare our modifications to a basic MCTS approach for M:TG using fixed decks, and show that significant improvements in playing strength can be obtained.\n\n[6]: We demonstrate generating complete and playable card games using evolutionary algorithms. Card games are represented in a previously devised card game description language, a context-free grammar. The syntax of this language allows us to use grammar-guided genetic programming. Candidate card games are evaluated through a cascading evaluation function, a multi-step process where games with undesired properties are progressively weeded out. Three representa- tive examples of generated games are analysed. We observed that these games are reasonably balanced and have skill ele- ments, they are not yet entertaining for human players. The particular shortcomings of the examples are discussed in re- gard to the generative process to be able to generate quality games\n\n[7]: In order to generate complete games through evolution we need generic and reliably evaluation functions for games. It has been suggested that game quality could be characterised through playing a game with different controllers and comparing their performance. This paper explores that idea through investigating the relative performance of different general game-playing algorithms. Seven game-playing algorithms was used to play several hand-designed, mutated and randomly generated VGDL game descriptions. Results discussed appear to support the conjecture that well-designed games have, in average, a higher performance difference between better and worse game-playing algorithms.\n\n[8]: We present EvoMCTS, a genetic programming method for enhancing level of play in games. Our work focuses on the zero-sum, deterministic, perfect-information board game of Reversi. Expanding on our previous work on evolving board-state evaluation functions for alpha-beta search algorithm variants, we now evolve evaluation functions that augment the MTCS algorithm. We use strongly typed genetic programming, explicitly defined introns, and a selective directional crossover method. Our system regularly evolves players that outperform MCTS players that use the same amount of search. Our results prove scalable and EvoMCTS players whose search is increased offline still outperform MCTS counterparts. To demonstrate the generality of our method we apply EvoMCTS successfully to the game of Dodgem.\n\n[9]: \n\n[10]: © 1997-2012 IEEE. This paper presents a method for generating complex problems that allow multiple nonobvious solutions for the physical traveling salesman problem (PTSP). 
PTSP is a single-player game adaptation of the classical traveling salesman problem that makes use of a simple physics model: the player has to visit a number of waypoints as quickly as possible by navigating a ship in real time across an obstacle-filled 2-D map. The difficulty of this game depends on the distribution of waypoints and obstacles across the 2-D plane. Due to the physics of the game, the shortest route is not necessarily the fastest, as the ship's momentum makes it difficult to turn sharply at high speed. This paper proposes an evolutionary approach to obtaining maps where the optimal solution is not immediately obvious. In particular, any optimal route for these maps should differ distinctively from: 1) the optimal distance-based TSP route and 2) the route that corresponds to always approaching the nearest waypoint first. To achieve this, the evolutionary algorithm covariance matrix adaptation-evolutionary strategy (CMA-ES) is employed, where maps, indirectly represented as vectors of real numbers, are evolved to differentiate maximally between a game-playing agent that follows two or more different routes. The results presented in this paper show that CMA-ES is able to generate maps that fulfil the desired conditions.\n\n[11]: Monte Carlo tree search (MCTS) is a recently proposed search method that combines the precision of tree search with the generality of random sampling. It has received considerable interest due to its spectacular success in the difficult problem of computer Go, but has also proved beneficial in a range of other domains. This paper is a survey of the literature to date, intended to provide a snapshot of the state of the art after the first five years of MCTS research. We outline the core algorithm's derivation, impart some structure on the many variations and enhancements that have been proposed, and summarize the results from the key game and nongame domains to which MCTS methods have been applied. A number of open research questions indicate that the field is ripe for future work.\n\n[12]: The Strategy Game Description Game Language (SGDL) is intended to become a complete description of all aspects of strategy games, including rules, parameters, scenarios, maps, and unit types. Our aim is to be able to model a wide variety of strategy games, simple ones as well as complex commercially available titles. In our previous work [1] we introduced the basic concepts of modelling game rules in a tree structure and evaluating them through simulated playthrough. In this paper we present some additions to the language and discuss and compare three methods to evaluate the quality of a set of game rules in two different scenarios. We find that the proposed evaluation measures are complementary, and depend on the artificial agent used.", "role": "user" }, { "content": "Evaluating potential player behavior for an arbitrary game design is a central concern of general game playing research [4] [3] . Recently, Monte Carlo Tree Search (MCTS) has emerged as a popular technique in general game playing after the successful application of MCTS to the game of Go [8]0 . Game applications of MCTS include: card selection and play in Magic: The Gathering [5] [9] ; platformer level completion [2] [1] ; simulations for fitness function heuristics in strategy [8]7 , card [6] , abstract real-time planning [10] , and general arcade games [7] ; and high-level play in board games including Reversi and Hex [8] . 
MCTS offers the advantages of being game-agnostic, having tunable computational cost, and guaranteeing (eventual) complete exploration of the search space. Unlike previous uses for (near-) optimal agent play, we use MCTS to sample playtraces in a game while varying agent computational bounds as a proxy for player skill. Our approach trades exhaustively exploring a small or abstracted design space for sampling larger, non-deterministic game domains, complementing prior work in this area.", "role": "assistant" } ]
science.ddi_ner
science.ddi_ner.502
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"UnapprovedDrug\", \"Brand\", \"DrugGroup\", \"Drug\"].\n\nPlease return the output as a JSON object of the format: {\"Brand\": [\"Aspirin\", ...], \"DrugGroup\": [\"Salicylate\", ...], \"Drug\": [\"Phenobarbital\", ...], \"UnapprovedDrug\": [\"Heroin\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nAntiacid, clarithromycin, Didanosine, Fluconazole, Fluoxetine, Indanavir, Ketoconazole, Phenytoin, Phenobarbitol, carbamazepine, Rifabutin, Rifampin, Ritanovir, Saquinavir.", "role": "user" }, { "content": "{\"Brand\": [], \"Drug\": [\"Rifabutin\", \"Ketoconazole\", \"clarithromycin\", \"Phenytoin\", \"Fluoxetine\", \"Didanosine\", \"Fluconazole\", \"Rifampin\", \"Saquinavir\", \"carbamazepine\"], \"DrugGroup\": [], \"UnapprovedDrug\": []}", "role": "assistant" } ]
science.qasper_extractive_qa
science.qasper_extractive_qa.258.1
[ { "content": "You will be shown sections from a scientific research paper, together with a question about the paper. This is an extractive question-answering task, where you must find and extract relevant text spans directly from the paper to answer the question. Your response should strictly be a `json` object with two fields:\n\n\"answer\": An array of strings extracted directly from the paper which, collectively, answer the question.\n\"evidence\": An array of strings. Each should be an excerpt from the paper, in which one or more of the extracted answers can be found.\n\nFor example, for the question \"What baselines did the authors compare against?\", a sample response might be:\n{\n \"answer\": [\"BERT\", \"RoBERTa\"],\n \"evidence\": [\"In our experiments, we compare the performance of our model against BERT and RoBERTa.\"]\n}\n\nDo not include any text in your response other than the json.\n\nIf the question is unanswerable given the provided excerpts, respond with the single word \"null\".\n\nPaper: Multi-Task Bidirectional Transformer Representations for Irony Detection\n\nSupervised deep learning requires large amounts of training data. In the context of the FIRE2019 Arabic irony detection shared task (IDAT@FIRE2019), we show how we mitigate this need by fine-tuning the pre-trained bidirectional encoders from transformers (BERT) on gold data in a multi-task setting. We further improve our models by by further pre-training BERT on `in-domain' data, thus alleviating an issue of dialect mismatch in the Google-released BERT model. Our best model acquires 82.4 macro F1 score, and has the unique advantage of being feature-engineering free (i.e., based exclusively on deep learning).\n\nIntroduction\nThe proliferation of social media has provided a locus for use, and thereby collection, of figurative and creative language data, including irony BIBREF0. According to the Merriam-Webster online dictionary, irony refers to “the use of word to express something other than and especially the opposite of the literal meaning.\" A complex, controversial, and intriguing linguistic phenomenon, irony has been studied in disciplines such as linguistics, philosophy, and rhetoric. Irony detection also has implications for several NLP tasks such as sentiment analysis, hate speech detection, fake news detection, etc BIBREF0. Hence, automatic irony detection can potentially improve systems designed for each of these tasks. In this paper, we focus on learning irony. More specifically, we report our work submitted to the FIRE 2019 Arabic irony detection task (IDAT@FIRE2019). We focus our energy on an important angle of the problem–the small size of training data.\nDeep learning is the most successful under supervised conditions with large amounts of training data (tens-to-hundreds of thousands of examples). For most real-world tasks, we hard to obtain labeled data. Hence, it is highly desirable to eliminate, or at least reduce, dependence on supervision. In NLP, pre-training language models on unlabeled data has emerged as a successful approach for improving model performance. In particular, the pre-trained multilingual Bidirectional Encoder Representations from Transformers (BERT) BIBREF1 was introduced to learn language regularities from unlabeled data. Multi-task learning (MTL) is another approach that helps achieve inductive transfer between various tasks. More specifically, MTL leverages information from one or more source tasks to improve a target task BIBREF2, BIBREF3. 
In this work, we introduce Transformer representations (BERT) in an MTL setting to address the data bottleneck in IDAT@FIRE2019. To show the utility of BERT, we compare to a simpler model with gated recurrent units (GRU) in a single task setting. To identify the utility, or lack thereof, of MTL BERT, we compare to a single task BERT model. For MTL BERT, we train on a number of tasks simultaneously. Tasks we train on are sentiment analysis, gender detection, age detection, dialect identification, and emotion detection.\nAnother problem we face is that the BERT model released by Google is trained only on Arabic Wikipedia, which is almost exclusively Modern Standard Arabic (MSA). This introduces a language variety mismatch due to the irony data involving a number of dialects that come from the Twitter domain. To mitigate this issue, we further pre-train BERT on an in-house dialectal Twitter dataset, showing the utility of this measure. To summarize, we make the following contributions:\nIn the context of the Arabic irony task, we show how a small-sized labeled data setting can be mitigated by training models in a multi-task learning setup.\nWe view different varieties of Arabic as different domains, and hence introduce a simple, yet effective, `in-domain' training measure where we further pre-train BERT on a dataset closer to task domain (in that it involves dialectal tweet data).\n\nMethods ::: GRU\nFor our baseline, we use gated recurrent units (GRU) BIBREF4, a simplification of long-short term memory (LSTM) BIBREF5, which in turn is a variation of recurrent neural networks (RNNs). A GRU learns based on the following:\nwhere the update state $\\textbf {\\textit {z}}^{(t)}$ decides how much the unit updates its content:\nwhere W and U are weight matrices. The candidate activation makes use of a reset gate $\\textbf {\\textit {r}}^{(t)}$:\nwhere $\\odot $ is a Hadamard product (element-wise multiplication). When its value is close to zero, the reset gate allows the unit to forget the previously computed state. The reset gate $\\textbf {\\textit {r}}^{(t)}$ is computed as follows:\n\nMethods ::: BERT\nBERT BIBREF1 is based on the Transformer BIBREF6, a network architecture that depends solely on encoder-decoder attention. The Transformer attention employs a function operating on queries, keys, and values. This attention function maps a query and a set of key-value pairs to an output, where the output is a weighted sum of the values. Encoder of the Transformer in BIBREF6 has 6 attention layers, each of which is composed of two sub-layers: (1) multi-head attention where queries, keys, and values are projected h times into linear, learned projections and ultimately concatenated; and (2) fully-connected feed-forward network (FFN) that is applied to each position separately and identically. Decoder of the Transformer also employs 6 identical layers, yet with an extra sub-layer that performs multi-head attention over the encoder stack. The architecture of BERT BIBREF1 is a multi-layer bidirectional Transformer encoder BIBREF6. It uses masked language models to enable pre-trained deep bidirectional representations, in addition to a binary next sentence prediction task captures context (i.e., sentence relationships). More information about BERT can be found in BIBREF1.\n\nMethods ::: Multi-task Learning\nIn multi-task learning (MTL), a learner uses a number of (usually relevant) tasks to improve performance on a target task BIBREF2, BIBREF3. 
The MTL setup enables the learner to use cues from various tasks to improve the performance on the target task. MTL also usually helps regularize the model since the learner needs to find representations that are not specific to a single task, but rather more general. Supervised learning with deep neural networks requires large amounts of labeled data, which is not always available. By employing data from additional tasks, MTL thus practically augments training data to alleviate need for large labeled data. Many researchers achieve state-of-the-art results by employing MTL in supervised learning settings BIBREF7, BIBREF8. In specific, BERT was successfully used with MTL. Hence, we employ multi-task BERT (following BIBREF8). For our training, we use the same pre-trained BERT-Base Multilingual Cased model as the initial checkpoint. For this MTL pre-training of BERT, we use the same afore-mentioned single-task BERT parameters. We now describe our data.\n\nModels ::: Single-Task BERT\nWe use the BERT-Base Multilingual Cased model released by the authors BIBREF1 . The model is trained on 104 languages (including Arabic) with 12 layers, 768 hidden units each, 12 attention heads. The entire model has 110M parameters. The model has 119,547 shared WordPieces vocabulary, and was pre-trained on the entire Wikipedia for each language. For fine-tuning, we use a maximum sequence size of 50 tokens and a batch size of 32. We set the learning rate to $2e-5$ and train for 20 epochs. For single-task learning, we fine-tune BERT on the training set (i.e., TRAIN) of the irony task exclusively. We refer to this model as BERT-ST, ST standing for `single task.' As Table TABREF22 shows, BERT-ST unsurprisingly acquires better performance than the baseline GRU model. On accuracy, BERT-ST is 7.94% better than the baseline. BERT-ST obtains 81.62 $F_1$ which is 7.35 better than the baseline.\n\nModels ::: Multi-Task BERT\nWe follow the work of Liu et al. BIBREF8 for training an MTL BERT in that we fine-tune the afore-mentioned BERT-Base Multilingual Cased model with different tasks jointly. First, we fine-tune with the three tasks of author profiling and the irony task simultaneously. We refer to this model trained on the 4 tasks simply as BERT-MT4. BERT-MT5 refers to the model fine-tuned on the 3 author profiling tasks, the emotion task, and the irony task. We also refer to the model fine-tuned on all six tasks (adding the sentiment task mentioned earlier) as BERT-MT6. For MTL BERT, we use the same parameters as the single task BERT listed in the previous sub-section (i.e., Single-Task BERT). In Table TABREF22, we present the performance on the DEV set of only the irony detection task. We note that all the results of multitask learning with BERT are better than those with the single task BERT. The model trained on all six tasks obtains the best result, which is 2.23% accuracy and 2.25% $F_1$ higher than the single task BERT model.\n\nModels ::: IDAT@FIRE2019 Submission\nFor the shared task submission, we use the predictions of BERT-1M-MT5 as our first submitted system. Then, we concatenate our DEV and TRAIN data to compose a new training set (thus using all the training data released by organizers) to re-train BERT-1M-MT5 and BERT-MT6 with the same parameters. We use the predictions of these two models as our second and third submissions. Our second submission obtains 82.4 $F_1$ on the official test set, and ranks $4th$ on this shared task.\n\nRelated Work\nMulti-Task Learning. 
MTL has been effectively used to model several NLP problems. These include, for example, syntactic parsing BIBREF15, sequence labeling BIBREF16, BIBREF17, and text classification BIBREF18.\nIrony in different languages. Irony detection has been investigated in various languages. For example, Hee et al. BIBREF19 propose two irony detection tasks in English tweets. Task A is a binary classification task (irony vs. non-irony), and Task B is multi-class identification of a specific type of irony from the set {verbal, situational, other-irony, non-ironic}. They use hashtags to automatically collect tweets that they manually annotate using a fine-grained annotation scheme. Participants in this competition construct models based on logistic regression and support vector machine (SVM) BIBREF20, XGBoost BIBREF21, convolutional neural networks (CNNs) BIBREF21, long short-term memory networks (LSTMs) BIBREF22, etc. For the Italian language, Cignarella et al. propose the IronTA shared task BIBREF23, and the best system BIBREF24 is a combination of bi-directional LSTMs, word $n$-grams, and affective lexicons. For Spanish, Ortega-Bueno1 et al. BIBREF25 introduce the IroSvA shared task, a binary classification task for tweets and news comments. The best-performing model on the task, BIBREF26, employs pre-trained Word2Vec, multi-head Transformer encoder and a global average pooling mechanism.\nIrony in Arabic. Although Arabic is a widely spoken collection of languages ($\\sim $ 300 million native speakers) BIBREF27, BIBREF28, there has not been works on irony that we know of on the language. IDAT@FIRE2019 aims at bridging this gap. The closest works in Arabic are those focusing on other text classification tasks such as sentiment analysis BIBREF29, BIBREF30, BIBREF31, BIBREF32, emotion BIBREF10, and dialect identification BIBREF28, BIBREF33, BIBREF34, BIBREF35.\n\nConclusion\nIn this paper, we described our submissions to the Irony Detection in Arabic shared task (IDAT@FIRE2019). We presented how we acquire effective models using pre-trained BERT in a multi-task learning setting. We also showed the utility of viewing different varieties of Arabic as different domains by reporting better performance with models pre-trained with dialectal data rather than exclusively on MSA. Our multi-task model with domain-specific BERT ranks $4th$ in the official IDAT@FIRE2019 evaluation. The model has the advantage of being exclusively based on deep learning. In the future, we will investigate other multi-task learning architectures, and extend our work with semi-supervised methods.\n\nAcknowledgement\nWe acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), the Social Sciences Research Council of Canada (SSHRC), and Compute Canada (www.computecanada.ca).\n\nTable 1. Model Performance\n\nQuestion: What in-domain data is used to continue pre-training?", "role": "user" }, { "content": "{\"answer\": [\"dialectal tweet data\"], \"evidence\": [\"We view different varieties of Arabic as different domains, and hence introduce a simple, yet effective, `in-domain\\u0027 training measure where we further pre-train BERT on a dataset closer to task domain (in that it involves dialectal tweet data).\"]}", "role": "assistant" } ]
science.chemprot_re
science.chemprot_re.368
[ { "content": "Given an abstract from a research paper, your task is to identify and label relationships between chemical compounds and genes mentioned in the abstract.\n\nThe relationship type must be one of ['DOWNREGULATOR', 'SUBSTRATE', 'INDIRECT-REGULATOR', 'PART-OF', 'MODULATOR-ACTIVATOR', 'REGULATOR', 'INHIBITOR', 'COFACTOR', 'UPREGULATOR', 'ACTIVATOR', 'ANTAGONIST', 'NOT', 'INDIRECT-DOWNREGULATOR', 'SUBSTRATE_PRODUCT-OF', 'INDIRECT-UPREGULATOR', 'AGONIST', 'PRODUCT-OF', 'MODULATOR', 'DIRECT-REGULATOR', 'UNDEFINED', 'AGONIST-INHIBITOR', 'AGONIST-ACTIVATOR', 'MODULATOR-INHIBITOR'].\n\nPlease format your output as a JSON array. Each entry in the array should express a single relation, formatted as [\"<Entity_A>\", \"<RELATION_A_B>\", \"<Entity_B>\"]. If no relations can be found, please output an empty JSON array [].\n\nAbstract:\n\nPhysicochemical parameters that influence carotenoids bioaccessibility from a tomato juice. In vitro digestion models have been developed to estimate carotenoid bioavailability but most do not consider that their diffusion from fruit matrix to the lipid phase of the bolus could be a limiting step. Therefore we designed a model in which tomato juice is mixed with oil or oil/water emulsions, and the carotenoids diffusing to oil are measured by spectrometry. Temperature, pH and tomato juice/peanut oil ratio were evaluated for their influence on carotenoid diffusion. When oil/tomato ratio was between 0.11 and 1, extraction of lycopene was limited by the saturation of the oil phase. With a large excess of oil, diffusion was also limited, as only 31 ± 1% of lycopene could be extracted from the juice. Diffusion did not vary significantly with pH but doubled when temperature rose from 10 ° C to 37 ° C. When the juice was mixed in an emulsion stabilised with bovine serum albumin or phospholipids the maximum extraction decreased to 14.5 ± 0.2% and 18.5 ± 1.5% respectively, indicating that in addition to the saturation of the oil phase at low oil/tomato ratio and in addition to intrinsic properties of the tomato juice in non-saturating conditions, lycopene diffusion was limited by the structure of the interface in emulsions.", "role": "user" }, { "content": "[]", "role": "assistant" } ]
science.mslr2022_cochrane_multidoc_summarization
science.mslr2022_cochrane_multidoc_summarization.2470
[ { "content": "You will be presented with a collection of medical randomized controlled trial (RCT) reports addressing a shared clinical question -- for instance, the efficacy of a medical intervention on a disease or condition. Given the titles and abstracts of these reports, your task is to generate the conclusion of the systematic review article synthesizing the key findings. Your conclusion should be a few sentences long. In your response, include only the conclusion and no other text. The RCT reports are below.\n\n\n\nTitle 1:\nAn open trial of naproxen in rheumatoid arthritis patients with significant esophageal, gastric, and duodenal lesions.\nAbstract 1:\nTo confirm the reported lack of major gastrointestinal side effects of naproxen, we gave 58 patients with active rheumatoid arthritis and significant gastrointestinal disease therapeutic doses of naproxen while closely monitoring them for signs and symptoms of gastrointestinal dysfunction. All patients underwent upper gastrointestinal x-ray examinations at the start of the trail, and, when indicated, during the course of the study. Endoscopies were also performed when indicated. Forty patients had hiatus hernia and 35 had peptic ulcer (23 duodenal ulcer and 12 gastric ulcer). Twenty-six patients had a combination of hiatus hernia with either type of peptic ulcer. After 262 patient visits over a period of 52 weeks, 35 patients remained in the study, all having had more than six months of naproxen therapy in dosages ranging from 500 to 750 mg daily. In 33 of the 35, the response to naproxen had generally been good to excellent. Only seven patients dropped out of the trial because of complaints referable to side effects. There were no major related upper gastrointestinal side effects as monitored by continuing clinical evaluation, stool occult blood, comprehensive laboratory examination, and, where indicated, upper gastrointestinal x-ray studies. Approximately 70 per cent of the patients demonstrated efficacy on long-term naproxen therapy by subjective and objective parameters. Naproxen appears to be an efficacious and remarkably safe drug in the long-term therapy of rheumatoid arthritis, even in the presence of significant upper gastrointestinal symptomatology.", "role": "user" }, { "content": "On the basis of the current review, there is scant evidence to guide clinicians about how gastrointestinal or liver comorbidities should influence the choice of pain treatment in patients with rheumatoid arthritis, psoriatic arthritis, ankylosing spondylitis or other spondyloarthritis. Based upon additional studies that included a mixed population of participants with a range of rheumatic conditions, NSAIDs should be used cautiously in patients with inflammatory arthritis and a history of gastrointestinaI comorbidity as there is consistent evidence that they may be at increased risk.", "role": "assistant" } ]
science.ncbi_ner
science.ncbi_ner.532
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Disease\"].\n\nPlease return the output as a JSON object of the format: {\"Disease\": [\"COVID-19\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nIncidence and characteristics of glucose-6-phosphate dehydrogenase variants in Japan. A total of 3000 men living in Yamaguchi were screened for glucose-6-phosphate dehydrogenase (G6PD) deficiency using Beutlers spot test and three types of starch gel electrophoresis. These electrophoresis used a phosphate buffer system at pH 7. 0, a TRIS-EDTA-borate buffer system at pH 8. 6, and a TRIS-hydrochloride buffer system at pH 8.. Fifteen G6PD-deficient variants were found at the rate of 0. 5% and classified into four groups. As new variants, G6PD Konan, Kamiube, and Kiwa were identified. These three variants had a mild to moderate G6PD deficiency and were not associated with any clinical signs. G6PD Konan had fast electrophoretic mobility as compared with normal levels, G6PD Kiwa had slightly elevated electrophoretic mobility, and G6PD Kamiube had normal electrophoretic mobility. These three variants had normal levels of Km G6P, Km NADP, and Ki NADPH, normal utilizations of both 2-deoxy-G6P and deamino-NAPD, normal heat stability, and a normal pH curve. The other variant was G6PD Ube, which we had previously found in Yamaguchi (Nakashima et al., 1977). One boy with G6PD Ube was Korean", "role": "user" }, { "content": "{\"Disease\": [\"glucose-6-phosphate dehydrogenase (G6PD) deficiency\", \"G6PD-deficient\", \"G6PD deficiency\"]}", "role": "assistant" } ]
science.bioasq_yesno_qa
science.bioasq_yesno_qa.1093
[ { "content": "You have to answer a biomedical question in binary format, i.e., only yes and no are acceptable answers. A list of paragraphs is provided as context to help you answer the question.\n\nQuestion: Can radiosurgery be used for the DNET tumors?\n\nContext: Salvage gamma knife radiosurgery in the management of dysembryoplastic neuroepithelial tumors: Long-term outcome in a single-institution case series.\nBACKGROUND: Dysembryoplastic neuroepithelial tumors (DNT/DNET) are rare epileptogenic tumors. Microsurgery remains the best treatment option, although case reports exist on the use of gamma knife radiosurgery (GKRS) in selected cases. We investigated the long-term outcome of GKRS-treated DNTs at our institution in the context of current diagnostic and treatment options.\nLong-term seizure control was obtained after GKRS of two separate residual DNT components along the surgical margin (2005 and 2010). A 27-year-old male undergoing gross total resection of the contrast-enhancing portion of a DNT (1999) resulted in temporary control of intractable epilepsy despite AEDs; lasting clinical control of seizures was achieved in 2002 after GKRS of a small, recurrent DNT component. A 28-year-old male underwent STR of DNT (1994 and 2004) resulting in temporary control of intractable epilepsy. Lasting seizure control was gained after GKRS of a residual tumor (2005).CONCLUSION: GKRS as performed in our series was effective in terms of tumor and seizure control.\nProspective studies are warranted to establish the role of GKRS in the treatment of DNTs.\nTwo rare cases of intractable epilepsy caused by Dysembryoplastic Neuroepithelial Tumours (DNET) are reported and their different management discussed. The first case required vagal nerve stimulation and radiosurgery while the later was operated with the help of neuronavigation.\nSalvage gamma knife radiosurgery in the management of dysembryoplastic neuroepithelial tumors: Long-term outcome in a single-institution case series", "role": "user" }, { "content": "yes", "role": "assistant" } ]
science.covidfact_entailment
science.covidfact_entailment.75
[ { "content": "You will be shown a claim related to the COVID-19 pandemic, and the abstract of a biomedical research paper. Each sentence from the abstract will be on a separate line. Your task is to return a JSON object with two fields:\n\n- \"verdict\": The fact-checking verdict. If the information in the abstract supports the claim, write \"SUPPORT\". If the abstract contradicts the claim, write \"CONTRADICT\".\n- \"evidence\": An array of sentences providing evidence for the verdict. Please copy all relevant sentences verbatim from the abstract.\n\nFor instance, if the model were given the claim \"the COVID vaccine is safe for healthy adults\", the output might be:\n{\n \"verdict\": \"SUPPORT\",\n \"evidence\": [\"Our findings indicate that the risks of severe COVID vaccine side effects are low for healthy adults.\"]\n}\n\nClaim: Covid-19 seroprevalence among healthcare workers of a large covid hospital in rome reveals strengths and limits of all different serological tests.\n\nAbstract:\nIn several hospitals worldwide, healthcare workers are currently at the forefront against coronavirus disease 2019 (COVID-19).\nSince Fondazione Policlinico Universitario A. Gemelli (FPG) IRCCS has been enlisted as a COVID hospital, healthcare workers deployed to COVID wards were separated from those with limited or no exposure, whereas administrative staff was destined to work-from-home.\nBetween June 4 and July 3 2020, an investigation was carried out to evaluate seroprevalence of SARS-CoV-2 IgG antibodies among employees of the FPG using point-of-care (POC) and venous blood tests.\nSensitivity, specificity and predictive values were determined with reverse-transcription polymerase chain reaction (RT-PCR) on nasal/oropharyngeal swabs as gold standard.\nFour thousand, seven hundred seventy-seven participants were enrolled.\nSeroprevalence was 3.66% using the POC test and 1.19% using venous blood test, with a significant difference between the two (p < 0.05).\nPOC sensitivity and specificity were, respectively, 63.64% (95% confidence interval (CI): 62.20% to 65.04%) and 96.64% (95% CI: 96.05% to 97.13%), while those of the venous blood test were, respectively, 78.79% (95% CI: 77.58% to 79.94%) and 99.36% (95% CI: 99.07% to 99.55%).\nAmong low-risk population, point-of-care's predictive values were 58.33% (positive) and 98.23% (negative) whereas venous blood test's were 92.86% (positive) and 98.53% (negative).\nIn conclusion, point-of-care tests have low diagnostic accuracy, while venous blood tests seem to show an overall poor reliability.", "role": "user" }, { "content": "{\"verdict\": \"CONTRADICT\", \"evidence\": [\"Since Fondazione Policlinico Universitario A. Gemelli (FPG) IRCCS has been enlisted as a COVID hospital, healthcare workers deployed to COVID wards were separated from those with limited or no exposure, whereas administrative staff was destined to work-from-home.\"]}", "role": "assistant" } ]
science.data_reco_mcq_mc
science.data_reco_mcq_mc.1442
[ { "content": "You are provided with a research question, keyphrases about the question, a description of candidate datasets and dataset options. Read the description of popular datasets provided below and select the ones that can be used to validate the following research question. Use your knowledge of machine learning datasets to make the best judgement.\nYour response should be formatted as a `json` array. For instance, for the query \"Semi supervised image classification\", a sample response might be: [\"CIFAR-10\", \"CIFAR-100\"]. Do not include any extra text in the response other than the answer array.\n\nQuery: I want to train a supervised model for image classification.\n\nKeyphrases: image classification\n\nDataset description:\nDCASE 2014: DCASE2014 is an audio classification benchmark.\nImageNet: The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nI-HAZE: The I-Haze dataset contains 25 indoor hazy images (size 2833×4657 pixels) training. It has 5 hazy images for validation along with their corresponding ground truth images.\nSNIPS: The SNIPS Natural Language Understanding benchmark is a dataset of over 16,000 crowdsourced queries distributed among 7 user intents of various complexity: SearchCreativeWork (e.g. Find me the I, Robot television show), GetWeather (e.g.\nCIFAR-10: The CIFAR-10 dataset (Canadian Institute for Advanced Research, 10 classes) is a subset of the Tiny Images dataset and consists of 60000 32x32 color images. The images are labelled with one of 10 mutually exclusive classes: airplane, automobile (but not truck or pickup truck), bird, cat, deer, dog, frog, horse, ship, and truck (but not pickup truck).\n\nOptions:- A: DCASE 2014 B: ImageNet C: I-HAZE D: SNIPS E: CIFAR-10", "role": "user" }, { "content": "[\"ImageNet\", \"CIFAR-10\"]", "role": "assistant" } ]
science.scientific_lay_summarisation_elife_single_doc_summ
science.scientific_lay_summarisation_elife_single_doc_summ.333
[ { "content": "You will be presented with the abstract, introduction, and discussion section from a biomedical research article. Your task is to create a summary that a layperson can understand, capturing the essence of the research article. Keep essential scientific terms, but ensure that the language remains clear and the concepts are explained in an uncomplicated manner.\n\nTitle: Crystal structure of an HIV assembly and maturation switch\nArticle:\nAbstract:\nVirus assembly and maturation proceed through the programmed operation of molecular switches, which trigger both local and global structural rearrangements to produce infectious particles. HIV-1 contains an assembly and maturation switch that spans the C-terminal domain( CTD) of the capsid( CA) region and the first spacer peptide( SP1) of the precursor structural protein, Gag. The crystal structure of the CTD-SP1 Gag fragment is a goblet-shaped hexamer in which the cup comprises the CTD and an ensuing type II β-turn, and the stem comprises a 6-helix bundle. The β-turn is critical for immature virus assembly and the 6-helix bundle regulates proteolysis during maturation. This bipartite character explains why the SP1 spacer is a critical element of HIV-1 Gag but is not a universal property of retroviruses. Our results also indicate that HIV-1 maturation inhibitors suppress unfolding of the CA-SP1 junction and thereby delay access of the viral protease to its substrate.\nIntroduction:\nAssembly and maturation of HIV-1 and other retroviruses is driven by the viral structural polyprotein, Gag, which consists of a series of independently folding domains, called MA( matrix), CA( capsid), and NC( nucleocapsid)( reviewed in Bush and Vogt, 2014; Ganser-Pornillos et al., 2012; Lingappa et al., 2014). During assembly, Gag makes a spherical, immature capsid shell that packages the viral genome, directs interactions with the plasma membrane to promote envelopment, and recruits cellular machinery to release the new virus particle from the host cell. The Gag shell has hexagonal paracrystalline lattice symmetry, wherein the central CA region makes two layers of interactions through its two domains( NTD and CTD, for N-terminal and C-terminal domain). During maturation, Gag is cleaved by the viral protease at specific sites. This results in disassembly of the Gag shell and release of new structural proteins, which then rearrange into the mature, infectious virion. In the mature virion, the genome is repackaged inside a cone-shaped capsid made up of the CA protein, with the NTD forming hexameric or pentameric rings that are connected by the CTD. As with other animal viruses, the large-scale capsid rearrangements that occur during retroviral maturation are associated with the operation of structural switches that undergo dramatic conformational changes. In HIV-1, two such putative switches flank the CA region of Gag. One switch spans the MA-CA junction, which is in an extended configuration in the immature virion( Kelly et al., 2006; Tang et al., 2002). Upon proteolysis, the new CA N-terminus folds into a β-hairpin stabilized by a buried salt bridge between the new N-terminal proline and a conserved aspartate residue and promotes assembly of the mature capsid( Gitti et al., 1996; von Schwedler et al., 1998). A second putative switch spans the junction between CA and the first spacer peptide( SP1)( Gross et al., 2000). 
The CA-SP1 junction is predicted to fold into an α-helix( Accola et al., 1998; Liang et al., 2002), and forms the binding site of so-called 'maturation inhibitors, ' as exemplified by 3-O-( 3’, 3’-dimethylsuccinyl) betulinic acid( bevirimat)( reviewed in Adamson et al., 2009). These inhibitors delay proteolysis of the CA-SP1 junction and induce aberrant maturation. The importance of the CA-SP1 junction as a regulator of proteolysis during maturation is well established( Kräusslich et al., 1995; Pettit et al., 1994; Wiegers et al., 1998), but the molecular basis of this function has been unknown. Mutagenesis studies also indicate that the CA-SP1 junction is important for assembly of the immature HIV-1 Gag shell( Accola et al., 1998; Al-Mawsawi et al., 2014; Datta et al., 2011; Guo et al., 2005; Kräusslich et al., 1995; Liang et al., 2002; Liang et al., 2003; Melamed et al., 2004; Pettit et al., 1994; Rihn et al., 2013; von Schwedler et al., 2003). Low resolution electron microscopy studies indicate that the junction forms what appears to be a pillar of density or helical bundle, and so is likely to stabilize the Gag lattice( Schur et al., 2015a; Schur et al., 2015b; Wright et al., 2007). However, neither the SP1 spacer nor the putative helical bundle is a universal feature of retroviral Gag shells( Bharat et al., 2012). This suggests that the SP1 spacer is not generally important for Gag assembly, or that perhaps the CA-SP1 junction has another, as yet structurally undefined assembly determinant. Structural studies of the HIV-1 CA-SP1 junction in context of longer Gag fragments( CTD-SP1 and CTD-SP1-NC) have shown that the junction is primarily disordered( Newman et al., 2004; Worthylake et al., 1999). Nevertheless, the backbone NMR chemical shifts of junction residues deviate from expected random coil values, indicating a small propensity towards an α-helical conformation( Newman et al., 2004). Indeed, the isolated SP1 peptide displays concentration dependent secondary structure in aqueous solution, and folds into an α-helix at the millimolar protein concentrations found in virions( Datta et al., 2011). Nucleic acids( both RNA and DNA) can promote Gag assembly in vitro, presumably because of proximity-induced interactions between multiple copies of Gag that bind to the same nucleic acid molecule( Campbell and Rein, 1999; Campbell and Vogt, 1995; Gross et al., 2000). It therefore seems that nucleic acid-induced Gag clustering might also promote folding of the SP1 helix. To learn how the CA-SP1 junction functions as a molecular switch during HIV-1 assembly and maturation, we determined the structure of its immature, assembled form by X-ray crystallography at a resolution of 3. 27 Å. Our analysis elucidates how local conformational changes in the CA-SP1 switch promote Gag assembly, drive large-scale capsid rearrangements, and regulate proteolysis during maturation. We also gained new insights on the mechanism of action of maturation inhibitors.\nDiscussion:\nIn a concurrent publication, Schur et al. report a cryoEM structure of the immature HIV-1 Gag lattice at near-atomic resolution( Schur et al., 2016). They also conclude that the CA-SP1 boundary folds into a 6-helix bundle that is important for assembly of Gag and its proteolytic maturation.", "role": "user" }, { "content": "Viruses like HIV must undergo a process called maturation in order to successfully infect cells . Maturation involves a dramatic rearrangement in the architecture of the virus . 
That is to say , the virus’s internal protein coat – called the capsid – must change from an immature sphere into a mature cone-shaped coat . Notably , this maturation process is what is disrupted by the protease inhibitors that are a major component of anti-HIV drug cocktails . Structural changes in small portions of the capsid protein , termed molecular switches , commonly trigger the viral capsids to reorganize . The HIV capsid has two of these switches , and Wagner , Zadrozny et al . set out to understand how one of them – called the CA-SP1 switch – works . Solving the three-dimensional structure of the immature form of the CA-SP1 switch revealed that it forms a well-structured bundle of six helices . This helical bundle captures another section of the capsid protein that would otherwise be cut by a viral protease . The CA-SP1 switch therefore controls how quickly the protein is cut and the start of the maturation process . Wagner , Zadrozny et al . then discovered that other small molecule inhibitors of HIV , called maturation inhibitors , work by binding to and disrupting the transformation of the CA-SP1 switch . Finally , further experiments showed that the formation of the CA-SP1 helical bundle controls when the immature capsid shell forms and coordinates the process with the capsid gaining the genetic material of the virus . The new structure means that researchers now know what the HIV capsid looks like at the start and end of maturation . The next challenge will be to figure out exactly how the capsid changes from one form to the next as HIV matures .", "role": "assistant" } ]
science.pico_ner
science.pico_ner.4301
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Participant\", \"Outcome\", \"Intervention\"].\n\nPlease return the output as a JSON object of the format: {\"Participant\": [\"patients with COPD\", ...], \"Intervention\": [\"Atenolol\", ...], \"Outcome\": [\"blood pressure\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nCefprozil versus penicillin V in treatment of streptococcal tonsillopharyngitis . In a randomized multicenter study , the efficacy and safety of cefprozil were compared with those of penicillin in the treatment of group A streptococcal tonsillopharyngitis in children . Of the 409 patients enrolled , 323 were evaluable for their clinical and bacteriological responses ; of these 323 children , 172 received cefprozil and 151 received penicillin V. The clinical responses in patients treated with cefprozil were significantly better than those in patients who received penicillin ( 95.3 versus 88.1 % ; P = 0.023 ) . Eradication of the original serotype of group A streptococci was achieved in 91.3 % of patients treated with cefprozil and 87.4 % of patients treated with penicillin , the difference not being statistically significant ( P = 0.125 ) . However , there were significantly more symptomatic patients among the bacteriological failures in the penicillin group ( 68.4 % ) than in the cefprozil group ( 26.7 % ) . beta-Lactamase-producing Staphylococcus aureus was more frequently isolated from the throat flora during penicillin therapy than during cefprozil treatment . No difference in the incidence of adverse events probably related or of unknown relationship to the study drugs was observed in the two treatment groups ( 5.2 % of those treated with cefprozil and 6.0 % of those treated with penicillin ) . Cefprozil can be considered a safe and reliable drug for the treatment of streptococcal pharyngitis in children .", "role": "user" }, { "content": "{\"Intervention\": [\"penicillin\", \"penicillin V.\", \"penicillin therapy\", \"penicillin V\", \"Cefprozil\", \"cefprozil\"], \"Outcome\": [\"isolated from the throat flora\", \"clinical responses\", \"Eradication of the original serotype of group A streptococci\", \"efficacy and safety\", \"incidence of adverse events\"], \"Participant\": [\"streptococcal tonsillopharyngitis\", \"streptococcal pharyngitis\", \"409\", \"323\"]}", "role": "assistant" } ]
science.scitldr_aic
science.scitldr_aic.879
[ { "content": "Below are the Abstract, Introduction, and Conclusion sections of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary sentence and no additional text.\n\nPaper text:\n\nNeural networks are vulnerable to adversarial examples and researchers have proposed many heuristic attack and defense mechanisms.\nWe address this problem through the principled lens of distributionally robust optimization, which guarantees performance under adversarial input perturbations. \nBy considering a Lagrangian penalty formulation of perturbing the underlying data distribution in a Wasserstein ball, we provide a training procedure that augments model parameter updates with worst-case perturbations of training data.\nFor smooth losses, our procedure provably achieves moderate levels of robustness with little computational or statistical cost relative to empirical risk minimization.\nFurthermore, our statistical guarantees allow us to efficiently certify robustness for the population loss.\nFor imperceptible perturbations, our method matches or outperforms heuristic approaches.\n\nConsider the classical supervised learning problem, in which we minimize an expected loss E P0 [ (θ; Z)] over a parameter θ ∈ Θ, where Z ∼ P 0 , P 0 is a distribution on a space Z, and is a loss function.\nIn many systems, robustness to changes in the data-generating distribution P 0 is desirable, whether they be from covariate shifts, changes in the underlying domain BID2 , or adversarial attacks BID22 BID29 .\nAs deep networks become prevalent in modern performance-critical systems (perception for self-driving cars, automated detection of tumors), model failure is increasingly costly; in these situations, it is irresponsible to deploy models whose robustness and failure modes we do not understand or cannot certify.Recent work shows that neural networks are vulnerable to adversarial examples; seemingly imperceptible perturbations to data can lead to misbehavior of the model, such as misclassification of the output BID22 BID40 BID29 BID36 .\nConsequently, researchers have proposed adversarial attack and defense mechanisms BID41 BID53 BID47 BID12 BID23 BID33 BID51 .\nThese works provide an initial foundation for adversarial training, but it is challenging to rigorously identify the classes of attacks against which they can defend (or if they exist).\nAlternative approaches that provide formal verification of deep networks BID24 BID26 are NP-hard in general; they require prohibitive computational expense even on small networks.\nRecently, researchers have proposed convex relaxations of the NP-hard verification problem with some success BID28 BID45 , though they may be difficult to scale to large networks.\nIn this context, our work is situated between these agendas: we develop efficient procedures with rigorous guarantees for small to moderate amounts of robustness.We take the perspective of distributionally robust optimization and provide an adversarial training procedure with provable guarantees on its computational and statistical performance.\nWe postulate a class P of distributions around the data-generating distribution P 0 and consider the problem minimize DISPLAYFORM0 The choice of P influences robustness guarantees and computability; we develop robustness sets P with computationally efficient relaxations that apply even when the loss is non-convex.\nWe provide an adversarial training procedure that, for smooth , enjoys 
convergence guarantees similar to non-robust approaches while certifying performance even for the worst-case population loss sup P ∈P E P [ (θ; Z)].\nOn a simple implementation in Tensorflow, our method takes 5-10× as long as stochastic gradient methods for empirical risk minimization (ERM), matching runtimes for other adversarial training procedures BID22 BID29 BID33 .\nWe show that our procedure-which learns to protect against adversarial perturbations in the training dataset-generalizes, allowing us to train a model that prevents attacks to the test dataset.We briefly overview our approach.\nLet c : Z × Z → R + ∪ {∞}, where c(z, z 0 ) is the \"cost\" for an adversary to perturb z 0 to z (we typically use c(z, z 0 ) = z − z 0 2 p with p ≥ 1).\nWe consider the robustness region P = {P : W c (P, P 0 ) ≤ ρ}, a ρ-neighborhood of the distribution P 0 under the Wasserstein metric W c (·, ·) (see Section 2 for a formal definition).\nFor deep networks and other complex models, this formulation of problem FORMULA0 is intractable with arbitrary ρ.\nInstead, we consider its Lagrangian relaxation for a fixed penalty parameter γ ≥ 0, resulting in the reformulation minimize θ∈Θ F (θ) := sup DISPLAYFORM1 where φ γ (θ; z 0 ) := sup z∈Z { (θ; z) − γc(z, z 0 )} .(See\nProposition 1 for a rigorous statement of these equalities.) Here\n, we have replaced the usual loss (θ; Z) by the robust surrogate φ γ (θ; Z); this surrogate (2b) allows adversarial perturbations of the data z, modulated by the penalty γ. We\ntypically solve the penalty problem (2) with P 0 replaced by the empirical distribution P n , as P 0 is unknown (we refer to this as the penalty problem below).The\nkey feature of the penalty problem (2) is that moderate levels of robustness-in particular, defense against imperceptible adversarial perturbations-are achievable at essentially no computational or statistical cost for smooth losses . Specifically\n, for large enough penalty γ (by duality, small enough robustness ρ), the function z → (θ; z) − γc(z, z 0 ) in the robust surrogate (2b) is strongly concave and hence easy to optimize if (θ, z) is smooth in z. Consequently\n, stochastic gradient methods applied to problem (2) have similar convergence guarantees as for non-robust methods (ERM). In Section\n3, we provide a certificate of robustness for any ρ; we give an efficiently computable data-dependent upper bound on the worst-case loss sup P :Wc(P,P0)≤ρ E P [ (θ; Z)]. That is, the\nworst-case performance of the output of our principled adversarial training procedure is guaranteed to be no worse than this certificate. Our bound is\ntight when ρ = ρ n , the achieved robustness for the empirical objective. These results\nsuggest advantages of networks with smooth activations rather than ReLU's. We experimentally\nverify our results in Section 4 and show that we match or achieve state-of-the-art performance on a variety of adversarial attacks.Robust optimization and adversarial training The standard robust-optimization approach minimizes losses of the form sup u∈U (θ; z + u) for some uncertainty set U BID46 BID3 BID54 . Unfortunately, this\napproach is intractable except for specially structured losses, such as the composition of a linear and simple convex function BID3 BID54 BID55 . 
Nevertheless, this\nrobust approach underlies recent advances in adversarial training BID49 BID22 BID42 BID12 BID33 , which heuristically perturb data during a stochastic optimization procedure.One such heuristic uses a locally linearized loss function (proposed with p = ∞ as the \"fast gradient sign method\" BID22 ): DISPLAYFORM2 One form of adversarial training trains on the losses (θ; (x i + ∆ xi (θ), y i )) BID22 BID29 , while others perform iterated variants BID42 BID12 BID33 BID51 . BID33 observe that\nthese procedures attempt to optimize the objective E P0 [sup u p ≤ (θ; Z + u)], a constrained version of the penalty problem (2). This notion of robustness\nis typically intractable: the inner supremum is generally non-concave in u, so it is unclear whether model-fitting with these techniques converges, and there are possibly worst-case perturbations these techniques do not find. Indeed, it is NP-hard to\nfind worst-case perturbations when deep networks use ReLU activations, suggesting difficulties for fast and iterated heuristics (see Lemma 2 in Appendix B). Smoothness, which can be\nobtained in standard deep architectures with exponential linear units (ELU's) BID15 , allows us to find Lagrangian worst-case perturbations with low computational cost.Distributionally robust optimization To situate the current work, we review some of the substantial body of work on robustness and learning. The choice of P in the robust\nobjective (1) affects both the richness of the uncertainty set we wish to consider as well as the tractability of the resulting optimization problem. Previous approaches to distributional\nrobustness have considered finitedimensional parametrizations for P, such as constraint sets for moments, support, or directional deviations BID13 BID16 BID21 , as well as non-parametric distances for probability measures such as f -divergences BID4 BID5 BID30 BID34 , and Wasserstein distances BID48 BID7 . In constrast to f -divergences (e.g.\nχ 2 -or Kullback-Leibler divergences) which are effective when the support of the distribution P 0 is fixed, a Wasserstein ball around P 0 includes distributions Q with different support and allows (in a sense) robustness to unseen data.Many authors have studied tractable classes of uncertainty sets P and losses . For example, BID4 and BID38 use convex\noptimization approaches for fdivergence balls. For worst-case regions P formed by Wasserstein\nballs, , BID48 , and BID7 show how to convert the saddle-point problem (1) to a regularized ERM problem, but this is possible only for a limited class of convex losses and costs c. In this work, we treat a much larger class of\nlosses and costs and provide direct solution methods for a Lagrangian relaxation of the saddle-point problem (1). One natural application area is in domain adaptation\nBID31 ; concurrently with this work, Lee & Raginsky provide guarantees similar to ours for the empirical minimizer of the robust saddle-point problem (1) and give specialized bounds for domain adaptation problems. 
In contrast, our approach is to use the distributionally\nrobust approach to both defend against imperceptible adversarial perturbations and develop efficient optimization procedures.\nExplicit distributional robustness of the form (5) is intractable except in limited cases.\nWe provide a principled method for efficiently guaranteeing distributional robustness with a simple form of adversarial data perturbation.\nUsing only assumptions about the smoothness of the loss function , we prove that our method enjoys strong statistical guarantees and fast optimization rates for a large class of problems.\nThe NP-hardness of certifying robustness for ReLU networks, coupled with our empirical success and theoretical certificates for smooth networks in deep learning, suggest that using smooth networks may be preferable if we wish to guarantee robustness.\nEmpirical evaluations indicate that our methods are in fact robust to perturbations in the data, and they match or outperform less-principled adversarial training techniques.\nThe major benefit of our approach is its simplicity and wide applicability across many models and machine-learning scenarios.There remain many avenues for future investigation.\nOur optimization result (Theorem 2) applies only for small values of robustness ρ and to a limited class of Wasserstein costs.\nFurthermore, our statistical guarantees (Theorems 3 and 4) use · ∞ -covering numbers as a measure of model complexity, which can become prohibitively large for deep networks.\nIn a learning-theoretic context, where the goal is to provide insight into convergence behavior as well as comfort that a procedure will \"work\" given enough data, such guarantees are satisfactory, but this may not be enough in security-essential contexts.\nThis problem currently persists for most learning-theoretic guarantees in deep learning, and the recent works of BID1 , BID18 , and BID39 attempt to mitigate this shortcoming.\nReplacing our current covering number arguments with more intricate notions such as margin-based bounds BID1 would extend the scope and usefulness of our theoretical guarantees.\nOf course, the certificate (15) still holds regardless.More broadly, this work focuses on small-perturbation attacks, and our theoretical guarantees show that it is possible to efficiently build models that provably guard against such attacks.\nOur method becomes another heuristic for protection against attacks with large adversarial budgets.\nIndeed, in the large-perturbation regime, efficiently training certifiably secure systems remains an important open question.\nWe believe that conventional · ∞ -defense heuristics developed for image classification do not offer much comfort in the large-perturbation/perceptible-attack setting: · ∞ -attacks with a large budget can render images indiscernible to human eyes, while, for example, · 1 -attacks allow a concerted perturbation to critical regions of the image.\nCertainly · ∞ -attack and defense models have been fruitful in building a foundation for security research in deep learning, but moving beyond them may be necessary for more advances in the large-perturbation regime.", "role": "user" }, { "content": "We provide a fast, principled adversarial training procedure with computational and statistical performance guarantees.", "role": "assistant" } ]
science.qasa_abstractive_qa
science.qasa_abstractive_qa.1398
[ { "content": "You will be shown an excerpt from a computer science research paper, followed by a question about the paper. Please answer the question given the context. Do not include any text in your response other than the answer.\n\nContext: The SSD approach is based on a feed-forward convolutional network that produces a fixed-size collection of bounding boxes and scores for the presence of object class instances in those boxes, followed by a non-maximum suppression step to produce the final detections. The early network layers are based on a standard architecture used for high quality image classification (truncated before any classification layers), which we will call the base network222We use the VGG-16 network as a base, but other networks should also produce good results.. We then add auxiliary structure to the network to produce detections with the following key features:\n\nQuestion: What do you mean by NMS(Non-Maximal Suppression) ?", "role": "user" }, { "content": "NMS is used in SSD to get the final predictions. The definition and process of NMS cannot be answered in this paper.", "role": "assistant" } ]
science.chemdner_ner
science.chemdner_ner.2496
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Chemical\"].\n\nPlease return the output as a JSON object of the format: {\"Chemical\": [\"Arsenic\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nPigment production by a new thermotolerant microalga Coelastrella sp. F50. Microalgae are good crops to produce natural pigments because of their high growth rates. Tropical zones are better locations than temperate areas for microalgal cultivation because they have longer duration of daylight and more stable temperatures throughout the year, but the high temperatures pose a challenge to microalgal cultivation. A newly isolated thermotolerant microalga produces reddish pigments under environmental stress. Morphological and molecular evidence including meridional ribs on the cell wall, pigment production, and its 18S rDNA sequence suggests that this microalga belongs to the genus Coelastrella. Salt stress and high light intensity accelerated biosynthesis of the pigments, and significant quantities of oil accumulated as the cells experienced stress due to nutrient deficiency. This microalga could withstand temperature of 50°C for more than 8h, which is a necessary trait for outdoor cultivation in tropical areas. The pigments contain astaxanthin, lutein, canthaxanthin, and β-carotene as analysed by using HPLC.", "role": "user" }, { "content": "{\"Chemical\": [\"\\u03b2-carotene\", \"astaxanthin\", \"lutein\", \"canthaxanthin\"]}", "role": "assistant" } ]
science.drug_combo_extraction_re
science.drug_combo_extraction_re.156
[ { "content": "You will be shown a paragraph from a biomedical research article. Your task is to extract all drug combinations appearing in a specific sentence within the paragraph. Each drug combination should be assigned to one of the following three categories:\n\n- POS: Indicates a positive interaction between the drugs in the combination; for instance, drugs that have an additive or synergistic effect.\n- NEG: Indicates a negative interaction between the drugs in the combination; for instance, combining these drugs might result in adverse side effects, or might reduce their efficacy.\n- COMB: Indicates that the drugs are given in combination, but the type of interaction between the drugs cannot be determined based on the provided context.\n\nPlease format your answer as a JSON array. Each entry in the array should itself be an array formatted as [<drug_1>, <drug_2>, ..., <drug_n>, <category>] where <category> is one of the three categories listed above and <drug_1>, ..., <drug_n> are drug names copied verbatim from the input sentence. Different combinations may involve different numbers of drugs; in other words, the `n` in <drug_n> may vary.\n\nAs an example, the following output describes two extracted drug combinations; one between two drugs, and one between three drugs:\n\n[[\"artesunate\", \"artemether\", \"COMB\"], [\"Nifedipine\", \"Labetalol\", \"Prazosin\", \"POS\"]]\n\nThe paragraph and specific sentence are below. You should only extract relationships expressed in the specific sentence; the paragraph is provided for context.\n\nIn your output, return only the JSON array and no other text. If no drug combinations appear, return an empty array `[]`.\n\nparagraph: Neoadjuvant chemotherapy (MVAC) in locally invasive bladder cancer. In order to evaluate the efficacy of neoadjuvant chemotherapy in invasive urothelial carcinoma of the bladder a retrospective analysis was performed. 54 patients without distant metastases ( T2-T3b , N0-X , M0 ) received 3 cycles of neoadjuvant chemotherapy according to the MVAC protocol ( methotrexate , vinblastine , doxorubicin and cisplatin ) after transurethral resection ( TUR ) followed by cystectomy . 52 patients had previously undergone cystectomy immediately after TUR. Complete histopathological remission was observed in 9 patients (17.3%) after TUR and in 17 patients (31.5%) after TUR+MVAC. Neoadjuvant MVAC resulted, therefore, in a 14% higher rate of complete remissions. The overall response to TUR was significantly improved by MVAC therapy. Downstaging by neoadjuvant chemotherapy was more readily achieved in initially low-stage tumours (T2: 44.4% and 30.8%, T3a: 47.1% and 19%, T3b: 5.3% and 5.5% in patients receiving TUR+MVAC and TUR alone, respectively). Overall survival did not differ significantly between both groups. Patients who were successfully downstaged to pT0 had a significantly better prognosis, and patients resistant to chemotherapy had the poorest prognosis, showing the shortest survival. 
In conclusion, histopathological response at cystectomy was improved by neoadjuvant MVAC chemotherapy after TUR and can be expected to be prognostically relevant in those patients who can be downstaged to T0, although overall survival failed to be significantly increased in this relatively small patient sample.\n\nsentence: 54 patients without distant metastases ( T2-T3b , N0-X , M0 ) received 3 cycles of neoadjuvant chemotherapy according to the MVAC protocol ( methotrexate , vinblastine , doxorubicin and cisplatin ) after transurethral resection ( TUR ) followed by cystectomy .", "role": "user" }, { "content": "[[\"methotrexate\", \"vinblastine\", \"doxorubicin\", \"cisplatin\", \"POS\"]]", "role": "assistant" } ]