{
"ED-Copilot: Reduce Emergency Department Wait Time with Language Model Diagnostic Assistance": {
"paper_title": "ED-Copilot: Reduce Emergency Department Wait Time with Language Model Diagnostic Assistance",
"arxiv_id": "2402.13448",
"authors": [
"Liwen Sun",
"Abhineet Agarwal",
"Aaron E. Kornblith",
"Bin Yu",
"Chenyan Xiong"
],
"year": 2024,
"venue": "International Conference on Machine Learning",
"abstract": "In the emergency department (ED), patients undergo triage and multiple laboratory tests before diagnosis. This time-consuming process causes ED crowding which impacts patient mortality, medical errors, staff burnout, etc. This work proposes (time) cost-effective diagnostic assistance that leverages artificial intelligence systems to help ED clinicians make efficient and accurate diagnoses. In collaboration with ED clinicians, we use public patient data to curate MIMIC-ED-Assist, a benchmark for AI systems to suggest laboratory tests that minimize wait time while accurately predicting critical outcomes such as death. With MIMIC-ED-Assist, we develop ED-Copilot which sequentially suggests patient-specific laboratory tests and makes diagnostic predictions. ED-Copilot employs a pre-trained bio-medical language model to encode patient information and uses reinforcement learning to minimize ED wait time and maximize prediction accuracy. On MIMIC-ED-Assist, ED-Copilot improves prediction accuracy over baselines while halving average wait time from four hours to two hours. ED-Copilot can also effectively personalize treatment recommendations based on patient severity, further highlighting its potential as a diagnostic assistant. Since MIMIC-ED-Assist is a retrospective benchmark, ED-Copilot is restricted to recommend only observed tests. We show ED-Copilot achieves competitive performance without this restriction as the maximum allowed time increases. Our code is available at https://github.com/cxcscmu/ED-Copilot.",
"references": [
{
"title": "Language models are weak learners",
"abstract": "A central notion in practical and theoretical machine learning is that of a $\\textit{weak learner}$, classifiers that achieve better-than-random performance (on any given distribution over data), even by a small margin. Such weak learners form the practical basis for canonical machine learning methods such as boosting. In this work, we illustrate that prompt-based large language models can operate effectively as said weak learners. Specifically, we illustrate the use of a large language model (LLM) as a weak learner in a boosting algorithm applied to tabular data. We show that by providing (properly sampled according to the distribution of interest) text descriptions of tabular data samples, LLMs can produce a summary of the samples that serves as a template for classification and achieves the aim of acting as a weak learner on this task. We incorporate these models into a boosting approach, which in some settings can leverage the knowledge within the LLM to outperform traditional tree-based boosting. The model outperforms both few-shot learning and occasionally even more involved fine-tuning procedures, particularly for tasks involving small numbers of data points. The results illustrate the potential for prompt-based LLMs to function not just as few-shot learners themselves, but as components of larger machine learning pipelines.",
"year": 2023,
"venue": "Neural Information Processing Systems",
"authors": [
"Hariharan Manikandan",
"Yiding Jiang",
"J. Z. Kolter"
],
"externalIds": {
"DBLP": "conf/nips/ManikandanJK23",
"ArXiv": "2306.14101",
"DOI": "10.48550/arXiv.2306.14101",
"CorpusId": 259252065
},
"url": "https://www.semanticscholar.org/paper/7d87fbdfbf5038a4e0ff09801b6d3b8a2e0c613a",
"referenceCount": 0,
"citationCount": 8,
"influentialCitationCount": 2,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Towards Expert-Level Medical Question Answering with Large Language Models",
"abstract": "Recent artificial intelligence (AI) systems have reached milestones in\"grand challenges\"ranging from Go to protein-folding. The capability to retrieve medical knowledge, reason over it, and answer medical questions comparably to physicians has long been viewed as one such grand challenge. Large language models (LLMs) have catalyzed significant progress in medical question answering; Med-PaLM was the first model to exceed a\"passing\"score in US Medical Licensing Examination (USMLE) style questions with a score of 67.2% on the MedQA dataset. However, this and other prior work suggested significant room for improvement, especially when models' answers were compared to clinicians' answers. Here we present Med-PaLM 2, which bridges these gaps by leveraging a combination of base LLM improvements (PaLM 2), medical domain finetuning, and prompting strategies including a novel ensemble refinement approach. Med-PaLM 2 scored up to 86.5% on the MedQA dataset, improving upon Med-PaLM by over 19% and setting a new state-of-the-art. We also observed performance approaching or exceeding state-of-the-art across MedMCQA, PubMedQA, and MMLU clinical topics datasets. We performed detailed human evaluations on long-form questions along multiple axes relevant to clinical applications. In pairwise comparative ranking of 1066 consumer medical questions, physicians preferred Med-PaLM 2 answers to those produced by physicians on eight of nine axes pertaining to clinical utility (p<0.001). We also observed significant improvements compared to Med-PaLM on every evaluation axis (p<0.001) on newly introduced datasets of 240 long-form\"adversarial\"questions to probe LLM limitations. While further studies are necessary to validate the efficacy of these models in real-world settings, these results highlight rapid progress towards physician-level performance in medical question answering.",
"year": 2023,
"venue": "arXiv.org",
"authors": [
"K. Singhal",
"Tao Tu",
"Juraj Gottweis",
"R. Sayres",
"Ellery Wulczyn",
"Le Hou",
"Kevin Clark",
"S. Pfohl",
"H. Cole-Lewis",
"Darlene Neal",
"M. Schaekermann",
"Amy Wang",
"Mohamed Amin",
"S. Lachgar",
"P. A. Mansfield",
"Sushant Prakash",
"Bradley Green",
"Ewa Dominowska",
"B. A. Y. Arcas",
"Nenad Tomašev",
"Yun Liu",
"Renee C Wong",
"Christopher Semturs",
"S. S. Mahdavi",
"J. Barral",
"D. Webster",
"G. Corrado",
"Yossi Matias",
"Shekoofeh Azizi",
"A. Karthikesalingam",
"Vivek Natarajan"
],
"externalIds": {
"DBLP": "journals/corr/abs-2305-09617",
"ArXiv": "2305.09617",
"DOI": "10.48550/arXiv.2305.09617",
"CorpusId": 258715226
},
"url": "https://www.semanticscholar.org/paper/7ed0faa6720cd176d57badbc0455af31a03f080c",
"referenceCount": 51,
"citationCount": 370,
"influentialCitationCount": 44,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Pythia: A Suite for Analyzing Large Language Models Across Training and Scaling",
"abstract": "How do large language models (LLMs) develop and evolve over the course of training? How do these patterns change as models scale? To answer these questions, we introduce \\textit{Pythia}, a suite of 16 LLMs all trained on public data seen in the exact same order and ranging in size from 70M to 12B parameters. We provide public access to 154 checkpoints for each one of the 16 models, alongside tools to download and reconstruct their exact training dataloaders for further study. We intend \\textit{Pythia} to facilitate research in many areas, and we present several case studies including novel results in memorization, term frequency effects on few-shot performance, and reducing gender bias. We demonstrate that this highly controlled setup can be used to yield novel insights toward LLMs and their training dynamics. Trained models, analysis code, training code, and training data can be found at \\url{https://github.com/EleutherAI/pythia}.",
"year": 2023,
"venue": "International Conference on Machine Learning",
"authors": [
"Stella Biderman",
"Hailey Schoelkopf",
"Quentin G. Anthony",
"Herbie Bradley",
"Kyle O'Brien",
"Eric Hallahan",
"Mohammad Aflah Khan",
"Shivanshu Purohit",
"USVSN Sai Prashanth",
"Edward Raff",
"Aviya Skowron",
"Lintang Sutawika",
"Oskar van der Wal"
],
"externalIds": {
"DBLP": "conf/icml/BidermanSABOHKP23",
"ArXiv": "2304.01373",
"DOI": "10.48550/arXiv.2304.01373",
"CorpusId": 257921893
},
"url": "https://www.semanticscholar.org/paper/be55e8ec4213868db08f2c3168ae666001bea4b8",
"referenceCount": 101,
"citationCount": 784,
"influentialCitationCount": 111,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "LLaMA: Open and Efficient Foundation Language Models",
"abstract": "We introduce LLaMA, a collection of foundation language models ranging from 7B to 65B parameters. We train our models on trillions of tokens, and show that it is possible to train state-of-the-art models using publicly available datasets exclusively, without resorting to proprietary and inaccessible datasets. In particular, LLaMA-13B outperforms GPT-3 (175B) on most benchmarks, and LLaMA-65B is competitive with the best models, Chinchilla-70B and PaLM-540B. We release all our models to the research community.",
"year": 2023,
"venue": "arXiv.org",
"authors": [
"Hugo Touvron",
"Thibaut Lavril",
"Gautier Izacard",
"Xavier Martinet",
"Marie-Anne Lachaux",
"Timothée Lacroix",
"Baptiste Rozière",
"Naman Goyal",
"Eric Hambro",
"Faisal Azhar",
"Aurelien Rodriguez",
"Armand Joulin",
"Edouard Grave",
"Guillaume Lample"
],
"externalIds": {
"DBLP": "journals/corr/abs-2302-13971",
"ArXiv": "2302.13971",
"CorpusId": 257219404
},
"url": "https://www.semanticscholar.org/paper/57e849d0de13ed5f91d086936296721d4ff75a75",
"referenceCount": 80,
"citationCount": 8037,
"influentialCitationCount": 1074,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Deep Reinforcement Learning for Cost-Effective Medical Diagnosis",
"abstract": "Dynamic diagnosis is desirable when medical tests are costly or time-consuming. In this work, we use reinforcement learning (RL) to find a dynamic policy that selects lab test panels sequentially based on previous observations, ensuring accurate testing at a low cost. Clinical diagnostic data are often highly imbalanced; therefore, we aim to maximize the $F_1$ score instead of the error rate. However, optimizing the non-concave $F_1$ score is not a classic RL problem, thus invalidates standard RL methods. To remedy this issue, we develop a reward shaping approach, leveraging properties of the $F_1$ score and duality of policy optimization, to provably find the set of all Pareto-optimal policies for budget-constrained $F_1$ score maximization. To handle the combinatorially complex state space, we propose a Semi-Model-based Deep Diagnosis Policy Optimization (SM-DDPO) framework that is compatible with end-to-end training and online learning. SM-DDPO is tested on diverse clinical tasks: ferritin abnormality detection, sepsis mortality prediction, and acute kidney injury diagnosis. Experiments with real-world data validate that SM-DDPO trains efficiently and identifies all Pareto-front solutions. Across all tasks, SM-DDPO is able to achieve state-of-the-art diagnosis accuracy (in some cases higher than conventional methods) with up to $85\\%$ reduction in testing cost. The code is available at [https://github.com/Zheng321/Deep-Reinforcement-Learning-for-Cost-Effective-Medical-Diagnosis].",
"year": 2023,
"venue": "International Conference on Learning Representations",
"authors": [
"Zheng Yu",
"Yikuan Li",
"Joseph Kim",
"Kai Huang",
"Yuan Luo",
"Mengdi Wang"
],
"externalIds": {
"ArXiv": "2302.10261",
"DBLP": "journals/corr/abs-2302-10261",
"DOI": "10.48550/arXiv.2302.10261",
"CorpusId": 257050266
},
"url": "https://www.semanticscholar.org/paper/00f742790f41f52b1bf5ab49d802293ecf3c211e",
"referenceCount": 58,
"citationCount": 9,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "MIMIC-IV, a freely accessible electronic health record dataset",
"abstract": null,
"year": 2023,
"venue": "Scientific Data",
"authors": [
"Alistair E. W. Johnson",
"Lucas Bulgarelli",
"Lu Shen",
"Alvin Gayles",
"Ayad Shammout",
"S. Horng",
"T. Pollard",
"Benjamin Moody",
"Brian Gow",
"Li-wei H. Lehman",
"L. Celi",
"R. Mark"
],
"externalIds": {
"PubMedCentral": "9810617",
"DOI": "10.1038/s41597-022-01899-x",
"CorpusId": 255439889,
"PubMed": "36596836"
},
"url": "https://www.semanticscholar.org/paper/5425de16356015c2f26d2a50684c6c46d6998f51",
"referenceCount": 27,
"citationCount": 544,
"influentialCitationCount": 53,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "Benchmarking emergency department prediction models with machine learning and public electronic health records",
"abstract": null,
"year": 2022,
"venue": "Scientific Data",
"authors": [
"F. Xie",
"Jun Zhou",
"Jin Wee Lee",
"Mingrui Tan",
"Siqi Li",
"Logasan SO Rajnthern",
"M. Chee",
"Bibhas Chakraborty",
"A. Wong",
"Alon Dagan",
"M. Ong",
"Fei Gao",
"Nan Liu"
],
"externalIds": {
"PubMedCentral": "9610299",
"DOI": "10.1038/s41597-022-01782-9",
"CorpusId": 253162824,
"PubMed": "36302776"
},
"url": "https://www.semanticscholar.org/paper/aa8a6211bef10290d517a18f214e2aff92346e7e",
"referenceCount": 75,
"citationCount": 29,
"influentialCitationCount": 2,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "TabLLM: Few-shot Classification of Tabular Data with Large Language Models",
"abstract": "We study the application of large language models to zero-shot and few-shot classification of tabular data. We prompt the large language model with a serialization of the tabular data to a natural-language string, together with a short description of the classification problem. In the few-shot setting, we fine-tune the large language model using some labeled examples. We evaluate several serialization methods including templates, table-to-text models, and large language models. Despite its simplicity, we find that this technique outperforms prior deep-learning-based tabular classification methods on several benchmark datasets. In most cases, even zero-shot classification obtains non-trivial performance, illustrating the method's ability to exploit prior knowledge encoded in large language models. Unlike many deep learning methods for tabular datasets, this approach is also competitive with strong traditional baselines like gradient-boosted trees, especially in the very-few-shot setting.",
"year": 2022,
"venue": "International Conference on Artificial Intelligence and Statistics",
"authors": [
"S. Hegselmann",
"Alejandro Buendia",
"Hunter Lang",
"Monica Agrawal",
"Xiaoyi Jiang",
"D. Sontag"
],
"externalIds": {
"DBLP": "journals/corr/abs-2210-10723",
"ArXiv": "2210.10723",
"DOI": "10.48550/arXiv.2210.10723",
"CorpusId": 252992811
},
"url": "https://www.semanticscholar.org/paper/9dcee248452d84b6bf26911ba6726ae5ce1a46f3",
"referenceCount": 62,
"citationCount": 131,
"influentialCitationCount": 21,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "BioGPT: Generative Pre-trained Transformer for Biomedical Text Generation and Mining",
"abstract": "Pre-trained language models have attracted increasing attention in the biomedical domain, inspired by their great success in the general natural language domain. Among the two main branches of pre-trained language models in the general language domain, i.e. BERT (and its variants) and GPT (and its variants), the first one has been extensively studied in the biomedical domain, such as BioBERT and PubMedBERT. While they have achieved great success on a variety of discriminative downstream biomedical tasks, the lack of generation ability constrains their application scope. In this paper, we propose BioGPT, a domain-specific generative Transformer language model pre-trained on large-scale biomedical literature. We evaluate BioGPT on six biomedical natural language processing tasks and demonstrate that our model outperforms previous models on most tasks. Especially, we get 44.98%, 38.42% and 40.76% F1 score on BC5CDR, KD-DTI and DDI end-to-end relation extraction tasks, respectively, and 78.2% accuracy on PubMedQA, creating a new record. Our case study on text generation further demonstrates the advantage of BioGPT on biomedical literature to generate fluent descriptions for biomedical terms.",
"year": 2022,
"venue": "Briefings Bioinform.",
"authors": [
"Renqian Luo",
"Liai Sun",
"Yingce Xia",
"Tao Qin",
"Sheng Zhang",
"Hoifung Poon",
"Tie-Yan Liu"
],
"externalIds": {
"ArXiv": "2210.10341",
"DBLP": "journals/bib/LuoSXQZPL22",
"DOI": "10.1093/bib/bbac409",
"CorpusId": 252542956,
"PubMed": "36156661"
},
"url": "https://www.semanticscholar.org/paper/44279244407a64431810f982be6d0c7da4429dd7",
"referenceCount": 59,
"citationCount": 549,
"influentialCitationCount": 52,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Medicine"
]
},
{
"title": "Overcrowding in Emergency Department: Causes, Consequences, and Solutions—A Narrative Review",
"abstract": "Overcrowding in Emergency Departments (EDs) is a phenomenon that is now widespread globally and causes a significant negative impact that goes on to affect the entire hospital. This contributes to a number of consequences that can affect both the number of resources available and the quality of care. Overcrowding is due to a number of factors that in most cases lead to an increase in the number of people within the ED, an increase in mortality and morbidity, and a decrease in the ability to provide critical services in a timely manner to patients suffering from medical emergencies. This phenomenon results in the Emergency Department reaching, and in some cases exceeding, its optimal capacity. In this review, the main causes and consequences involving this phenomenon were collected, including the effect caused by the SARS-CoV-2 virus in recent years. Finally, special attention was paid to the main operational strategies that have been developed over the years, strategies that can be applied both at the ED level (microlevel strategies) and at the hospital level (macrolevel strategies).",
"year": 2022,
"venue": "Healthcare",
"authors": [
"M. Sartini",
"Alessio Carbone",
"Alice Demartini",
"Luana Giribone",
"Martino Oliva",
"A. M. Spagnolo",
"P. Cremonesi",
"Francesco Canale",
"M. L. Cristina"
],
"externalIds": {
"PubMedCentral": "9498666",
"DOI": "10.3390/healthcare10091625",
"CorpusId": 251876789,
"PubMed": "36141237"
},
"url": "https://www.semanticscholar.org/paper/601fdf27ba80a54c92f5e7ff05d6067ee851f7b7",
"referenceCount": 60,
"citationCount": 52,
"influentialCitationCount": 1,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "Predictability and stability testing to assess clinical decision instrument performance for children after blunt torso trauma",
"abstract": "Objective The Pediatric Emergency Care Applied Research Network (PECARN) has developed a clinical-decision instrument (CDI) to identify children at very low risk of intra-abdominal injury. However, the CDI has not been externally validated. We sought to vet the PECARN CDI with the Predictability Computability Stability (PCS) data science framework, potentially increasing its chance of a successful external validation. Materials & Methods We performed a secondary analysis of two prospectively collected datasets: PECARN (12,044 children from 20 emergency departments) and an independent external validation dataset from the Pediatric Surgical Research Collaborative (PedSRC; 2,188 children from 14 emergency departments). We used PCS to reanalyze the original PECARN CDI along with new interpretable PCS CDIs we developed using the PECARN dataset. External validation was then measured on the PedSRC dataset. Results Three predictor variables (abdominal wall trauma, Glasgow Coma Scale Score <14, and abdominal tenderness) were found to be stable. Using only these variables, we developed a PCS CDI which had a lower sensitivity than the original PECARN CDI on internal PECARN validation but performed the same on external PedSRC validation (sensitivity 96.8% and specificity 44%). Conclusion The PCS data science framework vetted the PECARN CDI and its constituent predictor variables prior to external validation. In this case, the PECARN CDI with 7 predictors, and our PCS-based CDI with 3 stable predictors, had identical performance on independent external validation. This suggests that both CDIs will generalize well to new populations, offering a potential strategy to increase the chance of a successful (costly) prospective validation.",
"year": 2022,
"venue": "medRxiv",
"authors": [
"Aaron E. Kornblith",
"Chandan Singh",
"G. Devlin",
"N. Addo",
"C. Streck",
"J. Holmes",
"N. Kuppermann",
"J. Grupp‐Phelan",
"J. Fineman",
"A. Butte",
"Bin Yu"
],
"externalIds": {
"PubMedCentral": "9931266",
"DOI": "10.1371/journal.pdig.0000076",
"CorpusId": 247291442,
"PubMed": "36812570"
},
"url": "https://www.semanticscholar.org/paper/cceffcc041eb044866d14919d72ef2aa39b1d373",
"referenceCount": 43,
"citationCount": 9,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "Hierarchical Shrinkage: improving the accuracy and interpretability of tree-based methods",
"abstract": "Tree-based models such as decision trees and random forests (RF) are a cornerstone of modern machine-learning practice. To mitigate overfitting, trees are typically regularized by a variety of techniques that modify their structure (e.g. pruning). We introduce Hierarchical Shrinkage (HS), a post-hoc algorithm that does not modify the tree structure, and instead regularizes the tree by shrinking the prediction over each node towards the sample means of its ancestors. The amount of shrinkage is controlled by a single regularization parameter and the number of data points in each ancestor. Since HS is a post-hoc method, it is extremely fast, compatible with any tree growing algorithm, and can be used synergistically with other regularization techniques. Extensive experiments over a wide variety of real-world datasets show that HS substantially increases the predictive performance of decision trees, even when used in conjunction with other regularization techniques. Moreover, we find that applying HS to each tree in an RF often improves accuracy, as well as its interpretability by simplifying and stabilizing its decision boundaries and SHAP values. We further explain the success of HS in improving prediction performance by showing its equivalence to ridge regression on a (supervised) basis constructed of decision stumps associated with the internal nodes of a tree. All code and models are released in a full-fledged package available on Github (github.com/csinva/imodels)",
"year": 2022,
"venue": "International Conference on Machine Learning",
"authors": [
"Abhineet Agarwal",
"Yan Shuo Tan",
"O. Ronen",
"Chandan Singh",
"Bin Yu"
],
"externalIds": {
"DBLP": "conf/icml/AgarwalTRS022",
"ArXiv": "2202.00858",
"CorpusId": 246473164
},
"url": "https://www.semanticscholar.org/paper/a855949d557fe140de41cb33bae26d51219f6560",
"referenceCount": 61,
"citationCount": 23,
"influentialCitationCount": 2,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science",
"Mathematics"
]
},
{
"title": "A large language model for electronic health records",
"abstract": null,
"year": 2022,
"venue": "npj Digital Medicine",
"authors": [
"Xi Yang",
"Aokun Chen",
"Nima M. Pournejatian",
"Hoo-Chang Shin",
"Kaleb E. Smith",
"Christopher Parisien",
"Colin B. Compas",
"Cheryl Martin",
"Anthony B Costa",
"Mona G. Flores",
"Ying Zhang",
"Tanja Magoc",
"C. Harle",
"Gloria P. Lipori",
"Duane A. Mitchell",
"W. Hogan",
"E. Shenkman",
"Jiang Bian",
"Yonghui Wu"
],
"externalIds": {
"DBLP": "journals/npjdm/0015CPSSPC0CFZM22",
"ArXiv": "2203.03540",
"PubMedCentral": "9792464",
"DOI": "10.1038/s41746-022-00742-2",
"CorpusId": 255175535,
"PubMed": "36572766"
},
"url": "https://www.semanticscholar.org/paper/332dc8b2ca9d49fad607c7282f3360bb2a9aacf3",
"referenceCount": 86,
"citationCount": 330,
"influentialCitationCount": 12,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Medicine"
]
},
{
"title": "Emergency Department Overcrowding: Understanding the Factors to Find Corresponding Solutions",
"abstract": "It is certain and established that overcrowding represents one of the main problems that has been affecting global health and the functioning of the healthcare system in the last decades, and this is especially true for the emergency department (ED). Since 1980, overcrowding has been identified as one of the main factors limiting correct, timely, and efficient hospital care. The more recent COVID-19 pandemic contributed to the accentuation of this phenomenon, which was already well known and of international interest. Considering what would appear to be a trivial definition of overcrowding, it may seem simple for the reader to hypothesize solutions for what seems to be one of the most avoidable problems affecting the hospital system. However, proposing solutions to overcrowding, as well as their implementation, cannot be separated from a correct and precise definition of the issue, which must consider the main causes and aggravating factors. In light of the need of finding solutions that can put an end to hospital overcrowding, this review aims, through a review of the literature, to summarize the triggering factors, as well as the possible solutions that can be proposed.",
"year": 2022,
"venue": "Journal of Personalized Medicine",
"authors": [
"G. Savioli",
"I. Ceresa",
"N. Gri",
"Gaia Bavestrello Piccini",
"Y. Longhitano",
"C. Zanza",
"A. Piccioni",
"Ciro Esposito",
"G. Ricevuti",
"M. Bressan"
],
"externalIds": {
"PubMedCentral": "8877301",
"DOI": "10.3390/jpm12020279",
"CorpusId": 246887480,
"PubMed": "35207769"
},
"url": "https://www.semanticscholar.org/paper/31e061e9232f128b30fc34d7bc4f3c2e8ebbd4ab",
"referenceCount": 92,
"citationCount": 111,
"influentialCitationCount": 4,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "Clinical Relation Extraction Using Transformer-based Models",
"abstract": "The newly emerged transformer technology has a tremendous impact on NLP research. In the general English domain, transformer-based models have achieved state-of-the-art performances on various NLP benchmarks. In the clinical domain, researchers also have investigated transformer models for clinical applications. The goal of this study is to systematically explore three widely used transformer-based models (i.e., BERT, RoBERTa, and XLNet) for clinical relation extraction and develop an open-source package with clinical pre-trained transformer-based models to facilitate information extraction in the clinical domain. We developed a series of clinical RE models based on three transformer architectures, namely BERT, RoBERTa, and XLNet. We evaluated these models using 2 publicly available datasets from 2018 MADE1.0 and 2018 n2c2 challenges. We compared two classification strategies (binary vs. multi-class classification) and investigated two approaches to generate candidate relations in different experimental settings. In this study, we compared three transformer-based (BERT, RoBERTa, and XLNet) models for relation extraction. We demonstrated that the RoBERTa-clinical RE model achieved the best performance on the 2018 MADE1.0 dataset with an F1-score of 0.8958. On the 2018 n2c2 dataset, the XLNet-clinical model achieved the best F1-score of 0.9610. Our results indicated that the binary classification strategy consistently outperformed the multi-class classification strategy for clinical relation extraction. Our methods and models are publicly available at this https URL. We believe this work will improve current practice on clinical relation extraction and other related NLP tasks in the biomedical domain.",
"year": 2021,
"venue": "arXiv.org",
"authors": [
"Xi Yang",
"Zehao Yu",
"Yi Guo",
"J. Bian",
"Yonghui Wu"
],
"externalIds": {
"MAG": "3184140580",
"ArXiv": "2107.08957",
"DBLP": "journals/corr/abs-2107-08957",
"CorpusId": 236087310
},
"url": "https://www.semanticscholar.org/paper/7d04408f674c8b6e98831b0c66d8ccfe8e3fa179",
"referenceCount": 75,
"citationCount": 19,
"influentialCitationCount": 1,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Predicting Progression to Septic Shock in the Emergency Department using an Externally Generalizable Machine Learning Algorithm",
"abstract": "Objective: Machine-learning (ML) algorithms allow for improved prediction of sepsis syndromes in the ED using data from electronic medical records. Transfer learning, a new subfield of ML, allows for generalizability of an algorithm across clinical sites. We aimed to validate the Artificial Intelligence Sepsis Expert (AISE) for the prediction of delayed septic shock in a cohort of patients treated in the ED and demonstrate the feasibility of transfer learning to improve external validity at a second site. Methods: Observational cohort study utilizing data from over 180,000 patients from two academic medical centers between 2014 and 2019 using multiple definitions of sepsis. The AISE algorithm was trained using 40 input variables at the development site to predict delayed septic shock (occurring greater than 4 hours after ED triage) at varying prediction windows. We then validated the AISE algorithm at a second site using transfer learning to demonstrate generalizability of the algorithm. Results: We identified 9354 patients with severe sepsis of which 723 developed septic shock at least 4 hours after triage. The AISE algorithm demonstrated excellent area under the receiver operating curve (>0.8) at 8 and 12 hours for the prediction of delayed septic shock. Transfer learning significantly improved the test characteristics of the AISE algorithm and yielded comparable performance at the validation site. Conclusions: The AISE algorithm accurately predicted the development of delayed septic shock. The use of transfer learning allowed for significantly improved external validity and generalizability at a second site. Future prospective studies are indicated to evaluate the clinical utility of this model.",
"year": 2020,
"venue": "medRxiv",
"authors": [
"Gabriel Wardi",
"M. Carlile",
"Andre Holder",
"S. Shashikumar",
"Stephen R Hayden",
"S. Nemati"
],
"externalIds": {
"MAG": "3095528344",
"DOI": "10.1101/2020.11.02.20224931",
"CorpusId": 226245577,
"PubMed": "33173889"
},
"url": "https://www.semanticscholar.org/paper/4fd4bd2e9f4f978723cd651877a9c0c46a274631",
"referenceCount": 44,
"citationCount": 23,
"influentialCitationCount": 1,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "Predicting 30-day mortality of patients with pneumonia in an emergency department setting using machine-learning models",
"abstract": "Objective This study aimed to confirm the accuracy of a machine-learning-based model in predicting the 30-day mortality of patients with pneumonia and evaluating whether they were required to be admitted to the intensive care unit (ICU). Methods The study conducted a retrospective analysis of pneumonia patients at an emergency department (ED) in Seoul, Korea, from January 1, 2016 to December 31, 2017. Patients aged 18 years or older with a pneumonia registry designation on their electronic medical record were enrolled. We collected their demographic information, mental status, and laboratory findings. Three models were used: the pre-existing CURB-65 model, and the CURB-RF and Extensive CURB-RF models, which were machine-learning models that used a random forest algorithm. The primary outcomes were ICU admission from the ED or 30-day mortality. Receiver operating characteristic curves were constructed for the models, and the areas under these curves were compared. Results Out of the 1,974 pneumonia patients, 1,732 patients were eligible to be included in the study; from these, 473 patients died within 30 days or were initially admitted to the ICU from the ED. The area under receiver operating characteristic curves of CURB-65, CURB-RF, and extensive-CURB-RF were 0.615 (0.614–0.616), 0.701 (0.700–0.702), and 0.844 (0.843–0.845), respectively. Conclusion The proposed machine-learning models could predict the mortality of patients with pneumonia more accurately than the pre-existing CURB-65 model and can help decide whether the patient should be admitted to the ICU.",
"year": 2020,
"venue": "Clinical and Experimental Emergency Medicine",
"authors": [
"S. Kang",
"W. Cha",
"Junsang Yoo",
"Taerim Kim",
"J. Park",
"Hee Yoon",
"S. Hwang",
"M. Sim",
"I. Jo",
"T. Shin"
],
"externalIds": {
"MAG": "3091740529",
"PubMedCentral": "7550804",
"DOI": "10.15441/ceem.19.052",
"CorpusId": 222216758,
"PubMed": "33028063"
},
"url": "https://www.semanticscholar.org/paper/22c10eaea358caa1cfa5fa1671414a0c2a3dd81e",
"referenceCount": 35,
"citationCount": 16,
"influentialCitationCount": 1,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "MIMIC-Extract: a data extraction, preprocessing, and representation pipeline for MIMIC-III",
"abstract": "Machine learning for healthcare researchers face challenges to progress and reproducibility due to a lack of standardized processing frameworks for public datasets. We present MIMIC-Extract, an open source pipeline for transforming the raw electronic health record (EHR) data of critical care patients from the publicly-available MIMIC-III database into data structures that are directly usable in common time-series prediction pipelines. MIMIC-Extract addresses three challenges in making complex EHR data accessible to the broader machine learning community. First, MIMIC-Extract transforms raw vital sign and laboratory measurements into usable hourly time series, performing essential steps such as unit conversion, outlier handling, and aggregation of semantically similar features to reduce missingness and improve robustness. Second, MIMIC-Extract extracts and makes prediction of clinically-relevant targets possible, including outcomes such as mortality and length-of-stay as well as comprehensive hourly intervention signals for ventilators, vasopressors, and fluid therapies. Finally, the pipeline emphasizes reproducibility and extensibility to future research questions. We demonstrate the pipeline's effectiveness by developing several benchmark tasks for outcome and intervention forecasting and assessing the performance of competitive models.",
"year": 2019,
"venue": "ACM Conference on Health, Inference, and Learning",
"authors": [
"Shirly Wang",
"Matthew B. A. McDermott",
"Geeticka Chauhan",
"Michael C. Hughes",
"Tristan Naumann",
"M. Ghassemi"
],
"externalIds": {
"DBLP": "journals/corr/abs-1907-08322",
"ArXiv": "1907.08322",
"MAG": "2962968929",
"DOI": "10.1145/3368555.3384469",
"CorpusId": 197935541
},
"url": "https://www.semanticscholar.org/paper/ee1ddb3fe85bdc1650c4d32a00e5051d57e71ebb",
"referenceCount": 31,
"citationCount": 172,
"influentialCitationCount": 11,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Mathematics"
]
},
{
"title": "Stop the Bottleneck: Improving Patient Throughput in the Emergency Department",
"abstract": null,
"year": 2018,
"venue": "Journal of Emergency Nursing",
"authors": [
"Ray DeAnda"
],
"externalIds": {
"MAG": "2809475017",
"DOI": "10.1016/j.jen.2018.05.002",
"CorpusId": 49413575,
"PubMed": "29935944"
},
"url": "https://www.semanticscholar.org/paper/4ba206f8060f6640fc6ab5de8ba67ec361e82540",
"referenceCount": 10,
"citationCount": 13,
"influentialCitationCount": 3,
"isOpenAccess": false,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "The Artificial Intelligence Clinician learns optimal treatment strategies for sepsis in intensive care",
"abstract": null,
"year": 2018,
"venue": "Nature Network Boston",
"authors": [
"M. Komorowski",
"L. Celi",
"Omar Badawi",
"A. Gordon",
"Aldo A. Faisal"
],
"externalIds": {
"MAG": "2896893468",
"DOI": "10.1038/s41591-018-0213-5",
"CorpusId": 53025350,
"PubMed": "30349085"
},
"url": "https://www.semanticscholar.org/paper/cc07fc48ce2a381e7f39235cef5fd10b939182c4",
"referenceCount": 44,
"citationCount": 804,
"influentialCitationCount": 57,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "Benchmarking deep learning models on large healthcare datasets",
"abstract": null,
"year": 2018,
"venue": "Journal of Biomedical Informatics",
"authors": [
"S. Purushotham",
"Chuizheng Meng",
"Zhengping Che",
"Yan Liu"
],
"externalIds": {
"MAG": "2806031239",
"DBLP": "journals/jbi/PurushothamMCL18",
"DOI": "10.1016/j.jbi.2018.04.007",
"CorpusId": 46962892,
"PubMed": "29879470"
},
"url": "https://www.semanticscholar.org/paper/1701a367d06fda7d5c17f0345c06dcd093dac7ef",
"referenceCount": 35,
"citationCount": 291,
"influentialCitationCount": 34,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science",
"Medicine"
]
},
{
"title": "Diagnostic Assessment of Deep Learning Algorithms for Detection of Lymph Node Metastases in Women With Breast Cancer",
"abstract": "Importance Application of deep learning algorithms to whole-slide pathology images can potentially improve diagnostic accuracy and efficiency. Objective Assess the performance of automated deep learning algorithms at detecting metastases in hematoxylin and eosin–stained tissue sections of lymph nodes of women with breast cancer and compare it with pathologists’ diagnoses in a diagnostic setting. Design, Setting, and Participants Researcher challenge competition (CAMELYON16) to develop automated solutions for detecting lymph node metastases (November 2015-November 2016). A training data set of whole-slide images from 2 centers in the Netherlands with (n = 110) and without (n = 160) nodal metastases verified by immunohistochemical staining were provided to challenge participants to build algorithms. Algorithm performance was evaluated in an independent test set of 129 whole-slide images (49 with and 80 without metastases). The same test set of corresponding glass slides was also evaluated by a panel of 11 pathologists with time constraint (WTC) from the Netherlands to ascertain likelihood of nodal metastases for each slide in a flexible 2-hour session, simulating routine pathology workflow, and by 1 pathologist without time constraint (WOTC). Exposures Deep learning algorithms submitted as part of a challenge competition or pathologist interpretation. Main Outcomes and Measures The presence of specific metastatic foci and the absence vs presence of lymph node metastasis in a slide or image using receiver operating characteristic curve analysis. The 11 pathologists participating in the simulation exercise rated their diagnostic confidence as definitely normal, probably normal, equivocal, probably tumor, or definitely tumor. Results The area under the receiver operating characteristic curve (AUC) for the algorithms ranged from 0.556 to 0.994. The top-performing algorithm achieved a lesion-level, true-positive fraction comparable with that of the pathologist WOTC (72.4% [95% CI, 64.3%-80.4%]) at a mean of 0.0125 false-positives per normal whole-slide image. For the whole-slide image classification task, the best algorithm (AUC, 0.994 [95% CI, 0.983-0.999]) performed significantly better than the pathologists WTC in a diagnostic simulation (mean AUC, 0.810 [range, 0.738-0.884]; P < .001). The top 5 algorithms had a mean AUC that was comparable with the pathologist interpreting the slides in the absence of time constraints (mean AUC, 0.960 [range, 0.923-0.994] for the top 5 algorithms vs 0.966 [95% CI, 0.927-0.998] for the pathologist WOTC). Conclusions and Relevance In the setting of a challenge competition, some deep learning algorithms achieved better diagnostic performance than a panel of 11 pathologists participating in a simulation exercise designed to mimic routine pathology workflow; algorithm performance was comparable with an expert pathologist interpreting whole-slide images without time constraints. Whether this approach has clinical utility will require evaluation in a clinical setting.",
"year": 2017,
"venue": "Journal of the American Medical Association (JAMA)",
"authors": [
"Babak Ehteshami Bejnordi",
"M. Veta",
"Paul Johannes van Diest",
"B. van Ginneken",
"N. Karssemeijer",
"G. Litjens",
"J. A. van der Laak",
"M. Hermsen",
"Quirine F. Manson",
"M. Balkenhol",
"Oscar G. F. Geessink",
"Nikolaos Stathonikos",
"M. V. van Dijk",
"P. Bult",
"F. Beca",
"Andrew H. Beck",
"Dayong Wang",
"A. Khosla",
"Rishab Gargeya",
"H. Irshad",
"Aoxiao Zhong",
"Q. Dou",
"Quanzheng Li",
"Hao Chen",
"Huangjing Lin",
"P. Heng",
"C. Hass",
"Elia Bruni",
"Q. Wong",
"U. Halici",
"Mustafa Ümit Öner",
"R. Cetin-Atalay",
"Matt Berseth",
"V. Khvatkov",
"A. Vylegzhanin",
"Oren Z. Kraus",
"M. Shaban",
"N. Rajpoot",
"Ruqayya Awan",
"K. Sirinukunwattana",
"Talha Qaiser",
"Y. Tsang",
"David Tellez",
"Jonas Annuscheit",
"P. Hufnagl",
"Mira Valkonen",
"K. Kartasalo",
"Leena Latonen",
"P. Ruusuvuori",
"Kaisa Liimatainen",
"Shadi Albarqouni",
"Bharti Mungal",
"A. George",
"S. Demirci",
"N. Navab",
"Seiryo Watanabe",
"S. Seno",
"Y. Takenaka",
"H. Matsuda",
"Hady Ahmady Phoulady",
"V. Kovalev",
"A. Kalinovsky",
"V. Liauchuk",
"G. Bueno",
"M. Fernández-Carrobles",
"I. Serrano",
"O. Deniz",
"Daniel Racoceanu",
"Rui Venâncio"
],
"externalIds": {
"MAG": "2772723798",
"DOI": "10.1001/jama.2017.14585",
"CorpusId": 205086555,
"PubMed": "29234806"
},
"url": "https://www.semanticscholar.org/paper/ba913e2c03ece1c75f0af4d16dd11c7ffbc6e3ba",
"referenceCount": 71,
"citationCount": 2380,
"influentialCitationCount": 86,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "LightGBM: A Highly Efficient Gradient Boosting Decision Tree",
"abstract": "Gradient Boosting Decision Tree (GBDT) is a popular machine learning algorithm, and has quite a few effective implementations such as XGBoost and pGBRT. Although many engineering optimizations have been adopted in these implementations, the efficiency and scalability are still unsatisfactory when the feature dimension is high and data size is large. A major reason is that for each feature, they need to scan all the data instances to estimate the information gain of all possible split points, which is very time consuming. To tackle this problem, we propose two novel techniques: \\emph{Gradient-based One-Side Sampling} (GOSS) and \\emph{Exclusive Feature Bundling} (EFB). With GOSS, we exclude a significant proportion of data instances with small gradients, and only use the rest to estimate the information gain. We prove that, since the data instances with larger gradients play a more important role in the computation of information gain, GOSS can obtain quite accurate estimation of the information gain with a much smaller data size. With EFB, we bundle mutually exclusive features (i.e., they rarely take nonzero values simultaneously), to reduce the number of features. We prove that finding the optimal bundling of exclusive features is NP-hard, but a greedy algorithm can achieve quite good approximation ratio (and thus can effectively reduce the number of features without hurting the accuracy of split point determination by much). We call our new GBDT implementation with GOSS and EFB \\emph{LightGBM}. Our experiments on multiple public datasets show that, LightGBM speeds up the training process of conventional GBDT by up to over 20 times while achieving almost the same accuracy.",
"year": 2017,
"venue": "Neural Information Processing Systems",
"authors": [
"Guolin Ke",
"Qi Meng",
"Thomas Finley",
"Taifeng Wang",
"Wei Chen",
"Weidong Ma",
"Qiwei Ye",
"Tie-Yan Liu"
],
"externalIds": {
"DBLP": "conf/nips/KeMFWCMYL17",
"MAG": "2753094203",
"CorpusId": 3815895
},
"url": "https://www.semanticscholar.org/paper/497e4b08279d69513e4d2313a7fd9a55dfb73273",
"referenceCount": 32,
"citationCount": 8514,
"influentialCitationCount": 931,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science",
"Mathematics"
]
},
{
"title": "Machine‐Learning‐Based Electronic Triage More Accurately Differentiates Patients With Respect to Clinical Outcomes Compared With the Emergency Severity Index",
"abstract": null,
"year": 2017,
"venue": "Annals of Emergency Medicine",
"authors": [
"S. Levin",
"Matthew F. Toerper",
"Eric Hamrock",
"J. Hinson",
"S. Barnes",
"H. Gardner",
"A. Dugas",
"Bob Linton",
"Tom Kirsch",
"G. Kelen"
],
"externalIds": {
"MAG": "2795322922",
"DOI": "10.1016/j.annemergmed.2017.08.005",
"CorpusId": 5046204,
"PubMed": "28888332"
},
"url": "https://www.semanticscholar.org/paper/b30b9344b1bacc32a01e0928ffe6c0a2a62e3b73",
"referenceCount": 58,
"citationCount": 254,
"influentialCitationCount": 7,
"isOpenAccess": false,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "Proximal Policy Optimization Algorithms",
"abstract": "We propose a new family of policy gradient methods for reinforcement learning, which alternate between sampling data through interaction with the environment, and optimizing a \"surrogate\" objective function using stochastic gradient ascent. Whereas standard policy gradient methods perform one gradient update per data sample, we propose a novel objective function that enables multiple epochs of minibatch updates. The new methods, which we call proximal policy optimization (PPO), have some of the benefits of trust region policy optimization (TRPO), but they are much simpler to implement, more general, and have better sample complexity (empirically). Our experiments test PPO on a collection of benchmark tasks, including simulated robotic locomotion and Atari game playing, and we show that PPO outperforms other online policy gradient methods, and overall strikes a favorable balance between sample complexity, simplicity, and wall-time.",
"year": 2017,
"venue": "arXiv.org",
"authors": [
"John Schulman",
"Filip Wolski",
"Prafulla Dhariwal",
"Alec Radford",
"Oleg Klimov"
],
"externalIds": {
"MAG": "2736601468",
"ArXiv": "1707.06347",
"DBLP": "journals/corr/SchulmanWDRK17",
"CorpusId": 28695052
},
"url": "https://www.semanticscholar.org/paper/dce6f9d4017b1785979e7520fd0834ef8cf02f4b",
"referenceCount": 14,
"citationCount": 14880,
"influentialCitationCount": 3163,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Multitask learning and benchmarking with clinical time series data",
"abstract": null,
"year": 2017,
"venue": "Scientific Data",
"authors": [
"Hrayr Harutyunyan",
"Hrant Khachatrian",
"David C. Kale",
"A. Galstyan"
],
"externalIds": {
"MAG": "2963036860",
"PubMedCentral": "6572845",
"ArXiv": "1703.07771",
"DBLP": "journals/corr/HarutyunyanKKG17",
"DOI": "10.1038/s41597-019-0103-9",
"CorpusId": 14502739,
"PubMed": "31209213"
},
"url": "https://www.semanticscholar.org/paper/9ae9d2b060e50094be7e473e449f192403019225",
"referenceCount": 96,
"citationCount": 757,
"influentialCitationCount": 118,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine",
"Mathematics",
"Computer Science",
"Biology"
]
},
{
"title": "Improving emergency department patient flow",
"abstract": "Emergency departments (ED) face significant challenges in delivering high quality and timely patient care on an ever-present background of increasing patient numbers and limited hospital resources. A mismatch between patient demand and the ED’s capacity to deliver care often leads to poor patient flow and departmental crowding. These are associated with reduction in the quality of the care delivered and poor patient outcomes. A literature review was performed to identify evidence-based strategies to reduce the amount of time patients spend in the ED in order to improve patient flow and reduce crowding in the ED. The use of doctor triage, rapid assessment, streaming and the co-location of a primary care clinician in the ED have all been shown to improve patient flow. In addition, when used effectively point of care testing has been shown to reduce patient time in the ED. Patient flow and departmental crowding can be improved by implementing new patterns of working and introducing new technologies such as point of care testing in the ED.",
"year": 2016,
"venue": "Clinical and Experimental Emergency Medicine",
"authors": [
"P. Jarvis"
],
"externalIds": {
"MAG": "2473057391",
"PubMedCentral": "5051606",
"DOI": "10.15441/ceem.16.127",
"CorpusId": 15788583,
"PubMed": "27752619"
},
"url": "https://www.semanticscholar.org/paper/996b1635c33883190b34b284efdd0ac2ca6297ce",
"referenceCount": 45,
"citationCount": 107,
"influentialCitationCount": 4,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "XGBoost: A Scalable Tree Boosting System",
"abstract": "Tree boosting is a highly effective and widely used machine learning method. In this paper, we describe a scalable end-to-end tree boosting system called XGBoost, which is used widely by data scientists to achieve state-of-the-art results on many machine learning challenges. We propose a novel sparsity-aware algorithm for sparse data and weighted quantile sketch for approximate tree learning. More importantly, we provide insights on cache access patterns, data compression and sharding to build a scalable tree boosting system. By combining these insights, XGBoost scales beyond billions of examples using far fewer resources than existing systems.",
"year": 2016,
"venue": "Knowledge Discovery and Data Mining",
"authors": [
"Tianqi Chen",
"Carlos Guestrin"
],
"externalIds": {
"ArXiv": "1603.02754",
"DBLP": "conf/kdd/ChenG16",
"MAG": "3102476541",
"DOI": "10.1145/2939672.2939785",
"CorpusId": 4650265
},
"url": "https://www.semanticscholar.org/paper/26bc9195c6343e4d7f434dd65b4ad67efe2be27a",
"referenceCount": 26,
"citationCount": 30775,
"influentialCitationCount": 2876,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "The Effect of Laboratory Testing on Emergency Department Length of Stay: A Multihospital Longitudinal Study Applying a Cross‐classified Random‐effect Modeling Approach",
"abstract": "Abstract Objectives The objective was to examine the relationship between laboratory testing (including test volume and turnaround time [TAT]) and emergency department (ED) length of stay (LOS), using linked patient‐level data from four hospitals across 4 years. Methods This was a retrospective, multisite cohort study of patients presenting to any one of four EDs in New South Wales, Australia, during a 2‐month period (August and September) in 2008, 2009, 2010, and 2011. Data from ED information systems were linked to laboratory test data. A cross‐classified random‐effect modeling approach was applied to identify factors affecting ED LOS, taking into account the correlation between patients' presentations at the same hospital and/or in the same calendar year. Number of test order episodes (tests ordered at one point in time during the ED stay) and TAT (time from laboratory order receipt to result available) were examined. Results As the number of test order episodes increased, so did the duration of patient ED LOS (p < 0.0001). For every five additional tests ordered per test order episode, the median ED LOS increased by 10 minutes (2.9%, p < 0.0001); each 30‐minute increase in TAT was, on average, associated with a 5.1% (17 minutes; p < 0.0001) increase in ED LOS, after adjustment for other factors. Patients presenting to the ED at night (7 p.m. to 7 a.m.) had longer stays than those presenting during the daytime, although the median TATs at nights were shorter than those during the daytime. Conclusions Laboratory testing has a direct effect on patients' LOS in ED. Laboratory TAT, number of testing episodes, and test volume influence ED LOS. Targeted increases of ED resources and staffing after‐hours may also contribute to reductions in ED LOS.",
"year": 2015,
"venue": "Academic Emergency Medicine",
"authors": [
"Ling Li",
"A. Georgiou",
"E. Vecellio",
"Alex Eigenstetter",
"G. Toouli",
"R. Wilson",
"J. Westbrook"
],
"externalIds": {
"MAG": "1970955942",
"PubMedCentral": "6680199",
"DOI": "10.1111/acem.12565",
"CorpusId": 1693766,
"PubMed": "25565488"
},
"url": "https://www.semanticscholar.org/paper/03148cc730a8a88cdf22300e310eb576cc995d04",
"referenceCount": 24,
"citationCount": 87,
"influentialCitationCount": 2,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "Factors contributing to inappropriate ordering of tests in an academic medical department and the effect of an educational feedback strategy",
"abstract": "Aims: To identify factors contributing to laboratory overutilisation in an academic medical department, and to assess the effect of an educational feedback strategy on inappropriate test-ordering behaviour. Methods: The records of 426 patients admitted during a 6-month period were reviewed. The usefulness of 25 investigations (haematology, basic biochemistry and arterial blood gases) was assessed according to implicit criteria. Trainees’ acquaintance with investigation costs was assessed via a multiple-choice questionnaire. The medical staff was informed about their test-ordering behaviour, cost awareness and the factors associated with overuse of diagnostic tests. The test-ordering behaviour of the same doctors was reassessed on 214 patients managed during 6 months after the intervention. Results: Overall, 24 482 laboratory tests were ordered before the intervention (mean 2.96 tests/patient/day). Among those, 67.9% were not considered to have contributed towards management of patients (mean avoidable 2.01 tests/patient/day). Patient age ⩾65 years, hospitalisation beyond 7 days and increased case difficulty (death or inability to establish a diagnosis) were factors independently associated with overuse of laboratory tests. Senior trainees ordered more laboratory examinations, but the percentage of avoidable tests requested by junior trainees was higher. A moderate and disparate level of trainees’ awareness about the cost of common laboratory examinations was disclosed. The avoidable tests/patient/day were significantly decreased after the intervention (mean 1.58, p = 0.002), but containment of unnecessary ordering of tests gradually waned during the semester after the intervention. Conclusion: Repeated audit, continuous education and alertness of doctors, on the basis of assessment of factors contributing to laboratory overutilisation, result in restraining the redundant ordering of tests in the hospital setting.",
"year": 2006,
"venue": "Postgraduate medical journal",
"authors": [
"S. Miyakis",
"Georgios Karamanof",
"M. Liontos",
"T. Mountokalakis"
],
"externalIds": {
"MAG": "2073762434",
"DOI": "10.1136/pgmj.2006.049551",
"CorpusId": 12651031,
"PubMed": "17148707"
},
"url": "https://www.semanticscholar.org/paper/26659217db3a1b1148a663aa48a7c9b0cb09522f",
"referenceCount": 31,
"citationCount": 250,
"influentialCitationCount": 6,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "Mimic-iv-ed (version 2.2)",
"abstract": null,
"year": 2023,
"venue": "Phys-ioNet",
"authors": [],
"externalIds": null,
"url": null,
"referenceCount": null,
"citationCount": null,
"influentialCitationCount": null,
"isOpenAccess": null,
"fieldsOfStudy": null
},
{
"title": "Fast Interpretable Greedy-Tree Sums (FIGS)",
"abstract": "Modern machine learning has achieved impressive prediction performance, but often sacrifices interpretability, a critical consideration in many problems. Here, we propose Fast Interpretable Greedy-Tree Sums (FIGS), an algorithm for fit-ting concise rule-based models.",
"year": 2022,
"venue": "arXiv.org",
"authors": [
"Yan Shuo Tan",
"Chandan Singh",
"Keyan Nasseri",
"Abhineet Agarwal",
"Bin Yu"
],
"externalIds": {
"DBLP": "journals/corr/abs-2201-11931",
"CorpusId": 246411594
},
"url": "https://www.semanticscholar.org/paper/86e1e19a086082a7c2b8c8a866cda6d7ca22bfcf",
"referenceCount": 49,
"citationCount": 23,
"influentialCitationCount": 2,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Stable-Baselines3: Reliable Reinforcement Learning Implementations",
"abstract": "Stable-Baselines3 provides open-source implementations of deep reinforcement learning (RL) algorithms in Python. The implementations have been benchmarked against reference codebases, and automated unit tests cover 95% of the code. The algorithms follow a consistent interface and are accompanied by extensive documentation, making it simple to train and compare different RL algorithms. Our documentation, examples, and source-code are available at https://github.com/DLR-RM/stable-baselines3 .",
"year": 2021,
"venue": "Journal of machine learning research",
"authors": [
"A. Raffin",
"Ashley Hill",
"A. Gleave",
"A. Kanervisto",
"M. Ernestus",
"Noah Dormann"
],
"externalIds": {
"DBLP": "journals/jmlr/RaffinHGKED21",
"CorpusId": 246432496
},
"url": "https://www.semanticscholar.org/paper/e3fc5b5627af62ee6981a02090cf6bae368202d7",
"referenceCount": 33,
"citationCount": 1302,
"influentialCitationCount": 82,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "of artificial",
"abstract": "Every year our life is becoming more dynamic. The development of advanced technologies is changing the format of civil-law relations in their classical sense (used by lawyers or ordinary citizens). Today, the norm for us is the use of artificial intelligence technologies in assessing credit risks, implementing them in the courts, legal departments of large companies, public authorities and management, in manufacturing, in the conclusion of smart contracts and, ultimately, in our homes (in the use of smart home technology). We don’t even think about ower using artificial intelligence programs almost every day. For example, email spam filters, face recognition, search recommendations, smart personal assistants (Siri), shared apps (Uber), etc. Such changes are positive, as modern artificial intelligence technologies are designed to simplify a person’s life, assist him or her in work or in accessing public services. However, they are accompanied by several unknown legal doctrines that need detailed scrutiny. In general, the legal profession is one of those that most felt the ambiguous effect of modern technology being on the verge of upheaval1. This is due to the introduction of artificial intelligence technologies into the work of lawyers when such technologies replace them2.",
"year": 2021,
"venue": "",
"authors": [
"Nataliia Martsenko"
],
"externalIds": {
"CorpusId": 245633924
},
"url": "https://www.semanticscholar.org/paper/029deee19f59c8106d69753e2fb58d42ab2484ce",
"referenceCount": 40,
"citationCount": 9,
"influentialCitationCount": 0,
"isOpenAccess": false,
"fieldsOfStudy": null
},
{
"title": "Language Models are Unsupervised Multitask Learners",
"abstract": "Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on taskspecific datasets. We demonstrate that language models begin to learn these tasks without any explicit supervision when trained on a new dataset of millions of webpages called WebText. When conditioned on a document plus questions, the answers generated by the language model reach 55 F1 on the CoQA dataset matching or exceeding the performance of 3 out of 4 baseline systems without using the 127,000+ training examples. The capacity of the language model is essential to the success of zero-shot task transfer and increasing it improves performance in a log-linear fashion across tasks. Our largest model, GPT-2, is a 1.5B parameter Transformer that achieves state of the art results on 7 out of 8 tested language modeling datasets in a zero-shot setting but still underfits WebText. Samples from the model reflect these improvements and contain coherent paragraphs of text. These findings suggest a promising path towards building language processing systems which learn to perform tasks from their naturally occurring demonstrations.",
"year": 2019,
"venue": "",
"authors": [
"Alec Radford",
"Jeff Wu",
"R. Child",
"D. Luan",
"Dario Amodei",
"I. Sutskever"
],
"externalIds": {
"MAG": "2955855238",
"CorpusId": 160025533
},
"url": "https://www.semanticscholar.org/paper/9405cc0d6169988371b2755e573cc28650d14dfe",
"referenceCount": 75,
"citationCount": 18460,
"influentialCitationCount": 3039,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Random Forests",
"abstract": null,
"year": 2001,
"venue": "Machine-mediated learning",
"authors": [
"L. Breiman"
],
"externalIds": {
"MAG": "2911964244",
"DBLP": "reference/ml/X17sy",
"DOI": "10.1023/A:1010933404324",
"CorpusId": 89141
},
"url": "https://www.semanticscholar.org/paper/8e0be569ea77b8cb29bb0e8b031887630fe7a96c",
"referenceCount": 25,
"citationCount": 89808,
"influentialCitationCount": 5835,
"isOpenAccess": true,
"fieldsOfStudy": [
"Mathematics",
"Computer Science"
]
},
{
"title": ": A flexible random forest-based feature importance",
"abstract": null,
"year": null,
"venue": "Mdi+",
"authors": [],
"externalIds": null,
"url": null,
"referenceCount": null,
"citationCount": null,
"influentialCitationCount": null,
"isOpenAccess": null,
"fieldsOfStudy": null
},
{
"title": "Exploring language models for medical question answering",
"abstract": null,
"year": null,
"venue": "Medlm:",
"authors": [],
"externalIds": null,
"url": null,
"referenceCount": null,
"citationCount": null,
"influentialCitationCount": null,
"isOpenAccess": null,
"fieldsOfStudy": null
},
{
"title": "We develop ED-Copilot, an RL-trained language model that enhances ED workflow by suggesting informative laboratory groups and flagging high-risk patients using a pre-trained language model backbone",
"abstract": null,
"year": null,
"venue": "BioGPT",
"authors": [],
"externalIds": null,
"url": null,
"referenceCount": null,
"citationCount": null,
"influentialCitationCount": null,
"isOpenAccess": null,
"fieldsOfStudy": null
}
]
},
"Outlining the Borders for LLM Applications in Patient Education: Developing an Expert-in-the-Loop LLM-Powered Chatbot for Prostate Cancer Patient Education": {
"paper_title": "Outlining the Borders for LLM Applications in Patient Education: Developing an Expert-in-the-Loop LLM-Powered Chatbot for Prostate Cancer Patient Education",
"arxiv_id": "2409.19100",
"authors": [
"Yuexing Hao",
"J. Holmes",
"M. Waddle",
"Nathan Yu",
"Kirstin Vickers",
"Heather Preston",
"Drew Margolin",
"Corinna E. Lockenhoff",
"Aditya Vashistha",
"Marzyeh Ghassemi",
"Saleh Kalantari",
"Wei Liu"
],
"year": 2024,
"venue": "",
"abstract": "Cancer patients often struggle to transition swiftly to treatment due to limited institutional resources, lack of sophisticated professional guidance, and low health literacy. The emergence of Large Language Models (LLMs) offers new opportunities for such patients to access the wealth of existing patient education materials. The current paper presents the development process for an LLM-based chatbot focused on prostate cancer education, including needs assessment, co-design, and usability studies. The resulting application, MedEduChat, integrates with patients' electronic health record data and features a closed-domain, semi-structured, patient-centered approach to address real-world needs. This paper contributes to the growing field of patient-LLM interaction by demonstrating the potential of LLM-based chatbots to enhance prostate cancer patient education and by offering co-design guidelines for future LLM-based healthcare downstream applications.",
"references": []
},
"What is the Role of Large Language Models in the Evolution of Astronomy Research?": {
"paper_title": "What is the Role of Large Language Models in the Evolution of Astronomy Research?",
"arxiv_id": "2409.20252",
"authors": [
"Morgan Fouesneau",
"I. Momcheva",
"U. Chadayammuri",
"Mariia Demianenko",
"Antoine Dumont",
"R. E. Hviding",
"K. A. Kahle",
"Nadiia Pulatova",
"Bhavesh Rajpoot",
"Marten Scheuck",
"R. Seeburger",
"Dmitry Semenov",
"Jaime I. Villasenor"
],
"year": 2024,
"venue": "",
"abstract": "ChatGPT and other state-of-the-art large language models (LLMs) are rapidly transforming multiple fields, offering powerful tools for a wide range of applications. These models, commonly trained on vast datasets, exhibit human-like text generation capabilities, making them useful for research tasks such as ideation, literature review, coding, drafting, and outreach. We conducted a study involving 13 astronomers at different career stages and research fields to explore LLM applications across diverse tasks over several months and to evaluate their performance in research-related activities. This work was accompanied by an anonymous survey assessing participants' experiences and attitudes towards LLMs. We provide a detailed analysis of the tasks attempted and the survey answers, along with specific output examples. Our findings highlight both the potential and limitations of LLMs in supporting research while also addressing general and research-specific ethical considerations. We conclude with a series of recommendations, emphasizing the need for researchers to complement LLMs with critical thinking and domain expertise, ensuring these tools serve as aids rather than substitutes for rigorous scientific inquiry.",
"references": []
},
"Nanosecond hardware regression trees in FPGA at the LHC": {
"paper_title": "Nanosecond hardware regression trees in FPGA at the LHC",
"arxiv_id": "2409.20506",
"authors": [
"Pavel Serhiayenka",
"Stephen Roche",
"Benjamin Carlson",
"Tae Min Hong"
],
"year": 2024,
"venue": "",
"abstract": "We present a generic parallel implementation of the decision tree-based machine learning (ML) method in hardware description language (HDL) on field programmable gate arrays (FPGA). A regression problem in high energy physics at the Large Hadron Collider is considered: the estimation of the magnitude of missing transverse momentum using boosted decision trees (BDT). A forest of twenty decision trees each with a maximum depth of ten using eight input variables of 16-bit precision is executed with a latency of less than 10 ns using O(0.1%) resources on Xilinx UltraScale+ VU9P -- approximately ten times faster and five times smaller compared to similar designs using high level synthesis (HLS) -- without the use of digital signal processors (DSP) while eliminating the use of block RAM (BRAM). We also demonstrate a potential application in the estimation of muon momentum for ATLAS RPC at HL-LHC.",
"references": [
{
"title": "Search for Nearly Mass-Degenerate Higgsinos Using Low-Momentum Mildly Displaced Tracks in pp Collisions at sqrt[s]=13 TeV with the ATLAS Detector.",
"abstract": "Higgsinos with masses near the electroweak scale can solve the hierarchy problem and provide a dark matter candidate, while detecting them at the LHC remains challenging if their mass splitting is O(1 GeV). This Letter presents a novel search for nearly mass-degenerate Higgsinos in events with an energetic jet, missing transverse momentum, and a low-momentum track with a significant transverse impact parameter using 140 fb^{-1} of proton-proton collision data at sqrt[s]=13 TeV collected by the ATLAS experiment. For the first time since LEP, a range of mass splittings between the lightest charged and neutral Higgsinos from 0.3 to 0.9 GeV is excluded at 95% confidence level, with a maximum reach of approximately 170 GeV in the Higgsino mass.",
"year": 2024,
"venue": "Physical Review Letters",
"authors": [
"G. Aad",
"E. Aakvaag",
"B. Abbott",
"K. Abeling",
"N. J. Abicht",
"S. Abidi",
"M. Aboelela",
"A. Aboulhorma",
"H. Abramowicz",
"H. Abreu",
"Y. Abulaiti",
"B. Acharya",
"A. Ackermann",
"C. Adam Bourdarios",
"L. Adamczyk",
"S. Addepalli",
"M. Addison",
"J. Adelman",
"A. Adiguzel",
"T. Adye",
"A. Affolder",
"Y. Afik",
"M. N. Agaras",
"J. Agarwala",
"A. Aggarwal",
"C. Agheorghiesei",
"A. Ahmad",
"F. Ahmadov",
"W. Ahmed",
"S. Ahuja",
"X. Ai",
"G. Aielli",
"A. Aikot",
"M. Ait Tamlihat",
"B. Aitbenchikh",
"I. Aizenberg",
"M. Akbiyik",
"T. Åkesson",
"A. V. Akimov",
"D. Akiyama",
"N. N. Akolkar",
"S. Aktas",
"K. Al Khoury",
"G. Alberghi",
"J. Albert",
"P. Albicocco",
"G. L. Albouy",
"S. Alderweireldt",
"Z. L. Alegria",
"M. Aleksa",
"I. Aleksandrov",
"C. Alexa",
"T. Alexopoulos",
"F. Alfonsi",
"M. Algren",
"M. Alhroob",
"B. Ali",
"H. Ali",
"S. Ali",
"S. W. Alibocus",
"M. Aliev",
"G. Alimonti",
"W. Alkakhi",
"C. Allaire",
"B. Allbrooke",
"J. F. Allen",
"C. Allendes Flores",
"P. Allport",
"A. Aloisio",
"F. Alonso",
"C. Alpigiani",
"M. Alvarez Estevez",
"A. Álvarez Fernández",
"M. Alves Cardoso",
"M. Alviggi",
"M. Aly",
"Y. Amaral Coutinho",
"A. Ambler",
"C. Amelung",
"M. Amerl",
"C. Ames",
"D. Amidei",
"K. Amirie",
"S. Amor Dos Santos",
"K. R. Amos",
"S. An",
"V. Ananiev",
"C. Anastopoulos",
"T. Andeen",
"J. Anders",
"S. Y. Andrean",
"A. Andreazza",
"S. Angelidakis",
"A. Angerami",
"A. Anisenkov",
"A. Annovi",
"C. Antel",
"M. T. Anthony",
"E. Antipov",
"M. Antonelli",
"F. Anulli",
"M. Aoki",
"T. Aoki",
"J. Aparisi Pozo",
"M. Aparo",
"L. Aperio Bella",
"C. Appelt",
"A. Apyan",
"S. Arbiol Val",
"C. Arcangeletti",
"A. Arce",
"E. Arena",
"J. Arguin",
"S. Argyropoulos",
"J. Arling",
"O. Arnaez",
"H. Arnold",
"G. Artoni",
"H. Asada",
"K. Asai",
"S. Asai",
"N. Asbah",
"K. Assamagan",
"R. Astalos",
"K. Astrand",
"S. Atashi",
"R. Atkin",
"M. Atkinson",
"H. Atmani",
"P. Atmasiddha",
"K. Augsten",
"S. Auricchio",
"A. Auriol",
"V. A. Austrup",
"G. Avolio",
"K. Axiotis",
"G. Azuelos",
"D. Babal",
"H. Bachacou",
"K. Bachas",
"A. Bachiu",
"F. Backman",
"A. Badea",
"T. M. Baer",
"P. Bagnaia",
"M. Bahmani",
"D. Bahner",
"K. Bai",
"A. Bailey",
"J. Baines",
"L. Baines",
"O. Baker",
"E. Bakos",
"D. Bakshi Gupta",
"V. Balakrishnan",
"R. Balasubramanian",
"E. Baldin",
"P. Balek",
"E. Ballabene",
"F. Balli",
"L. M. Baltes",
"W. Balunas",
"J. Balz",
"E. Banas",
"M. Bandieramonte",
"A. Bandyopadhyay",
"S. Bansal",
"L. Barak",
"M. Barakat",
"E. Barberio",
"D. Barberis",
"M. Barbero",
"M. Z. Barel",
"K. N. Barends",
"T. Barillari",
"M. Barisits",
"T. Barklow",
"P. Baron",
"D. Baron Moreno",
"A. Baroncelli",
"G. Barone",
"A. Barr",
"J. Barr",
"F. Barreiro",
"J. Barreiro Guimarães da Costa",
"U. Barron",
"M. Barros Teixeira",
"S. Barsov",
"F. Bartels",
"R. Bartoldus",
"A. E. Barton",
"P. Bartos",
"A. Basan",
"M. Baselga",
"A. Bassalat",
"M. J. Basso",
"R. Bates",
"S. Batlamous",
"Batool Babaei Jahromi",
"M. Battaglia",
"D. Battulga",
"M. Bauce",
"M. Bauer",
"P. Bauer",
"L. Bazzano Hurrell",
"J. Beacham",
"T. Beau",
"J. Y. Beaucamp",
"P. Beauchemin",
"P. Bechtle",
"H. Beck",
"K. Becker",
"A. Beddall",
"V. Bednyakov",
"C. Bee",
"L. J. Beemster",
"T. Beermann",
"M. Begalli",
"M. Begel",
"A. Behera",
"J. Behr",
"J. F. Beirer",
"F. Beisiegel",
"M. Belfkir",
"G. Bella",
"L. Bellagamba",
"A. Bellerive",
"P. Bellos",
"K. Beloborodov",
"D. Benchekroun",
"F. Bendebba",
"Y. Benhammou",
"K. Benkendorfer",
"L. Beresford",
"M. Beretta",
"E. Bergeaas Kuutmann",
"N. Berger",
"B. Bergmann",
"J. Beringer",
"G. Bernardi",
"C. Bernius",
"F. Bernlochner",
"F. Bernon",
"A. Berrocal Guardia",
"T. Berry",
"P. Berta",
"A. Berthold",
"S. Bethke",
"A. Betti",
"A. Bevan",
"N. K. Bhalla",
"M. Bhamjee",
"S. Bhatta",
"D. S. Bhattacharya",
"P. Bhattarai",
"K. D. Bhide",
"V. Bhopatkar",
"R. Bianchi",
"Giuliano Bianco",
"O. Biebel",
"R. Bielski",
"M. Biglietti",
"C. S. Billingsley",
"M. Bindi",
"A. Bingul",
"C. Bini",
"A. Biondini",
"C. Birch-sykes",
"G. Bird",
"M. Birman",
"M. Biros",
"S. Biryukov",
"T. Bisanz",
"E. Bisceglie",
"J. Biswal",
"D. Biswas",
"K. Bjørke",
"I. Bloch",
"A. Blue",
"U. Blumenschein",
"J. Blumenthal",
"V. Bobrovnikov",
"M. Boehler",
"B. Boehm",
"D. Bogavac",
"A. Bogdanchikov",
"C. Bohm",
"V. Boisvert",
"P. Bokan",
"T. Bold",
"M. Bomben",
"M. Bona",
"M. Boonekamp",
"C. D. Booth",
"A. Borbély",
"I. Bordulev",
"H. Borecka-Bielska",
"G. Borissov",
"D. Bortoletto",
"D. Boscherini",
"M. Bosman",
"J. Bossio Sola",
"K. Bouaouda",
"N. Bouchhar",
"J. Boudreau",
"E. Bouhova-Thacker",
"D. Boumediene",
"R. Bouquet",
"A. Boveia",
"J. Boyd",
"D. Boye",
"I. Boyko",
"J. Bracinik",
"N. Brahimi",
"G. Brandt",
"O. Brandt",
"F. Braren",
"B. Brau",
"J. Brau",
"R. Brener",
"L. Brenner",
"R. Brenner",
"S. Bressler",
"D. Britton",
"D. Britzger",
"I. Brock",
"G. Brooijmans",
"E. Brost",
"L. M. Brown",
"L. Bruce",
"T. L. Bruckler",
"P. Bruckman de Renstrom",
"B. Brüers",
"A. Bruni",
"G. Bruni",
"M. Bruschi",
"N. Bruscino",
"T. Buanes",
"Q. Buat",
"D. Buchin",
"A. Buckley",
"O. Bulekov",
"B. Bullard",
"S. Burdin",
"C. Burgard",
"A. Burger",
"B. Burghgrave",
"O. Burlayenko",
"J. Burr",
"C. D. Burton",
"J. C. Burzynski",
"E. L. Busch",
"V. Büscher",
"P. Bussey",
"J. Butler",
"C. Buttar",
"J. Butterworth",
"W. Buttinger",
"C. Buxo Vazquez",
"A. Buzykaev",
"S. Cabrera Urbán",
"L. Cadamuro",
"D. Caforio",
"H. Cai",
"Y. Cai",
"Y. Cai",
"V. Cairo",
"O. Cakir",
"N. Calace",
"P. Calafiura",
"G. Calderini",
"P. Calfayan",
"G. Callea",
"L. Caloba",
"D. Calvet",
"S. Calvet",
"M. Calvetti",
"R. Camacho Toro",
"S. Camarda",
"D. Camarero Munoz",
"P. Camarri",
"M. T. Camerlingo",
"D. Cameron",
"C. Camincher",
"M. Campanelli",
"A. Camplani",
"V. Canale",
"A. C. Canbay",
"E. Canonero",
"J. Cantero",
"Y. Cao",
"F. Capocasa",
"M. Capua",
"A. Carbone",
"R. Cardarelli",
"J. Cardenas",
"F. Cardillo",
"G. Carducci",
"T. Carli",
"G. Carlino",
"J. Carlotto",
"B. Carlson",
"E. M. Carlson",
"L. Carminati",
"A. Carnelli",
"M. Carnesale",
"S. Caron",
"E. Carquin",
"S. Carrá",
"G. Carratta",
"A. M. Carroll",
"T. M. Carter",
"M. Casado",
"M. Caspar",
"F. L. Castillo",
"L. Castillo García",
"V. Castillo Gimenez",
"N. Castro",
"A. Catinaccio",
"J. Catmore",
"T. Cavaliere",
"V. Cavaliere",
"N. Cavalli",
"Y. C. Cekmecelioglu",
"E. Celebi",
"S. Cella",
"F. Celli",
"M. S. Centonze",
"V. Cepaitis",
"K. Cerny",
"A. S. Cerqueira",
"A. Cerri",
"L. Cerrito",
"F. Cerutti",
"B. Cervato",
"A. Cervelli",
"G. Cesarini",
"S. Cetin",
"D. Chakraborty",
"J. Chan",
"W. Chan",
"J. D. Chapman",
"E. Chapon",
"B. Chargeishvili",
"D. Charlton",
"M. Chatterjee",
"C. Chauhan",
"Y. Che",
"S. Chekanov",
"S. Chekulaev",
"G. Chelkov",
"A. Chen",
"B. Chen",
"B. Chen",
"H. Chen",
"H. Chen",
"J. Chen",
"J. Chen",
"M. Chen",
"S. Chen",
"S. J. Chen",
"X. Chen",
"X. Chen",
"Y. Chen",
"C. L. Cheng",
"H. C. Cheng",
"S. Cheong",
"A. Cheplakov",
"E. Cheremushkina",
"E. Cherepanova",
"R. Cherkaoui El Moursli",
"E. Cheu",
"K. Cheung",
"L. Chevalier",
"V. Chiarella",
"G. Chiarelli",
"N. Chiedde",
"G. Chiodini",
"A. Chisholm",
"A. Chitan",
"M. Chitishvili",
"M. Chizhov",
"K. Choi",
"Y. Chou",
"E. Chow",
"K. Chu",
"M. Chu",
"X. Chu",
"J. Chudoba",
"J. Chwastowski",
"D. Cieri",
"K. M. Ciesla",
"V. Cindro",
"A. Ciocio",
"F. Cirotto",
"Z. Citron",
"M. Citterio",
"D. Ciubotaru",
"A. Clark",
"P. J. Clark"
],
"externalIds": {
"ArXiv": "2401.14046",
"DOI": "10.1103/PhysRevLett.132.221801",
"CorpusId": 267211914,
"PubMed": "38877905"
},
"url": "https://www.semanticscholar.org/paper/788987d4ebb165fb085a8ab2a27dd140d62be913",
"referenceCount": 78,
"citationCount": 3,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Physics",
"Medicine"
]
},
{
"title": "Testing a Neural Network for Anomaly Detection in the CMS Global Trigger Test Crate during Run 3",
"abstract": "We present the deployment and testing of an autoencoder trained for unbiased detection of new physics signatures in the CMS Level-1 Global Trigger (GT) test crate during LHC Run 3. The GT test crate is a copy of the main GT system, receiving the same input data, but whose output is not used to trigger the readout of CMS, providing a platform for thorough testing of new trigger algorithms on live data, but without interrupting data taking. We describe the integration of the Neural Network into the GT test crate, and the monitoring, testing, and validation of the algorithm during proton collisions.",
"year": 2023,
"venue": "Journal of Instrumentation",
"authors": [
"N. Zipper"
],
"externalIds": {
"ArXiv": "2312.10009",
"DOI": "10.1088/1748-0221/19/03/C03029",
"CorpusId": 266335776
},
"url": "https://www.semanticscholar.org/paper/df2fa4233280cd569374b147389e13ff3904443e",
"referenceCount": 12,
"citationCount": 3,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Physics"
]
},
{
"title": "Nanosecond anomaly detection with decision trees and real-time application to exotic Higgs decays",
"abstract": null,
"year": 2023,
"venue": "Nature Communications",
"authors": [
"Stephen Roche",
"Quincy Bayer",
"Benjamin Carlson",
"William Ouligian",
"Pavel Serhiayenka",
"Joerg Stelzer",
"Tae Min Hong"
],
"externalIds": {
"PubMedCentral": "11045859",
"ArXiv": "2304.03836",
"DOI": "10.1038/s41467-024-47704-8",
"CorpusId": 269148389,
"PubMed": "38664390"
},
"url": "https://www.semanticscholar.org/paper/be73d8c6d80a86f564ee0bd375db17bb79b4b649",
"referenceCount": 114,
"citationCount": 6,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Physics",
"Medicine"
]
},
{
"title": "Development of the ATLAS Liquid Argon Calorimeter Readout Electronics and Machine Learning for the HL-LHC",
"abstract": "The High Luminosity era of the Large Hadron Collider (LHC) starting in 2029 promises exciting discovery potential, giving unprecedented sensitivity to key new physics models and precise characterization of the Higgs boson. In order to maintain current performance in this challenging environment, the ATLAS liquid argon electromagnetic calorimeter will get entirely new electronics that reads out the entire detector with full precision at the LHC frequency of 40 MHz, and provides high granularity trigger information, while withstanding high operational radiation doses. New results will be presented from both front-end and off-detector component development, along with highlights from machine learning applications. The future steps and outlook of the project will be discussed, with an eye towards installation in the ATLAS cavern beginning in 2026.",
"year": 2022,
"venue": "Instruments",
"authors": [
"J. Gonski"
],
"externalIds": {
"DOI": "10.3390/instruments6030028",
"CorpusId": 251938933
},
"url": "https://www.semanticscholar.org/paper/ecfe3aff88ba950fda56467ad6e5201ca8c179cf",
"referenceCount": 1,
"citationCount": 1,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": null
},
{
"title": "Nanosecond machine learning regression with deep boosted decision trees in FPGA for high energy physics",
"abstract": "We present a novel application of the machine learning / artificial intelligence method called boosted decision trees to estimate physical quantities on field programmable gate arrays (FPGA). The software package fwXmachina features a new architecture called parallel decision paths that allows for deep decision trees with arbitrary number of input variables. It also features a new optimization scheme to use different numbers of bits for each input variable, which produces optimal physics results and ultraefficient FPGA resource utilization. Problems in high energy physics of proton collisions at the Large Hadron Collider (LHC) are considered. Estimation of missing transverse momentum (ET miss) at the first level trigger system at the High Luminosity LHC (HL-LHC) experiments, with a simplified detector modeled by Delphes, is used to benchmark and characterize the firmware performance. The firmware implementation with a maximum depth of up to 10 using eight input variables of 16-bit precision gives a latency value of 𝒪(10) ns, independent of the clock speed, and 𝒪(0.1)% of the available FPGA resources without using digital signal processors.",
"year": 2022,
"venue": "Journal of Instrumentation",
"authors": [
"Benjamin Carlson",
"Quincy Bayer",
"Taeyang Hong",
"Stephen Roche"
],
"externalIds": {
"ArXiv": "2207.05602",
"DOI": "10.1088/1748-0221/17/09/P09039",
"CorpusId": 250451172
},
"url": "https://www.semanticscholar.org/paper/7862dbf2d5d70353ef97c448a4da06c87cf6678b",
"referenceCount": 65,
"citationCount": 7,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Physics"
]
},
{
"title": "Search for invisible Higgs-boson decays in events with vector-boson fusion signatures using 139 fb−1 of proton-proton data recorded by the ATLAS experiment",
"abstract": null,
"year": 2022,
"venue": "Journal of High Energy Physics",
"authors": [
"G. Aad",
"B. Abbott",
"D. Abbott",
"A. Abed Abud",
"K. Abeling",
"D. Abhayasinghe",
"S. H. Abidi",
"A. Aboulhorma",
"H. Abramowicz",
"H. Abreu",
"Y. Abulaiti",
"A. Abusleme Hoffman",
"B. Acharya",
"B. Achkar",
"L. Adam",
"C. Adam Bourdarios",
"L. Adamczyk",
"L. Adamek",
"S. Addepalli",
"J. Adelman",
"A. Adiguzel",
"S. Adorni",
"T. Adye",
"A. Affolder",
"Y. Afik",
"M. N. Agaras",
"J. Agarwala",
"A. Aggarwal",
"C. Agheorghiesei",
"J. A. Aguilar-Saavedra",
"A. Ahmad",
"F. Ahmadov",
"W. S. Ahmed",
"X. Ai",
"G. Aielli",
"I. Aizenberg",
"M. Akbiyik",
"T. Åkesson",
"A. Akimov",
"K. Al Khoury",
"G. Alberghi",
"J. Albert",
"P. Albicocco",
"M. Alconada Verzini",
"S. Alderweireldt",
"M. Aleksa",
"I. Aleksandrov",
"C. Alexa",
"T. Alexopoulos",
"A. Alfonsi",
"F. Alfonsi",
"M. Alhroob",
"B. Ali",
"S. Ali",
"M. Aliev",
"G. Alimonti",
"C. Allaire",
"B. Allbrooke",
"P. Allport",
"A. Aloisio",
"F. Alonso",
"C. Alpigiani",
"E. Alunno Camelia",
"M. Alvarez Estevez",
"M. Alviggi",
"Y. Amaral Coutinho",
"A. Ambler",
"L. Ambroz",
"C. Amelung",
"D. Amidei",
"S. Amor Dos Santos",
"S. Amoroso",
"K. Amos",
"C. Amrouche",
"V. Ananiev",
"C. Anastopoulos",
"N. Andari",
"T. Andeen",
"J. Anders",
"S. Y. Andrean",
"A. Andreazza",
"S. Angelidakis",
"A. Angerami",
"A. Anisenkov",
"A. Annovi",
"C. Antel",
"M. Anthony",
"E. Antipov",
"M. Antonelli",
"D. Antrim",
"F. Anulli",
"M. Aoki",
"J. Aparisi Pozo",
"M. Aparo",
"L. Aperio Bella",
"C. Appelt",
"N. Aranzabal",
"V. Araujo Ferraz",
"C. Arcangeletti",
"A. Arce",
"E. Arena",
"J. Arguin",
"S. Argyropoulos",
"J. Arling",
"A. Armbruster",
"O. Arnaez",
"H. Arnold",
"Z. Arrubarrena Tame",
"G. Artoni",
"H. Asada",
"K. Asai",
"S. Asai",
"N. Asbah",
"E. Asimakopoulou",
"J. Assahsah",
"K. Assamagan",
"R. Astalos",
"R. Atkin",
"M. Atkinson",
"N. B. Atlay",
"H. Atmani",
"P. Atmasiddha",
"K. Augsten",
"S. Auricchio",
"V. A. Austrup",
"G. Avner",
"G. Avolio",
"M. Ayoub",
"G. Azuelos",
"D. Babal",
"H. Bachacou",
"K. Bachas",
"A. Bachiu",
"F. Backman",
"A. Badea",
"P. Bagnaia",
"M. Bahmani",
"A. Bailey",
"V. R. Bailey",
"J. Baines",
"C. Bakalis",
"O. Baker",
"P. Bakker",
"E. Bakos",
"D. Bakshi Gupta",
"S. Balaji",
"R. Balasubramanian",
"E. Baldin",
"P. Balek",
"E. Ballabene",
"F. Balli",
"L. M. Baltes",
"W. Balunas",
"J. Balz",
"E. Banas",
"M. Bandieramonte",
"A. Bandyopadhyay",
"S. Bansal",
"L. Barak",
"E. Barberio",
"D. Barberis",
"M. Barbero",
"G. Barbour",
"K. N. Barends",
"T. Barillari",
"M. Barisits",
"J. Barkeloo",
"T. Barklow",
"R. Barnett",
"P. Baron",
"A. Baroncelli",
"G. Barone",
"A. Barr",
"L. Barranco Navarro",
"F. Barreiro",
"J. Barreiro Guimarães da Costa",
"U. Barron",
"S. Barsov",
"F. Bartels",
"R. Bartoldus",
"G. Bartolini",
"A. Barton",
"P. Bartos",
"A. Basalaev",
"A. Basan",
"M. Baselga",
"I. Bashta",
"A. Bassalat",
"M. Basso",
"C. Basson",
"R. Bates",
"S. Batlamous",
"J. Batley",
"B. Batool",
"M. Battaglia",
"M. Bauce",
"F. Bauer",
"P. Bauer",
"A. Bayirli",
"J. Beacham",
"T. Beau",
"P. Beauchemin",
"F. Becherer",
"P. Bechtle",
"H. Beck",
"K. Becker",
"C. Becot",
"A. Beddall",
"V. Bednyakov",
"C. Bee",
"L. Beemster",
"T. Beermann",
"M. Begalli",
"M. Begel",
"A. Behera",
"J. Behr",
"C. Beirao Da Cruz E Silva",
"J. F. Beirer",
"F. Beisiegel",
"M. Belfkir",
"G. Bella",
"L. Bellagamba",
"A. Bellerive",
"P. Bellos",
"K. Beloborodov",
"K. Belotskiy",
"N. Belyaev",
"D. Benchekroun",
"Y. Benhammou",
"D. Benjamin",
"M. Benoit",
"J. Bensinger",
"S. Bentvelsen",
"L. Beresford",
"M. Beretta",
"D. Berge",
"E. Bergeaas Kuutmann",
"N. Berger",
"B. Bergmann",
"J. Beringer",
"S. Berlendis",
"G. Bernardi",
"C. Bernius",
"F. Bernlochner",
"T. Berry",
"P. Berta",
"I. Bertram",
"O. Bessidskaia Bylund",
"S. Bethke",
"A. Betti",
"A. Bevan",
"S. Bhatta",
"D. Bhattacharya",
"P. Bhattarai",
"V. Bhopatkar",
"R. Bi",
"R. Bianchi",
"O. Biebel",
"R. Bielski",
"N. Biesuz",
"M. Biglietti",
"T. Billoud",
"M. Bindi",
"A. Bingul",
"C. Bini",
"S. Biondi",
"A. Biondini",
"C. Birch-sykes",
"G. Bird",
"M. Birman",
"T. Bisanz",
"J. Biswal",
"D. Biswas",
"A. Bitadze",
"K. Bjørke",
"I. Bloch",
"C. Blocker",
"A. Blue",
"U. Blumenschein",
"J. Blumenthal",
"G. Bobbink",
"V. Bobrovnikov",
"M. Boehler",
"D. Bogavac",
"A. Bogdanchikov",
"C. Bohm",
"V. Boisvert",
"P. Bokan",
"T. Bold",
"M. Bomben",
"M. Bona",
"M. Boonekamp",
"C. Booth",
"A. Borbély",
"H. Borecka-Bielska",
"L. S. Borgna",
"G. Borissov",
"Daniela Bortoletto",
"D. Boscherini",
"M. Bosman",
"J. Bossio Sola",
"K. Bouaouda",
"J. Boudreau",
"E. Bouhova-Thacker",
"D. Boumediene",
"R. Bouquet",
"A. Boveia",
"J. Boyd",
"D. Boye",
"I. Boyko",
"J. Bracinik",
"N. Brahimi",
"G. Brandt",
"O. Brandt",
"F. Braren",
"B. Brau",
"J. Brau",
"W. Breaden Madden",
"K. Brendlinger",
"R. Brener",
"L. Brenner",
"R. Brenner",
"S. Bressler",
"B. Brickwedde",
"D. Britton",
"D. Britzger",
"I. Brock",
"G. Brooijmans",
"W. Brooks",
"E. Brost",
"P. Bruckman de Renstrom",
"B. Brüers",
"D. Bruncko",
"A. Bruni",
"G. Bruni",
"M. Bruschi",
"N. Bruscino",
"L. Bryngemark",
"T. Buanes",
"Q. Buat",
"P. Buchholz",
"A. Buckley",
"I. Budagov",
"M. K. Bugge",
"O. Bulekov",
"B. Bullard",
"S. Burdin",
"C. Burgard",
"A. Burger",
"B. Burghgrave",
"J. Burr",
"C. Burton",
"J. Burzynski",
"E. L. Busch",
"V. Büscher",
"P. Bussey",
"J. Butler",
"C. Buttar",
"J. Butterworth",
"W. Buttinger",
"C. Buxo Vazquez",
"A. Buzykaev",
"G. Cabras",
"S. Cabrera Urbán",
"D. Caforio",
"Han Cai",
"Y. Cai",
"V. Cairo",
"O. Cakir",
"N. Calace",
"P. Calafiura",
"G. Calderini",
"P. Calfayan",
"G. Callea",
"L. Caloba",
"D. Calvet",
"S. Calvet",
"T. Calvet",
"M. Calvetti",
"R. Camacho Toro",
"S. Camarda",
"D. Camarero Munoz",
"P. Camarri",
"M. Camerlingo",
"D. Cameron",
"C. Camincher",
"M. Campanelli",
"A. Camplani",
"V. Canale",
"A. Canesse",
"M. Cano Bret",
"J. Cantero",
"Y. Cao",
"F. Capocasa",
"M. Capua",
"A. Carbone",
"R. Cardarelli",
"J. Cardenas",
"F. Cardillo",
"G. Carducci",
"T. Carli",
"G. Carlino",
"B. Carlson",
"E. Carlson",
"L. Carminati",
"M. Carnesale",
"S. Caron",
"E. Carquin",
"S. Carrá",
"G. Carratta",
"J. Carter",
"T. Carter",
"D. Casadei",
"M. P. Casado",
"A. Casha",
"E. G. Castiglia",
"F. Castillo",
"L. Castillo García",
"V. Castillo Gimenez",
"N. Castro",
"A. Catinaccio",
"J. Catmore",
"V. Cavaliere",
"N. Cavalli",
"V. Cavasinni",
"E. Celebi",
"F. Celli",
"M. S. Centonze",
"K. Cerny",
"A. Cerqueira",
"A. Cerri",
"L. Cerrito",
"F. Cerutti",
"A. Cervelli",
"S. Çetin",
"Z. Chadi",
"D. Chakraborty",
"M. Chala",
"J. Chan",
"W. Chan",
"W. Y. Chan",
"J. Chapman",
"B. Chargeishvili",
"D. Charlton",
"T. Charman",
"M. Chatterjee",
"S. Chekanov",
"S. Chekulaev",
"G. Chelkov",
"A. Chen",
"B. Chen",
"C. Chen",
"H. Chen",
"J. Chen",
"S. Chen",
"S. Chen",
"X. Chen",
"Y. Chen",
"C. Cheng",
"H. C. Cheng",
"A. Cheplakov",
"E. Cheremushkina",
"E. Cherepanova",
"R. Cherkaoui El Moursli",
"E. Cheu",
"K. Cheung",
"L. Chevalier",
"V. Chiarella",
"G. Chiarelli",
"G. Chiodini",
"A. Chisholm",
"A. Chitan",
"Y. Chiu",
"M. Chizhov",
"K. Choi",
"A. Chomont",
"Y. Chou",
"E. Chow",
"T. Chowdhury",
"L. D. Christopher",
"M. Chu",
"X. Chu",
"J. Chudoba",
"J. Chwastowski",
"D. Cieri",
"K. Ciesla",
"V. Cindro",
"A. Ciocio",
"F. Cirotto",
"Z. Citron",
"M. Citterio",
"D. Ciubotaru",
"B. Ciungu",
"A. Clark",
"P. Clark",
"J. Clavijo Columbie",
"S. E. Clawson",
"C. Clement",
"L. Clissa",
"Y. Coadou"
],
"externalIds": {
"ArXiv": "2202.07953",
"DOI": "10.1007/JHEP08(2022)104",
"CorpusId": 251551851
},
"url": "https://www.semanticscholar.org/paper/fbb6179fc3b9749e02069431342bff52b0ff0631",
"referenceCount": 141,
"citationCount": 72,
"influentialCitationCount": 9,
"isOpenAccess": true,
"fieldsOfStudy": [
"Physics"
]
},
{
"title": "Fast muon tracking with machine learning implemented in FPGA",
"abstract": null,
"year": 2022,
"venue": "Nuclear Instruments and Methods in Physics Research Section A : Accelerators, Spectrometers, Detectors and Associated Equipment",
"authors": [
"Changliang Sun",
"T. Nakajima",
"Y. Mitsumori",
"Y. Horii",
"M. Tomoto"
],
"externalIds": {
"ArXiv": "2202.04976",
"DOI": "10.1016/j.nima.2022.167546",
"CorpusId": 246706258
},
"url": "https://www.semanticscholar.org/paper/a77d28bb494663f6c138461f0c0209469f6bbdc2",
"referenceCount": 31,
"citationCount": 8,
"influentialCitationCount": 1,
"isOpenAccess": true,
"fieldsOfStudy": [
"Physics"
]
},
{
"title": "Development of a resource-efficient FPGA-based neural network regression model for the ATLAS muon trigger upgrades",
"abstract": null,
"year": 2022,
"venue": "The European Physical Journal C",
"authors": [
"R. Ospanov",
"C. Feng",
"W. Dong",
"Wenhao Feng",
"Kan Zhang",
"Shining Yang"
],
"externalIds": {
"ArXiv": "2201.06288",
"DOI": "10.1140/epjc/s10052-022-10521-8",
"CorpusId": 246015634
},
"url": "https://www.semanticscholar.org/paper/c996ad12aa315c14303b6f6feeb7ef3a26d05d91",
"referenceCount": 35,
"citationCount": 4,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Physics"
]
},
{
"title": "Autoencoders on field-programmable gate arrays for real-time, unsupervised new physics detection at 40 MHz at the Large Hadron Collider",
"abstract": null,
"year": 2021,
"venue": "Nature Machine Intelligence",
"authors": [
"E. Govorkova",
"E. Puljak",
"T. Aarrestad",
"T. James",
"V. Loncar",
"M. Pierini",
"Adrian Alan Pol",
"Nicolò Ghielmetti",
"Maksymilian Graczyk",
"S. Summers",
"J. Ngadiuba",
"Thong Q. Nguyen",
"Javier Mauricio Duarte",
"Zhenbin Wu"
],
"externalIds": {
"DBLP": "journals/natmi/GovorkovaPAJLPP22",
"ArXiv": "2108.03986",
"DOI": "10.1038/s42256-022-00441-3",
"CorpusId": 236957254
},
"url": "https://www.semanticscholar.org/paper/7d4a66901da64c85d2c943fd5ba5fb682a92b486",
"referenceCount": 70,
"citationCount": 49,
"influentialCitationCount": 1,
"isOpenAccess": true,
"fieldsOfStudy": [
"Physics",
"Computer Science"
]
},
{
"title": "Metallographic analysis of 11 T dipole coils for High Luminosity-Large Hadron Collider (HL-LHC)",
"abstract": "For next-generation accelerator magnets for fields beyond those achievable using Nb–Ti, Nb3Sn is the most viable superconductor. The high luminosity upgrade for the Large Hadron Collider (HL-LHC) marks an important milestone as it will be the first project where Nb3Sn magnets will be installed in an accelerator. Nb3Sn is a brittle intermetallic, so magnet coils are typically wound from composite strands containing ductile precursors before heat treating the wire components to form Nb3Sn. However, some mechanical assembly is still required after the coils have been heat-treated. In this paper, we present direct evidence of cracking of the brittle Nb3Sn filaments in a prototype dipole that resulted in degraded magnet performance. The cracking of the Nb3Sn, in this case, can be attributed to an issue with the collaring process that is required in the assembly of dipole accelerator magnets. Metallographic procedures were developed to visualize cracks present in the cables, along with quantitative image analysis for location-based crack analysis. We show that the stresses experienced in the damaged coil are above the critical damage stress of Nb3Sn conductor, as evidenced by a measured Cu stabilizer hardness of 85 HV0.1, which is higher than the Cu stabilizer hardness in a reference Nb3Sn cable ten-stack that was subjected to a 210 MPa transverse compression. We also show that once the collaring procedure issue was rectified in a subsequent dipole, the Nb3Sn filaments were found to be undamaged, and the Cu stabilizer hardness values were reduced to the expected levels. This paper provides a post-mortem verification pathway to analyze the damage, provides strand level mechanical properties, which could be beneficial for improving model prediction capabilities. This method could be applied beyond Nb3Sn magnets to composite designs involving high work hardening materials.",
"year": 2020,
"venue": "",
"authors": [
"S. Balachandran",
"Jonathan Cooper",
"Orion B Van Oss",
"P. Lee",
"L. Bottura",
"A. Devred",
"F. Savary",
"C. Scheuerlein",
"Felix Wolf"
],
"externalIds": {
"MAG": "3095816663",
"DOI": "10.1088/1361-6668/abc56a",
"CorpusId": 228961732
},
"url": "https://www.semanticscholar.org/paper/85b1914951072e0105477727cd05e9920a25c87c",
"referenceCount": 30,
"citationCount": 122,
"influentialCitationCount": 18,
"isOpenAccess": true,
"fieldsOfStudy": [
"Physics"
]
},
{
"title": "The Phase-2 Upgrade of the CMS Level-1 Trigger",
"abstract": "This Technical Design Report describes the ongoing developments and plans towards the upgrade of the CMS Level-1 trigger for the High-Luminosity Large Hadron Collider.",
"year": 2020,
"venue": "",
"authors": [
"A. Zabi",
"J. Berryhill",
"E. Perez",
"A. Tapper"
],
"externalIds": {
"MAG": "3092940566",
"CorpusId": 92986748
},
"url": "https://www.semanticscholar.org/paper/12880d6af7cd45d98091ca383d34385926a2f5c7",
"referenceCount": 12,
"citationCount": 121,
"influentialCitationCount": 18,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Performance of the missing transverse momentum triggers for the ATLAS detector during Run-2 data taking",
"abstract": null,
"year": 2020,
"venue": "Journal of High Energy Physics",
"authors": [
"G. Aad",
"A. Kupco",
"T. Dreyer",
"Yufeng Wang",
"K. Jakobs",
"B. Le",
"M. Spousta",
"M. Cobal",
"Peilong Wang",
"S. Schmitt",
"J. Schovancová",
"A. Bassalat",
"A. Bassalat",
"M. Melo",
"M. Shapiro",
"G. Tarna",
"S. Zimmermann",
"T. Eifert",
"L. Rehnisch",
"S. Kuday",
"M. Sioli",
"H. Herr",
"N. Bruscino",
"J. Huston",
"T. Sumida",
"S. Robertson",
"R. Gonçalo",
"A. Snesarev",
"L. Rotonda",
"D. Duschinger",
"J. Thomas",
"J. Thomas",
"E. Carquin",
"Y. W. Y. Ng",
"S. Crépé-Renaudin",
"J. Parsons",
"W. Balunas",
"Y. Tikhonov",
"Y. Tikhonov",
"M. K. Ayoub",
"J. A. Pozo",
"C. Mwewa",
"David Miller",
"A. Ivina",
"P. Mastrandrea",
"Jan-Ulf Mjoernmark",
"W. Leight",
"A. Colijn",
"Liaoshan Shi",
"M. E. Nelson",
"K. Cerny",
"Jun Yan",
"N. Warrack",
"H. Krueger",
"J. Ocariz",
"M. Nordberg",
"C. Weber",
"Daniela Bortoletto",
"A. Lankford",
"S. Tapprogge",
"Y. Hu",
"F. Parodi",
"T. Masubuchi",
"D. E. F. Lima",
"T. LeCompte",
"Shenjian Chen",
"S. Batlamous",
"T. Martin",
"J. Poveda",
"C. Roda",
"T. N. Manh",
"M. Ouchrif",
"K. Korcyl",
"T. Lyubushkina",
"C. Grefe",
"P. Tipton",
"F. Klitzner",
"A. Valero",
"T. Kishimoto",
"K. Kawagoe",
"H. Bachacou",
"A. Policicchio",
"F. Speiser",
"S. Zambito",
"S. Karpov",
"P. Strizenec",
"C. Lester",
"S. K. Haghighat",
"J. Navarro",
"Shuzhou Zhang",
"B. Micco",
"S. Koperny",
"L. Schaefer",
"C. Bertella",
"P. Schwemling",
"F. Rizatdinova",
"E. Meoni",
"T. Holmes",
"I. Sanderswood",
"E. M. Villhauer",
"Z. Hubacek",
"C. Doglioni",
"A. Ferrante",
"L. Vigani",
"A. Nag",
"P. Malecki",
"S. R. Maschek",
"J. Stark",
"E. Yatsenko",
"P. Gessinger-Befurt",
"M. Kuze",
"B. Hooberman",
"S. Carrá",
"K. Pachal",
"D. Costanzo",
"M. Fenton",
"Jesse Liu",
"A. Klimentov",
"S. P. Griso",
"I. Panagoulias",
"T. Huffman",
"Hongbin Liu",
"T. Kuhl",
"G. Gustavino",
"M. Dyndał",
"F. An",
"M. Antonelli",
"B. Malaescu",
"A. Skaf",
"D. K. Abhayasinghe",
"K. Grimm",
"K. Grimm",
"D. Zanzi",
"Sundeep Singh",
"M. Eggleston",
"V. R. Bailey",
"A. Ezhilov",
"S. Y. Andrean",
"A. Bellerive",
"J. Masik",
"A. Loesle",
"L. Adamek",
"L. Barak",
"D. Godin",
"G. Iacobucci",
"E. Shulga",
"B. Gorini",
"J. Heilman",
"D. Zhong",
"J. Butler",
"H. Fox",
"S. Grancagnolo",
"H. Cheng",
"C. Garner",
"S. Pino",
"N. Madysa",
"G. Hallewell",
"L. Franconi",
"L. Horyn",
"D. Fassouliotis",
"J. Smith",
"Yi Liu",
"A. Tricoli",
"M. Dumancic",
"H. Iwasaki",
"M. Kuna",
"M. Giannelli",
"B. Stapf",
"T. Cao",
"Michela Paganini",
"V. Ellajosyula",
"I. Pogrebnyak",
"F. Capriles",
"E. Antipov",
"J. Faltova",
"Z. Yang",
"Y. Chiu",
"Wenzhong Guo",
"S. Swift",
"E. Lipeles",
"B. Bergmann",
"S. Artz",
"M. Oreglia",
"E. Drechsler",
"K. Einsweiler",
"F. Monticelli",
"S. Giagu",
"E. Kneringer",
"B. Freund",
"H. Yildiz",
"D. Whiteson",
"K. Shaw",
"Yingchun Zhu",
"N. Biesuz",
"J. Terron",
"D. S. Nielsen",
"M. G. Bostanabad",
"V. C. Gimenez",
"T. Barillari",
"T. Neep",
"F. Peri",
"P. Clark",
"K. Vorobev",
"J. Hrivnac",
"M. Barisits",
"T. Kunigo",
"A. Grillo",
"S. Camarda",
"T. Vale",
"R. Kopeliansky",
"M. Swiatlowski",
"N. Konstantinidis",
"O. Jinnouchi",
"H. Sadrozinski",
"V. Kazanin",
"V. Kazanin",
"E. Barberio",
"D. Noel",
"K. Tackmann",
"D. Pietreanu",
"A. Khanov",
"Y. Kano",
"D. C. Munoz",
"J. Zahreddine",
"M. Sutton",
"Y. Noguchi",
"L. Živković",
"L. Dell'Asta",
"V. Wallangen",
"K. Abeling",
"M. Vincter",
"G. Herten",
"V. Nikolaenko",
"V. Nikolaenko",
"D. Kirchmeier",
"C. Chau",
"A. Girolamo",
"N. Abraham",
"M. Elsing",
"C. Geng",
"K. Mochizuki",
"A. Ciaccio",
"B. Burghgrave",
"A. Fray",
"P. Massarotti",
"L. Rossini",
"S. Bahrasemani",
"C. J. Mcnicol",
"G. Gregorio",
"F. Corriveau",
"K. Tariq",
"G. Rodriguez",
"I. Bloch",
"K. Smolek",
"R. Brenner",
"P. Ott",
"I. Maniatis",
"A. Gomez",
"G. Marceca",
"B. Petersen",
"V. Solovyev",
"B. Haney",
"S. Gonzalez-Sevilla",
"A. Jaspan",
"P. Schacht",
"N. Whallon",
"A. Negri",
"S. Farrington",
"M. Ziolkowski",
"V. Cindro",
"P. Sommer",
"A. Minaenko",
"X. Ruan",
"P. F. Salvatore",
"M. Franklin",
"B. Mansoulie",
"Y. Qin",
"G. Galster",
"C. Leggett",
"J. Cowley",
"P. Buchholz",
"K. Zoch",
"Zuzana Blenessy",
"J. E. Lambert",
"C. Ferretti",
"D. Biedermann",
"J. Kroll",
"E. M. Shrif",
"Z. Uysal",
"A. Behera",
"H. Torre",
"C. E. Leitgeb",
"F. Tresoldi",
"S. Che",
"S. Oda",
"C. Gutschow",
"M. Saito",
"J. Stupak",
"D. P. Mungo",
"J. Vossebeld",
"M. Czurylo",
"J. Moss",
"J. Moss",
"M. Dunford",
"R. Middleton",
"A. Kowalewska",
"Kyungeon Choi",
"S. Harkusha",
"P. Saha",
"J. Hrdinka",
"R. Roehrig",
"H. Sakamoto",
"E. Hansen",
"Matt Zhang",
"A. Bailey",
"M. Biglietti",
"S. Jones",
"T. Jakoubek",
"L. Marcoccia",
"S. Connell",
"A. Doria",
"Hoang Dai Nghia Nguyen",
"M. Danninger",
"C. Blocker",
"S. Istin",
"E. Varnes",
"J. Hansen",
"M. Ghneimat",
"G. Iakovidis",
"A. Picazio",
"C. J. Treado",
"G. Jarlskog",
"K. Nagai",
"Yi Chen",
"W. Vandelli",
"T. H. Park",
"A. Salvo",
"A. Kourkoumeli-Charalampidi",
"J. H. Lindon",
"Y. Heng",
"F. Sohns",
"P. Shatalov",
"Y. Smirnov",
"S. Majewski",
"K. Sliwa",
"J. G. Rojas",
"P. Bechtle",
"M. Fiolhais",
"M. Fiolhais",
"F. H. Phillips",
"F. Ito",
"F. Ukegawa",
"T. Guillemin",
"E. Winkels",
"J. J. Kempster",
"A. Ghosh",
"Shuo Han",
"I. Maznas",
"M. Wobisch",
"K. Augsten",
"J. Ochoa",
"E. Guirriec",
"N. Belyaev",
"A. Ryzhov",
"D. Moreno",
"G. Usai",
"P. Deviveiros",
"M. Shehade",
"M. Stanitzki",
"L. Wilkins",
"B. King",
"A. P. Pages",
"M. Begel",
"G. Forcolin",
"Yongsung Kim",
"L. Morvaj",
"C. Burton",
"M. Weber",
"T. Heim",
"A. Rej",
"K. Belotskiy",
"V. W. Wong",
"Shuaiyan Kang",
"C. Agheorghiesei",
"H. Pacey",
"R. Carney",
"R. Jansky",
"A. Kotsokechagia",
"A. Undrus",
"B. Stamas",
"M. W. O'Keefe",
"J. M. I. Ponce",
"D. Boscherini",
"C. Zhu",
"D. Tovey",
"N. Semprini-Cesari",
"P. Fassnacht",
"K. Finelli",
"B. Brickwedde",
"A. Matic",
"C. David",
"L. Zwalinski",
"M. A. Verzini",
"T. Stevenson",
"Jie Yu",
"D. Boerner",
"L. Heinrich",
"G. Rovelli",
"C. Troncon",
"F. Guescini",
"J. Pascual",
"Chunhui Chen",
"S. Menke",
"I. Vulpen",
"E. Shabalina",
"G. Unal",
"R. Gardner",
"A. Fehr",
"Yu Zhang",
"S. Kazakos",
"M. Morii",
"A. Sciandra",
"Zhiqin Zhang",
"S. Xella",
"R. Iguchi",
"T. Lin",
"L. Flores",
"G. Chiodini",
"A. Caltabiano",
"Jun-Jie Guo",
"J. Gonski",
"A. Gabrielli",
"E. Akilli",
"T. Klapdor-Kleingrothaus",
"O. Kind",
"R. Schamberger",
"R. Schamberger",
"A. Schwartzman",
"Shahzad Ali",
"L. A. Bella",
"F. Ruehr",
"A. Weidberg",
"H. Hibi",
"A. Traeet",
"L. Mijović",
"H. Potti",
"S. Snyder",
"U. Blumenschein",
"P. Maettig",
"M. Javurkova",
"C. Kitsaki",
"E. Tzovara",
"M. Tasevsky",
"F. Pasquali",
"C. Solans",
"J. Kvita",
"T. Klingl",
"H. Imam",
"B. W. Allen",
"T. Yamazaki",
"R. Hunter",
"S. Veneziano",
"M. Zaazoua",
"Y. Hasegawa",
"Y. Takubo",
"M. Huhtinen",
"A. Kiryunin",
"A. Beddall",
"N. Kimura",
"S. Amoroso",
"L. B. Navarro",
"L. Serkin",
"Dengfeng Zhang",
"I. Gkialas",
"I. Gkialas",
"S. Smirnov",
"M. Haleem",
"D. Froidevaux",
"B. Ali",
"F. Lyu",
"D. Emeliyanov",
"A. Filipčič",
"S. Kuehn",
"M. Lassnig",
"T. Pauly",
"G. Ottino",
"A. Struebig",
"I. Nitsche",
"B. Wosiek",
"Yang Liu",
"S. Strandberg",
"A. Karyukhin",
"Y. D. Diaz",
"A. Mizukami",
"L. Pontecorvo",
"A. Jinaru",
"Liang Li",
"T. Lenz",
"J. Butterworth",
"A. Korn"
],
"externalIds": {
"MAG": "3035076551",
"DOI": "10.1007/JHEP08(2020)080",
"CorpusId": 225893031
},
"url": "https://www.semanticscholar.org/paper/15f8087b8e647f3d32fe30cfeb72dc47941c86fb",
"referenceCount": 95,
"citationCount": 54,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Physics"
]
},
{
"title": "Performance of the ATLAS RPC detector and Level-1 muon barrel trigger at s = 13 TeV",
"abstract": "The Level-1 muon trigger system of the ATLAS experiment at the Large Hadron Collider selects muon candidates with six transverse momentum thresholds and associates them with the correct LHC bunch crossing. The barrel region of the ATLAS Muon Spectrometer is instrumented with Resistive Plate Chambers (RPCs), covering the pseudo-rapidity range ∣ η ∣ < 1.05 . The RPC detectors are arranged in three concentric double layers and consist of 3600 gas volumes, with a total surface of more than 4000 m2. This contribution will discuss the performance of the RPC detector and Level-1 Muon Barrel trigger system, using the data from proton-proton collisions recorded during the 2018 data-taking period at a centre-of-mass energy of 13 TeV.",
"year": 2020,
"venue": "Physica Scripta",
"authors": [
"M. Sessa"
],
"externalIds": {
"MAG": "3037423319",
"DOI": "10.1088/1402-4896/ab9bd9",
"CorpusId": 225450934
},
"url": "https://www.semanticscholar.org/paper/4a762c6cc87133e312648ab129ed19e862c16e89",
"referenceCount": 10,
"citationCount": 5,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Physics"
]
},
{
"title": "Fast inference of Boosted Decision Trees in FPGAs for particle physics",
"abstract": "We describe the implementation of Boosted Decision Trees in the hls4ml library, which allows the translation of a trained model into FPGA firmware through an automated conversion process. Thanks to its fully on-chip implementation, hls4ml performs inference of Boosted Decision Tree models with extremely low latency. With a typical latency less than 100 ns, this solution is suitable for FPGA-based real-time processing, such as in the Level-1 Trigger system of a collider experiment. These developments open up prospects for physicists to deploy BDTs in FPGAs for identifying the origin of jets, better reconstructing the energies of muons, and enabling better selection of rare signal processes.",
"year": 2020,
"venue": "Journal of Instrumentation",
"authors": [
"S. Summers",
"G. D. Guglielmo",
"Javier Mauricio Duarte",
"P. Harris",
"Duc Hoang",
"S. Jindariani",
"E. Kreinar",
"V. Loncar",
"J. Ngadiuba",
"M. Pierini",
"D. Rankin",
"N. Tran",
"Zhenbin Wu"
],
"externalIds": {
"DBLP": "journals/corr/abs-2002-02534",
"MAG": "3098984841",
"ArXiv": "2002.02534",
"DOI": "10.1088/1748-0221/15/05/P05026",
"CorpusId": 211066490
},
"url": "https://www.semanticscholar.org/paper/eb8bd26cdf21961137f13f8725dae3e0b5f2c41c",
"referenceCount": 21,
"citationCount": 59,
"influentialCitationCount": 3,
"isOpenAccess": true,
"fieldsOfStudy": [
"Physics",
"Computer Science"
]
},
{
"title": "Searches for electroweak production of supersymmetric particles with compressed mass spectra in √s = 13 TeV pp collisions with the ATLAS detector",
"abstract": "This paper presents results of searches for the electroweak production of supersymmetric particles in models with compressed mass spectra. The searches use 139 fb(-1) of root s = 13 TeV proton-prot ...",
"year": 2019,
"venue": "",
"authors": [
"Atlas Collaboration"
],
"externalIds": {
"MAG": "2995425743",
"ArXiv": "1911.12606",
"DOI": "10.1103/PhysRevD.101.052005",
"CorpusId": 260746092
},
"url": "https://www.semanticscholar.org/paper/eec8cf797bd98150c94f8894ba1310c959a4e6c0",
"referenceCount": 115,
"citationCount": 112,
"influentialCitationCount": 7,
"isOpenAccess": true,
"fieldsOfStudy": [
"Physics"
]
},
{
"title": "Search for invisible decays of a Higgs boson produced through vector boson fusion in proton-proton collisions at $\\sqrt{s} =$ 13 TeV",
"abstract": null,
"year": 2018,
"venue": "",
"authors": [
"Cms Collaboration"
],
"externalIds": {
"ArXiv": "1809.05937",
"DOI": "10.1016/j.physletb.2019.04.025",
"CorpusId": 260739401
},
"url": "https://www.semanticscholar.org/paper/70d15965e0aabd57a845595fc51199f85fbf9bcd",
"referenceCount": 82,
"citationCount": 298,
"influentialCitationCount": 16,
"isOpenAccess": true,
"fieldsOfStudy": [
"Physics"
]
},
{
"title": "Fast inference of deep neural networks in FPGAs for particle physics",
"abstract": "Recent results at the Large Hadron Collider (LHC) have pointed to enhanced physics capabilities through the improvement of the real-time event processing techniques. Machine learning methods are ubiquitous and have proven to be very powerful in LHC physics, and particle physics as a whole. However, exploration of the use of such techniques in low-latency, low-power FPGA (Field Programmable Gate Array) hardware has only just begun. FPGA-based trigger and data acquisition systems have extremely low, sub-microsecond latency requirements that are unique to particle physics. We present a case study for neural network inference in FPGAs focusing on a classifier for jet substructure which would enable, among many other physics scenarios, searches for new dark sector particles and novel measurements of the Higgs boson. While we focus on a specific example, the lessons are far-reaching. A companion compiler package for this work is developed based on High-Level Synthesis (HLS) called hls4ml to build machine learning models in FPGAs. The use of HLS increases accessibility across a broad user community and allows for a drastic decrease in firmware development time. We map out FPGA resource usage and latency versus neural network hyperparameters to identify the problems in particle physics that would benefit from performing neural network inference with FPGAs. For our example jet substructure model, we fit well within the available resources of modern FPGAs with a latency on the scale of 100 ns.",
"year": 2018,
"venue": "Journal of Instrumentation",
"authors": [
"Javier Mauricio Duarte",
"Song Han",
"P. Harris",
"S. Jindariani",
"E. Kreinar",
"B. Kreis",
"J. Ngadiuba",
"M. Pierini",
"R. Rivera",
"N. Tran",
"Zhenbin Wu"
],
"externalIds": {
"MAG": "3101493857",
"DBLP": "journals/corr/abs-1804-06913",
"ArXiv": "1804.06913",
"DOI": "10.1088/1748-0221/13/07/P07027",
"CorpusId": 5007685
},
"url": "https://www.semanticscholar.org/paper/5e963ffdf331f003dfb25a598fc93a0deef1f728",
"referenceCount": 105,
"citationCount": 350,
"influentialCitationCount": 34,
"isOpenAccess": true,
"fieldsOfStudy": [
"Physics",
"Computer Science",
"Mathematics"
]
},
{
"title": "Software and firmware co-development using high-level synthesis",
"abstract": "Accelerating trigger applications on FPGAs (using VHDL/Verilog) at the CMS experiment at CERN's Large Hadron Collider warrants consistency between each trigger firmware and its corresponding C++ model. This tedious and time consuming process of convergence is exacerbated during each upgrade study. High-level synthesis, with its promise of increased productivity and C++ design entry bridges this gap exceptionally well. This paper explores the “single source code” approach using Vivado-HLS tool for redeveloping the upgraded CMS Endcap Muon Level-1 Track finder (EMTF). Guidelines for tight latency control, optimal resource usage and compatibility with CMS software framework are outlined in this paper.",
"year": 2017,
"venue": "",
"authors": [
"N.P. Ghanathe",
"A. Madorsky",
"H. Lam",
"D. Acosta",
"A. George",
"M. Carver",
"Y. Xia",
"A. Jyothishwara",
"M. Hansen"
],
"externalIds": {
"MAG": "2580372149",
"DOI": "10.1088/1748-0221/12/01/C01083",
"CorpusId": 125433550
},
"url": "https://www.semanticscholar.org/paper/ebb202ce2aa916eb965bb07f7216b8cf1224a813",
"referenceCount": 6,
"citationCount": 9,
"influentialCitationCount": 1,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Physics"
]
},
{
"title": "Technical Design Report for the Phase-I Upgrade of the ATLAS TDAQ System",
"abstract": "The Phase-I upgrade of the ATLAS Trigger and Data Acquisition (TDAQ) system is to allow the ATLAS experiment to efficiently trigger and record data at instantaneous luminosities that are up to three times that of the original LHC design while maintaining trigger thresholds close to those used in the initial run of the LHC.",
"year": 2013,
"venue": "",
"authors": [
"G. Aad",
"A. Kupco",
"P. Laurelli",
"S. Webb",
"S. Sekula",
"J. Huston",
"K. Jakobs",
"T. Dietzsch",
"Yi-Cheng Pan",
"M. Spousta",
"M. Cobal",
"T. Agatonović-Jovin",
"J. Schovancová",
"V. Gratchev",
"P. Radloff",
"A. Bassalat",
"M. Shapiro",
"S. Zimmermann",
"T. Eifert",
"N. Ozturk",
"X. Anduaga",
"S. P. Lopez",
"T. Cao",
"S. Robertson",
"F. Monticelli",
"A. Snesarev",
"L. Rotonda",
"E. Carquin",
"Mario Martínez",
"S. Crépé-Renaudin",
"S. Diglio",
"Danny Miller",
"P. Mastrandrea",
"S. Ask",
"P. F. Salvatore",
"A. Colijn",
"K. Cerny",
"A. Nikiforov",
"S. French",
"R. Pravahan",
"J. Ocariz",
"S. Shushkevich",
"A. Lankford",
"N. Sinev",
"S. Tapprogge",
"S. Khalek",
"T. Masubuchi",
"D. E. F. Lima",
"Shenjian Chen",
"T. Martin",
"J. Poveda",
"E. Mountricha",
"M. Ouchrif",
"H. Khandanyan",
"M. A. Verzini",
"M. King",
"M. Hurwitz",
"M. Heller",
"P. Tipton",
"A. Valero",
"F. Mueller",
"T. Kishimoto",
"A. Vartapetian",
"K. Kawagoe",
"A. Barton",
"A. Policicchio",
"T. Loddenkoetter",
"S. Karpov",
"P. Strizenec",
"A. Antonov",
"C. Lester",
"M. Stănescu-Bellu",
"J. Navarro",
"B. Micco",
"E. O. García",
"S. Koperny",
"J. Strandberg",
"W. Lockman",
"C. Bertella",
"P. Schwemling",
"M. Vos",
"E. Meoni",
"T. Holmes",
"A. Lanza",
"A. Zaman",
"K. Meier",
"C. Doglioni",
"W. Ehrenfeld",
"T. Mclaughlan",
"C. Dionisi",
"A. Boldyrev",
"J. Searcy",
"J. Stark",
"E. Yatsenko",
"R. Peschke",
"M. Kuze",
"M. Kazarinov",
"V. Wenzel",
"D. Scannicchio",
"D. Costanzo",
"J. Penwell",
"A. Klimentov",
"S. P. Griso",
"T. Huffman",
"M. Rudolph",
"P. Rosendahl",
"T. Kuhl",
"I. Brawn",
"S. Gadatsch",
"B. Malaescu",
"K. Grimm",
"S. Norberg",
"F. Prokoshin",
"D. Macina",
"S. Hasegawa",
"S. Kuday",
"P. Nevski",
"B. Cooper",
"A. Bellerive",
"J. Masik",
"M. Janus",
"L. Barak",
"T. Anh",
"S. Kama",
"G. Iacobucci",
"E. Shulga",
"B. Gorini",
"Xueyao Zhang",
"J. Butler",
"H. Fox",
"A. Komar",
"E. Koffeman",
"S. Pino",
"K. Black",
"J. Koll",
"P. Weigell",
"D. Fassouliotis",
"R. Yoosoofmiya",
"A. Tricoli",
"H. Iwasaki",
"M. Davies",
"T. Wyatt",
"S. Dita",
"M. Deliyergiyev",
"E. Petit",
"J. Faltova",
"P. Phillips",
"G. Snidero",
"M. Venturi",
"M. Nedden",
"T. Ženiš",
"M. Oreglia",
"H. Neal",
"K. Einsweiler",
"F. Lorenzi",
"S. Giagu",
"E. Kneringer",
"O. Arslan",
"B. Girolamo",
"H. Yildiz",
"D. Whiteson",
"A. Dewhurst",
"K. Shaw",
"S. Brunet",
"Yingchun Zhu",
"R. Neves",
"A. Bundock",
"J. Terron",
"P. Urquijo",
"V. C. Gimenez",
"N. Kanaya",
"T. Barillari",
"T. Neep",
"G. Conti",
"P. Clark",
"Jiaming Yu",
"L. Smirnova",
"R. Walker",
"J. Hrivnac",
"T. Kunigo",
"A. Grillo",
"M. Dova",
"R. Hickling",
"I. Ueda",
"D. Maximov",
"T. Serre",
"R. Kopeliansky",
"J. Mattmann",
"N. Konstantinidis",
"O. Jinnouchi",
"H. Sadrozinski",
"V. Kazanin",
"E. Barberio",
"K. Tackmann",
"Y. Tikhonov",
"D. Pelikan",
"H. Schulz",
"D. Vladoiu",
"A. Khanov",
"M. Sutton",
"M. Hoeferkamp",
"J. Almond",
"L. Dell'Asta",
"G. Tsipolitis",
"V. Moeller",
"E. Hines",
"G. Herten",
"V. Nikolaenko",
"M. Sosebee",
"M. Lefebvre",
"A. Girolamo",
"T. Mueller",
"M. Nomachi",
"M. Elsing",
"A. Ciaccio",
"B. Burghgrave",
"A. S. Mete",
"M. Stoebe",
"S. Sivoklokov",
"A. Meade",
"F. Corriveau",
"B. Clement",
"I. Bloch",
"K. Smolek",
"R. Brenner",
"P. Tas",
"H. Graaf",
"T. Bain",
"G. Ciapetti",
"S. Gonzalez-Sevilla",
"C. Shimmin",
"S. Hamilton",
"P. Schacht",
"A. Negri",
"S. Bethke",
"P. Dita",
"G. Sedov",
"L. Bianchini",
"I. Mussche",
"P. Soueid",
"A. Minaenko",
"G. Besjes",
"M. Kuna",
"M. Červ",
"X. Ruan",
"J. Henderson",
"U. Schäfer",
"M. Franklin",
"P. Schwegler",
"B. Mansoulie",
"C. Leggett",
"B. Allbrooke",
"S. Hageböck",
"C. Ferretti",
"E. Gornicki",
"J. Kroll",
"F. Giorgi",
"H. Torre",
"T. Heim",
"S. Shimizu",
"S. Oda",
"C. Gatti",
"C. Gutschow",
"B. O'Brien",
"F. L. Sterzo",
"T. Åkesson",
"J. Vossebeld",
"J. Moss",
"C. Goeringer",
"M. Dunford",
"K. Mueller",
"S. Burdin",
"S. Harkusha",
"D. Schaile",
"M. Düren",
"T. Abajyan",
"M. Goulette",
"F. Luehring",
"J. Wittkowski",
"A. Olszewski",
"M. Biglietti",
"V. Smakhtin",
"I. Christidi",
"T. Jakoubek",
"J. Souza",
"H. Martinez",
"A. Zhemchugov",
"S. Vallecorsa",
"E. Kluge",
"A. Doria",
"S. Wang",
"F. Socher",
"J. Hobbs",
"C. Blocker",
"S. Istin",
"F. Quiñónez",
"E. Varnes",
"J. Hansen",
"E. Torrence",
"V. Chernyatin",
"G. Iakovidis",
"A. Picazio",
"K. Nagai",
"R. Caputo",
"D. Urbaniec",
"M. Ishitsuka",
"R. Mccarthy",
"M. Rammensee",
"A. Salvo",
"H. Bertelsen",
"A. Gorišek",
"D. Gillberg",
"P. Shatalov",
"Y. Smirnov",
"S. Majewski",
"K. Sliwa",
"K. Oussoren",
"A. Demilly",
"B. Wynne",
"P. Bechtle",
"M. Fiolhais",
"R. Turra",
"A. Oh",
"S. Prasad",
"A. Alonso",
"T. Guillemin",
"C. Potter",
"J. J. Kempster",
"K. Augsten",
"M. Baak",
"A. Nelson",
"E. Guirriec",
"J. Tsung",
"O. Røhne",
"D. Moreno",
"F. Spanò",
"E. Schmidt",
"P. Deviveiros",
"Simon Lin",
"M. Stanitzki",
"C. Sandoval",
"Jongseo Lee",
"B. King",
"A. P. Pages",
"M. Begel",
"F. Podlyski",
"V. Scharf",
"L. Morvaj",
"E. Guido",
"T. Ravenscroft",
"A. Firan",
"J. Lundberg",
"R. Debbe",
"K. Belotskiy",
"F. Marchese",
"M. Jha",
"D. Su",
"M. Pohl",
"B. Abi",
"T. Doan",
"H. Tran",
"C. Roda",
"D. Quilty",
"F. Friedrich",
"A. Artamonov",
"J. M. I. Ponce",
"P. Jussel",
"L. Kashif",
"N. Semprini-Cesari",
"P. Fassnacht",
"K. Finelli",
"C. David",
"H. Reisin",
"L. Zwalinski",
"K. Korcyl",
"R. Ströhmer",
"Jie Yu",
"P. Klimek",
"C. Galea",
"L. Heinrich",
"A. Zibell",
"C. Troncon",
"F. Guescini",
"A. Leyko",
"E. Berglund",
"M. Kacimi",
"Chunhui Chen",
"J. Lima",
"S. Menke",
"C. Lester",
"M. Vlasak",
"S. Terada",
"E. Shabalina",
"D. Kharchenko",
"R. Gardner",
"A. Forti",
"C. Hensel",
"M. Morii",
"I. Gorelov",
"E. Sarkisyan-Grinbaum",
"Zhiqin Zhang",
"T. Brooks",
"S. Xella",
"J. Šolc",
"G. Pásztor",
"R. Ciftci",
"S. Darmora",
"Jun Guo",
"L. Simić",
"A. Gabrielli",
"M. Wessels",
"O. Kind",
"G. Cortiana",
"D. Ster",
"Y. Schnellbach",
"A. Schwartzman",
"V. Kaushik",
"R. Brunelière",
"L. A. Bella",
"S. Hellman",
"S. Moritz",
"S. Snyder",
"J. Hoffman",
"U. Blumenschein",
"R. Konoplich",
"J. M. Ramos",
"M. Tasevsky",
"A. Kravchenko",
"J. Kvita",
"P. Nilsson",
"V. Boisvert",
"J. Schieck",
"S. Veneziano",
"Y. Hasegawa",
"Y. Takubo",
"M. Huhtinen",
"A. Kiryunin",
"J. Fischer",
"A. Beddall",
"S. Amoroso",
"P. Gris",
"I. Gkialas",
"S. Smirnov",
"M. Volpi",
"M. Haleem",
"D. Froidevaux",
"R. Pezoa",
"M. Neubauer",
"D. Emeliyanov",
"S. Kuehn",
"G. Sartisohn",
"M. Lassnig",
"A. C. F. Bustos",
"T. Pauly",
"D. Nguyen",
"A. Davison",
"J. Jackson",
"H. Hakobyan",
"A. Kapliy",
"B. Wosiek",
"A. Karyukhin",
"C. Wiglesworth",
"L. Pontecorvo",
"J. Snow",
"T. Lenz",
"J. Butterworth",
"A. Korn",
"S. Henrot-Versillé",
"F. Dallaire",
"M. Scherzer",
"E. Rossi",
"Kun Liu",
"A. Ouraou",
"N. M. Bolnet",
"A. Dudarev",
"K. Hara",
"G. Arabidze",
"R. C. Armadans",
"G. Jeng",
"C. Ohm",
"M. Petteni",
"Y. Oren",
"J. Shank",
"C. Bourdarios",
"F. Legger",
"L. Cerrito",
"R. Astalos",
"J. Parsons",
"L. Tomlinson",
"H. Wilkens",
"E. Rizvi",
"E. Gramstad",
"T. Jones",
"V. Vrba",
"A. Pingel",
"T. Hryn'ova",
"A. Ahmad",
"L. Gladilin",
"S. Ye",
"Y. Makida"
],
"externalIds": {
"MAG": "969249351",
"CorpusId": 107251763
},
"url": "https://www.semanticscholar.org/paper/11c1d0a1867f8393a76afaf711e680c1eb159df0",
"referenceCount": 0,
"citationCount": 217,
"influentialCitationCount": 10,
"isOpenAccess": false,
"fieldsOfStudy": [
"Engineering"
]
},
{
"title": "The ATLAS Experiment at the CERN Large Hadron Collider",
"abstract": "The ATLAS detector as installed in its experimental cavern at point 1 at CERN is described in this paper. A brief overview of the expected performance of the detector when the Large Hadron Collider begins operation is also presented.",
"year": 2008,
"venue": "",
"authors": [
"C. Benchouk",
"M. Bendel",
"B. Benedict",
"N. Benekos",
"J. Benes",
"Y. Benhammou",
"G. Benincasa",
"D. Benjamin",
"J. Bensinger",
"K. Benslama",
"S. Bentvelsen",
"M. Beretta",
"D. Berge",
"E. Bergeaas",
"N. Berger",
"F. Berghaus",
"S. Berglund",
"F. Bergsma",
"J. Beringer",
"J. Bernabéu",
"K. Bernardet",
"C. Berriaud",
"T. Berry",
"H. Bertelsen",
"A. Bertin",
"F. Bertinelli",
"S. Bertolucci",
"N. Besson",
"A. Beteille",
"S. Bethke",
"W. Białas",
"R. Bianchi",
"M. Bianco",
"O. Biebel",
"M. Bieri",
"M. Biglietti",
"H. Bilokon",
"M. Binder",
"S. Binet",
"N. Bingefors",
"A. Bingul",
"C. Bini",
"C. Biscarat",
"R. Bischof",
"M. Bischofberger",
"A. Bitadze",
"J. Bizzell",
"K. Black",
"R. Blair",
"J. Blaising",
"O. Blanch",
"G. Blanchot",
"C. Blocker",
"J. Blocki",
"A. Blondel",
"W. Blum",
"U. Blumenschein",
"C. Boaretto",
"G. J. Bobbink",
"A. Bocci",
"D. Bocian",
"R. Bock",
"M. Boehm",
"J. Boek",
"J. Bogaerts",
"A. Bogouch",
"C. Bohm",
"J. Bohm",
"V. Boisvert",
"T. Bold",
"V. Boldea",
"V. Bondarenko",
"R. Bonino",
"J. Bonis",
"W. Bonivento",
"P. Bonneau",
"M. Boonekamp",
"G. Boorman",
"M. Boosten",
"C. Booth",
"P. Boóth",
"P. Boóth",
"J. Booth",
"K. Borer",
"A. Borisov",
"I. Borjanović",
"K. Bos",
"D. Boscherini",
"F. Bosi",
"M. Bosman",
"M. Bosteels",
"B. Botchev",
"H. Boterenbrood",
"D. Botterill",
"J. Boudreau",
"E. Bouhova-Thacker",
"C. Boulahouache",
"C. Bourdarios",
"M. Boutemeur",
"K. Bouzakis",
"G. Boyd",
"J. Boyd",
"B. H. Boyer",
"I. Boyko",
"N. Bozhko",
"S. Braccini",
"A. Braem",
"P. Branchini",
"G. Brandenburg",
"A. Brandt",
"O. Brandt",
"U. Bratzler",
"H. Braun",
"S. Bravo",
"I. Brawn",
"B. Brelier",
"J. Bremer",
"R. Brenner",
"S. Bressler",
"D. Breton",
"N. Brett",
"P. Breugnon",
"P. Bright-Thomas",
"F. Brochu",
"I. Brock",
"R. Brock",
"T. Brodbeck",
"E. Brodet",
"F. Broggi",
"Z. Broklova",
"C. Bromberg",
"G. Brooijmans",
"G. Brouwer",
"J. Broz",
"E. Brubaker",
"P. D. Renstrom",
"D. Bruncko",
"A. Bruni",
"G. Bruni",
"M. Bruschi",
"T. Buanes",
"N. Buchanan",
"P. Buchholz",
"I. Budagov",
"V. Büscher",
"L. Bugge",
"D. Buira-Clark",
"E. Buis",
"F. Bujor",
"T. Buran",
"H. Burckhart",
"D. Burckhart-Chromek",
"S. Burdin",
"R. Burns",
"E. Busato",
"J. Buskop",
"K. Buszello",
"F. Butin",
"J. Butler",
"C. Buttar",
"J. Butterworth",
"J. Butterworth",
"T. Byatt",
"S. Urbán",
"E. C. Casas",
"M. Caccia",
"D. Caforio",
"O. Cakir",
"P. Calafiura",
"G. Calderini",
"D. C. Terol",
"J. Callahan",
"L. Caloba",
"R. Caloi",
"D. Calvet",
"A. Camard",
"F. Camarena",
"P. Camarri",
"M. Cambiaghi",
"D. Cameron",
"J. Cammin",
"F. Segura",
"S. Campana",
"V. Canale",
"J. Cantero",
"M. C. Garrido",
"I. Caprini",
"M. Caprini",
"M. Caprio",
"D. Caracinha",
"C. Caramarcu",
"Y. Carcagno",
"R. Cardarelli",
"C. Cardeira",
"L. Sas",
"A. Cardini",
"T. Carli",
"G. Carlino",
"L. Carminati",
"B. Caron",
"S. Caron",
"C. Carpentieri",
"F. S. Carr",
"A. Carter",
"J. R. Carter",
"J. Carvalho",
"D. Casadei",
"M. P. Casado",
"M. Cascella",
"C. Caso",
"J. Castelo",
"V. Gimenez",
"N. Castro",
"F. Castrovillari",
"G. Cataldi",
"F. Cataneo",
"A. Catinaccio",
"J. Catmore",
"A. Cattai",
"S. Caughron",
"D. Cauz",
"A. Cavallari",
"P. Cavalleri",
"D. Cavalli",
"M. Cavalli-Sforza",
"V. Cavasinni",
"F. Ceradini",
"C. Cerna",
"C. Cernoch",
"A. Cerqueira",
"A. Cerri",
"F. Cerutti",
"M. Cervetto",
"S. Çetin",
"F. Cevenini",
"M. Chalifour",
"M. Llatas",
"A. Chan",
"J. Chapman",
"D. G. Charlton",
"S. Charron",
"S. Chekulaev",
"G. Chelkov",
"H. Chen",
"L. Chen"
],
"externalIds": {
"MAG": "1663560223",
"CorpusId": 118242696
},
"url": "https://www.semanticscholar.org/paper/a0d533492817952818ee8b607e28c77edfff827c",
"referenceCount": 247,
"citationCount": 5401,
"influentialCitationCount": 768,
"isOpenAccess": false,
"fieldsOfStudy": [
"Physics"
]
},
{
"title": "The CMS experiment at the CERN LHC",
"abstract": "The Compact Muon Solenoid (CMS) detector is described. The detector operates at the Large Hadron Collider (LHC) at CERN. It was conceived to study proton-proton (and lead-lead) collisions at a centre-of-mass energy of 14 TeV (5.5 TeV nucleon-nucleon) and at luminosities up to 1034 cm-2 s-1 (1027 cm-2 s-1). At the core of the CMS detector sits a high-magnetic-field and large-bore superconducting solenoid surrounding an all-silicon pixel and strip tracker, a lead-tungstate scintillating-crystals electromagnetic calorimeter, and a brass-scintillator sampling hadron calorimeter. The iron yoke of the flux-return is instrumented with four stations of muon detectors covering most of the 4π solid angle. Forward sampling calorimeters extend the pseudorapidity coverage to high values (|η| ≤ 5) assuring very good hermeticity. The overall dimensions of the CMS detector are a length of 21.6 m, a diameter of 14.6 m and a total weight of 12500 t.",
"year": 2008,
"venue": "",
"authors": [
"S. Chatrchyan",
"G. Hmayakyan",
"V. Khachatryan",
"A. Sirunyan",
"W. Adam",
"T. Bauer",
"T. Bergauer",
"H. Bergauer",
"M. Dragicevic",
"J. Erö",
"M. Friedl",
"R. Frühwirth",
"V. Ghete",
"P. Glaser",
"C. Hartl",
"N. Hoermann",
"J. Hrubec",
"S. Hänsel",
"M. Jeitler",
"K. Kastner",
"M. Krammer",
"M. Markytan",
"I. Mikulec",
"B. Neuherz",
"Tobias Nöbauer",
"M. Oberegger",
"M. Padrta",
"M. Pernicka",
"P. Porth",
"H. Rohringer",
"S. Schmid",
"T. Schreiner",
"R. Stark",
"H. Steininger",
"J. Strauss",
"A. Taurok",
"D. Uhl",
"W. Waltenberger",
"G. Walzel",
"E. Widl",
"C. Wulz",
"V. Petrov",
"V. Prosolovich",
"V. Chekhovsky",
"O. Dvornikov",
"I. Emeliantchik",
"A. Litomin",
"V. Makarenko",
"I. Marfin",
"V. Mossolov",
"N. Shumeiko",
"A. Solin",
"R. Stefanovitch",
"S. Gonzalez",
"A. Tikhonov",
"A. Fedorov",
"M. Korzhik",
"O. Missevitch",
"R. Zuyeuski",
"W. Beaumont",
"M. Cardaci",
"E. D. Langhe",
"E. Wolf",
"E. Delmeire",
"S. Ochesanu",
"M. Tasevsky",
"P. Mechelen",
"J. D’Hondt",
"S. D. Weirdt",
"O. Devroede",
"R. Goorens",
"S. Hannaert",
"J. Heyninck",
"J. Maes",
"M. Mozer",
"S. Tavernier",
"W. Doninck",
"L. V. Lancker",
"P. Mulders",
"I. Villella",
"C. Wastiels",
"C. Yu",
"O. Bouhali",
"O. Charaf",
"B. Clerbaux",
"P. D. Harenne",
"G. Lentdecker",
"J. Dewulf",
"S. Elgammal",
"R. Gindroz",
"G. Hammad",
"T. Mahmoud",
"L. Neukermans",
"M. Pins",
"R. Pins",
"S. Rugovac",
"J. Stefanescu",
"V. Sundararajan",
"C. Velde",
"P. Vanlaer",
"J. Wickens",
"M. Tytgat",
"S. Assouak",
"J. Bonnet",
"G. Bruno",
"J. Caudron",
"B. D. Callatay",
"D. F. D. Jeneret",
"S. Visscher",
"P. Demin",
"D. Favart",
"C. Felix",
"B. Florins",
"E. Forton",
"A. Giammanco",
"G. Grégoire",
"M. Jonckman",
"D. Kcira",
"T. Keutgen",
"V. Lemaitre",
"D. Michotte",
"O. Militaru",
"S. Ovyn",
"T. Pierzchala",
"K. Piotrzkowski",
"V. Roberfroid",
"X. Rouby",
"N. Schul",
"O. Aa",
"N. Beliy",
"E. Daubie",
"P. Herquet",
"G. Alves",
"M. Pol",
"M. Souza",
"M. Vaz",
"D. Damiao",
"V. Oguri",
"A. Santoro",
"A. Sznajder",
"E. Gregores",
"R. Iope",
"S. Novaes",
"T. Tomei",
"T. Anguelov",
"G. Antchev",
"I. Atanasov",
"J. Damgov",
"N. Darmenov",
"L. Dimitrov",
"V. Genchev",
"P. Iaydjiev",
"A. Marinov",
"S. Piperov",
"S. Stoykova",
"G. Sultanov",
"R. Trayanov",
"I. Vankov",
"C. Cheshkov",
"A. Dimitrov",
"M. Dyulendarova",
"I. Glushkov",
"V. Kozhuharov",
"L. Litov",
"M. Makariev",
"E. Marinova",
"S. Markov",
"M. Mateev",
"I. Nasteva",
"B. Pavlov",
"P. Petev",
"P. Petkov",
"V. Spassov",
"Z. Toteva",
"V. Velev",
"V. Verguilov",
"J. Bian",
"Guo-ming Chen",
"He-Sheng Chen",
"M. Chen",
"C. Jiang",
"B. Liu",
"X. Shen",
"H. Sun",
"J. Tao",
"Jian Wang",
"Mingming Yang",
"Zhiqin Zhang",
"W. Zhao",
"H. Zhuang",
"Y. Ban",
"J. Cai",
"Y. Ge",
"S. Liu",
"H. Liu",
"L. Liu",
"S. Qian",
"Q. Wang",
"Z. Xue",
"Zongchang Yang",
"Y. Ye",
"J. Ying",
"P. Li",
"J. Liao",
"Z. Xue",
"D. Yan",
"H. Yuan",
"C. Montoya",
"J. Sanabria",
"N. Godinovic",
"I. Puljak",
"I. Sorić",
"Z. Antunović",
"M. Dželalija",
"K. Marasovic",
"V. Brigljevic",
"K. Kadija",
"S. Morovic",
"R. Fereos",
"C. Nicolaou",
"A. Papadakis",
"F. Ptochos",
"P. Razis",
"D. Tsiakkouri",
"Z. Zinonos",
"A. Hektor",
"M. Kadastik",
"K. Kannike",
"É. Lippmaa",
"M. Müntel",
"M. Raidal",
"L. Rebane",
"P. Aarnio",
"E. Anttila",
"K. Banzuzi",
"P. Bulteau",
"S. Czellár",
"N. Eiden",
"C. Eklund",
"P. Engstrom",
"A. Heikkinen",
"A. Honkanen",
"J. Härkönen",
"V. Karimäki",
"H. Katajisto",
"R. Kinnunen",
"J. Klem",
"J. Kortesmaa",
"M. Kotamäki",
"A. Kuronen",
"T. Lampén",
"K. Lassila-Perini",
"V. Lefébure",
"S. Lehti",
"T. Lindén",
"P. Luukka",
"S. Michal",
"F. Brígido",
"T. Mäenpää",
"T. Nyman",
"J. Nysten",
"E. Pietarinen",
"K. Skog",
"K. Tammi",
"E. Tuominen",
"J. Tuominiemi",
"D. Ungaro",
"T. Vanhala",
"L. Wendland",
"C. Williams",
"M. Iskanius",
"A. Korpela",
"G. Polese",
"T. Tuuva",
"G. Bassompierre",
"A. Bazan",
"P. David",
"J. Ditta",
"G. Drobychev",
"N. Fouque",
"J. Guillaud",
"V. Hermel",
"A. Karneyeu",
"T. L. Flour",
"S. Lieunard",
"M. Maire",
"P. Mendiburu",
"P. Nédélec",
"J. Peigneux",
"M. Schneegans",
"D. Sillou",
"J. Vialle",
"M. Anfreville",
"J. Bard",
"P. Besson",
"E. Bougamont",
"M. Boyer",
"P. Brédy",
"R. Chipaux",
"M. Dejardin",
"D. Denegri",
"J. Descamps",
"B. Fabbro",
"J. Faure",
"S. Ganjour",
"F. Gentit",
"A. Givernaud",
"P. Gras",
"G. H. Monchenault",
"P. Jarry",
"C. Jeanney",
"F. Kircher",
"M. Lemaire",
"Y. Lemoigne",
"B. Levésy",
"E. Locci",
"J. Lottin",
"I. Mandjavidze",
"M. Mur",
"J. Pansart",
"A. Payn",
"J. Rander",
"J. Reymond",
"J. Rolquin",
"F. Rondeaux",
"A. Rosowsky",
"J. Rousse",
"Z. Sun",
"J. Tartas",
"A. Lysebetten",
"P. Venault",
"P. Verrecchia",
"M. Anduze",
"J. Badier",
"S. Baffioni",
"M. Bercher",
"C. Bernet",
"U. Berthon",
"J. Bourotte",
"A. Busata",
"P. Busson",
"M. Cerutti",
"D. Chamont",
"C. Charlot",
"C. Collard",
"A. Debraine",
"D. Decotigny",
"L. Dobrzyński",
"O. Ferreira",
"Y. Geerebaert",
"J. Gilly",
"C. Gregory",
"L. G. Riveros",
"M. Haguenauer",
"A. Karar",
"B. Koblitz",
"D. Lecouturier",
"A. Mathieu",
"G. Milleret",
"P. Miné",
"P. Paganini",
"P. Poilleux",
"N. Pukhaeva",
"N. Regnault",
"T. Romanteau",
"I. Semeniouk",
"Y. Sirois",
"C. Thiebaux",
"J. Vanel",
"A. Zabi",
"J. Agram",
"A. Albert",
"L. Anckenmann",
"J. Andrea",
"F. Anstotz",
"A. Bergdolt",
"J. Berst",
"R. Blaes",
"D. Bloch",
"J. Brom",
"J. Cailleret",
"F. Charles",
"E. Christophel",
"G. Claus",
"J. Coffin",
"C. Colledani",
"J. Croix",
"E. Dangelser",
"N. Dick",
"F. Didierjean",
"F. Drouhin",
"W. Duliński",
"J. Ernenwein",
"R. Fang",
"J. Fontaine",
"G. Gaudiot",
"W. Geist",
"D. Gelé",
"T. Goeltzenlichter",
"U. Goerlach",
"P. Graehling",
"L. Gross",
"C. Hu",
"J. M. Helleboid",
"T. Henkes",
"M. Hoffer",
"C. Hoffmann",
"J. Hosselet",
"L. Houchu",
"Y. Hu",
"D. Huss",
"C. Illinger",
"F. Jeanneau",
"P. Juillot",
"T. Kachelhoffer",
"M. Kapp",
"H. Kettunen",
"L. Ayat",
"A. Bihan",
"A. Lounis",
"C. Maazouzi",
"V. Mack",
"P. Majewski",
"D. Mangeol",
"J. Michel",
"S. Moreau",
"C. Olivetto",
"A. Pallarès",
"Y. Patois",
"P. P. vorio",
"C. Racca",
"Y. Riahi",
"I. Ripp-Baudot",
"P. Schmitt",
"J. Schunck",
"G. Schuster",
"B. Schwaller",
"M. Sigward",
"J. Sohler",
"J. Speck",
"R. Strub",
"T. Todorov",
"R. Turchetta",
"P. Hove",
"D. Vintache",
"A. Zghiche",
"M. Ageron",
"J. Augustin",
"C. Baty",
"G. Baulieu",
"M. Bedjidian",
"J. Blaha",
"A. Bonnevaux",
"G. Boudoul",
"P. Brunet",
"E. Chabanat",
"E. Chabert",
"R. Chierici",
"V. Chorowicz",
"C. Combaret",
"D. Contardo",
"R. D. Negra",
"P. Depasse",
"O. Drapier",
"M. Dupanloup",
"T. Dupasquier",
"H. Mamouni",
"N. Estre",
"J. Fay",
"S. Gascon",
"N. Giraud",
"C. Girerd",
"G. Guillot",
"R. Haroutunian",
"B. Ille",
"M. Lethuillier",
"N. Lumb",
"C. Martin",
"H. Mathez",
"G. Maurelli",
"S. Muanza",
"P. Pangaud",
"S. Perriès",
"O. Ravat",
"E. Schibler",
"F. Schirra",
"G. Smadja",
"S. Tissot",
"B. Trocmé",
"S. Vanzetto",
"J. Walder",
"Y. Bagaturia",
"D. Mjavia",
"A. Mzhavia",
"Z. Tsamalaidze",
"V. Roǐnishvili",
"R. Adolphi",
"G. Anagnostou",
"R. Brauer",
"W. Braunschweig"
],
"externalIds": {
"MAG": "2151512268",
"DOI": "10.1088/1748-0221/3/08/S08004",
"CorpusId": 122765640
},
"url": "https://www.semanticscholar.org/paper/7fd1ce2f7bc7d15e08a4298b8181254e97159d9d",
"referenceCount": 245,
"citationCount": 6785,
"influentialCitationCount": 505,
"isOpenAccess": true,
"fieldsOfStudy": [
"Physics"
]
},
{
"title": "TMVA - Toolkit for Multivariate Data Analysis",
"abstract": "n high-energy physics, with the search for ever smaller signals in ever larger data sets, it has become essential to extract a maximum of the available information from the data. Multivariate classification methods based on machine learning techniques have become a fundamental ingredient to most analyses. Also the multivariate classifiers themselves have significantly evolved in recent years. Statisticians have found new ways to tune and to combine classifiers to further gain in performance. Integrated into the analysis framework ROOT, TMVA is a toolkit which hosts a large variety of multivariate classification algorithms. They range from rectangular cut optimization using a genetic algorithm and from one- and multidimensional likelihood estimators, over linear and nonlinear discriminants and neural networks, to sophisticated more recent classifiers such as a support vector machine, boosted decision trees and rule ensemble fitting. TMVA manages the simultaneous training, testing, and performance evaluation of all these classifiers with a user-friendly interface, and expedites the application of the trained classifiers to data.",
"year": 2007,
"venue": "",
"authors": [
"A. Hoecker",
"P. Speckmayer",
"J. Stelzer",
"J. Therhaag",
"E. Toerne",
"H. Voss",
"M. Backes",
"T. Carli",
"O. Cohen",
"A. Christov",
"D. Dannheim",
"K. Danielowski",
"S. Henrot-Versillé",
"M. Jachowski",
"K. Kraszewski",
"A. Krasznahorkay",
"M. Kruk",
"Y. Mahalalel",
"R. Ospanov",
"X. Prudent",
"A. Robert",
"D. Schouten",
"F. Tegenfeldt",
"A. Voigt",
"K. Voss",
"M. Wolter",
"A. Zemla"
],
"externalIds": {
"MAG": "2126889979",
"ArXiv": "physics/0703039",
"CorpusId": 58086654
},
"url": "https://www.semanticscholar.org/paper/7ae0f1af230b92462b812b92e2ceb4436d0a2f40",
"referenceCount": 19,
"citationCount": 1399,
"influentialCitationCount": 100,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science",
"Mathematics",
"Physics"
]
},
{
"title": "Development of FPGA-based neural network regression models for the ATLAS Phase-II barrel muon trigger upgrade",
"abstract": "Effective selection of muon candidates is the cornerstone of the LHC physics programme. The ATLAS experiment uses a two-level trigger system for real-time selection of interesting collision events. The first-level hardware trigger system uses the Resistive Plate Chamber detector (RPC) for selecting muon candidates in the central (barrel) region of the detector. With the planned upgrades, the entirely new FPGA-based muon trigger system will be installed in 2025-2026. In this paper, neural network regression models are studied for potential applications in the new RPC trigger system. A simple simulation model of the current detector is developed for training and testing neural network regression models. Effects from additional cluster hits and noise hits are evaluated. Efficiency of selecting muon candidates is estimated as a function of the transverse muon momentum. Several models are evaluated and their performance is compared to that of the current detector, showing promising potential to improve on current algorithms for the ATLAS Phase-II barrel muon trigger upgrade.",
"year": 2021,
"venue": "EPJ Web of Conferences",
"authors": [
"R. Ospanov",
"C. Feng",
"W. Dong",
"Wenhao Feng",
"Shining Yang"
],
"externalIds": {
"MAG": "3194460949",
"DOI": "10.1051/epjconf/202125104031",
"CorpusId": 238933718
},
"url": "https://www.semanticscholar.org/paper/21db75a235329c2aef951033e4a1bfa3918a9935",
"referenceCount": 13,
"citationCount": 2,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "LHC Machine",
"abstract": "The Large Hadron Collider (LHC) at CERN near Geneva is the world's newest and most powerful tool for Particle Physics research. It is designed to collide proton beams with a centre-of-mass energy of 14 TeV and an unprecedented luminosity of 1034 cm−2 s−1. It can also collide heavy (Pb) ions with an energy of 2.8 TeV per nucleon and a peak luminosity of 1027 cm−2 s−1. In this paper, the machine design is described.",
"year": 2008,
"venue": "",
"authors": [
"Lyndon R Evans",
"P. Bryant"
],
"externalIds": {
"DOI": "10.1088/1748-0221/3/08/s08001",
"CorpusId": 250669238
},
"url": "https://www.semanticscholar.org/paper/325f8d81a510ccf915e1965e13ab185906b0f65b",
"referenceCount": 31,
"citationCount": 1645,
"influentialCitationCount": 528,
"isOpenAccess": true,
"fieldsOfStudy": [
"Physics"
]
},
{
"title": "Progress in resistive plate counters",
"abstract": null,
"year": 1988,
"venue": "",
"authors": [
"R. Cardarelli",
"R. Santonico",
"A. Biagio",
"A. Lucci"
],
"externalIds": {
"MAG": "2070496948",
"DOI": "10.1016/0168-9002(88)91011-X",
"CorpusId": 122548660
},
"url": "https://www.semanticscholar.org/paper/8002701f47b8c2e41335f52d2c6ae1edd06d510d",
"referenceCount": 8,
"citationCount": 181,
"influentialCitationCount": 0,
"isOpenAccess": false,
"fieldsOfStudy": [
"Physics"
]
},
{
"title": "Xilinx inputs for nanosecond hardware decision trees for missing transverse energy , 2024",
"abstract": null,
"year": null,
"venue": "D-Scholarship@Pitt",
"authors": [],
"externalIds": null,
"url": null,
"referenceCount": null,
"citationCount": null,
"influentialCitationCount": null,
"isOpenAccess": null,
"fieldsOfStudy": null
},
{
"title": "MuonTriggerPhase2RPC: python code for toy simulation of ATLAS RPC trigger",
"abstract": null,
"year": null,
"venue": "http://github.com/rustemos/MuonTriggerPhase2RPC, commit tag 76ec993548d45a03e36d296a19e40baee71f3f24",
"authors": [],
"externalIds": null,
"url": null,
"referenceCount": null,
"citationCount": null,
"influentialCitationCount": null,
"isOpenAccess": null,
"fieldsOfStudy": null
}
]
},
"Learning Parameterized Quantum Circuits with Quantum Gradient": {
"paper_title": "Learning Parameterized Quantum Circuits with Quantum Gradient",
"arxiv_id": "2409.20044",
"authors": [
"Keren Li",
"Yuanfeng Wang",
"Pan Gao",
"Shenggen Zheng"
],
"year": 2024,
"venue": "",
"abstract": "Parameterized quantum circuits (PQCs) are crucial for quantum machine learning and circuit synthesis, enabling the practical implementation of complex quantum tasks. However, PQC learning has been largely confined to classical optimization methods, which suffer from issues like gradient vanishing. In this work, we introduce a nested optimization model that leverages quantum gradient to enhance PQC learning for polynomial-type cost functions. Our approach utilizes quantum algorithms to identify and overcome a type of gradient vanishing-a persistent challenge in PQC learning-by effectively navigating the optimization landscape. We also mitigate potential barren plateaus of our model and manage the learning cost via restricting the optimization region. Numerically, we demonstrate the feasibility of the approach on two tasks: the Max-Cut problem and polynomial optimization. The method excels in generating circuits without gradient vanishing and effectively optimizes the cost function. From the perspective of quantum algorithms, our model improves quantum optimization for polynomial-type cost functions, addressing the challenge of exponential sample complexity growth.",
"references": []
},
"A Simple and Efficient Equivariant Message Passing Neural Network Model for Non-Local Potential Energy Surface": {
"paper_title": "A Simple and Efficient Equivariant Message Passing Neural Network Model for Non-Local Potential Energy Surface",
"arxiv_id": "2409.19864",
"authors": [
"Yibin Wu",
"Junfan Xia",
"Yaolong Zhang",
"Bin Jiang"
],
"year": 2024,
"venue": "",
"abstract": "Machine learning potentials have become increasingly successful in atomistic simulations. Many of these potentials are based on an atomistic representation in a local environment, but an efficient description of non-local interactions that exceed a common local environment remains a challenge. Herein, we propose a simple and efficient equivariant model, EquiREANN, to effectively represent non-local potential energy surface. It relies on a physically inspired message passing framework, where the fundamental descriptors are linear combination of atomic orbitals, while both invariant orbital coefficients and the equivariant orbital functions are iteratively updated. We demonstrate that this EquiREANN model is able to describe the subtle potential energy variation due to the non-local structural change with high accuracy and little extra computational cost than an invariant message passing model. Our work offers a generalized approach to create equivariant message passing adaptations of other advanced local many-body descriptors.",
"references": []
},
"Seeing the Invisible through Speckle Images": {
"paper_title": "Seeing the Invisible through Speckle Images",
"arxiv_id": "2409.18815",
"authors": [
"Weiru Fan",
"Xiaobin Tang",
"Xingqi Xu",
"Huizhu Hu",
"Vladislav V. Yakovlev",
"Shi-Yao Zhu",
"Da-Wei Wang",
"Delong Zhang"
],
"year": 2024,
"venue": "",
"abstract": "Scattering obscures information carried by wave by producing a speckle pattern, posing a common challenge across various fields, including microscopy and astronomy. Traditional methods for extracting information from speckles often rely on significant physical assumptions, complex devices, or intricate algorithms. Recently, machine learning has emerged as a scalable and widely adopted tool for interpreting speckle patterns. However, most current machine learning techniques depend heavily on supervised training with extensive labeled datasets, which is problematic when labels are unavailable. To address this, we propose a strategy based on unsupervised learning for speckle recognition and evaluation, enabling to capture high-level information, such as object classes, directly from speckles without labeled data. By deriving invariant features from speckles, this method allows for the classification of speckles and facilitates diverse applications in image sensing. We experimentally validated our strategy through two significant applications: a noninvasive glucose monitoring system capable of differentiating time-lapse glucose concentrations, and a high-throughput communication system utilizing multimode fibers in dynamic environments. The versatility of this method holds promise for a broad range of far-reaching applications, including biomedical diagnostics, quantum network decoupling, and remote sensing.",
"references": []
},
"MedBench: A Comprehensive, Standardized, and Reliable Benchmarking System for Evaluating Chinese Medical Large Language Models": {
"paper_title": "MedBench: A Comprehensive, Standardized, and Reliable Benchmarking System for Evaluating Chinese Medical Large Language Models",
"arxiv_id": "2407.10990",
"authors": [
"Mianxin Liu",
"Jinru Ding",
"Jie Xu",
"Weiguo Hu",
"Xiaoyang Li",
"Lifeng Zhu",
"Zhian Bai",
"Xiaoming Shi",
"Benyou Wang",
"Haitao Song",
"Pengfei Liu",
"Xiaofan Zhang",
"Shanshan Wang",
"Kang Li",
"Haofen Wang",
"Tong Ruan",
"Xuanjing Huang",
"Xin Sun",
"Shaoting Zhang"
],
"year": 2024,
"venue": "arXiv.org",
"abstract": "Ensuring the general efficacy and goodness for human beings from medical large language models (LLM) before real-world deployment is crucial. However, a widely accepted and accessible evaluation process for medical LLM, especially in the Chinese context, remains to be established. In this work, we introduce\"MedBench\", a comprehensive, standardized, and reliable benchmarking system for Chinese medical LLM. First, MedBench assembles the currently largest evaluation dataset (300,901 questions) to cover 43 clinical specialties and performs multi-facet evaluation on medical LLM. Second, MedBench provides a standardized and fully automatic cloud-based evaluation infrastructure, with physical separations for question and ground truth. Third, MedBench implements dynamic evaluation mechanisms to prevent shortcut learning and answer remembering. Applying MedBench to popular general and medical LLMs, we observe unbiased, reproducible evaluation results largely aligning with medical professionals' perspectives. This study establishes a significant foundation for preparing the practical applications of Chinese medical LLMs. MedBench is publicly accessible at https://medbench.opencompass.org.cn.",
"references": [
{
"title": "OpenMEDLab: An Open-source Platform for Multi-modality Foundation Models in Medicine",
"abstract": "The emerging trend of advancing generalist artificial intelligence, such as GPTv4 and Gemini, has reshaped the landscape of research (academia and industry) in machine learning and many other research areas. However, domain-specific applications of such foundation models (e.g., in medicine) remain untouched or often at their very early stages. It will require an individual set of transfer learning and model adaptation techniques by further expanding and injecting these models with domain knowledge and data. The development of such technologies could be largely accelerated if the bundle of data, algorithms, and pre-trained foundation models were gathered together and open-sourced in an organized manner. In this work, we present OpenMEDLab, an open-source platform for multi-modality foundation models. It encapsulates not only solutions of pioneering attempts in prompting and fine-tuning large language and vision models for frontline clinical and bioinformatic applications but also building domain-specific foundation models with large-scale multi-modal medical data. Importantly, it opens access to a group of pre-trained foundation models for various medical image modalities, clinical text, protein engineering, etc. Inspiring and competitive results are also demonstrated for each collected approach and model in a variety of benchmarks for downstream tasks. We welcome researchers in the field of medical artificial intelligence to continuously contribute cutting-edge methods and models to OpenMEDLab, which can be accessed via https://github.com/openmedlab.",
"year": 2024,
"venue": "arXiv.org",
"authors": [
"Xiaosong Wang",
"Xiaofan Zhang",
"Guotai Wang",
"Junjun He",
"Zhongyu Li",
"Wentao Zhu",
"Yi Guo",
"Qi Dou",
"Xiaoxiao Li",
"Dequan Wang",
"Liang Hong",
"Qicheng Lao",
"Tong Ruan",
"Yukun Zhou",
"Yixue Li",
"Jie Zhao",
"Kang Li",
"Xin Sun",
"Lifeng Zhu",
"Shaoting Zhang"
],
"externalIds": {
"DBLP": "journals/corr/abs-2402-18028",
"ArXiv": "2402.18028",
"DOI": "10.48550/arXiv.2402.18028",
"CorpusId": 268041665
},
"url": "https://www.semanticscholar.org/paper/746c7e7f6c27f0e8215c2f174386dabe50c36b5f",
"referenceCount": 50,
"citationCount": 3,
"influentialCitationCount": 0,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "OmniMedVQA: A New Large-Scale Comprehensive Evaluation Benchmark for Medical LVLM",
"abstract": "Large Vision-Language Models (LVLMs) have demonstrated remarkable capabilities in various multimodal tasks. However, their potential in the medical domain re-mains largely unexplored. A significant challenge arises from the scarcity of diverse medical images spanning various modalities and anatomical regions, which is essential in real-world medical applications. To solve this problem, in this paper, we introduce OmniMedVQA, a novel comprehensive medical Visual Question Answering (VQA) benchmark. This benchmark is collected from 73 different medical datasets, including 12 different modalities and covering more than 20 distinct anatomical regions. Importantly, all images in this benchmark are sourced from authentic medical scenarios, ensuring alignment with the requirements of the medical field and suitability for evaluating LVLMs. Through our extensive experiments, we have found that existing LVLMs struggle to address these medical VQA problems effectively. Moreover, what surprises us is that medical-specialized LVLMs even exhibit inferior performance to those general-domain models, calling for a more versatile and robust LVLM in the biomedical field. The evaluation results not only reveal the current limitations of LVLM in understanding real medical images but also highlight our dataset's significance. Our code with dataset are available at https://github.com/OpenGVLab/ Multi Modality-Arena.",
"year": 2024,
"venue": "Computer Vision and Pattern Recognition",
"authors": [
"Yutao Hu",
"Tian-Xin Li",
"Quanfeng Lu",
"Wenqi Shao",
"Junjun He",
"Yu Qiao",
"Ping Luo"
],
"externalIds": {
"DBLP": "journals/corr/abs-2402-09181",
"ArXiv": "2402.09181",
"DOI": "10.1109/CVPR52733.2024.02093",
"CorpusId": 267657686
},
"url": "https://www.semanticscholar.org/paper/7580327ffc9bd5daef83fe8285c0476ca074051d",
"referenceCount": 127,
"citationCount": 9,
"influentialCitationCount": 3,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Engineering"
]
},
{
"title": "Benchmarking Open-Source Large Language Models, GPT-4 and Claude 2 on Multiple-Choice Questions in Nephrology",
"abstract": null,
"year": 2024,
"venue": "NEJM AI",
"authors": [
"Sean Wu",
"Michael Koo",
"L. Blum",
"Andy Black",
"Liyo Kao",
"Zhe Fei",
"Fabien Scalzo",
"Ira Kurtz"
],
"externalIds": {
"DOI": "10.1056/aidbp2300092",
"CorpusId": 267185760
},
"url": "https://www.semanticscholar.org/paper/00e3ae7f778f5989fae2811ca3f2001bb007af44",
"referenceCount": 4,
"citationCount": 14,
"influentialCitationCount": 1,
"isOpenAccess": true,
"fieldsOfStudy": null
},
{
"title": "Data-Centric Foundation Models in Computational Healthcare: A Survey",
"abstract": "The advent of foundation models (FMs) as an emerging suite of AI techniques has struck a wave of opportunities in computational healthcare. The interactive nature of these models, guided by pre-training data and human instructions, has ignited a data-centric AI paradigm that emphasizes better data characterization, quality, and scale. In healthcare AI, obtaining and processing high-quality clinical data records has been a longstanding challenge, ranging from data quantity, annotation, patient privacy, and ethics. In this survey, we investigate a wide range of data-centric approaches in the FM era (from model pre-training to inference) towards improving the healthcare workflow. We discuss key perspectives in AI security, assessment, and alignment with human values. Finally, we offer a promising outlook of FM-based analytics to enhance the performance of patient outcome and clinical workflow in the evolving landscape of healthcare and medicine. We provide an up-to-date list of healthcare-related foundation models and datasets at https://github.com/Yunkun-Zhang/Data-Centric-FM-Healthcare .",
"year": 2024,
"venue": "arXiv.org",
"authors": [
"Yunkun Zhang",
"Jin Gao",
"Zheling Tan",
"Lingfeng Zhou",
"Kexin Ding",
"Mu Zhou",
"Shaoting Zhang",
"Dequan Wang"
],
"externalIds": {
"DBLP": "journals/corr/abs-2401-02458",
"ArXiv": "2401.02458",
"DOI": "10.48550/arXiv.2401.02458",
"CorpusId": 266818433
},
"url": "https://www.semanticscholar.org/paper/93886752191db25efd096a65af7b09df5c0a64e0",
"referenceCount": 316,
"citationCount": 14,
"influentialCitationCount": 0,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Six ways large language models are changing healthcare",
"abstract": null,
"year": 2023,
"venue": "Nature Network Boston",
"authors": [
"Paul Webster"
],
"externalIds": {
"DOI": "10.1038/s41591-023-02700-1",
"CorpusId": 265512625,
"PubMed": "38036704"
},
"url": "https://www.semanticscholar.org/paper/cec45e12b93439d8bcaf67223294b412b2f2a0a9",
"referenceCount": 2,
"citationCount": 21,
"influentialCitationCount": 2,
"isOpenAccess": false,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "Fake Alignment: Are LLMs Really Aligned Well?",
"abstract": "The growing awareness of safety concerns in large language models (LLMs) has sparked considerable interest in the evaluation of safety. This study investigates an under-explored issue about the evaluation of LLMs, namely the substantial discrepancy in performance between multiple-choice questions and open-ended questions. Inspired by research on jailbreak attack patterns, we argue this is caused by mismatched generalization. That is, LLM only remembers the answer style for open-ended safety questions, which makes it unable to solve other forms of safety tests. We refer to this phenomenon as fake alignment and construct a comparative benchmark to empirically verify its existence in LLMs. We introduce a Fake alIgNment Evaluation (FINE) framework and two novel metrics——Consistency Score (CS) and Consistent Safety Score (CSS), which jointly assess two complementary forms of evaluation to quantify fake alignment and obtain corrected performance estimation. Applying FINE to 14 widely-used LLMs reveals several models with purported safety are poorly aligned in practice. Subsequently, we found that multiple-choice format data can also be used as high-quality contrast distillation-based fine-tuning data, which can strongly improve the alignment consistency of LLMs with minimal fine-tuning overhead. For data and code, see https://github.com/AIFlames/Fake-Alignment.",
"year": 2023,
"venue": "North American Chapter of the Association for Computational Linguistics",
"authors": [
"Yixu Wang",
"Yan Teng",
"Kexin Huang",
"Chengqi Lyu",
"Songyang Zhang",
"Wenwei Zhang",
"Xingjun Ma",
"Yu-Gang Jiang",
"Yu Qiao",
"Yingchun Wang"
],
"externalIds": {
"ArXiv": "2311.05915",
"ACL": "2024.naacl-long.263",
"DBLP": "journals/corr/abs-2311-05915",
"DOI": "10.48550/arXiv.2311.05915",
"CorpusId": 265129034
},
"url": "https://www.semanticscholar.org/paper/712f6dfcee099ee38d6d09af23e8bc0a7e82bb72",
"referenceCount": 44,
"citationCount": 6,
"influentialCitationCount": 0,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "trRosettaRNA: automated prediction of RNA 3D structure with transformer network",
"abstract": null,
"year": 2023,
"venue": "Nature Communications",
"authors": [
"Wenkai Wang",
"Chenjie Feng",
"Renmin Han",
"Ziyi Wang",
"Lisha Ye",
"Zongyang Du",
"Hong Wei",
"Fa Zhang",
"Zhenling Peng",
"Jianyi Yang"
],
"externalIds": {
"PubMedCentral": "10636060",
"DOI": "10.1038/s41467-023-42528-4",
"CorpusId": 265103717,
"PubMed": "37945552"
},
"url": "https://www.semanticscholar.org/paper/eb3a805b173dfe6d84f6a59da13d9f5e106ad204",
"referenceCount": 41,
"citationCount": 31,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "Don't Make Your LLM an Evaluation Benchmark Cheater",
"abstract": "Large language models~(LLMs) have greatly advanced the frontiers of artificial intelligence, attaining remarkable improvement in model capacity. To assess the model performance, a typical approach is to construct evaluation benchmarks for measuring the ability level of LLMs in different aspects. Despite that a number of high-quality benchmarks have been released, the concerns about the appropriate use of these benchmarks and the fair comparison of different models are increasingly growing. Considering these concerns, in this paper, we discuss the potential risk and impact of inappropriately using evaluation benchmarks and misleadingly interpreting the evaluation results. Specially, we focus on a special issue that would lead to inappropriate evaluation, \\ie \\emph{benchmark leakage}, referring that the data related to evaluation sets is occasionally used for model training. This phenomenon now becomes more common since pre-training data is often prepared ahead of model test. We conduct extensive experiments to study the effect of benchmark leverage, and find that it can dramatically boost the evaluation results, which would finally lead to an unreliable assessment of model performance. To improve the use of existing evaluation benchmarks, we finally present several guidelines for both LLM developers and benchmark maintainers. We hope this work can draw attention to appropriate training and evaluation of LLMs.",
"year": 2023,
"venue": "arXiv.org",
"authors": [
"Kun Zhou",
"Yutao Zhu",
"Zhipeng Chen",
"Wentong Chen",
"Wayne Xin Zhao",
"Xu Chen",
"Yankai Lin",
"Ji-Rong Wen",
"Jiawei Han"
],
"externalIds": {
"ArXiv": "2311.01964",
"DBLP": "journals/corr/abs-2311-01964",
"DOI": "10.48550/arXiv.2311.01964",
"CorpusId": 265019021
},
"url": "https://www.semanticscholar.org/paper/84725855d10b531eb8cbe54935dda0440c2fc750",
"referenceCount": 44,
"citationCount": 93,
"influentialCitationCount": 4,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "A new paradigm for applying deep learning to protein–ligand interaction prediction",
"abstract": "Protein-ligand interaction prediction poses a significant challenge in the field of drug design. Numerous machine learning and deep learning models have been developed to identify the most accurate docking poses of ligands and active compounds against specific targets. However, the current models often suffer from inadequate accuracy and lack practical physical significance in their scoring systems. In this research paper, we introduce IGModel, a novel approach that leverages the geometric information of protein-ligand complexes as input for predicting the root mean square deviation (RMSD) of docking poses and the binding strength (the negative value of the logrithm of binding affinity, pKd) with the same prediction framework. By incorporating the geometric information, IGModel ensures that its scores carry intuitive meaning. The performance of IGModel has been extensively evaluated on various docking power test sets, including the CASF-2016 benchmark, PDBbind-CrossDocked-Core, and DISCO set, consistently achieving state-of-theart accuracies. Furthermore, we assess IGModel’s generalization ability and robustness by evaluating it on unbiased test sets and sets containing target structures generated by AlphaFold2. The exceptional performance of IGModel on these sets demonstrates its efficacy. Additionally, we visualize the latent space of protein-ligand interactions encoded by IGModel and conduct interpretability analysis, providing valuable insights. This study presents a novel framework for deep learning-based prediction of protein-ligand interactions, contributing to the advancement of this field. Key Messages We introduce the first framework for simultaneously predicting the RMSD of the ligand docking pose and its binding strength to the target. IGModel can effectively improve the accuracy of identifying the near-native binding poses of the ligands, and can still outperform most baseline models in scoring power, ranking power and screening power tasks. IGModel is still ahead of other state-of-the-art models in the unbiased data set and the target structure predicted by AlphaFold2, proving its excellent generalization ability. Latent space provided by IGModel learns the physical interactions, thus indicating the robustness of the model.",
"year": 2023,
"venue": "bioRxiv",
"authors": [
"Zechen Wang",
"Sheng Wang",
"Yangyang Li",
"Jingjing Guo",
"Yanjie Wei",
"Yuguang Mu",
"Liangzhen Zheng",
"Weifeng Li"
],
"externalIds": {
"DBLP": "journals/bib/WangWLGWMZL24",
"PubMedCentral": "10998640",
"DOI": "10.1093/bib/bbae145",
"CorpusId": 268990140,
"PubMed": "38581420"
},
"url": "https://www.semanticscholar.org/paper/597c6c4b574de2c8f2b653b9aa59b2bb0b538242",
"referenceCount": 76,
"citationCount": 2,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine",
"Biology",
"Computer Science"
]
},
{
"title": "BianQue: Balancing the Questioning and Suggestion Ability of Health LLMs with Multi-turn Health Conversations Polished by ChatGPT",
"abstract": "Large language models (LLMs) have performed well in providing general and extensive health suggestions in single-turn conversations, exemplified by systems such as ChatGPT, ChatGLM, ChatDoctor, DoctorGLM, and etc. However, the limited information provided by users during single turn results in inadequate personalization and targeting of the generated suggestions, which requires users to independently select the useful part. It is mainly caused by the missing ability to engage in multi-turn questioning. In real-world medical consultations, doctors usually employ a series of iterative inquiries to comprehend the patient's condition thoroughly, enabling them to provide effective and personalized suggestions subsequently, which can be defined as chain of questioning (CoQ) for LLMs. To improve the CoQ of LLMs, we propose BianQue, a ChatGLM-based LLM finetuned with the self-constructed health conversation dataset BianQueCorpus that is consist of multiple turns of questioning and health suggestions polished by ChatGPT. Experimental results demonstrate that the proposed BianQue can simultaneously balance the capabilities of both questioning and health suggestions, which will help promote the research and application of LLMs in the field of proactive health.",
"year": 2023,
"venue": "arXiv.org",
"authors": [
"Yirong Chen",
"Zhenyu Wang",
"Xiaofen Xing",
"Huimin Zheng",
"Zhipei Xu",
"Kai Fang",
"Junhong Wang",
"Sihang Li",
"Jieling Wu",
"Qi Liu",
"Xiangmin Xu"
],
"externalIds": {
"DBLP": "journals/corr/abs-2310-15896",
"ArXiv": "2310.15896",
"DOI": "10.48550/arXiv.2310.15896",
"CorpusId": 264438844
},
"url": "https://www.semanticscholar.org/paper/c86de166504e73465a64a8ac89335d63cf800b1c",
"referenceCount": 14,
"citationCount": 33,
"influentialCitationCount": 3,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "SAM-Med2D",
"abstract": "The Segment Anything Model (SAM) represents a state-of-the-art research advancement in natural image segmentation, achieving impressive results with input prompts such as points and bounding boxes. However, our evaluation and recent research indicate that directly applying the pretrained SAM to medical image segmentation does not yield satisfactory performance. This limitation primarily arises from significant domain gap between natural images and medical images. To bridge this gap, we introduce SAM-Med2D, the most comprehensive studies on applying SAM to medical 2D images. Specifically, we first collect and curate approximately 4.6M images and 19.7M masks from public and private datasets, constructing a large-scale medical image segmentation dataset encompassing various modalities and objects. Then, we comprehensively fine-tune SAM on this dataset and turn it into SAM-Med2D. Unlike previous methods that only adopt bounding box or point prompts as interactive segmentation approach, we adapt SAM to medical image segmentation through more comprehensive prompts involving bounding boxes, points, and masks. We additionally fine-tune the encoder and decoder of the original SAM to obtain a well-performed SAM-Med2D, leading to the most comprehensive fine-tuning strategies to date. Finally, we conducted a comprehensive evaluation and analysis to investigate the performance of SAM-Med2D in medical image segmentation across various modalities, anatomical structures, and organs. Concurrently, we validated the generalization capability of SAM-Med2D on 9 datasets from MICCAI 2023 challenge. Overall, our approach demonstrated significantly superior performance and generalization capability compared to SAM.",
"year": 2023,
"venue": "",
"authors": [
"Junlong Cheng",
"Jin Ye",
"Zhongying Deng",
"Jianpin Chen",
"Tian-Xin Li",
"Hao Wang",
"Yanzhou Su",
"Ziyan Huang",
"Jilong Chen",
"Lei Jiang",
"Hui Sun",
"Junjun He",
"Shaoting Zhang",
"Min Zhu",
"Y. Qiao"
],
"externalIds": {
"ArXiv": "2308.16184",
"CorpusId": 261339487
},
"url": "https://www.semanticscholar.org/paper/d7fb24b589f714cb237c05b2f5312c41f88cec68",
"referenceCount": 42,
"citationCount": 61,
"influentialCitationCount": 7,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Benchmarking large language models’ performances for myopia care: a comparative analysis of ChatGPT-3.5, ChatGPT-4.0, and Google Bard",
"abstract": null,
"year": 2023,
"venue": "EBioMedicine",
"authors": [
"Zhi Wei Lim",
"Krithi Pushpanathan",
"Samantha Min Er Yew",
"Yien Lai",
"Chen-Hsin Sun",
"Janice Sing Harn Lam",
"D. Chen",
"Jocelyn Hui Lin Goh",
"M. Tan",
"Bin Sheng",
"Ching-Yu Cheng",
"Victor Teck Chang Koh",
"Yih-Chung Tham"
],
"externalIds": {
"PubMedCentral": "10470220",
"DOI": "10.1016/j.ebiom.2023.104770",
"CorpusId": 261152018,
"PubMed": "37625267"
},
"url": "https://www.semanticscholar.org/paper/033d49118f91a044306bd8d92f724ec0f3d52046",
"referenceCount": 63,
"citationCount": 98,
"influentialCitationCount": 6,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "CMB: A Comprehensive Medical Benchmark in Chinese",
"abstract": "Large Language Models (LLMs) provide a possibility to make a great breakthrough in medicine. The establishment of a standardized medical benchmark becomes a fundamental cornerstone to measure progression. However, medical environments in different regions have their local characteristics, e.g., the ubiquity and significance of traditional Chinese medicine within China. Therefore, merely translating English-based medical evaluation may result in contextual incongruities to a local region. To solve the issue, we propose a localized medical benchmark called CMB, a Comprehensive Medical Benchmark in Chinese, designed and rooted entirely within the native Chinese linguistic and cultural framework. While traditional Chinese medicine is integral to this evaluation, it does not constitute its entirety. Using this benchmark, we have evaluated several prominent large-scale LLMs, including ChatGPT, GPT-4, dedicated Chinese LLMs, and LLMs specialized in the medical domain. We hope this benchmark provide first-hand experience in existing LLMs for medicine and also facilitate the widespread adoption and enhancement of medical LLMs within China. Our data and code are publicly available at https://github.com/FreedomIntelligence/CMB.",
"year": 2023,
"venue": "North American Chapter of the Association for Computational Linguistics",
"authors": [
"Xidong Wang",
"Guiming Hardy Chen",
"Dingjie Song",
"Zhiyi Zhang",
"Zhihong Chen",
"Qingying Xiao",
"Feng Jiang",
"Jianquan Li",
"Xiang Wan",
"Benyou Wang",
"Haizhou Li"
],
"externalIds": {
"ArXiv": "2308.08833",
"DBLP": "conf/naacl/WangCS0CXCJLWW024",
"ACL": "2024.naacl-long.343",
"DOI": "10.48550/arXiv.2308.08833",
"CorpusId": 261030527
},
"url": "https://www.semanticscholar.org/paper/5df24ed6fdf10d1e92885687abce7bd5e56f3f85",
"referenceCount": 54,
"citationCount": 45,
"influentialCitationCount": 5,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Open-ended questions automated evaluation: proposal of a new generation",
"abstract": "Abstract. Exams grading for the knowledge validation to recognise competences is an essential element for any learning process. There are two main modes for their evaluation: subjective and automated. Subjective evaluation is accused of many flaws such as the inconsistency of the human corrector and the time it requires. Automating the assessment of open-ended questions saves a lot of time, provides quick feedback to learners and ensures the consistency expected from the human correctors. However, this is a challenging problem to implement because the computer does not have the same faculties as a human. To address this issue, we conducted a literature review on open-ended questions automated evaluation to implement an automatic exam grading system with similar or even higher accuracy than a human corrector. This study allows us to classify the different approaches in three generations: “bag of words” based approaches, classical semantic similarity-based approaches and machine learning based approaches. The third generation offers the best state-of-the-art results despite criticism of it. These approaches rely on neural networks which need to have a large dataset for effective training. To tackle this handicap, we propose a fourth generation (section 3). This contribution relies on the use of pre-trained models for which a dataset for training is not necessary knowing that they are zero-shot-learners. After implementing our architecture, we conducted our experiments with the three main French-speaking models available on Hugging Face. The best model agrees with the human corrector at 96%.",
"year": 2023,
"venue": "JCRAI",
"authors": [
"Idrissa Abdou",
"Thierry Eude"
],
"externalIds": {
"DBLP": "conf/jcrai/AbdouE23",
"DOI": "10.1145/3632971.3632980",
"CorpusId": 267676143
},
"url": "https://www.semanticscholar.org/paper/6dea332877bbc8c832cba4a81ebeeb3d741b69fa",
"referenceCount": 19,
"citationCount": 1,
"influentialCitationCount": 0,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "A Survey on Evaluation of Large Language Models",
"abstract": "Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the society level for better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives. This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate. Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, education, natural and social sciences, agent applications, and other areas. Secondly, we answer the ‘where’ and ‘how’ questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing the performance of LLMs. Then, we summarize the success and failure cases of LLMs in different tasks. Finally, we shed light on several future challenges that lie ahead in LLMs evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLMs evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: https://github.com/MLGroupJLU/LLM-eval-survey",
"year": 2023,
"venue": "ACM Transactions on Intelligent Systems and Technology",
"authors": [
"Yu-Chu Chang",
"Xu Wang",
"Jindong Wang",
"Yuanyi Wu",
"Kaijie Zhu",
"Hao Chen",
"Linyi Yang",
"Xiaoyuan Yi",
"Cunxiang Wang",
"Yidong Wang",
"Weirong Ye",
"Yue Zhang",
"Yi Chang",
"Philip S. Yu",
"Qian Yang",
"Xingxu Xie"
],
"externalIds": {
"DBLP": "journals/tist/ChangWWWYZCYWWYZCYYX24",
"ArXiv": "2307.03109",
"DOI": "10.1145/3641289",
"CorpusId": 259360395
},
"url": "https://www.semanticscholar.org/paper/888728745dbb769e29ed475d4f7661eebe1a71cf",
"referenceCount": 306,
"citationCount": 680,
"influentialCitationCount": 32,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Foundation Model for Endoscopy Video Analysis via Large-scale Self-supervised Pre-train",
"abstract": "Foundation models have exhibited remarkable success in various applications, such as disease diagnosis and text report generation. To date, a foundation model for endoscopic video analysis is still lacking. In this paper, we propose Endo-FM, a foundation model specifically developed using massive endoscopic video data. First, we build a video transformer, which captures both local and global long-range dependencies across spatial and temporal dimensions. Second, we pre-train our transformer model using global and local views via a self-supervised manner, aiming to make it robust to spatial-temporal variations and discriminative across different scenes. To develop the foundation model, we construct a large-scale endoscopy video dataset by combining 9 publicly available datasets and a privately collected dataset from Baoshan Branch of Renji Hospital in Shanghai, China. Our dataset overall consists of over 33K video clips with up to 5 million frames, encompassing various protocols, target organs, and disease types. Our pre-trained Endo-FM can be easily adopted for a given downstream task via fine-tuning by serving as the backbone. With experiments on 3 different types of downstream tasks, including classification, segmentation, and detection, our Endo-FM surpasses the current state-of-the-art (SOTA) self-supervised pre-training and adapter-based transfer learning methods by a significant margin, such as VCL (3.1% F1, 4.8% Dice, and 5.5% F1 for classification, segmentation, and detection) and ST-Adapter (5.9% F1, 9.6% Dice, and 9.9% F1 for classification, segmentation, and detection). Code, datasets, and models are released at https://github.com/med-air/Endo-FM.",
"year": 2023,
"venue": "International Conference on Medical Image Computing and Computer-Assisted Intervention",
"authors": [
"Zhao Wang",
"Chang Liu",
"Shaoting Zhang",
"Q. Dou"
],
"externalIds": {
"DBLP": "conf/miccai/WangLZD23",
"ArXiv": "2306.16741",
"DOI": "10.48550/arXiv.2306.16741",
"CorpusId": 259287248
},
"url": "https://www.semanticscholar.org/paper/25b67873c4bc9afb35224bd984554430fe91d5a7",
"referenceCount": 37,
"citationCount": 26,
"influentialCitationCount": 1,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "A Real-world Dataset and Benchmark For Foundation Model Adaptation in Medical Image Classification",
"abstract": null,
"year": 2023,
"venue": "Scientific Data",
"authors": [
"Dequan Wang",
"Xiaosong Wang",
"Lilong Wang",
"Mengzhang Li",
"Q. Da",
"Xiaoqiang Liu",
"Xiangyu Gao",
"Jun Shen",
"Junjun He",
"Tian Shen",
"Qi Duan",
"Jie Zhao",
"Kang Li",
"Y. Qiao",
"Shaoting Zhang"
],
"externalIds": {
"ArXiv": "2306.09579",
"DBLP": "journals/corr/abs-2306-09579",
"PubMedCentral": "10475041",
"DOI": "10.1038/s41597-023-02460-0",
"CorpusId": 259187859,
"PubMed": "37660106"
},
"url": "https://www.semanticscholar.org/paper/98670c679e888f4c97f4a5e29b93eb3a2c77ab15",
"referenceCount": 35,
"citationCount": 12,
"influentialCitationCount": 1,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine",
"Computer Science"
]
},
{
"title": "On the Challenges and Perspectives of Foundation Models for Medical Image Analysis",
"abstract": "This article discusses the opportunities, applications and future directions of large-scale pretrained models, i.e., foundation models, which promise to significantly improve the analysis of medical images. Medical foundation models have immense potential in solving a wide range of downstream tasks, as they can help to accelerate the development of accurate and robust models, reduce the dependence on large amounts of labeled data, preserve the privacy and confidentiality of patient data. Specifically, we illustrate the \"spectrum\" of medical foundation models, ranging from general imaging models, modality-specific models, to organ/task-specific models, and highlight their challenges, opportunities and applications. We also discuss how foundation models can be leveraged in downstream medical tasks to enhance the accuracy and efficiency of medical image analysis, leading to more precise diagnosis and treatment decisions.",
"year": 2023,
"venue": "Medical Image Anal.",
"authors": [
"Shaoting Zhang",
"Dimitris N. Metaxas"
],
"externalIds": {
"DBLP": "journals/mia/ZhangM24",
"ArXiv": "2306.05705",
"DOI": "10.48550/arXiv.2306.05705",
"CorpusId": 259129811,
"PubMed": "37857067"
},
"url": "https://www.semanticscholar.org/paper/fed150a219f9c31bdb4920e615c7c9264c634736",
"referenceCount": 87,
"citationCount": 54,
"influentialCitationCount": 1,
"isOpenAccess": true,
"fieldsOfStudy": [
"Engineering",
"Computer Science",
"Medicine"
]
},
{
"title": "LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day",
"abstract": "Conversational generative AI has demonstrated remarkable promise for empowering biomedical practitioners, but current investigations focus on unimodal text. Multimodal conversational AI has seen rapid progress by leveraging billions of image-text pairs from the public web, but such general-domain vision-language models still lack sophistication in understanding and conversing about biomedical images. In this paper, we propose a cost-efficient approach for training a vision-language conversational assistant that can answer open-ended research questions of biomedical images. The key idea is to leverage a large-scale, broad-coverage biomedical figure-caption dataset extracted from PubMed Central, use GPT-4 to self-instruct open-ended instruction-following data from the captions, and then fine-tune a large general-domain vision-language model using a novel curriculum learning method. Specifically, the model first learns to align biomedical vocabulary using the figure-caption pairs as is, then learns to master open-ended conversational semantics using GPT-4 generated instruction-following data, broadly mimicking how a layperson gradually acquires biomedical knowledge. This enables us to train a Large Language and Vision Assistant for BioMedicine (LLaVA-Med) in less than 15 hours (with eight A100s). LLaVA-Med exhibits excellent multimodal conversational capability and can follow open-ended instruction to assist with inquiries about a biomedical image. On three standard biomedical visual question answering datasets, LLaVA-Med outperforms previous supervised state-of-the-art on certain metrics. To facilitate biomedical multimodal research, we will release our instruction-following data and the LLaVA-Med model.",
"year": 2023,
"venue": "Neural Information Processing Systems",
"authors": [
"Chunyuan Li",
"Cliff Wong",
"Sheng Zhang",
"Naoto Usuyama",
"Haotian Liu",
"Jianwei Yang",
"Tristan Naumann",
"Hoifung Poon",
"Jianfeng Gao"
],
"externalIds": {
"DBLP": "journals/corr/abs-2306-00890",
"ArXiv": "2306.00890",
"DOI": "10.48550/arXiv.2306.00890",
"CorpusId": 258999820
},
"url": "https://www.semanticscholar.org/paper/f22d71c7ce9720ba1f717a4f1181488200e78198",
"referenceCount": 46,
"citationCount": 319,
"influentialCitationCount": 45,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "HuaTuo: Tuning LLaMA Model with Chinese Medical Knowledge",
"abstract": "Large Language Models (LLMs), such as the LLaMA model, have demonstrated their effectiveness in various general-domain natural language processing (NLP) tasks. Nevertheless, LLMs have not yet performed optimally in biomedical domain tasks due to the need for medical expertise in the responses. In response to this challenge, we propose HuaTuo, a LLaMA-based model that has been supervised-fine-tuned with generated QA (Question-Answer) instances. The experimental results demonstrate that HuaTuo generates responses that possess more reliable medical knowledge. Our proposed HuaTuo model is accessible at https://github.com/SCIR-HI/Huatuo-Llama-Med-Chinese.",
"year": 2023,
"venue": "arXiv.org",
"authors": [
"Hao Wang",
"Chi-Liang Liu",
"Nuwa Xi",
"Zewen Qiang",
"Sendong Zhao",
"Bing Qin",
"Ting Liu"
],
"externalIds": {
"ArXiv": "2304.06975",
"DBLP": "journals/corr/abs-2304-06975",
"DOI": "10.48550/arXiv.2304.06975",
"CorpusId": 258170497
},
"url": "https://www.semanticscholar.org/paper/302ee27524a717ddc21f332ca634b9211c6ec6aa",
"referenceCount": 13,
"citationCount": 133,
"influentialCitationCount": 20,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "The SCARE 2023 guideline: updating consensus Surgical CAse REport (SCARE) guidelines",
"abstract": "Background: The Surgical CAse REport (SCARE) guidelines were first published in 2016 as a tool for surgeons to document and report their surgical cases in a standardised and comprehensive manner. However, with advances in technology and changes in the healthcare landscape, it is important to revise and update these guidelines to ensure they remain relevant and valuable for surgeons. Materials and methods: The updated guidelines were produced through a Delphi consensus exercise. Members of the SCARE 2020 guidelines Delphi group, editorial board members, and peer reviewers were invited to participate. Potential contributors were contacted by e-mail. An online survey was completed to indicate their agreement with the proposed changes to the guideline items. Results: A total of 54 participants were invited to participate and 44 (81.5%) completed the survey. There was a high degree of agreement among reviewers, with 36 items (83.7%) meeting the threshold for inclusion. Conclusion: Through a completed Delphi consensus exercise we present the SCARE 2023 guidelines. This will provide surgeons with a comprehensive and up-to-date tool for documenting and reporting their surgical cases while highlighting the importance of patient-centred care.",
"year": 2023,
"venue": "International Journal of Surgery",
"authors": [
"C. Sohrabi",
"Ginimol Mathew",
"Nicola Maria",
"Ahmed Kerwan",
"T. Franchi",
"R. Agha"
],
"externalIds": {
"PubMedCentral": "10389401",
"DOI": "10.1097/JS9.0000000000000373",
"CorpusId": 257923306,
"PubMed": "37013953"
},
"url": "https://www.semanticscholar.org/paper/e0b886f39c49b4e525b673d2139a0f6fb5b9c04f",
"referenceCount": 27,
"citationCount": 1335,
"influentialCitationCount": 57,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "Large language models encode clinical knowledge",
"abstract": null,
"year": 2022,
"venue": "Nature",
"authors": [
"K. Singhal",
"Shekoofeh Azizi",
"T. Tu",
"S. Mahdavi",
"Jason Wei",
"Hyung Won Chung",
"Nathan Scales",
"A. Tanwani",
"H. Cole-Lewis",
"S. Pfohl",
"P. Payne",
"Martin G. Seneviratne",
"P. Gamble",
"C. Kelly",
"Nathaneal Scharli",
"Aakanksha Chowdhery",
"P. A. Mansfield",
"B. A. Y. Arcas",
"D. Webster",
"Greg S. Corrado",
"Yossi Matias",
"K. Chou",
"Juraj Gottweis",
"Nenad Tomašev",
"Yun Liu",
"A. Rajkomar",
"J. Barral",
"Christopher Semturs",
"A. Karthikesalingam",
"Vivek Natarajan"
],
"externalIds": {
"ArXiv": "2212.13138",
"PubMedCentral": "10396962",
"DBLP": "journals/corr/abs-2212-13138",
"DOI": "10.1038/s41586-023-06291-2",
"CorpusId": 255124952,
"PubMed": "37438534"
},
"url": "https://www.semanticscholar.org/paper/6052486bc9144dc1730c12bf35323af3792a1fd0",
"referenceCount": 111,
"citationCount": 1349,
"influentialCitationCount": 78,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Medicine"
]
},
{
"title": "GLM-130B: An Open Bilingual Pre-trained Model",
"abstract": "We introduce GLM-130B, a bilingual (English and Chinese) pre-trained language model with 130 billion parameters. It is an attempt to open-source a 100B-scale model at least as good as GPT-3 (davinci) and unveil how models of such a scale can be successfully pre-trained. Over the course of this effort, we face numerous unexpected technical and engineering challenges, particularly on loss spikes and divergence. In this paper, we introduce the training process of GLM-130B including its design choices, training strategies for both efficiency and stability, and engineering efforts. The resultant GLM-130B model offers significant outperformance over GPT-3 175B (davinci) on a wide range of popular English benchmarks while the performance advantage is not observed in OPT-175B and BLOOM-176B. It also consistently and significantly outperforms ERNIE TITAN 3.0 260B -- the largest Chinese language model -- across related benchmarks. Finally, we leverage a unique scaling property of GLM-130B to reach INT4 quantization without post training, with almost no performance loss, making it the first among 100B-scale models and more importantly, allowing its effective inference on 4$\\times$RTX 3090 (24G) or 8$\\times$RTX 2080 Ti (11G) GPUs, the most affordable GPUs required for using 100B-scale models. The GLM-130B model weights are publicly accessible and its code, training logs, related toolkit, and lessons learned are open-sourced at \\url{https://github.com/THUDM/GLM-130B/}.",
"year": 2022,
"venue": "International Conference on Learning Representations",
"authors": [
"Aohan Zeng",
"Xiao Liu",
"Zhengxiao Du",
"Zihan Wang",
"Hanyu Lai",
"Ming Ding",
"Zhuoyi Yang",
"Yifan Xu",
"Wendi Zheng",
"Xiao Xia",
"W. Tam",
"Zixuan Ma",
"Yufei Xue",
"Jidong Zhai",
"Wenguang Chen",
"P. Zhang",
"Yuxiao Dong",
"Jie Tang"
],
"externalIds": {
"DBLP": "journals/corr/abs-2210-02414",
"ArXiv": "2210.02414",
"DOI": "10.48550/arXiv.2210.02414",
"CorpusId": 252715691
},
"url": "https://www.semanticscholar.org/paper/1d26c947406173145a4665dd7ab255e03494ea28",
"referenceCount": 154,
"citationCount": 946,
"influentialCitationCount": 113,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "CBLUE: A Chinese Biomedical Language Understanding Evaluation Benchmark",
"abstract": "Artificial Intelligence (AI), along with the recent progress in biomedical language understanding, is gradually offering great promise for medical practice. With the development of biomedical language understanding benchmarks, AI applications are widely used in the medical field. However, most benchmarks are limited to English, which makes it challenging to replicate many of the successes in English for other languages. To facilitate research in this direction, we collect real-world biomedical data and present the first Chinese Biomedical Language Understanding Evaluation (CBLUE) benchmark: a collection of natural language understanding tasks including named entity recognition, information extraction, clinical diagnosis normalization, single-sentence/sentence-pair classification, and an associated online platform for model evaluation, comparison, and analysis. To establish evaluation on these tasks, we report empirical results with the current 11 pre-trained Chinese models, and experimental results show that state-of-the-art neural models perform by far worse than the human ceiling.",
"year": 2021,
"venue": "Annual Meeting of the Association for Computational Linguistics",
"authors": [
"Ningyu Zhang",
"Zhen Bi",
"Xiaozhuan Liang",
"Lei Li",
"Xiang Chen",
"Shumin Deng",
"Luoqiu Li",
"Xin Xie",
"Hongbin Ye",
"Xin Shang",
"Kangping Yin",
"Chuanqi Tan",
"Jian Xu",
"Mosha Chen",
"Fei Huang",
"Luo Si",
"Yuan Ni",
"G. Xie",
"Zhifang Sui",
"Baobao Chang",
"Hui Zong",
"Zheng Yuan",
"Linfeng Li",
"Jun Yan",
"Hongying Zan",
"Kunli Zhang",
"Huajun Chen",
"Buzhou Tang",
"Qingcai Chen"
],
"externalIds": {
"DBLP": "journals/corr/abs-2106-08087",
"ArXiv": "2106.08087",
"ACL": "2022.acl-long.544",
"DOI": "10.18653/v1/2022.acl-long.544",
"CorpusId": 235415270
},
"url": "https://www.semanticscholar.org/paper/cb2a38002e2f346b084e7dd6c9fcf2bb45de5a9e",
"referenceCount": 49,
"citationCount": 141,
"influentialCitationCount": 13,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Automatic grading and hinting in open-ended text questions",
"abstract": null,
"year": 2020,
"venue": "Cognitive Systems Research",
"authors": [
"O. Sychev",
"A. Anikin",
"A. Prokudin"
],
"externalIds": {
"MAG": "2975540156",
"DBLP": "journals/cogsr/SychevAP20",
"DOI": "10.1016/j.cogsys.2019.09.025",
"CorpusId": 203570016
},
"url": "https://www.semanticscholar.org/paper/fd33cb99f6381236a6bd2615c98d43e06d25977a",
"referenceCount": 26,
"citationCount": 22,
"influentialCitationCount": 2,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Integrating exosomal microRNAs and electronic health data improved tuberculosis diagnosis",
"abstract": null,
"year": 2019,
"venue": "EBioMedicine",
"authors": [
"Xuejiao Hu",
"Shun Liao",
"H. Bai",
"Li-juan Wu",
"Minjin Wang",
"Qian Wu",
"Juan Zhou",
"Lin Jiao",
"Xuerong Chen",
"Yanhong Zhou",
"Xiao-jun Lu",
"B. Ying",
"Zhaolei Zhang",
"Weimin Li"
],
"externalIds": {
"MAG": "2913725813",
"PubMedCentral": "6413343",
"DOI": "10.1016/j.ebiom.2019.01.023",
"CorpusId": 73452074,
"PubMed": "30745169"
},
"url": "https://www.semanticscholar.org/paper/9143d4430be21e01112a4a6f52bf1ad95ee91442",
"referenceCount": 43,
"citationCount": 58,
"influentialCitationCount": 3,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "Automatic Evaluation of Machine Translation Quality Using Longest Common Subsequence and Skip-Bigram Statistics",
"abstract": "In this paper we describe two new objective automatic evaluation methods for machine translation. The first method is based on longest common subsequence between a candidate translation and a set of reference translations. Longest common subsequence takes into account sentence level structure similarity naturally and identifies longest co-occurring in-sequence n-grams automatically. The second method relaxes strict n-gram matching to skip-bigram matching. Skip-bigram is any pair of words in their sentence order. Skip-bigram cooccurrence statistics measure the overlap of skip-bigrams between a candidate translation and a set of reference translations. The empirical results show that both methods correlate with human judgments very well in both adequacy and fluency.",
"year": 2004,
"venue": "Annual Meeting of the Association for Computational Linguistics",
"authors": [
"Chin-Yew Lin",
"F. Och"
],
"externalIds": {
"ACL": "P04-1077",
"MAG": "2108325777",
"DBLP": "conf/acl/LinO04",
"DOI": "10.3115/1218955.1219032",
"CorpusId": 1586456
},
"url": "https://www.semanticscholar.org/paper/74d2ad28be32a5802a1b15d4e9a430db2234a3dd",
"referenceCount": 21,
"citationCount": 742,
"influentialCitationCount": 77,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Bleu: a Method for Automatic Evaluation of Machine Translation",
"abstract": "Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations.",
"year": 2002,
"venue": "Annual Meeting of the Association for Computational Linguistics",
"authors": [
"K. Papineni",
"Salim Roukos",
"T. Ward",
"Wei-Jing Zhu"
],
"externalIds": {
"DBLP": "conf/acl/PapineniRWZ02",
"MAG": "2101105183",
"ACL": "P02-1040",
"DOI": "10.3115/1073083.1073135",
"CorpusId": 11080756
},
"url": "https://www.semanticscholar.org/paper/d7da009f457917aa381619facfa5ffae9329a6e9",
"referenceCount": 5,
"citationCount": 24976,
"influentialCitationCount": 5731,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Opencompass: A universal evaluation platform for foundation models",
"abstract": null,
"year": 2023,
"venue": "OpenCompass",
"authors": [],
"externalIds": null,
"url": null,
"referenceCount": null,
"citationCount": null,
"influentialCitationCount": null,
"isOpenAccess": null,
"fieldsOfStudy": null
},
{
"title": "MMbench: Is your",
"abstract": null,
"year": 2023,
"venue": "",
"authors": [],
"externalIds": null,
"url": null,
"referenceCount": null,
"citationCount": null,
"influentialCitationCount": null,
"isOpenAccess": null,
"fieldsOfStudy": null
},
{
"title": "MLEC-QA: A Chinese Multi-Choice Biomedical Question Answering Dataset",
"abstract": "Question Answering (QA) has been successfully applied in scenarios of human-computer interaction such as chatbots and search engines. However, for the specific biomedical domain, QA systems are still immature due to expert-annotated datasets being limited by category and scale. In this paper, we present MLEC-QA, the largest-scale Chinese multi-choice biomedical QA dataset, collected from the National Medical Licensing Examination in China. The dataset is composed of five subsets with 136,236 biomedical multi-choice questions with extra materials (images or tables) annotated by human experts, and first covers the following biomedical sub-fields: Clinic, Stomatology, Public Health, Traditional Chinese Medicine, and Traditional Chinese Medicine Combined with Western Medicine. We implement eight representative control methods and open-domain QA methods as baselines. Experimental results demonstrate that even the current best model can only achieve accuracies between 40% to 55% on five subsets, especially performing poorly on questions that require sophisticated reasoning ability. We hope the release of the MLEC-QA dataset can serve as a valuable resource for research and evaluation in open-domain QA, and also make advances for biomedical QA systems.",
"year": 2021,
"venue": "Conference on Empirical Methods in Natural Language Processing",
"authors": [
"Jing Li",
"Shangping Zhong",
"Kaizhi Chen"
],
"externalIds": {
"DBLP": "conf/emnlp/LiZC21a",
"ACL": "2021.emnlp-main.698",
"DOI": "10.18653/v1/2021.emnlp-main.698",
"CorpusId": 243865589
},
"url": "https://www.semanticscholar.org/paper/31acfba3a19f780a3239925ff12a7a4047d6a705",
"referenceCount": 37,
"citationCount": 24,
"influentialCitationCount": 3,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
}
]
},
"Xiwu: A Basis Flexible and Learnable LLM for High Energy Physics": {
"paper_title": "Xiwu: A Basis Flexible and Learnable LLM for High Energy Physics",
"arxiv_id": "2404.08001",
"authors": [
"Zhengde Zhang",
"Yiyu Zhang",
"Haodong Yao",
"Jianwen Luo",
"Rui Zhao",
"Bo Huang",
"Jiameng Zhao",
"Yipu Liao",
"Ke Li",
"Lina Zhao",
"Jun Cao",
"Fazhi Qi",
"Changzheng Yuan"
],
"year": 2024,
"venue": "arXiv.org",
"abstract": "Large Language Models (LLMs) are undergoing a period of rapid updates and changes, with state-of-the-art (SOTA) model frequently being replaced. When applying LLMs to a specific scientific field, it's challenging to acquire unique domain knowledge while keeping the model itself advanced. To address this challenge, a sophisticated large language model system named as Xiwu has been developed, allowing you switch between the most advanced foundation models and quickly teach the model domain knowledge. In this work, we will report on the best practices for applying LLMs in the field of high-energy physics (HEP), including: a seed fission technology is proposed and some data collection and cleaning tools are developed to quickly obtain domain AI-Ready dataset; a just-in-time learning system is implemented based on the vector store technology; an on-the-fly fine-tuning system has been developed to facilitate rapid training under a specified foundation model. The results show that Xiwu can smoothly switch between foundation models such as LLaMA, Vicuna, ChatGLM and Grok-1. The trained Xiwu model is significantly outperformed the benchmark model on the HEP knowledge question-and-answering and code generation. This strategy significantly enhances the potential for growth of our model's performance, with the hope of surpassing GPT-4 as it evolves with the development of open-source models. This work provides a customized LLM for the field of HEP, while also offering references for applying LLM to other fields, the corresponding codes are available on Github.",
"references": [
{
"title": "SciGLM: Training Scientific Language Models with Self-Reflective Instruction Annotation and Tuning",
"abstract": "Large Language Models (LLMs) have shown promise in assisting scientific discovery. However, such applications are currently limited by LLMs' deficiencies in understanding intricate scientific concepts, deriving symbolic equations, and solving advanced numerical calculations. To bridge these gaps, we introduce SciGLM, a suite of scientific language models able to conduct college-level scientific reasoning. Central to our approach is a novel self-reflective instruction annotation framework to address the data scarcity challenge in the science domain. This framework leverages existing LLMs to generate step-by-step reasoning for unlabelled scientific questions, followed by a process of self-reflective critic-and-revise. Applying this framework, we curated SciInstruct, a diverse and high-quality dataset encompassing physics, chemistry, math, and formal proofs. We fine-tuned the ChatGLM family of language models with SciInstruct, enhancing their scientific and mathematical reasoning capabilities. Remarkably, the SciGLM consistently improves both the base model (ChatGLM3-6B-Base) by 4.87% and larger-scale models (32B) by 2.67%, without sacrificing the language understanding capabilities of the base model. This makes SciGLM a suitable foundational model to facilitate diverse scientific discovery tasks. For the benefit of the wider research community, we release SciInstruct, and SciGLM, alongside a self-reflective framework and fine-tuning code at https://github.com/THUDM/SciGLM.",
"year": 2024,
"venue": "arXiv.org",
"authors": [
"Dan Zhang",
"Ziniu Hu",
"Sining Zhoubian",
"Zhengxiao Du",
"Kaiyu Yang",
"Zihan Wang",
"Yisong Yue",
"Yuxiao Dong",
"Jie Tang"
],
"externalIds": {
"ArXiv": "2401.07950",
"DBLP": "journals/corr/abs-2401-07950",
"DOI": "10.48550/arXiv.2401.07950",
"CorpusId": 266999634
},
"url": "https://www.semanticscholar.org/paper/c6e162aedf6a5ab0135e3b991577d77ca06673f9",
"referenceCount": 66,
"citationCount": 17,
"influentialCitationCount": 3,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Feasibility of Using the Privacy-preserving Large Language Model Vicuna for Labeling Radiology Reports.",
"abstract": "Background Large language models (LLMs) such as ChatGPT, though proficient in many text-based tasks, are not suitable for use with radiology reports due to patient privacy constraints. Purpose To test the feasibility of using an alternative LLM (Vicuna-13B) that can be run locally for labeling radiography reports. Materials and Methods Chest radiography reports from the MIMIC-CXR and National Institutes of Health (NIH) data sets were included in this retrospective study. Reports were examined for 13 findings. Outputs reporting the presence or absence of the 13 findings were generated by Vicuna by using a single-step or multistep prompting strategy (prompts 1 and 2, respectively). Agreements between Vicuna outputs and CheXpert and CheXbert labelers were assessed using Fleiss κ. Agreement between Vicuna outputs from three runs under a hyperparameter setting that introduced some randomness (temperature, 0.7) was also assessed. The performance of Vicuna and the labelers was assessed in a subset of 100 NIH reports annotated by a radiologist with use of area under the receiver operating characteristic curve (AUC). Results A total of 3269 reports from the MIMIC-CXR data set (median patient age, 68 years [IQR, 59-79 years]; 161 male patients) and 25 596 reports from the NIH data set (median patient age, 47 years [IQR, 32-58 years]; 1557 male patients) were included. Vicuna outputs with prompt 2 showed, on average, moderate to substantial agreement with the labelers on the MIMIC-CXR (κ median, 0.57 [IQR, 0.45-0.66] with CheXpert and 0.64 [IQR, 0.45-0.68] with CheXbert) and NIH (κ median, 0.52 [IQR, 0.41-0.65] with CheXpert and 0.55 [IQR, 0.41-0.74] with CheXbert) data sets, respectively. Vicuna with prompt 2 performed at par (median AUC, 0.84 [IQR, 0.74-0.93]) with both labelers on nine of 11 findings. Conclusion In this proof-of-concept study, outputs of the LLM Vicuna reporting the presence or absence of 13 findings on chest radiography reports showed moderate to substantial agreement with existing labelers. © RSNA, 2023 Supplemental material is available for this article. See also the editorial by Cai in this issue.",
"year": 2023,
"venue": "Radiology",
"authors": [
"Pritam Mukherjee",
"Benjamin Hou",
"Ricardo Bigolin Lanfredi",
"Ronald M. Summers"
],
"externalIds": {
"DOI": "10.1148/radiol.231147",
"CorpusId": 263802197,
"PubMed": "37815442"
},
"url": "https://www.semanticscholar.org/paper/e6614101207d8f532f42deafac991bf8aa12eef1",
"referenceCount": 13,
"citationCount": 31,
"influentialCitationCount": 0,
"isOpenAccess": false,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "Nougat: Neural Optical Understanding for Academic Documents",
"abstract": "Scientific knowledge is predominantly stored in books and scientific journals, often in the form of PDFs. However, the PDF format leads to a loss of semantic information, particularly for mathematical expressions. We propose Nougat (Neural Optical Understanding for Academic Documents), a Visual Transformer model that performs an Optical Character Recognition (OCR) task for processing scientific documents into a markup language, and demonstrate the effectiveness of our model on a new dataset of scientific documents. The proposed approach offers a promising solution to enhance the accessibility of scientific knowledge in the digital age, by bridging the gap between human-readable documents and machine-readable text. We release the models and code to accelerate future work on scientific text recognition.",
"year": 2023,
"venue": "International Conference on Learning Representations",
"authors": [
"Lukas Blecher",
"Guillem Cucurull",
"Thomas Scialom",
"Robert Stojnic"
],
"externalIds": {
"ArXiv": "2308.13418",
"DBLP": "journals/corr/abs-2308-13418",
"DOI": "10.48550/arXiv.2308.13418",
"CorpusId": 261214750
},
"url": "https://www.semanticscholar.org/paper/4b4a329e54325e80be50cdc77e274c6e9fd5ade4",
"referenceCount": 48,
"citationCount": 57,
"influentialCitationCount": 17,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct",
"abstract": "Large language models (LLMs), such as GPT-4, have shown remarkable performance in natural language processing (NLP) tasks, including challenging mathematical reasoning. However, most existing open-source models are only pre-trained on large-scale internet data and without math-related optimization. In this paper, we present WizardMath, which enhances the mathematical reasoning abilities of Llama-2, by applying our proposed Reinforcement Learning from Evol-Instruct Feedback (RLEIF) method to the domain of math. Through extensive experiments on two mathematical reasoning benchmarks, namely GSM8k and MATH, we reveal the extraordinary capabilities of our model. WizardMath surpasses all other open-source LLMs by a substantial margin. Furthermore, our model even outperforms ChatGPT-3.5, Claude Instant-1, PaLM-2 and Minerva on GSM8k, simultaneously surpasses Text-davinci-002, PaLM-1 and GPT-3 on MATH. More details and model weights are public at https://github.com/nlpxucan/WizardLM and https://huggingface.co/WizardLM.",
"year": 2023,
"venue": "arXiv.org",
"authors": [
"Haipeng Luo",
"Qingfeng Sun",
"Can Xu",
"Pu Zhao",
"Jian-Guang Lou",
"Chongyang Tao",
"Xiubo Geng",
"Qingwei Lin",
"Shifeng Chen",
"Dongmei Zhang"
],
"externalIds": {
"ArXiv": "2308.09583",
"DBLP": "journals/corr/abs-2308-09583",
"DOI": "10.48550/arXiv.2308.09583",
"CorpusId": 261030818
},
"url": "https://www.semanticscholar.org/paper/dd18782960f9ee4c66b79e1518b342ad3f8d19e7",
"referenceCount": 107,
"citationCount": 251,
"influentialCitationCount": 48,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Llama 2: Open Foundation and Fine-Tuned Chat Models",
"abstract": "In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for dialogue use cases. Our models outperform open-source chat models on most benchmarks we tested, and based on our human evaluations for helpfulness and safety, may be a suitable substitute for closed-source models. We provide a detailed description of our approach to fine-tuning and safety improvements of Llama 2-Chat in order to enable the community to build on our work and contribute to the responsible development of LLMs.",
"year": 2023,
"venue": "arXiv.org",
"authors": [
"Hugo Touvron",
"Louis Martin",
"Kevin R. Stone",
"Peter Albert",
"Amjad Almahairi",
"Yasmine Babaei",
"Nikolay Bashlykov",
"Soumya Batra",
"Prajjwal Bhargava",
"Shruti Bhosale",
"D. Bikel",
"Lukas Blecher",
"Cristian Cantón Ferrer",
"Moya Chen",
"Guillem Cucurull",
"David Esiobu",
"Jude Fernandes",
"Jeremy Fu",
"Wenyin Fu",
"Brian Fuller",
"Cynthia Gao",
"Vedanuj Goswami",
"Naman Goyal",
"A. Hartshorn",
"Saghar Hosseini",
"Rui Hou",
"Hakan Inan",
"Marcin Kardas",
"Viktor Kerkez",
"Madian Khabsa",
"Isabel M. Kloumann",
"A. Korenev",
"Punit Singh Koura",
"Marie-Anne Lachaux",
"Thibaut Lavril",
"Jenya Lee",
"Diana Liskovich",
"Yinghai Lu",
"Yuning Mao",
"Xavier Martinet",
"Todor Mihaylov",
"Pushkar Mishra",
"Igor Molybog",
"Yixin Nie",
"Andrew Poulton",
"Jeremy Reizenstein",
"Rashi Rungta",
"Kalyan Saladi",
"Alan Schelten",
"Ruan Silva",
"Eric Michael Smith",
"R. Subramanian",
"Xia Tan",
"Binh Tang",
"Ross Taylor",
"Adina Williams",
"Jian Xiang Kuan",
"Puxin Xu",
"Zhengxu Yan",
"Iliyan Zarov",
"Yuchen Zhang",
"Angela Fan",
"Melanie Kambadur",
"Sharan Narang",
"Aurelien Rodriguez",
"Robert Stojnic",
"Sergey Edunov",
"Thomas Scialom"
],
"externalIds": {
"ArXiv": "2307.09288",
"DBLP": "journals/corr/abs-2307-09288",
"CorpusId": 259950998
},
"url": "https://www.semanticscholar.org/paper/104b0bb1da562d53cbda87aec79ef6a2827d191a",
"referenceCount": 131,
"citationCount": 7147,
"influentialCitationCount": 1094,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Flacuna: Unleashing the Problem Solving Power of Vicuna using FLAN Fine-Tuning",
"abstract": "Recently, the release of INSTRUCTEVAL has provided valuable insights into the performance of large language models (LLMs) that utilize encoder-decoder or decoder-only architecture. Interestingly, despite being introduced four years ago, T5-based LLMs, such as FLAN-T5, continue to outperform the latest decoder-based LLMs, such as LLAMA and VICUNA, on tasks that require general problem-solving skills. This performance discrepancy can be attributed to three key factors: (1) Pre-training data, (2) Backbone architecture, and (3) Instruction dataset. In this technical report, our main focus is on investigating the impact of the third factor by leveraging VICUNA, a large language model based on LLAMA, which has undergone fine-tuning on ChatGPT conversations. To achieve this objective, we fine-tuned VICUNA using a customized instruction dataset collection called FLANMINI. This collection includes a subset of the large-scale instruction dataset known as FLAN, as well as various code-related datasets and conversational datasets derived from ChatGPT/GPT-4. This dataset comprises a large number of tasks that demand problem-solving skills. Our experimental findings strongly indicate that the enhanced problem-solving abilities of our model, FLACUNA, are obtained through fine-tuning VICUNA on the FLAN dataset, leading to significant improvements across numerous benchmark datasets in INSTRUCTEVAL. FLACUNA is publicly available at https://huggingface.co/declare-lab/flacuna-13b-v1.0.",
"year": 2023,
"venue": "arXiv.org",
"authors": [
"Deepanway Ghosal",
"Yew Ken Chia",
"Navonil Majumder",
"Soujanya Poria"
],
"externalIds": {
"ArXiv": "2307.02053",
"DBLP": "journals/corr/abs-2307-02053",
"DOI": "10.48550/arXiv.2307.02053",
"CorpusId": 259342582
},
"url": "https://www.semanticscholar.org/paper/f356e977c5ba1c5341a48d18db2d8c1658bd98ed",
"referenceCount": 16,
"citationCount": 11,
"influentialCitationCount": 4,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Judging LLM-as-a-judge with MT-Bench and Chatbot Arena",
"abstract": "Evaluating large language model (LLM) based chat assistants is challenging due to their broad capabilities and the inadequacy of existing benchmarks in measuring human preferences. To address this, we explore using strong LLMs as judges to evaluate these models on more open-ended questions. We examine the usage and limitations of LLM-as-a-judge, including position, verbosity, and self-enhancement biases, as well as limited reasoning ability, and propose solutions to mitigate some of them. We then verify the agreement between LLM judges and human preferences by introducing two benchmarks: MT-bench, a multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our results reveal that strong LLM judges like GPT-4 can match both controlled and crowdsourced human preferences well, achieving over 80% agreement, the same level of agreement between humans. Hence, LLM-as-a-judge is a scalable and explainable way to approximate human preferences, which are otherwise very expensive to obtain. Additionally, we show our benchmark and traditional benchmarks complement each other by evaluating several variants of LLaMA and Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with human preferences are publicly available at https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.",
"year": 2023,
"venue": "Neural Information Processing Systems",
"authors": [
"Lianmin Zheng",
"Wei-Lin Chiang",
"Ying Sheng",
"Siyuan Zhuang",
"Zhanghao Wu",
"Yonghao Zhuang",
"Zi Lin",
"Zhuohan Li",
"Dacheng Li",
"E. Xing",
"Haotong Zhang",
"Joseph Gonzalez",
"Ion Stoica"
],
"externalIds": {
"ArXiv": "2306.05685",
"DBLP": "journals/corr/abs-2306-05685",
"DOI": "10.48550/arXiv.2306.05685",
"CorpusId": 259129398
},
"url": "https://www.semanticscholar.org/paper/a0a79dad89857a96f8f71b14238e5237cbfc4787",
"referenceCount": 59,
"citationCount": 2060,
"influentialCitationCount": 337,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "ToolAlpaca: Generalized Tool Learning for Language Models with 3000 Simulated Cases",
"abstract": "Enabling large language models to utilize real-world tools effectively is crucial for achieving embodied intelligence. Existing approaches to tool learning have either primarily relied on extremely large language models, such as GPT-4, to attain generalized tool-use abilities in a zero-shot manner, or utilized supervised learning to train limited scopes of tools on compact models. However, it remains uncertain whether smaller language models can achieve generalized tool-use abilities without tool-specific training. To address this question, this paper introduces ToolAlpaca, a novel framework designed to automatically generate a diverse tool-use corpus and learn generalized tool-use abilities on compact language models with minimal human intervention. Specifically, ToolAlpaca first automatically creates a highly diversified tool-use corpus by building a multi-agent simulation environment. The corpus contains 3938 tool-use instances from more than 400 real-world tool APIs spanning 50 distinct categories. Subsequently, the constructed corpus is employed to fine-tune compact language models, resulting in two models, namely ToolAlpaca-7B and ToolAlpaca-13B, respectively. Finally, we evaluate the ability of these models to utilize previously unseen tools without specific training. Experimental results demonstrate that ToolAlpaca achieves effective generalized tool-use capabilities comparable to those of extremely large language models like GPT-3.5, demonstrating that learning generalized tool-use ability is feasible for compact language models.",
"year": 2023,
"venue": "arXiv.org",
"authors": [
"Qiaoyu Tang",
"Ziliang Deng",
"Hongyu Lin",
"Xianpei Han",
"Qiao Liang",
"Boxi Cao",
"Le Sun"
],
"externalIds": {
"ArXiv": "2306.05301",
"DBLP": "journals/corr/abs-2306-05301",
"DOI": "10.48550/arXiv.2306.05301",
"CorpusId": 259108190
},
"url": "https://www.semanticscholar.org/paper/455866ca838f356b53a7e3e5b344834f9e93dbbc",
"referenceCount": 31,
"citationCount": 109,
"influentialCitationCount": 18,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "PyTorch FSDP: Experiences on Scaling Fully Sharded Data Parallel",
"abstract": "It is widely acknowledged that large models have the potential to deliver superior performance across a broad range of domains. Despite the remarkable progress made in the field of machine learning systems research, which has enabled the development and exploration of large models, such abilities remain confined to a small group of advanced users and industry leaders, resulting in an implicit technical barrier for the wider community to access and leverage these technologies. In this paper, we introduce PyTorch Fully Sharded Data Parallel (FSDP) as an industry-grade solution for large model training. FSDP has been closely co-designed with several key PyTorch core components including Tensor implementation, dispatcher system, and CUDA memory caching allocator, to provide non-intrusive user experiences and high training efficiency. Additionally, FSDP natively incorporates a range of techniques and settings to optimize resource utilization across a variety of hardware configurations. The experimental results demonstrate that FSDP is capable of achieving comparable performance to Distributed Data Parallel while providing support for significantly larger models with near-linear scalability in terms of TFLOPS.",
"year": 2023,
"venue": "Proceedings of the VLDB Endowment",
"authors": [
"Yanli Zhao",
"A. Gu",
"R. Varma",
"Liangchen Luo",
"Chien-chin Huang",
"Min Xu",
"Less Wright",
"Hamid Shojanazeri",
"Myle Ott",
"Sam Shleifer",
"Alban Desmaison",
"Can Balioglu",
"Bernard Nguyen",
"Geeta Chauhan",
"Y. Hao",
"Shen Li"
],
"externalIds": {
"DBLP": "journals/corr/abs-2304-11277",
"ArXiv": "2304.11277",
"DOI": "10.14778/3611540.3611569",
"CorpusId": 258297871
},
"url": "https://www.semanticscholar.org/paper/a0e7c31d723608e03f30fc92ffc2a604a7a039da",
"referenceCount": 36,
"citationCount": 165,
"influentialCitationCount": 19,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "MiniGPT-4: Enhancing Vision-Language Understanding with Advanced Large Language Models",
"abstract": "The recent GPT-4 has demonstrated extraordinary multi-modal abilities, such as directly generating websites from handwritten text and identifying humorous elements within images. These features are rarely observed in previous vision-language models. However, the technical details behind GPT-4 continue to remain undisclosed. We believe that the enhanced multi-modal generation capabilities of GPT-4 stem from the utilization of sophisticated large language models (LLM). To examine this phenomenon, we present MiniGPT-4, which aligns a frozen visual encoder with a frozen advanced LLM, Vicuna, using one projection layer. Our work, for the first time, uncovers that properly aligning the visual features with an advanced large language model can possess numerous advanced multi-modal abilities demonstrated by GPT-4, such as detailed image description generation and website creation from hand-drawn drafts. Furthermore, we also observe other emerging capabilities in MiniGPT-4, including writing stories and poems inspired by given images, teaching users how to cook based on food photos, and so on. In our experiment, we found that the model trained on short image caption pairs could produce unnatural language outputs (e.g., repetition and fragmentation). To address this problem, we curate a detailed image description dataset in the second stage to finetune the model, which consequently improves the model's generation reliability and overall usability. Our code, pre-trained model, and collected dataset are available at https://minigpt-4.github.io/.",
"year": 2023,
"venue": "International Conference on Learning Representations",
"authors": [
"Deyao Zhu",
"Jun Chen",
"Xiaoqian Shen",
"Xiang Li",
"Mohamed Elhoseiny"
],
"externalIds": {
"ArXiv": "2304.10592",
"DBLP": "conf/iclr/Zhu0SLE24",
"DOI": "10.48550/arXiv.2304.10592",
"CorpusId": 258291930
},
"url": "https://www.semanticscholar.org/paper/ca6a2bc279be5a3349a22bfd6866ed633d18734b",
"referenceCount": 62,
"citationCount": 1216,
"influentialCitationCount": 215,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "BloombergGPT: A Large Language Model for Finance",
"abstract": "The use of NLP in the realm of financial technology is broad and complex, with applications ranging from sentiment analysis and named entity recognition to question answering. Large Language Models (LLMs) have been shown to be effective on a variety of tasks; however, no LLM specialized for the financial domain has been reported in literature. In this work, we present BloombergGPT, a 50 billion parameter language model that is trained on a wide range of financial data. We construct a 363 billion token dataset based on Bloomberg's extensive data sources, perhaps the largest domain-specific dataset yet, augmented with 345 billion tokens from general purpose datasets. We validate BloombergGPT on standard LLM benchmarks, open financial benchmarks, and a suite of internal benchmarks that most accurately reflect our intended usage. Our mixed dataset training leads to a model that outperforms existing models on financial tasks by significant margins without sacrificing performance on general LLM benchmarks. Additionally, we explain our modeling choices, training process, and evaluation methodology. We release Training Chronicles (Appendix C) detailing our experience in training BloombergGPT.",
"year": 2023,
"venue": "arXiv.org",
"authors": [
"Shijie Wu",
"Ozan Irsoy",
"Steven Lu",
"Vadim Dabravolski",
"Mark Dredze",
"Sebastian Gehrmann",
"P. Kambadur",
"D. Rosenberg",
"Gideon Mann"
],
"externalIds": {
"ArXiv": "2303.17564",
"DBLP": "journals/corr/abs-2303-17564",
"DOI": "10.48550/arXiv.2303.17564",
"CorpusId": 257833842
},
"url": "https://www.semanticscholar.org/paper/83edcfbb206ddad38a971d605da09390604248ea",
"referenceCount": 138,
"citationCount": 505,
"influentialCitationCount": 34,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Economics"
]
},
{
"title": "GPT-4 Technical Report",
"abstract": "We report the development of GPT-4, a large-scale, multimodal model which can accept image and text inputs and produce text outputs. While less capable than humans in many real-world scenarios, GPT-4 exhibits human-level performance on various professional and academic benchmarks, including passing a simulated bar exam with a score around the top 10% of test takers. GPT-4 is a Transformer-based model pre-trained to predict the next token in a document. The post-training alignment process results in improved performance on measures of factuality and adherence to desired behavior. A core component of this project was developing infrastructure and optimization methods that behave predictably across a wide range of scales. This allowed us to accurately predict some aspects of GPT-4's performance based on models trained with no more than 1/1,000th the compute of GPT-4.",
"year": 2023,
"venue": "",
"authors": [
"OpenAI Josh Achiam",
"Steven Adler",
"Sandhini Agarwal",
"Lama Ahmad",
"Ilge Akkaya",
"Florencia Leoni Aleman",
"Diogo Almeida",
"Janko Altenschmidt",
"Sam Altman",
"Shyamal Anadkat",
"Red Avila",
"Igor Babuschkin",
"S. Balaji",
"Valerie Balcom",
"Paul Baltescu",
"Haim-ing Bao",
"Mo Bavarian",
"Jeff Belgum",
"Irwan Bello",
"Jake Berdine",
"Gabriel Bernadett-Shapiro",
"Christopher Berner",
"Lenny Bogdonoff",
"Oleg Boiko",
"Madelaine Boyd",
"Anna-Luisa Brakman",
"Greg Brockman",
"Tim Brooks",
"Miles Brundage",
"Kevin Button",
"Trevor Cai",
"Rosie Campbell",
"Andrew Cann",
"Brittany Carey",
"Chelsea Carlson",
"Rory Carmichael",
"Brooke Chan",
"Che Chang",
"Fotis Chantzis",
"Derek Chen",
"Sully Chen",
"Ruby Chen",
"Jason Chen",
"Mark Chen",
"B. Chess",
"Chester Cho",
"Casey Chu",
"Hyung Won Chung",
"Dave Cummings",
"Jeremiah Currier",
"Yunxing Dai",
"Cory Decareaux",
"Thomas Degry",
"Noah Deutsch",
"Damien Deville",
"Arka Dhar",
"David Dohan",
"Steve Dowling",
"Sheila Dunning",
"Adrien Ecoffet",
"Atty Eleti",
"Tyna Eloundou",
"David Farhi",
"Liam Fedus",
"Niko Felix",
"Sim'on Posada Fishman",
"Juston Forte",
"Is-abella Fulford",
"Leo Gao",
"Elie Georges",
"C. Gibson",
"Vik Goel",
"Tarun Gogineni",
"Gabriel Goh",
"Raphael Gontijo-Lopes",
"Jonathan Gordon",
"Morgan Grafstein",
"Scott Gray",
"Ryan Greene",
"Joshua Gross",
"S. Gu",
"Yufei Guo",
"Chris Hallacy",
"Jesse Han",
"Jeff Harris",
"Yuchen He",
"Mike Heaton",
"Johannes Heidecke",
"Chris Hesse",
"Alan Hickey",
"Wade Hickey",
"Peter Hoeschele",
"Brandon Houghton",
"Kenny Hsu",
"Shengli Hu",
"Xin Hu",
"Joost Huizinga",
"Shantanu Jain",
"Shawn Jain",
"Joanne Jang",
"Angela Jiang",
"Roger Jiang",
"Haozhun Jin",
"Denny Jin",
"Shino Jomoto",
"B. Jonn",
"Heewoo Jun",
"Tomer Kaftan",
"Lukasz Kaiser",
"Ali Kamali",
"I. Kanitscheider",
"N. Keskar",
"Tabarak Khan",
"Logan Kilpatrick",
"Jong Wook Kim",
"Christina Kim",
"Yongjik Kim",
"Hendrik Kirchner",
"J. Kiros",
"Matthew Knight",
"Daniel Kokotajlo",
"Lukasz Kondraciuk",
"A. Kondrich",
"Aris Konstantinidis",
"Kyle Kosic",
"Gretchen Krueger",
"Vishal Kuo",
"Michael Lampe",
"Ikai Lan",
"Teddy Lee",
"J. Leike",
"Jade Leung",
"Daniel Levy",
"Chak Ming Li",
"Rachel Lim",
"Molly Lin",
"Stephanie Lin",
"Ma-teusz Litwin",
"Theresa Lopez",
"Ryan Lowe",
"Patricia Lue",
"A. Makanju",
"Kim Malfacini",
"Sam Manning",
"Todor Markov",
"Yaniv Markovski",
"Bianca Martin",
"Katie Mayer",
"Andrew Mayne",
"Bob McGrew",
"S. McKinney",
"C. McLeavey",
"Paul McMillan",
"Jake McNeil",
"David Medina",
"Aalok Mehta",
"Jacob Menick",
"Luke Metz",
"Andrey Mishchenko",
"Pamela Mishkin",
"Vinnie Monaco",
"Evan Morikawa",
"Daniel P. Mossing",
"Tong Mu",
"Mira Murati",
"O. Murk",
"David M'ely",
"Ashvin Nair",
"Reiichiro Nakano",
"Rajeev Nayak",
"Arvind Neelakantan",
"Richard Ngo",
"Hyeonwoo Noh",
"Ouyang Long",
"Cullen O'Keefe",
"J. Pachocki",
"Alex Paino",
"Joe Palermo",
"Ashley Pantuliano",
"Giambattista Parascandolo",
"Joel Parish",
"Emy Parparita",
"Alexandre Passos",
"Mikhail Pavlov",
"Andrew Peng",
"Adam Perelman",
"Filipe de Avila Belbute Peres",
"Michael Petrov",
"Henrique Pondé de Oliveira Pinto",
"Michael Pokorny",
"Michelle Pokrass",
"Vitchyr H. Pong",
"Tolly Powell",
"Alethea Power",
"Boris Power",
"Elizabeth Proehl",
"Raul Puri",
"Alec Radford",
"Jack W. Rae",
"Aditya Ramesh",
"Cameron Raymond",
"Francis Real",
"Kendra Rimbach",
"Carl Ross",
"Bob Rotsted",
"Henri Roussez",
"Nick Ryder",
"M. Saltarelli",
"Ted Sanders",
"Shibani Santurkar",
"Girish Sastry",
"Heather Schmidt",
"David Schnurr",
"John Schulman",
"Daniel Selsam",
"Kyla Sheppard",
"Toki Sherbakov",
"Jessica Shieh",
"Sarah Shoker",
"Pranav Shyam",
"Szymon Sidor",
"Eric Sigler",
"Maddie Simens",
"Jordan Sitkin",
"Katarina Slama",
"Ian Sohl",
"Benjamin D. Sokolowsky",
"Yang Song",
"Natalie Staudacher",
"F. Such",
"Natalie Summers",
"I. Sutskever",
"Jie Tang",
"N. Tezak",
"Madeleine Thompson",
"Phil Tillet",
"Amin Tootoonchian",
"Elizabeth Tseng",
"Preston Tuggle",
"Nick Turley",
"Jerry Tworek",
"Juan Felipe Cer'on Uribe",
"Andrea Vallone",
"Arun Vijayvergiya",
"Chelsea Voss",
"Carroll L. Wainwright",
"Justin Jay Wang",
"Alvin Wang",
"Ben Wang",
"Jonathan Ward",
"Jason Wei",
"CJ Weinmann",
"Akila Welihinda",
"P. Welinder",
"Jiayi Weng",
"Lilian Weng",
"Matt Wiethoff",
"Dave Willner",
"Clemens Winter",
"Samuel Wolrich",
"Hannah Wong",
"Lauren Workman",
"Sherwin Wu",
"Jeff Wu",
"Michael Wu",
"Kai Xiao",
"Tao Xu",
"Sarah Yoo",
"Kevin Yu",
"Qim-ing Yuan",
"Wojciech Zaremba",
"Rowan Zellers",
"Chong Zhang",
"Marvin Zhang",
"Shengjia Zhao",
"Tianhao Zheng",
"Juntang Zhuang",
"William Zhuk",
"Barret Zoph"
],
"externalIds": {
"ArXiv": "2303.08774",
"CorpusId": 257532815
},
"url": "https://www.semanticscholar.org/paper/163b4d6a79a5b19af88b8585456363340d9efd04",
"referenceCount": 0,
"citationCount": 7060,
"influentialCitationCount": 1037,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "LLaMA: Open and Efficient Foundation Language Models",
"abstract": "We introduce LLaMA, a collection of foundation language models ranging from 7B to 65B parameters. We train our models on trillions of tokens, and show that it is possible to train state-of-the-art models using publicly available datasets exclusively, without resorting to proprietary and inaccessible datasets. In particular, LLaMA-13B outperforms GPT-3 (175B) on most benchmarks, and LLaMA-65B is competitive with the best models, Chinchilla-70B and PaLM-540B. We release all our models to the research community.",
"year": 2023,
"venue": "arXiv.org",
"authors": [
"Hugo Touvron",
"Thibaut Lavril",
"Gautier Izacard",
"Xavier Martinet",
"Marie-Anne Lachaux",
"Timothée Lacroix",
"Baptiste Rozière",
"Naman Goyal",
"Eric Hambro",
"Faisal Azhar",
"Aurelien Rodriguez",
"Armand Joulin",
"Edouard Grave",
"Guillaume Lample"
],
"externalIds": {
"DBLP": "journals/corr/abs-2302-13971",
"ArXiv": "2302.13971",
"CorpusId": 257219404
},
"url": "https://www.semanticscholar.org/paper/57e849d0de13ed5f91d086936296721d4ff75a75",
"referenceCount": 80,
"citationCount": 8037,
"influentialCitationCount": 1074,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Large language models encode clinical knowledge",
"abstract": null,
"year": 2022,
"venue": "Nature",
"authors": [
"K. Singhal",
"Shekoofeh Azizi",
"T. Tu",
"S. Mahdavi",
"Jason Wei",
"Hyung Won Chung",
"Nathan Scales",
"A. Tanwani",
"H. Cole-Lewis",
"S. Pfohl",
"P. Payne",
"Martin G. Seneviratne",
"P. Gamble",
"C. Kelly",
"Nathaneal Scharli",
"Aakanksha Chowdhery",
"P. A. Mansfield",
"B. A. Y. Arcas",
"D. Webster",
"Greg S. Corrado",
"Yossi Matias",
"K. Chou",
"Juraj Gottweis",
"Nenad Tomašev",
"Yun Liu",
"A. Rajkomar",
"J. Barral",
"Christopher Semturs",
"A. Karthikesalingam",
"Vivek Natarajan"
],
"externalIds": {
"ArXiv": "2212.13138",
"PubMedCentral": "10396962",
"DBLP": "journals/corr/abs-2212-13138",
"DOI": "10.1038/s41586-023-06291-2",
"CorpusId": 255124952,
"PubMed": "37438534"
},
"url": "https://www.semanticscholar.org/paper/6052486bc9144dc1730c12bf35323af3792a1fd0",
"referenceCount": 111,
"citationCount": 1349,
"influentialCitationCount": 78,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Medicine"
]
},
{
"title": "Galactica: A Large Language Model for Science",
"abstract": "Information overload is a major obstacle to scientific progress. The explosive growth in scientific literature and data has made it ever harder to discover useful insights in a large mass of information. Today scientific knowledge is accessed through search engines, but they are unable to organize scientific knowledge alone. In this paper we introduce Galactica: a large language model that can store, combine and reason about scientific knowledge. We train on a large scientific corpus of papers, reference material, knowledge bases and many other sources. We outperform existing models on a range of scientific tasks. On technical knowledge probes such as LaTeX equations, Galactica outperforms the latest GPT-3 by 68.2% versus 49.0%. Galactica also performs well on reasoning, outperforming Chinchilla on mathematical MMLU by 41.3% to 35.7%, and PaLM 540B on MATH with a score of 20.4% versus 8.8%. It also sets a new state-of-the-art on downstream tasks such as PubMedQA and MedMCQA dev of 77.6% and 52.9%. And despite not being trained on a general corpus, Galactica outperforms BLOOM and OPT-175B on BIG-bench. We believe these results demonstrate the potential for language models as a new interface for science. We open source the model for the benefit of the scientific community.",
"year": 2022,
"venue": "arXiv.org",
"authors": [
"Ross Taylor",
"Marcin Kardas",
"Guillem Cucurull",
"Thomas Scialom",
"A. Hartshorn",
"Elvis Saravia",
"Andrew Poulton",
"Viktor Kerkez",
"Robert Stojnic"
],
"externalIds": {
"ArXiv": "2211.09085",
"DBLP": "journals/corr/abs-2211-09085",
"DOI": "10.48550/arXiv.2211.09085",
"CorpusId": 253553203
},
"url": "https://www.semanticscholar.org/paper/7d645a3fd276918374fd9483fd675c28e46506d1",
"referenceCount": 107,
"citationCount": 570,
"influentialCitationCount": 66,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Mathematics"
]
},
{
"title": "BLOOM: A 176B-Parameter Open-Access Multilingual Language Model",
"abstract": "Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.",
"year": 2022,
"venue": "arXiv.org",
"authors": [
"Teven Le Scao",
"Angela Fan",
"Christopher Akiki",
"Ellie Pavlick",
"Suzana Ili'c",
"Daniel Hesslow",
"Roman Castagn'e",
"A. Luccioni",
"François Yvon",
"Matthias Gallé",
"J. Tow",
"Alexander M. Rush",
"Stella Biderman",
"Albert Webson",
"Pawan Sasanka Ammanamanchi",
"Thomas Wang",
"Benoît Sagot",
"Niklas Muennighoff",
"Albert Villanova del Moral",
"Olatunji Ruwase",
"Rachel Bawden",
"Stas Bekman",
"Angelina McMillan-Major",
"Iz Beltagy",
"Huu Nguyen",
"Lucile Saulnier",
"Samson Tan",
"Pedro Ortiz Suarez",
"Victor Sanh",
"Hugo Laurenccon",
"Yacine Jernite",
"Julien Launay",
"Margaret Mitchell",
"Colin Raffel",
"Aaron Gokaslan",
"Adi Simhi",
"Aitor Soroa Etxabe",
"Alham Fikri Aji",
"Amit Alfassy",
"Anna Rogers",
"Ariel Kreisberg Nitzav",
"Canwen Xu",
"Chenghao Mou",
"Chris C. Emezue",
"Christopher Klamm",
"Colin Leong",
"Daniel Alexander van Strien",
"David Ifeoluwa Adelani",
"Dragomir R. Radev",
"E. G. Ponferrada",
"Efrat Levkovizh",
"Ethan Kim",
"Eyal Natan",
"F. Toni",
"Gérard Dupont",
"Germán Kruszewski",
"Giada Pistilli",
"Hady ElSahar",
"Hamza Benyamina",
"H. Tran",
"Ian Yu",
"Idris Abdulmumin",
"Isaac Johnson",
"Itziar Gonzalez-Dios",
"Javier de la Rosa",
"Jenny Chim",
"Jesse Dodge",
"Jian Zhu",
"Jonathan Chang",
"Jorg Frohberg",
"Josephine Tobing",
"J. Bhattacharjee",
"Khalid Almubarak",
"Kimbo Chen",
"Kyle Lo",
"Leandro von Werra",
"Leon Weber",
"Long Phan",
"Loubna Ben Allal",
"Ludovic Tanguy",
"Manan Dey",
"M. Muñoz",
"Maraim Masoud",
"Mar'ia Grandury",
"Mario vSavsko",
"Max Huang",
"Maximin Coavoux",
"Mayank Singh",
"Mike Tian-Jian Jiang",
"Minh Chien Vu",
"M. A. Jauhar",
"Mustafa Ghaleb",
"Nishant Subramani",
"Nora Kassner",
"Nurulaqilla Khamis",
"Olivier Nguyen",
"Omar Espejel",
"Ona de Gibert",
"Paulo Villegas",
"Peter Henderson",
"Pierre Colombo",
"Priscilla Amuok",
"Quentin Lhoest",
"Rheza Harliman",
"Rishi Bommasani",
"R. L'opez",
"Rui Ribeiro",
"Salomey Osei",
"S. Pyysalo",
"Sebastian Nagel",
"Shamik Bose",
"Shamsuddeen Hassan Muhammad",
"Shanya Sharma",
"S. Longpre",
"Somaieh Nikpoor",
"S. Silberberg",
"S. Pai",
"S. Zink",
"Tiago Timponi Torrent",
"Timo Schick",
"Tristan Thrush",
"V. Danchev",
"Vassilina Nikoulina",
"Veronika Laippala",
"Violette Lepercq",
"V. Prabhu",
"Zaid Alyafeai",
"Zeerak Talat",
"Arun Raja",
"Benjamin Heinzerling",
"Chenglei Si",
"Elizabeth Salesky",
"Sabrina J. Mielke",
"Wilson Y. Lee",
"Abheesht Sharma",
"Andrea Santilli",
"Antoine Chaffin",
"Arnaud Stiegler",
"Debajyoti Datta",
"Eliza Szczechla",
"Gunjan Chhablani",
"Han Wang",
"Harshit Pandey",
"Hendrik Strobelt",
"Jason Alan Fries",
"Jos Rozen",
"Leo Gao",
"Lintang Sutawika",
"M Saiful Bari",
"Maged S. Al-Shaibani",
"Matteo Manica",
"Nihal V. Nayak",
"Ryan Teehan",
"Samuel Albanie",
"Sheng Shen",
"Srulik Ben-David",
"Stephen H. Bach",
"Taewoon Kim",
"T. Bers",
"Thibault Févry",
"Trishala Neeraj",
"Urmish Thakker",
"Vikas Raunak",
"Xiang Tang",
"Zheng-Xin Yong",
"Zhiqing Sun",
"Shaked Brody",
"Y. Uri",
"Hadar Tojarieh",
"Adam Roberts",
"Hyung Won Chung",
"Jaesung Tae",
"Jason Phang",
"Ofir Press",
"Conglong Li",
"D. Narayanan",
"Hatim Bourfoune",
"J. Casper",
"Jeff Rasley",
"Max Ryabinin",
"Mayank Mishra",
"Minjia Zhang",
"M. Shoeybi",
"Myriam Peyrounette",
"N. Patry",
"Nouamane Tazi",
"Omar Sanseviero",
"Patrick von Platen",
"Pierre Cornette",
"Pierre Franccois Lavall'ee",
"R. Lacroix",
"Samyam Rajbhandari",
"Sanchit Gandhi",
"Shaden Smith",
"S. Requena",
"Suraj Patil",
"Tim Dettmers",
"Ahmed Baruwa",
"Amanpreet Singh",
"Anastasia Cheveleva",
"Anne-Laure Ligozat",
"Arjun Subramonian",
"Aur'elie N'ev'eol",
"Charles Lovering",
"Daniel H Garrette",
"D. Tunuguntla",
"Ehud Reiter",
"Ekaterina Taktasheva",
"E. Voloshina",
"Eli Bogdanov",
"Genta Indra Winata",
"Hailey Schoelkopf",
"Jan-Christoph Kalo",
"Jekaterina Novikova",
"J. Forde",
"Xiangru Tang",
"Jungo Kasai",
"Ken Kawamura",
"Liam Hazan",
"Marine Carpuat",
"Miruna Clinciu",
"Najoung Kim",
"Newton Cheng",
"O. Serikov",
"Omer Antverg",
"Oskar van der Wal",
"Rui Zhang",
"Ruochen Zhang",
"Sebastian Gehrmann",
"Shachar Mirkin",
"S. Pais",
"Tatiana Shavrina",
"Thomas Scialom",
"Tian Yun",
"Tomasz Limisiewicz",
"Verena Rieser",
"Vitaly Protasov",
"V. Mikhailov",
"Yada Pruksachatkun",
"Yonatan Belinkov",
"Zachary Bamberger",
"Zdenvek Kasner",
"Zdeněk Kasner",
"A. Pestana",
"A. Feizpour",
"Ammar Khan",
"Amy Faranak",
"A. Santos",
"Anthony Hevia",
"Antigona Unldreaj",
"Arash Aghagol",
"Arezoo Abdollahi",
"A. Tammour",
"A. HajiHosseini",
"Bahareh Behroozi",
"Benjamin Ayoade Ajibade",
"B. Saxena",
"Carlos Muñoz Ferrandis",
"Danish Contractor",
"D. Lansky",
"Davis David",
"Douwe Kiela",
"D. A. Nguyen",
"Edward Tan",
"Emi Baylor",
"Ezinwanne Ozoani",
"F. Mirza",
"Frankline Ononiwu",
"Habib Rezanejad",
"H.A. Jones",
"Indrani Bhattacharya",
"Irene Solaiman",
"Irina Sedenko",
"I. Nejadgholi",
"J. Passmore",
"Joshua Seltzer",
"Julio Bonis Sanz",
"Karen Fort",
"Lívia Dutra",
"Mairon Samagaio",
"Maraim Elbadri",
"Margot Mieskes",
"Marissa Gerchick",
"Martha Akinlolu",
"Michael McKenna",
"Mike Qiu",
"M. Ghauri",
"Mykola Burynok",
"Nafis Abrar",
"Nazneen Rajani",
"Nour Elkott",
"N. Fahmy",
"Olanrewaju Samuel",
"Ran An",
"R. Kromann",
"Ryan Hao",
"S. Alizadeh",
"Sarmad Shubber",
"Silas L. Wang",
"Sourav Roy",
"S. Viguier",
"Thanh-Cong Le",
"Tobi Oyebade",
"T. Le",
"Yoyo Yang",
"Zach Nguyen",
"Abhinav Ramesh Kashyap",
"Alfredo Palasciano",
"A. Callahan",
"Anima Shukla",
"Antonio Miranda-Escalada",
"A. Singh",
"Benjamin Beilharz",
"Bo Wang",
"C. Brito",
"Chenxi Zhou",
"Chirag Jain",
"Chuxin Xu",
"Clémentine Fourrier",
"Daniel Le'on Perin'an",
"Daniel Molano",
"Dian Yu",
"Enrique Manjavacas",
"Fabio Barth",
"Florian Fuhrimann",
"Gabriel Altay",
"Giyaseddin Bayrak",
"Gully Burns",
"Helena U. Vrabec",
"I. Bello",
"Isha Dash",
"J. Kang",
"John Giorgi",
"Jonas Golde",
"J. Posada",
"Karthi Sivaraman",
"Lokesh Bulchandani",
"Lu Liu",
"Luisa Shinzato",
"Madeleine Hahn de Bykhovetz",
"Maiko Takeuchi",
"Marc Pàmies",
"M. A. Castillo",
"Marianna Nezhurina",
"Mario Sanger",
"M. Samwald",
"Michael Cullan",
"Michael Weinberg",
"M. Wolf",
"Mina Mihaljcic",
"Minna Liu",
"M. Freidank",
"Myungsun Kang",
"Natasha Seelam",
"N. Dahlberg",
"N. Broad",
"N. Muellner",
"Pascale Fung",
"Patricia Haller",
"Patrick Haller",
"R. Eisenberg",
"Robert Martin",
"Rodrigo Canalli",
"Rosaline Su",
"Ruisi Su",
"Samuel Cahyawijaya",
"Samuele Garda",
"Shlok S Deshmukh",
"Shubhanshu Mishra",
"Sid Kiblawi",
"Simon Ott",
"Sinee Sang-aroonsiri",
"Srishti Kumar",
"Stefan Schweter",
"S. Bharati",
"Tanmay Laud",
"Théo Gigant",
"Tomoya Kainuma",
"Wojciech Kusa",
"Yanis Labrak",
"Yashasvi Bajaj",
"Y. Venkatraman",
"Yifan Xu",
"Ying Xu",
"Yu Xu",
"Z. Tan",
"Zhongli Xie",
"Zifan Ye",
"M. Bras",
"Younes Belkada",
"Thomas Wolf"
],
"externalIds": {
"DBLP": "journals/corr/abs-2211-05100",
"ArXiv": "2211.05100",
"DOI": "10.48550/arXiv.2211.05100",
"CorpusId": 253420279
},
"url": "https://www.semanticscholar.org/paper/964bd39b546f0f6625ff3b9ef1083f797807ef2e",
"referenceCount": 171,
"citationCount": 1861,
"influentialCitationCount": 196,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness",
"abstract": "Transformers are slow and memory-hungry on long sequences, since the time and memory complexity of self-attention are quadratic in sequence length. Approximate attention methods have attempted to address this problem by trading off model quality to reduce the compute complexity, but often do not achieve wall-clock speedup. We argue that a missing principle is making attention algorithms IO-aware -- accounting for reads and writes between levels of GPU memory. We propose FlashAttention, an IO-aware exact attention algorithm that uses tiling to reduce the number of memory reads/writes between GPU high bandwidth memory (HBM) and GPU on-chip SRAM. We analyze the IO complexity of FlashAttention, showing that it requires fewer HBM accesses than standard attention, and is optimal for a range of SRAM sizes. We also extend FlashAttention to block-sparse attention, yielding an approximate attention algorithm that is faster than any existing approximate attention method. FlashAttention trains Transformers faster than existing baselines: 15% end-to-end wall-clock speedup on BERT-large (seq. length 512) compared to the MLPerf 1.1 training speed record, 3$\\times$ speedup on GPT-2 (seq. length 1K), and 2.4$\\times$ speedup on long-range arena (seq. length 1K-4K). FlashAttention and block-sparse FlashAttention enable longer context in Transformers, yielding higher quality models (0.7 better perplexity on GPT-2 and 6.4 points of lift on long-document classification) and entirely new capabilities: the first Transformers to achieve better-than-chance performance on the Path-X challenge (seq. length 16K, 61.4% accuracy) and Path-256 (seq. length 64K, 63.1% accuracy).",
"year": 2022,
"venue": "Neural Information Processing Systems",
"authors": [
"Tri Dao",
"Daniel Y. Fu",
"Stefano Ermon",
"A. Rudra",
"Christopher R'e"
],
"externalIds": {
"DBLP": "journals/corr/abs-2205-14135",
"ArXiv": "2205.14135",
"CorpusId": 249151871
},
"url": "https://www.semanticscholar.org/paper/87c5b281fa43e6f27191b20a8dd694eda1126336",
"referenceCount": 111,
"citationCount": 1198,
"influentialCitationCount": 108,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "LoRA: Low-Rank Adaptation of Large Language Models",
"abstract": "An important paradigm of natural language processing consists of large-scale pre-training on general domain data and adaptation to particular tasks or domains. As we pre-train larger models, full fine-tuning, which retrains all model parameters, becomes less feasible. Using GPT-3 175B as an example -- deploying independent instances of fine-tuned models, each with 175B parameters, is prohibitively expensive. We propose Low-Rank Adaptation, or LoRA, which freezes the pre-trained model weights and injects trainable rank decomposition matrices into each layer of the Transformer architecture, greatly reducing the number of trainable parameters for downstream tasks. Compared to GPT-3 175B fine-tuned with Adam, LoRA can reduce the number of trainable parameters by 10,000 times and the GPU memory requirement by 3 times. LoRA performs on-par or better than fine-tuning in model quality on RoBERTa, DeBERTa, GPT-2, and GPT-3, despite having fewer trainable parameters, a higher training throughput, and, unlike adapters, no additional inference latency. We also provide an empirical investigation into rank-deficiency in language model adaptation, which sheds light on the efficacy of LoRA. We release a package that facilitates the integration of LoRA with PyTorch models and provide our implementations and model checkpoints for RoBERTa, DeBERTa, and GPT-2 at https://github.com/microsoft/LoRA.",
"year": 2021,
"venue": "International Conference on Learning Representations",
"authors": [
"J. E. Hu",
"Yelong Shen",
"Phillip Wallis",
"Zeyuan Allen-Zhu",
"Yuanzhi Li",
"Shean Wang",
"Weizhu Chen"
],
"externalIds": {
"DBLP": "conf/iclr/HuSWALWWC22",
"ArXiv": "2106.09685",
"CorpusId": 235458009
},
"url": "https://www.semanticscholar.org/paper/a8ca46b171467ceb2d7652fbfb67fe701ad86092",
"referenceCount": 65,
"citationCount": 5654,
"influentialCitationCount": 1004,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "CogView: Mastering Text-to-Image Generation via Transformers",
"abstract": "Text-to-Image generation in the general domain has long been an open problem, which requires both a powerful generative model and cross-modal understanding. We propose CogView, a 4-billion-parameter Transformer with VQ-VAE tokenizer to advance this problem. We also demonstrate the finetuning strategies for various downstream tasks, e.g. style learning, super-resolution, text-image ranking and fashion design, and methods to stabilize pretraining, e.g. eliminating NaN losses. CogView achieves the state-of-the-art FID on the blurred MS COCO dataset, outperforming previous GAN-based models and a recent similar work DALL-E.",
"year": 2021,
"venue": "Neural Information Processing Systems",
"authors": [
"Ming Ding",
"Zhuoyi Yang",
"Wenyi Hong",
"Wendi Zheng",
"Chang Zhou",
"Da Yin",
"Junyang Lin",
"Xu Zou",
"Zhou Shao",
"Hongxia Yang",
"Jie Tang"
],
"externalIds": {
"DBLP": "conf/nips/DingYHZZYLZSYT21",
"ArXiv": "2105.13290",
"CorpusId": 235212350
},
"url": "https://www.semanticscholar.org/paper/1197ae4a62f0e0e4e3f3fb70396b5ff06ef371aa",
"referenceCount": 58,
"citationCount": 596,
"influentialCitationCount": 44,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "ZeRO-Offload: Democratizing Billion-Scale Model Training",
"abstract": "Large-scale model training has been a playing ground for a limited few requiring complex model refactoring and access to prohibitively expensive GPU clusters. ZeRO-Offload changes the large model training landscape by making large model training accessible to nearly everyone. It can train models with over 13 billion parameters on a single GPU, a 10x increase in size compared to popular framework such as PyTorch, and it does so without requiring any model change from the data scientists or sacrificing computational efficiency. ZeRO-Offload enables large model training by offloading data and compute to CPU. To preserve compute efficiency, it is designed to minimize the data movement to/from GPU, and reduce CPU compute time while maximizing memory savings on GPU. As a result, ZeRO-Offload can achieve 40 TFlops/GPU on a single NVIDIA V100 GPU for 10B parameter model compared to 30TF using PyTorch alone for a 1.4B parameter model, the largest that can be trained without running out of memory. ZeRO-Offload is also designed to scale on multiple-GPUs when available, offering near linear speedup on up to 128 GPUs. Additionally, it can work together with model parallelism to train models with over 70 billion parameters on a single DGX-2 box, a 4.5x increase in model size compared to using model parallelism alone. By combining compute and memory efficiency with ease-of-use, ZeRO-Offload democratizes large-scale model training making it accessible to even data scientists with access to just a single GPU.",
"year": 2021,
"venue": "USENIX Annual Technical Conference",
"authors": [
"Jie Ren",
"Samyam Rajbhandari",
"Reza Yazdani Aminabadi",
"Olatunji Ruwase",
"Shuangyang Yang",
"Minjia Zhang",
"Dong Li",
"Yuxiong He"
],
"externalIds": {
"DBLP": "conf/usenix/0015RARYZ0H21",
"MAG": "3121562065",
"ArXiv": "2101.06840",
"CorpusId": 231632857
},
"url": "https://www.semanticscholar.org/paper/12b71736392209b4292471b7da0aed71ba2aa545",
"referenceCount": 39,
"citationCount": 314,
"influentialCitationCount": 36,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Language Models are Few-Shot Learners",
"abstract": "Recent work has demonstrated substantial gains on many NLP tasks and benchmarks by pre-training on a large corpus of text followed by fine-tuning on a specific task. While typically task-agnostic in architecture, this method still requires task-specific fine-tuning datasets of thousands or tens of thousands of examples. By contrast, humans can generally perform a new language task from only a few examples or from simple instructions - something which current NLP systems still largely struggle to do. Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting. For all tasks, GPT-3 is applied without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model. GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic. At the same time, we also identify some datasets where GPT-3's few-shot learning still struggles, as well as some datasets where GPT-3 faces methodological issues related to training on large web corpora. Finally, we find that GPT-3 can generate samples of news articles which human evaluators have difficulty distinguishing from articles written by humans. We discuss broader societal impacts of this finding and of GPT-3 in general.",
"year": 2020,
"venue": "Neural Information Processing Systems",
"authors": [
"Tom B. Brown",
"Benjamin Mann",
"Nick Ryder",
"Melanie Subbiah",
"J. Kaplan",
"Prafulla Dhariwal",
"Arvind Neelakantan",
"Pranav Shyam",
"Girish Sastry",
"Amanda Askell",
"Sandhini Agarwal",
"Ariel Herbert-Voss",
"Gretchen Krueger",
"T. Henighan",
"R. Child",
"A. Ramesh",
"Daniel M. Ziegler",
"Jeff Wu",
"Clemens Winter",
"Christopher Hesse",
"Mark Chen",
"Eric Sigler",
"Ma-teusz Litwin",
"Scott Gray",
"B. Chess",
"Jack Clark",
"Christopher Berner",
"Sam McCandlish",
"Alec Radford",
"I. Sutskever",
"Dario Amodei"
],
"externalIds": {
"ArXiv": "2005.14165",
"DBLP": "conf/nips/BrownMRSKDNSSAA20",
"MAG": "3030163527",
"CorpusId": 218971783
},
"url": "https://www.semanticscholar.org/paper/90abbc2cf38462b954ae1b772fac9532e2ccd8b0",
"referenceCount": 146,
"citationCount": 30859,
"influentialCitationCount": 3528,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Training Large Neural Networks with Constant Memory using a New Execution Algorithm",
"abstract": "Widely popular transformer-based NLP models such as BERT and Turing-NLG have enormous capacity trending to billions of parameters. Current execution methods demand brute-force resources such as HBM devices and high speed interconnectivity for data parallelism. In this paper, we introduce a new relay-style execution technique called L2L (layer-to-layer) where at any given moment, the device memory is primarily populated only with the executing layer(s)'s footprint. The model resides in the DRAM memory attached to either a CPU or an FPGA as an entity we call eager param-server (EPS). To overcome the bandwidth issues of shuttling parameters to and from EPS, the model is executed a layer at a time across many micro-batches instead of the conventional method of minibatches over whole model. L2L is implemented using 16GB V100 devices for BERT-Large running it with a device batch size of up to 256. Our results show 45% reduction in memory and 40% increase in the throughput compared to the state-of-the-art baseline. L2L is also able to fit models up to 50 Billion parameters on a machine with a single 16GB V100 and 512GB CPU memory and without requiring any model partitioning. L2L scales to arbitrary depth allowing researchers to develop on affordable devices which is a big step toward democratizing AI. By running the optimizer in the host EPS, we show a new form of mixed precision for faster throughput and convergence. In addition, the EPS enables dynamic neural architecture approaches by varying layers across iterations. Finally, we also propose and demonstrate a constant memory variation of L2L and we propose future enhancements. This work has been performed on GPUs first, but also targeted towards all high TFLOPS/Watt accelerators.",
"year": 2020,
"venue": "arXiv.org",
"authors": [
"B. Pudipeddi",
"Maral Mesmakhosroshahi",
"Jinwen Xi",
"S. Bharadwaj"
],
"externalIds": {
"DBLP": "journals/corr/abs-2002-05645",
"MAG": "3006131567",
"ArXiv": "2002.05645",
"CorpusId": 211096822
},
"url": "https://www.semanticscholar.org/paper/6a5c0fc737b6fbd6672fc4265b5e0ca38de17416",
"referenceCount": 20,
"citationCount": 43,
"influentialCitationCount": 7,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science",
"Mathematics"
]
},
{
"title": "PyTorch: An Imperative Style, High-Performance Deep Learning Library",
"abstract": "Deep learning frameworks have often focused on either usability or speed, but not both. PyTorch is a machine learning library that shows that these two goals are in fact compatible: it was designed from first principles to support an imperative and Pythonic programming style that supports code as a model, makes debugging easy and is consistent with other popular scientific computing libraries, while remaining efficient and supporting hardware accelerators such as GPUs. In this paper, we detail the principles that drove the implementation of PyTorch and how they are reflected in its architecture. We emphasize that every aspect of PyTorch is a regular Python program under the full control of its user. We also explain how the careful and pragmatic implementation of the key components of its runtime enables them to work together to achieve compelling performance. We demonstrate the efficiency of individual subsystems, as well as the overall speed of PyTorch on several commonly used benchmarks.",
"year": 2019,
"venue": "Neural Information Processing Systems",
"authors": [
"Adam Paszke",
"Sam Gross",
"Francisco Massa",
"Adam Lerer",
"James Bradbury",
"Gregory Chanan",
"Trevor Killeen",
"Zeming Lin",
"N. Gimelshein",
"L. Antiga",
"Alban Desmaison",
"Andreas Köpf",
"E. Yang",
"Zach DeVito",
"Martin Raison",
"Alykhan Tejani",
"Sasank Chilamkurthy",
"Benoit Steiner",
"Lu Fang",
"Junjie Bai",
"Soumith Chintala"
],
"externalIds": {
"MAG": "2970971581",
"DBLP": "journals/corr/abs-1912-01703",
"ArXiv": "1912.01703",
"CorpusId": 202786778
},
"url": "https://www.semanticscholar.org/paper/3c8a456509e6c0805354bd40a35e3f2dbf8069b1",
"referenceCount": 39,
"citationCount": 36164,
"influentialCitationCount": 3695,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science",
"Mathematics"
]
},
{
"title": "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer",
"abstract": "Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new \"Colossal Clean Crawled Corpus\", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.",
"year": 2019,
"venue": "Journal of machine learning research",
"authors": [
"Colin Raffel",
"Noam M. Shazeer",
"Adam Roberts",
"Katherine Lee",
"Sharan Narang",
"Michael Matena",
"Yanqi Zhou",
"Wei Li",
"Peter J. Liu"
],
"externalIds": {
"MAG": "2981852735",
"DBLP": "journals/corr/abs-1910-10683",
"ArXiv": "1910.10683",
"CorpusId": 204838007
},
"url": "https://www.semanticscholar.org/paper/6c4b76232bb72897685d19b3d264c6ee3005bc2b",
"referenceCount": 134,
"citationCount": 15989,
"influentialCitationCount": 2031,
"isOpenAccess": false,
"fieldsOfStudy": [
"Mathematics",
"Computer Science"
]
},
{
"title": "ZeRO: Memory Optimization Towards Training A Trillion Parameter Models",
"abstract": "Training large DL models with billions and potentially trillions of parameters is challenging. Existing solutions exhibit fundamental limitations to obtain both memory and scaling (computation/communication) efficiency together. Data parallelism does not help reduce memory footprint per device: a model with 1.5 billion parameters or more runs out of memory. Model parallelism hardly scales efficiently beyond multiple devices of a single node due to fine-grained computation and expensive communication. We develop a novel solution, Zero Redundancy Optimizer (ZeRO), to optimize memory, achieving both memory efficiency and scaling efficiency. Unlike basic data parallelism where memory states are replicated across data-parallel processes, ZeRO partitions model states instead, to scale the model size linearly with the number of devices. Furthermore, it retains scaling efficiency via computation and communication rescheduling and by reducing the model parallelism degree required to run large models. Our analysis on memory requirements and communication volume demonstrates: ZeRO has the potential to scale beyond 1 Trillion parameters using today's hardware (e.g., 1024 GPUs, 64 DGX-2 nodes). To meet near-term scaling goals and serve as a demonstration of ZeRO's capability, we implemented stage-1 optimizations of ZeRO (out of 3 stages in total described in the paper) and tested this ZeRO-OS version. ZeRO-OS reduces memory and boosts model size by 4x compared with the state-of-art, scaling up to 100B parameters. Moving forward, we will work on unlocking stage-2 optimizations, with up to 8x memory savings per device, and ultimately stage-3 optimizations, reducing memory linearly with respect to the number of devices and potentially scaling to models of arbitrary size. We are excited to transform very large models from impossible to train to feasible and efficient to train!",
"year": 2019,
"venue": "arXiv.org",
"authors": [
"Samyam Rajbhandari",
"Jeff Rasley",
"Olatunji Ruwase",
"Yuxiong He"
],
"externalIds": {
"MAG": "2977720775",
"DBLP": "journals/corr/abs-1910-02054",
"CorpusId": 203736482
},
"url": "https://www.semanticscholar.org/paper/70fe1f854bc59092ded4bf2939a6624a80e5e4c3",
"referenceCount": 10,
"citationCount": 668,
"influentialCitationCount": 83,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science",
"Mathematics"
]
},
{
"title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach",
"abstract": "Language model pretraining has led to significant performance gains but careful comparison between different approaches is challenging. Training is computationally expensive, often done on private datasets of different sizes, and, as we will show, hyperparameter choices have significant impact on the final results. We present a replication study of BERT pretraining (Devlin et al., 2019) that carefully measures the impact of many key hyperparameters and training data size. We find that BERT was significantly undertrained, and can match or exceed the performance of every model published after it. Our best model achieves state-of-the-art results on GLUE, RACE and SQuAD. These results highlight the importance of previously overlooked design choices, and raise questions about the source of recently reported improvements. We release our models and code.",
"year": 2019,
"venue": "arXiv.org",
"authors": [
"Yinhan Liu",
"Myle Ott",
"Naman Goyal",
"Jingfei Du",
"Mandar Joshi",
"Danqi Chen",
"Omer Levy",
"M. Lewis",
"Luke Zettlemoyer",
"Veselin Stoyanov"
],
"externalIds": {
"DBLP": "journals/corr/abs-1907-11692",
"MAG": "2965373594",
"ArXiv": "1907.11692",
"CorpusId": 198953378
},
"url": "https://www.semanticscholar.org/paper/077f8329a7b6fa3b7c877a57b81eb6c18b5f87de",
"referenceCount": 68,
"citationCount": 20968,
"influentialCitationCount": 4862,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "XLNet: Generalized Autoregressive Pretraining for Language Understanding",
"abstract": "With the capability of modeling bidirectional contexts, denoising autoencoding based pretraining like BERT achieves better performance than pretraining approaches based on autoregressive language modeling. However, relying on corrupting the input with masks, BERT neglects dependency between the masked positions and suffers from a pretrain-finetune discrepancy. In light of these pros and cons, we propose XLNet, a generalized autoregressive pretraining method that (1) enables learning bidirectional contexts by maximizing the expected likelihood over all permutations of the factorization order and (2) overcomes the limitations of BERT thanks to its autoregressive formulation. Furthermore, XLNet integrates ideas from Transformer-XL, the state-of-the-art autoregressive model, into pretraining. Empirically, under comparable experiment settings, XLNet outperforms BERT on 20 tasks, often by a large margin, including question answering, natural language inference, sentiment analysis, and document ranking.",
"year": 2019,
"venue": "Neural Information Processing Systems",
"authors": [
"Zhilin Yang",
"Zihang Dai",
"Yiming Yang",
"J. Carbonell",
"R. Salakhutdinov",
"Quoc V. Le"
],
"externalIds": {
"DBLP": "conf/nips/YangDYCSL19",
"MAG": "2950813464",
"ArXiv": "1906.08237",
"CorpusId": 195069387
},
"url": "https://www.semanticscholar.org/paper/e0c6abdbdecf04ffac65c440da77fb9d66bb474c",
"referenceCount": 47,
"citationCount": 7672,
"influentialCitationCount": 903,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Publicly Available Clinical BERT Embeddings",
"abstract": "Contextual word embedding models such as ELMo and BERT have dramatically improved performance for many natural language processing (NLP) tasks in recent months. However, these models have been minimally explored on specialty corpora, such as clinical text; moreover, in the clinical domain, no publicly-available pre-trained BERT models yet exist. In this work, we address this need by exploring and releasing BERT models for clinical text: one for generic clinical text and another for discharge summaries specifically. We demonstrate that using a domain-specific model yields performance improvements on 3/5 clinical NLP tasks, establishing a new state-of-the-art on the MedNLI dataset. We find that these domain-specific models are not as performant on 2 clinical de-identification tasks, and argue that this is a natural consequence of the differences between de-identified source text and synthetically non de-identified task text.",
"year": 2019,
"venue": "Proceedings of the 2nd Clinical Natural Language Processing Workshop",
"authors": [
"Emily Alsentzer",
"John R. Murphy",
"Willie Boag",
"W. Weng",
"Di Jin",
"Tristan Naumann",
"Matthew B. A. McDermott"
],
"externalIds": {
"MAG": "2925863688",
"DBLP": "journals/corr/abs-1904-03323",
"ArXiv": "1904.03323",
"ACL": "W19-1909",
"DOI": "10.18653/v1/W19-1909",
"CorpusId": 102352093
},
"url": "https://www.semanticscholar.org/paper/2a567ebd78939d0861d788f0fedff8d40ae62bf2",
"referenceCount": 26,
"citationCount": 1681,
"influentialCitationCount": 248,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "SciBERT: A Pretrained Language Model for Scientific Text",
"abstract": "Obtaining large-scale annotated data for NLP tasks in the scientific domain is challenging and expensive. We release SciBERT, a pretrained language model based on BERT (Devlin et. al., 2018) to address the lack of high-quality, large-scale labeled scientific data. SciBERT leverages unsupervised pretraining on a large multi-domain corpus of scientific publications to improve performance on downstream scientific NLP tasks. We evaluate on a suite of tasks including sequence tagging, sentence classification and dependency parsing, with datasets from a variety of scientific domains. We demonstrate statistically significant improvements over BERT and achieve new state-of-the-art results on several of these tasks. The code and pretrained models are available at https://github.com/allenai/scibert/.",
"year": 2019,
"venue": "Conference on Empirical Methods in Natural Language Processing",
"authors": [
"Iz Beltagy",
"Kyle Lo",
"Arman Cohan"
],
"externalIds": {
"ACL": "D19-1371",
"DBLP": "conf/emnlp/BeltagyLC19",
"MAG": "2973154071",
"ArXiv": "1903.10676",
"DOI": "10.18653/v1/D19-1371",
"CorpusId": 202558505
},
"url": "https://www.semanticscholar.org/paper/156d217b0a911af97fa1b5a71dc909ccef7a8028",
"referenceCount": 32,
"citationCount": 2542,
"influentialCitationCount": 462,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "BioBERT: a pre-trained biomedical language representation model for biomedical text mining",
"abstract": "Abstract Motivation Biomedical text mining is becoming increasingly important as the number of biomedical documents rapidly grows. With the progress in natural language processing (NLP), extracting valuable information from biomedical literature has gained popularity among researchers, and deep learning has boosted the development of effective biomedical text mining models. However, directly applying the advancements in NLP to biomedical text mining often yields unsatisfactory results due to a word distribution shift from general domain corpora to biomedical corpora. In this article, we investigate how the recently introduced pre-trained language model BERT can be adapted for biomedical corpora. Results We introduce BioBERT (Bidirectional Encoder Representations from Transformers for Biomedical Text Mining), which is a domain-specific language representation model pre-trained on large-scale biomedical corpora. With almost the same architecture across tasks, BioBERT largely outperforms BERT and previous state-of-the-art models in a variety of biomedical text mining tasks when pre-trained on biomedical corpora. While BERT obtains performance comparable to that of previous state-of-the-art models, BioBERT significantly outperforms them on the following three representative biomedical text mining tasks: biomedical named entity recognition (0.62% F1 score improvement), biomedical relation extraction (2.80% F1 score improvement) and biomedical question answering (12.24% MRR improvement). Our analysis results show that pre-training BERT on biomedical corpora helps it to understand complex biomedical texts. Availability and implementation We make the pre-trained weights of BioBERT freely available at https://github.com/naver/biobert-pretrained, and the source code for fine-tuning BioBERT available at https://github.com/dmis-lab/biobert.",
"year": 2019,
"venue": "Bioinform.",
"authors": [
"Jinhyuk Lee",
"Wonjin Yoon",
"Sungdong Kim",
"Donghyeon Kim",
"Sunkyu Kim",
"Chan Ho So",
"Jaewoo Kang"
],
"externalIds": {
"MAG": "2972964850",
"ArXiv": "1901.08746",
"DBLP": "journals/bioinformatics/LeeYKKKSK20",
"PubMedCentral": "7703786",
"DOI": "10.1093/bioinformatics/btz682",
"CorpusId": 59291975,
"PubMed": "31501885"
},
"url": "https://www.semanticscholar.org/paper/1e43c7084bdcb6b3102afaf301cce10faead2702",
"referenceCount": 45,
"citationCount": 4720,
"influentialCitationCount": 679,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Medicine"
]
},
{
"title": "Mixed Precision Training",
"abstract": "Deep neural networks have enabled progress in a wide variety of applications. Growing the size of the neural network typically results in improved accuracy. As model sizes grow, the memory and compute requirements for training these models also increases. We introduce a technique to train deep neural networks using half precision floating point numbers. In our technique, weights, activations and gradients are stored in IEEE half-precision format. Half-precision floating numbers have limited numerical range compared to single-precision numbers. We propose two techniques to handle this loss of information. Firstly, we recommend maintaining a single-precision copy of the weights that accumulates the gradients after each optimizer step. This single-precision copy is rounded to half-precision format during training. Secondly, we propose scaling the loss appropriately to handle the loss of information with half-precision gradients. We demonstrate that this approach works for a wide variety of models including convolution neural networks, recurrent neural networks and generative adversarial networks. This technique works for large scale models with more than 100 million parameters trained on large datasets. Using this approach, we can reduce the memory consumption of deep learning models by nearly 2x. In future processors, we can also expect a significant computation speedup using half-precision hardware units.",
"year": 2017,
"venue": "International Conference on Learning Representations",
"authors": [
"P. Micikevicius",
"Sharan Narang",
"Jonah Alben",
"G. Diamos",
"Erich Elsen",
"David García",
"Boris Ginsburg",
"Michael Houston",
"Oleksii Kuchaiev",
"Ganesh Venkatesh",
"Hao Wu"
],
"externalIds": {
"DBLP": "journals/corr/abs-1710-03740",
"ArXiv": "1710.03740",
"MAG": "2963112338",
"CorpusId": 3297437
},
"url": "https://www.semanticscholar.org/paper/e7fd6848cb29ca221a7e17d823e06fb566f1f135",
"referenceCount": 38,
"citationCount": 1539,
"influentialCitationCount": 119,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science",
"Mathematics"
]
},
{
"title": "Attention is All you Need",
"abstract": "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.",
"year": 2017,
"venue": "Neural Information Processing Systems",
"authors": [
"Ashish Vaswani",
"Noam M. Shazeer",
"Niki Parmar",
"Jakob Uszkoreit",
"Llion Jones",
"Aidan N. Gomez",
"Lukasz Kaiser",
"Illia Polosukhin"
],
"externalIds": {
"MAG": "2963403868",
"DBLP": "conf/nips/VaswaniSPUJGKP17",
"ArXiv": "1706.03762",
"CorpusId": 13756489
},
"url": "https://www.semanticscholar.org/paper/204e3073870fae3d05bcbc2f6a8e263d9b72e776",
"referenceCount": 41,
"citationCount": 105006,
"influentialCitationCount": 15361,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "vDNN: Virtualized deep neural networks for scalable, memory-efficient neural network design",
"abstract": "The most widely used machine learning frameworks require users to carefully tune their memory usage so that the deep neural network (DNN) fits into the DRAM capacity of a GPU. This restriction hampers a researcher's flexibility to study different machine learning algorithms, forcing them to either use a less desirable network architecture or parallelize the processing across multiple GPUs. We propose a runtime memory manager that virtualizes the memory usage of DNNs such that both GPU and CPU memory can simultaneously be utilized for training larger DNNs. Our virtualized DNN (vDNN) reduces the average GPU memory usage of AlexNet by up to 89%, OverFeat by 91%, and GoogLeNet by 95%, a significant reduction in memory requirements of DNNs. Similar experiments on VGG-16, one of the deepest and memory hungry DNNs to date, demonstrate the memory-efficiency of our proposal. vDNN enables VGG-16 with batch size 256 (requiring 28 GB of memory) to be trained on a single NVIDIA Titan X GPU card containing 12 GB of memory, with 18% performance loss compared to a hypothetical, oracular GPU with enough memory to hold the entire DNN.",
"year": 2016,
"venue": "Micro",
"authors": [
"Minsoo Rhu",
"N. Gimelshein",
"Jason Clemons",
"A. Zulfiqar",
"S. Keckler"
],
"externalIds": {
"MAG": "2964174152",
"DBLP": "conf/micro/RhuGCZK16",
"DOI": "10.1109/MICRO.2016.7783721",
"CorpusId": 3776655
},
"url": "https://www.semanticscholar.org/paper/b1ee2e7040c396e2002022f876abc6dec61aa501",
"referenceCount": 50,
"citationCount": 374,
"influentialCitationCount": 58,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Recurrent Continuous Translation Models",
"abstract": "We introduce a class of probabilistic continuous translation models called Recurrent Continuous Translation Models that are purely based on continuous representations for words, phrases and sentences and do not rely on alignments or phrasal translation units. The models have a generation and a conditioning aspect. The generation of the translation is modelled with a target Recurrent Language Model, whereas the conditioning on the source sentence is modelled with a Convolutional Sentence Model. Through various experiments, we show first that our models obtain a perplexity with respect to gold translations that is > 43% lower than that of stateof-the-art alignment-based translation models. Secondly, we show that they are remarkably sensitive to the word order, syntax, and meaning of the source sentence despite lacking alignments. Finally we show that they match a state-of-the-art system when rescoring n-best lists of translations.",
"year": 2013,
"venue": "Conference on Empirical Methods in Natural Language Processing",
"authors": [
"Nal Kalchbrenner",
"Phil Blunsom"
],
"externalIds": {
"MAG": "1753482797",
"ACL": "D13-1176",
"DBLP": "conf/emnlp/KalchbrennerB13",
"CorpusId": 12639289
},
"url": "https://www.semanticscholar.org/paper/944a1cfd79dbfb6fef460360a0765ba790f4027a",
"referenceCount": 19,
"citationCount": 1406,
"influentialCitationCount": 94,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Speech recognition with deep recurrent neural networks",
"abstract": "Recurrent neural networks (RNNs) are a powerful model for sequential data. End-to-end training methods such as Connectionist Temporal Classification make it possible to train RNNs for sequence labelling problems where the input-output alignment is unknown. The combination of these methods with the Long Short-term Memory RNN architecture has proved particularly fruitful, delivering state-of-the-art results in cursive handwriting recognition. However RNN performance in speech recognition has so far been disappointing, with better results returned by deep feedforward networks. This paper investigates deep recurrent neural networks, which combine the multiple levels of representation that have proved so effective in deep networks with the flexible use of long range context that empowers RNNs. When trained end-to-end with suitable regularisation, we find that deep Long Short-term Memory RNNs achieve a test set error of 17.7% on the TIMIT phoneme recognition benchmark, which to our knowledge is the best recorded score.",
"year": 2013,
"venue": "IEEE International Conference on Acoustics, Speech, and Signal Processing",
"authors": [
"Alex Graves",
"Abdel-rahman Mohamed",
"Geoffrey E. Hinton"
],
"externalIds": {
"DBLP": "journals/corr/abs-1303-5778",
"MAG": "2950689855",
"ArXiv": "1303.5778",
"DOI": "10.1109/ICASSP.2013.6638947",
"CorpusId": 206741496
},
"url": "https://www.semanticscholar.org/paper/4177ec52d1b80ed57f2e72b0f9a42365f1a8598d",
"referenceCount": 30,
"citationCount": 8212,
"influentialCitationCount": 405,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "A Neural Probabilistic Language Model",
"abstract": "A goal of statistical language modeling is to learn the joint probability function of sequences of words. This is intrinsically difficult because of the curse of dimensionality: we propose to fight it with its own weapons. In the proposed approach one learns simultaneously (1) a distributed representation for each word (i.e. a similarity between words) along with (2) the probability function for word sequences, expressed with these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar to words forming an already seen sentence. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach very significantly improves on a state-of-the-art trigram model.",
"year": 2003,
"venue": "Journal of machine learning research",
"authors": [
"Yoshua Bengio",
"Réjean Ducharme",
"Pascal Vincent",
"Christian Janvin"
],
"externalIds": {
"MAG": "2140679639",
"DBLP": "conf/nips/BengioDV00",
"CorpusId": 221275765
},
"url": "https://www.semanticscholar.org/paper/6c2b28f9354f667cd5bd07afc0471d8334430da7",
"referenceCount": 40,
"citationCount": 7054,
"influentialCitationCount": 468,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Long Short-Term Memory",
"abstract": "Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O. 1. Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms.",
"year": 1997,
"venue": "Neural Computation",
"authors": [
"Sepp Hochreiter",
"J. Schmidhuber"
],
"externalIds": {
"MAG": "2064675550",
"DBLP": "journals/neco/HochreiterS97",
"DOI": "10.1162/neco.1997.9.8.1735",
"CorpusId": 1915014,
"PubMed": "9377276"
},
"url": "https://www.semanticscholar.org/paper/2e9d221c206e9503ceb452302d68d10e293f2a10",
"referenceCount": 48,
"citationCount": 80992,
"influentialCitationCount": 9250,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science",
"Medicine"
]
},
{
"title": "Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality",
"abstract": null,
"year": 2023,
"venue": "",
"authors": [],
"externalIds": null,
"url": null,
"referenceCount": null,
"citationCount": null,
"influentialCitationCount": null,
"isOpenAccess": null,
"fieldsOfStudy": null
},
{
"title": "Electra: Pre-training text encoders as discriminators rather than generators",
"abstract": null,
"year": 2020,
"venue": "arXiv",
"authors": [],
"externalIds": null,
"url": null,
"referenceCount": null,
"citationCount": null,
"influentialCitationCount": null,
"isOpenAccess": null,
"fieldsOfStudy": null
},
{
"title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding",
"abstract": "We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models (Peters et al., 2018a; Radford et al., 2018), BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5 (7.7 point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).",
"year": 2019,
"venue": "North American Chapter of the Association for Computational Linguistics",
"authors": [
"Jacob Devlin",
"Ming-Wei Chang",
"Kenton Lee",
"Kristina Toutanova"
],
"externalIds": {
"MAG": "2951055169",
"ACL": "N19-1423",
"DBLP": "journals/corr/abs-1810-04805",
"ArXiv": "1810.04805",
"DOI": "10.18653/v1/N19-1423",
"CorpusId": 52967399
},
"url": "https://www.semanticscholar.org/paper/df2b0e26d0599ce3e70df8a9da02e51594e0e992",
"referenceCount": 63,
"citationCount": 81690,
"influentialCitationCount": 19054,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Language Models are Unsupervised Multitask Learners",
"abstract": "Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on taskspecific datasets. We demonstrate that language models begin to learn these tasks without any explicit supervision when trained on a new dataset of millions of webpages called WebText. When conditioned on a document plus questions, the answers generated by the language model reach 55 F1 on the CoQA dataset matching or exceeding the performance of 3 out of 4 baseline systems without using the 127,000+ training examples. The capacity of the language model is essential to the success of zero-shot task transfer and increasing it improves performance in a log-linear fashion across tasks. Our largest model, GPT-2, is a 1.5B parameter Transformer that achieves state of the art results on 7 out of 8 tested language modeling datasets in a zero-shot setting but still underfits WebText. Samples from the model reflect these improvements and contain coherent paragraphs of text. These findings suggest a promising path towards building language processing systems which learn to perform tasks from their naturally occurring demonstrations.",
"year": 2019,
"venue": "",
"authors": [
"Alec Radford",
"Jeff Wu",
"R. Child",
"D. Luan",
"Dario Amodei",
"I. Sutskever"
],
"externalIds": {
"MAG": "2955855238",
"CorpusId": 160025533
},
"url": "https://www.semanticscholar.org/paper/9405cc0d6169988371b2755e573cc28650d14dfe",
"referenceCount": 75,
"citationCount": 18460,
"influentialCitationCount": 3039,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Improving Language Understanding by Generative Pre-Training",
"abstract": "Natural language understanding comprises a wide range of diverse tasks such as textual entailment, question answering, semantic similarity assessment, and document classification. Although large unlabeled text corpora are abundant, labeled data for learning these specific tasks is scarce, making it challenging for discriminatively trained models to perform adequately. We demonstrate that large gains on these tasks can be realized by generative pre-training of a language model on a diverse corpus of unlabeled text, followed by discriminative fine-tuning on each specific task. In contrast to previous approaches, we make use of task-aware input transformations during fine-tuning to achieve effective transfer while requiring minimal changes to the model architecture. We demonstrate the effectiveness of our approach on a wide range of benchmarks for natural language understanding. Our general task-agnostic model outperforms discriminatively trained models that use architectures specifically crafted for each task, significantly improving upon the state of the art in 9 out of the 12 tasks studied. For instance, we achieve absolute improvements of 8.9% on commonsense reasoning (Stories Cloze Test), 5.7% on question answering (RACE), and 1.5% on textual entailment (MultiNLI).",
"year": 2018,
"venue": "",
"authors": [
"Alec Radford",
"Karthik Narasimhan"
],
"externalIds": {
"MAG": "2965425874",
"CorpusId": 49313245
},
"url": "https://www.semanticscholar.org/paper/cd18800a0fe0b668a1cc19f2ec95b5003d0a5035",
"referenceCount": 73,
"citationCount": 9709,
"influentialCitationCount": 1083,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Statistical Language Models Based on Neural Networks",
"abstract": "Statistical language models are crucial part of many successful applications, such as automatic speech recognition and statistical machine translation (for example well-known Google Translate). Traditional techniques for estimating these models are based on N gram counts. Despite known weaknesses of N -grams and huge efforts of research communities across many fields (speech recognition, machine translation, neuroscience, artificial intelligence, natural language processing, data compression, psychology etc.), N -grams remained basically the state-of-the-art. The goal of this thesis is to present various architectures of language models that are based on artificial neural networks. Although these models are computationally more expensive than N -gram models, with the presented techniques it is possible to apply them to state-of-the-art systems efficiently. Achieved reductions of word error rate of speech recognition systems are up to 20%, against stateof-the-art N -gram model. The presented recurrent neural network based model achieves the best published performance on well-known Penn Treebank setup. Kĺıčová slova jazykový model, neuronová śıt’, rekurentńı, maximálńı entropie, rozpoznáváńı řeči, komprese dat, umělá inteligence",
"year": 2012,
"venue": "",
"authors": [
"Vysoké Učení",
"Technické V Brně",
"Grafiky A Multimédií",
"Disertační Práce"
],
"externalIds": {
"MAG": "2908906764",
"CorpusId": 68116583
},
"url": "https://www.semanticscholar.org/paper/96364af2d208ea75ca3aeb71892d2f7ce7326b55",
"referenceCount": 80,
"citationCount": 620,
"influentialCitationCount": 73,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Hainougat: A academic document parser that preserves formulas and tables for high energy physics",
"abstract": null,
"year": null,
"venue": "https",
"authors": [],
"externalIds": null,
"url": null,
"referenceCount": null,
"citationCount": null,
"influentialCitationCount": null,
"isOpenAccess": null,
"fieldsOfStudy": null
},
{
"title": "Deepspeed: A deep learning optimization library",
"abstract": null,
"year": null,
"venue": "github.com/microsoft/DeepSpeed",
"authors": [],
"externalIds": null,
"url": null,
"referenceCount": null,
"citationCount": null,
"influentialCitationCount": null,
"isOpenAccess": null,
"fieldsOfStudy": null
},
{
"title": "bitsandbytes: Highly optimized bit and byte level operations for deep learning",
"abstract": null,
"year": null,
"venue": "github.com/TimDettmers/bitsandbytes",
"authors": [],
"externalIds": null,
"url": null,
"referenceCount": null,
"citationCount": null,
"influentialCitationCount": null,
"isOpenAccess": null,
"fieldsOfStudy": null
},
{
"title": "Xiwu: A basis flexible and learnable llm for high energy physics",
"abstract": null,
"year": null,
"venue": "github",
"authors": [],
"externalIds": null,
"url": null,
"referenceCount": null,
"citationCount": null,
"influentialCitationCount": null,
"isOpenAccess": null,
"fieldsOfStudy": null
},
{
"title": "Haichat: High energy physics generative artificial intelligence chat robot service",
"abstract": null,
"year": null,
"venue": ".ihep.ac.cn",
"authors": [],
"externalIds": null,
"url": null,
"referenceCount": null,
"citationCount": null,
"influentialCitationCount": null,
"isOpenAccess": null,
"fieldsOfStudy": null
}
]
},
"Emerging opportunities of using large language models for translation between drug molecules and indications": {
"paper_title": "Emerging opportunities of using large language models for translation between drug molecules and indications",
"arxiv_id": "2402.09588",
"authors": [
"David Oniani",
"Jordan Hilsman",
"Chengxi Zang",
"Junmei Wang",
"Lianjin Cai",
"Jan Zawala",
"Yanshan Wang"
],
"year": 2024,
"venue": "Scientific Reports",
"abstract": null,
"references": [
{
"title": "Drug-drug interactions prediction based on deep learning and knowledge graph: A review",
"abstract": null,
"year": 2024,
"venue": "iScience",
"authors": [
"Huimin Luo",
"Weijie Yin",
"Jianlin Wang",
"Ge Zhang",
"Wenjuan Liang",
"Junwei Luo",
"Chaokun Yan"
],
"externalIds": {
"PubMedCentral": "10884936",
"DOI": "10.1016/j.isci.2024.109148",
"CorpusId": 267848564,
"PubMed": "38405609"
},
"url": "https://www.semanticscholar.org/paper/2be9a5fb14dd7b8f11d54703ada4a07cd35c654d",
"referenceCount": 124,
"citationCount": 4,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "Autonomous chemical research with large language models",
"abstract": null,
"year": 2023,
"venue": "The Naturalist",
"authors": [
"Daniil A. Boiko",
"R. MacKnight",
"Ben Kline",
"Gabe Gomes"
],
"externalIds": {
"PubMedCentral": "10733136",
"DBLP": "journals/nature/BoikoMKG23",
"DOI": "10.1038/s41586-023-06792-0",
"CorpusId": 266432059,
"PubMed": "38123806"
},
"url": "https://www.semanticscholar.org/paper/6fe3779fe5f2e9402abdd08ad8db41a0f13a99eb",
"referenceCount": 19,
"citationCount": 138,
"influentialCitationCount": 6,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Medicine"
]
},
{
"title": "A Confederacy of Models: a Comprehensive Evaluation of LLMs on Creative Writing",
"abstract": "We evaluate a range of recent LLMs on English creative writing, a challenging and complex task that requires imagination, coherence, and style. We use a difficult, open-ended scenario chosen to avoid training data reuse: an epic narration of a single combat between Ignatius J. Reilly, the protagonist of the Pulitzer Prize-winning novel A Confederacy of Dunces (1980), and a pterodactyl, a prehistoric flying reptile. We ask several LLMs and humans to write such a story and conduct a human evalution involving various criteria such as fluency, coherence, originality, humor, and style. Our results show that some state-of-the-art commercial LLMs match or slightly outperform our writers in most dimensions; whereas open-source LLMs lag behind. Humans retain an edge in creativity, while humor shows a binary divide between LLMs that can handle it comparably to humans and those that fail at it. We discuss the implications and limitations of our study and suggest directions for future research.",
"year": 2023,
"venue": "Conference on Empirical Methods in Natural Language Processing",
"authors": [
"Carlos G'omez-Rodr'iguez",
"Paul Williams"
],
"externalIds": {
"DBLP": "conf/emnlp/Gomez-Rodriguez23a",
"ArXiv": "2310.08433",
"DOI": "10.48550/arXiv.2310.08433",
"CorpusId": 263908973
},
"url": "https://www.semanticscholar.org/paper/203a297db586ffb4cd858fe5f219a9a1571c87b2",
"referenceCount": 70,
"citationCount": 30,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Foundation metrics for evaluating effectiveness of healthcare conversations powered by generative AI",
"abstract": null,
"year": 2023,
"venue": "npj Digit. Medicine",
"authors": [
"Mahyar Abbasian",
"Elahe Khatibi",
"Iman Azimi",
"David Oniani",
"Zahra Shakeri Hossein Abad",
"Alexander H. Thieme",
"Ram Sriram",
"Zhongqi Yang",
"Yanshan Wang",
"Bryant Lin",
"Olivier Gevaert",
"Li-Jia Li",
"Ramesh C. Jain",
"Amir M. Rahmani"
],
"externalIds": {
"PubMedCentral": "10980701",
"DBLP": "journals/npjdm/AbbasianKAOATSYWLGLJR24",
"ArXiv": "2309.12444",
"DOI": "10.1038/s41746-024-01074-z",
"CorpusId": 262217268,
"PubMed": "38553625"
},
"url": "https://www.semanticscholar.org/paper/8d1211cbbdf161feaae2c87832ede063346e76dd",
"referenceCount": 151,
"citationCount": 21,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Medicine"
]
},
{
"title": "TCMBank: bridges between the largest herbal medicines, chemical ingredients, target proteins, and associated diseases with intelligence text mining",
"abstract": "Traditional Chinese Medicine (TCM) has long been viewed as a precious source of modern drug discovery. AI-assisted drug discovery (AIDD) has been investigated extensively. However, there are still two challenges in applying AIDD to guide TCM drug discovery: the lack of a large amount of standardized TCM-related information and AIDD is prone to pathological failures in out-of-domain data. We have released TCM Database@Taiwan in 2011, and it has been widely disseminated and used. Now, we developed TCMBank, the largest systematic free TCM database, which is an extension of TCM Database@Taiwan. TCMBank contains 9192 herbs, 61 966 ingredients (unduplicated), 15 179 targets, 32 529 diseases, and their pairwise relationships. By integrating multiple data sources, TCMBank provides 3D structure information of ingredients and provides a standard list and detailed information on herbs, ingredients, targets and diseases. TCMBank has an intelligent document identification module that continuously adds TCM-related information retrieved from the literature in PubChem. In addition, driven by TCMBank big data, we developed an ensemble learning-based drug discovery protocol for identifying potential leads and drug repurposing. We take colorectal cancer and Alzheimer's disease as examples to demonstrate how to accelerate drug discovery by artificial intelligence. Using TCMBank, researchers can view literature-driven relationship mapping between herbs/ingredients and genes/diseases, allowing the understanding of molecular action mechanisms for ingredients and identification of new potentially effective treatments. TCMBank is available at https://TCMBank.CN/.",
"year": 2023,
"venue": "Chemical Science",
"authors": [
"Qiujie Lv",
"Guanxing Chen",
"Haohuai He",
"Ziduo Yang",
"Lu Zhao",
"Hsin-Yi Chen",
"Calvin Yu‐Chian Chen"
],
"externalIds": {
"PubMedCentral": "10566508",
"DOI": "10.1039/d3sc02139d",
"CorpusId": 260760995,
"PubMed": "37829020"
},
"url": "https://www.semanticscholar.org/paper/6ff45f39dd13aec184efebfda9c878a1b9d8dfc6",
"referenceCount": 11,
"citationCount": 6,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "Increasing Diversity While Maintaining Accuracy: Text Data Generation with Large Language Models and Human Interventions",
"abstract": "Large language models (LLMs) can be used to generate text data for training and evaluating other models. However, creating high-quality datasets with LLMs can be challenging. In this work, we explore human-AI partnerships to facilitate high diversity and accuracy in LLM-based text data generation. We first examine two approaches to diversify text generation: 1) logit suppression, which minimizes the generation of languages that have already been frequently generated, and 2) temperature sampling, which flattens the token sampling probability. We found that diversification approaches can increase data diversity but often at the cost of data accuracy (i.e., text and labels being appropriate for the target domain). To address this issue, we examined two human interventions, 1) label replacement (LR), correcting misaligned labels, and 2) out-of-scope filtering (OOSF), removing instances that are out of the user’s domain of interest or to which no considered label applies. With oracle studies, we found that LR increases the absolute accuracy of models trained with diversified datasets by 14.4%. Moreover, we found that some models trained with data generated with LR interventions outperformed LLM-based few-shot classification. In contrast, OOSF was not effective in increasing model accuracy, implying the need for future work in human-in-the-loop text data generation.",
"year": 2023,
"venue": "Annual Meeting of the Association for Computational Linguistics",
"authors": [
"John Joon Young Chung",
"Ece Kamar",
"Saleema Amershi"
],
"externalIds": {
"DBLP": "conf/acl/ChungKA23",
"ACL": "2023.acl-long.34",
"ArXiv": "2306.04140",
"DOI": "10.18653/v1/2023.acl-long.34",
"CorpusId": 259096160
},
"url": "https://www.semanticscholar.org/paper/5aa26e0b2bb27162a4de07bf8c3d5e0e9d3b0853",
"referenceCount": 56,
"citationCount": 62,
"influentialCitationCount": 2,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Augmenting Large Language Model Translators via Translation Memories",
"abstract": "Using translation memories (TMs) as prompts is a promising approach to in-context learning of machine translation models. In this work, we take a step towards prompting large language models (LLMs) with TMs and making them better translators. We find that the ability of LLMs to ``understand'' prompts is indeed helpful for making better use of TMs. Experiments show that the results of a pre-trained LLM translator can be greatly improved by using high-quality TM-based prompts. These results are even comparable to those of the state-of-the-art NMT systems which have access to large-scale in-domain bilingual data and are well tuned on the downstream tasks.",
"year": 2023,
"venue": "Annual Meeting of the Association for Computational Linguistics",
"authors": [
"Yongyu Mu",
"Abudurexiti Reheman",
"Zhiquan Cao",
"Yuchun Fan",
"Bei Li",
"Yinqiao Li",
"Tong Xiao",
"Chunliang Zhang",
"Jingbo Zhu"
],
"externalIds": {
"DBLP": "conf/acl/MuRCFLLXZZ23",
"ArXiv": "2305.17367",
"DOI": "10.48550/arXiv.2305.17367",
"CorpusId": 258960135
},
"url": "https://www.semanticscholar.org/paper/3ed538484f8ded6b2ffd29bcd19972504909cebf",
"referenceCount": 33,
"citationCount": 15,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "3D graph neural network with few-shot learning for predicting drug-drug interactions in scaffold-based cold start scenario",
"abstract": null,
"year": 2023,
"venue": "Neural Networks",
"authors": [
"Qiujie Lv",
"Jun Zhou",
"Ziduo Yang",
"Haohuai He",
"Calvin Yu‐Chian Chen"
],
"externalIds": {
"DBLP": "journals/nn/LvZYHC23",
"DOI": "10.1016/j.neunet.2023.05.039",
"CorpusId": 258926578,
"PubMed": "37276813"
},
"url": "https://www.semanticscholar.org/paper/1462fb28f8d2b6d6b0d0c172b8d73befd576b787",
"referenceCount": 62,
"citationCount": 7,
"influentialCitationCount": 0,
"isOpenAccess": false,
"fieldsOfStudy": [
"Medicine",
"Computer Science"
]
},
{
"title": "Computational approaches streamlining drug discovery",
"abstract": null,
"year": 2023,
"venue": "Nature",
"authors": [
"Anastasiia V. Sadybekov",
"V. Katritch"
],
"externalIds": {
"DOI": "10.1038/s41586-023-05905-z",
"CorpusId": 258336875,
"PubMed": "37100941"
},
"url": "https://www.semanticscholar.org/paper/e4d8de5d5d95c1075050de5bf0b19811141006bd",
"referenceCount": 175,
"citationCount": 261,
"influentialCitationCount": 3,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "TCMBank-the largest TCM database provides deep learning-based Chinese-Western medicine exclusion prediction",
"abstract": null,
"year": 2023,
"venue": "Signal Transduction and Targeted Therapy",
"authors": [
"Qiujie Lv",
"Guanxing Chen",
"Haohuai He",
"Ziduo Yang",
"Lu Zhao",
"Kaixiang Zhang",
"Calvin Yu‐Chian Chen"
],
"externalIds": {
"PubMedCentral": "10063611",
"DOI": "10.1038/s41392-023-01339-1",
"CorpusId": 257838953,
"PubMed": "36997527"
},
"url": "https://www.semanticscholar.org/paper/7435e5b15fdb52183bac09067c9052ec08181259",
"referenceCount": 8,
"citationCount": 14,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "Meta Learning With Graph Attention Networks for Low-Data Drug Discovery",
"abstract": "Finding candidate molecules with favorable pharmacological activity, low toxicity, and proper pharmacokinetic properties is an important task in drug discovery. Deep neural networks have made impressive progress in accelerating and improving drug discovery. However, these techniques rely on a large amount of label data to form accurate predictions of molecular properties. At each stage of the drug discovery pipeline, usually, only a few biological data of candidate molecules and derivatives are available, indicating that the application of deep neural networks for low-data drug discovery is still a formidable challenge. Here, we propose a meta learning architecture with graph attention network, Meta-GAT, to predict molecular properties in low-data drug discovery. The GAT captures the local effects of atomic groups at the atom level through the triple attentional mechanism and implicitly captures the interactions between different atomic groups at the molecular level. GAT is used to perceive molecular chemical environment and connectivity, thereby effectively reducing sample complexity. Meta-GAT further develops a meta learning strategy based on bilevel optimization, which transfers meta knowledge from other attribute prediction tasks to low-data target tasks. In summary, our work demonstrates how meta learning can reduce the amount of data required to make meaningful predictions of molecules in low-data scenarios. Meta learning is likely to become the new learning paradigm in low-data drug discovery. The source code is publicly available at: https://github.com/lol88/Meta-GAT.",
"year": 2023,
"venue": "IEEE Transactions on Neural Networks and Learning Systems",
"authors": [
"Qiujie Lv",
"Guanxing Chen",
"Ziduo Yang",
"Weihe Zhong",
"Calvin Yu‐Chian Chen"
],
"externalIds": {
"DBLP": "journals/tnn/LvCYZC24",
"DOI": "10.1109/TNNLS.2023.3250324",
"CorpusId": 257391241,
"PubMed": "37028032"
},
"url": "https://www.semanticscholar.org/paper/99d16c8e737eb37ec770ba1f68bf5c775db1ce1c",
"referenceCount": 70,
"citationCount": 24,
"influentialCitationCount": 0,
"isOpenAccess": false,
"fieldsOfStudy": [
"Medicine",
"Computer Science"
]
},
{
"title": "LLaMA: Open and Efficient Foundation Language Models",
"abstract": "We introduce LLaMA, a collection of foundation language models ranging from 7B to 65B parameters. We train our models on trillions of tokens, and show that it is possible to train state-of-the-art models using publicly available datasets exclusively, without resorting to proprietary and inaccessible datasets. In particular, LLaMA-13B outperforms GPT-3 (175B) on most benchmarks, and LLaMA-65B is competitive with the best models, Chinchilla-70B and PaLM-540B. We release all our models to the research community.",
"year": 2023,
"venue": "arXiv.org",
"authors": [
"Hugo Touvron",
"Thibaut Lavril",
"Gautier Izacard",
"Xavier Martinet",
"Marie-Anne Lachaux",
"Timothée Lacroix",
"Baptiste Rozière",
"Naman Goyal",
"Eric Hambro",
"Faisal Azhar",
"Aurelien Rodriguez",
"Armand Joulin",
"Edouard Grave",
"Guillaume Lample"
],
"externalIds": {
"DBLP": "journals/corr/abs-2302-13971",
"ArXiv": "2302.13971",
"CorpusId": 257219404
},
"url": "https://www.semanticscholar.org/paper/57e849d0de13ed5f91d086936296721d4ff75a75",
"referenceCount": 80,
"citationCount": 8037,
"influentialCitationCount": 1074,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Adaptive Machine Translation with Large Language Models",
"abstract": "Consistency is a key requirement of high-quality translation. It is especially important to adhere to pre-approved terminology and adapt to corrected translations in domain-specific projects. Machine translation (MT) has achieved significant progress in the area of domain adaptation. However, real-time adaptation remains challenging. Large-scale language models (LLMs) have recently shown interesting capabilities of in-context learning, where they learn to replicate certain input-output text generation patterns, without further fine-tuning. By feeding an LLM at inference time with a prompt that consists of a list of translation pairs, it can then simulate the domain and style characteristics. This work aims to investigate how we can utilize in-context learning to improve real-time adaptive MT. Our extensive experiments show promising results at translation time. For example, GPT-3.5 can adapt to a set of in-domain sentence pairs and/or terminology while translating a new sentence. We observe that the translation quality with few-shot in-context learning can surpass that of strong encoder-decoder MT systems, especially for high-resource languages. Moreover, we investigate whether we can combine MT from strong encoder-decoder models with fuzzy matches, which can further improve translation quality, especially for less supported languages. We conduct our experiments across five diverse language pairs, namely English-to-Arabic (EN-AR), English-to-Chinese (EN-ZH), English-to-French (EN-FR), English-to-Kinyarwanda (EN-RW), and English-to-Spanish (EN-ES).",
"year": 2023,
"venue": "European Association for Machine Translation Conferences/Workshops",
"authors": [
"Yasmin Moslem",
"Rejwanul Haque",
"Andy Way"
],
"externalIds": {
"ACL": "2023.eamt-1.22",
"ArXiv": "2301.13294",
"DBLP": "conf/eamt/MoslemHKW23",
"DOI": "10.48550/arXiv.2301.13294",
"CorpusId": 256416029
},
"url": "https://www.semanticscholar.org/paper/9689acb6cb760e8bc21c16f368368b37dee977f9",
"referenceCount": 67,
"citationCount": 50,
"influentialCitationCount": 5,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Large language models encode clinical knowledge",
"abstract": null,
"year": 2022,
"venue": "Nature",
"authors": [
"K. Singhal",
"Shekoofeh Azizi",
"T. Tu",
"S. Mahdavi",
"Jason Wei",
"Hyung Won Chung",
"Nathan Scales",
"A. Tanwani",
"H. Cole-Lewis",
"S. Pfohl",
"P. Payne",
"Martin G. Seneviratne",
"P. Gamble",
"C. Kelly",
"Nathaneal Scharli",
"Aakanksha Chowdhery",
"P. A. Mansfield",
"B. A. Y. Arcas",
"D. Webster",
"Greg S. Corrado",
"Yossi Matias",
"K. Chou",
"Juraj Gottweis",
"Nenad Tomašev",
"Yun Liu",
"A. Rajkomar",
"J. Barral",
"Christopher Semturs",
"A. Karthikesalingam",
"Vivek Natarajan"
],
"externalIds": {
"ArXiv": "2212.13138",
"PubMedCentral": "10396962",
"DBLP": "journals/corr/abs-2212-13138",
"DOI": "10.1038/s41586-023-06291-2",
"CorpusId": 255124952,
"PubMed": "37438534"
},
"url": "https://www.semanticscholar.org/paper/6052486bc9144dc1730c12bf35323af3792a1fd0",
"referenceCount": 111,
"citationCount": 1349,
"influentialCitationCount": 78,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Medicine"
]
},
{
"title": "Molecular Graph Generation by Decomposition and Reassembling",
"abstract": "Designing molecular structures with desired chemical properties is an essential task in drug discovery and materials design. However, finding molecules with the optimized desired properties is still a challenging task due to combinatorial explosion of the candidate space of molecules. Here we propose a novel decomposition-and-reassembling-based approach, which does not include any optimization in hidden space, and our generation process is highly interpretable. Our method is a two-step procedure: In the first decomposition step, we apply frequent subgraph mining to a molecular database to collect a smaller size of subgraphs as building blocks of molecules. In the second reassembling step, we search desirable building blocks guided via reinforcement learning and combine them to generate new molecules. Our experiments show that our method not only can find better molecules in terms of two standard criteria, the penalized log P and druglikeness, but also can generate drug molecules showing the valid intermediate molecules.",
"year": 2022,
"venue": "ACS Omega",
"authors": [
"Masatsugu Yamada",
"M. Sugiyama"
],
"externalIds": {
"ArXiv": "2302.00587",
"DBLP": "journals/corr/abs-2302-00587",
"PubMedCentral": "10249382",
"DOI": "10.1021/acsomega.3c01078",
"CorpusId": 256459813,
"PubMed": "37305268"
},
"url": "https://www.semanticscholar.org/paper/bfbc2dd2b97a4ebd8ac410878f00ccf8f637a9fa",
"referenceCount": 52,
"citationCount": 2,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine",
"Computer Science",
"Biology"
]
},
{
"title": "Factuality Enhanced Language Models for Open-Ended Text Generation",
"abstract": "Pretrained language models (LMs) are susceptible to generate text with nonfactual information. In this work, we measure and improve the factual accuracy of large-scale LMs for open-ended text generation. We design the FactualityPrompts test set and metrics to measure the factuality of LM generations. Based on that, we study the factual accuracy of LMs with parameter sizes ranging from 126M to 530B. Interestingly, we find that larger LMs are more factual than smaller ones, although a previous study suggests that larger LMs can be less truthful in terms of misconceptions. In addition, popular sampling algorithms (e.g., top-p) in open-ended text generation can harm the factuality due to the ''uniform randomness'' introduced at every sampling step. We propose the factual-nucleus sampling algorithm that dynamically adapts the randomness to improve the factuality of generation while maintaining quality. Furthermore, we analyze the inefficiencies of the standard training method in learning correct associations between entities from factual text corpus (e.g., Wikipedia). We propose a factuality-enhanced training method that uses TopicPrefix for better awareness of facts and sentence completion as the training objective, which can vastly reduce the factual errors. We release our code and FactualityPrompts benchmark at: https://github.com/nayeon7lee/FactualityPrompt.",
"year": 2022,
"venue": "Neural Information Processing Systems",
"authors": [
"Nayeon Lee",
"Wei Ping",
"Peng Xu",
"M. Patwary",
"M. Shoeybi",
"Bryan Catanzaro"
],
"externalIds": {
"ArXiv": "2206.04624",
"DBLP": "conf/nips/LeePXPFSC22",
"DOI": "10.48550/arXiv.2206.04624",
"CorpusId": 249538460
},
"url": "https://www.semanticscholar.org/paper/a77f498235f12be4173f87bfca503b597c00f30e",
"referenceCount": 79,
"citationCount": 145,
"influentialCitationCount": 12,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Translation between Molecules and Natural Language",
"abstract": "We present MolT5 - a self-supervised learning framework for pretraining models on a vast amount of unlabeled natural language text and molecule strings. MolT5 allows for new, useful, and challenging analogs of traditional vision-language tasks, such as molecule captioning and text-based de novo molecule generation (altogether: translation between molecules and language), which we explore for the first time. Since MolT5 pretrains models on single-modal data, it helps overcome the chemistry domain shortcoming of data scarcity. Furthermore, we consider several metrics, including a new cross-modal embedding-based metric, to evaluate the tasks of molecule captioning and text-based molecule generation. Our results show that MolT5-based models are able to generate outputs, both molecules and captions, which in many cases are high quality.",
"year": 2022,
"venue": "Conference on Empirical Methods in Natural Language Processing",
"authors": [
"Carl N. Edwards",
"T. Lai",
"Kevin Ros",
"Garrett Honke",
"Heng Ji"
],
"externalIds": {
"ACL": "2022.emnlp-main.26",
"DBLP": "journals/corr/abs-2204-11817",
"ArXiv": "2204.11817",
"DOI": "10.48550/arXiv.2204.11817",
"CorpusId": 248376906
},
"url": "https://www.semanticscholar.org/paper/3b9b1aba877ecd3f7e508cbc78a41b623349902b",
"referenceCount": 90,
"citationCount": 113,
"influentialCitationCount": 33,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Unified Deep Learning Model for Multitask Reaction Predictions with Explanation",
"abstract": "There is significant interest and importance to develop robust machine learning models to assist organic chemistry synthesis. Typically, task-specific machine learning models for distinct reaction prediction tasks have been developed. In this work, we develop a unified deep learning model, T5Chem, for a variety of chemical reaction predictions tasks by adapting the \"Text-to-Text Transfer Transformer\" (T5) framework in natural language processing (NLP). On the basis of self-supervised pretraining with PubChem molecules, the T5Chem model can achieve state-of-the-art performances for four distinct types of task-specific reaction prediction tasks using four different open-source data sets, including reaction type classification on USPTO_TPL, forward reaction prediction on USPTO_MIT, single-step retrosynthesis on USPTO_50k, and reaction yield prediction on high-throughput C-N coupling reactions. Meanwhile, we introduced a new unified multitask reaction prediction data set USPTO_500_MT, which can be used to train and test five different types of reaction tasks, including the above four as well as a new reagent suggestion task. Our results showed that models trained with multiple tasks are more robust and can benefit from mutual learning on related tasks. Furthermore, we demonstrated the use of SHAP (SHapley Additive exPlanations) to explain T5Chem predictions at the functional group level, which provides a way to demystify sequence-based deep learning models in chemistry. T5Chem is accessible through https://yzhang.hpc.nyu.edu/T5Chem.",
"year": 2022,
"venue": "Journal of Chemical Information and Modeling",
"authors": [
"Jieyu Lu",
"Yingkai Zhang"
],
"externalIds": {
"DBLP": "journals/jcisd/LuZ22",
"DOI": "10.1021/acs.jcim.1c01467",
"CorpusId": 247362020,
"PubMed": "35266390"
},
"url": "https://www.semanticscholar.org/paper/eee7997106834442f1704e4681a9a761df6696a1",
"referenceCount": 49,
"citationCount": 51,
"influentialCitationCount": 2,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine",
"Computer Science"
]
},
{
"title": "A review of molecular representation in the age of machine learning",
"abstract": "Research in chemistry increasingly requires interdisciplinary work prompted by, among other things, advances in computing, machine learning, and artificial intelligence. Everyone working with molecules, whether chemist or not, needs an understanding of the representation of molecules in a machine‐readable format, as this is central to computational chemistry. Four classes of representations are introduced: string, connection table, feature‐based, and computer‐learned representations. Three of the most significant representations are simplified molecular‐input line‐entry system (SMILES), International Chemical Identifier (InChI), and the MDL molfile, of which SMILES was the first to successfully be used in conjunction with a variational autoencoder (VAE) to yield a continuous representation of molecules. This is noteworthy because a continuous representation allows for efficient navigation of the immensely large chemical space of possible molecules. Since 2018, when the first model of this type was published, considerable effort has been put into developing novel and improved methodologies. Most, if not all, researchers in the community make their work easily accessible on GitHub, though discussion of computation time and domain of applicability is often overlooked. Herein, we present questions for consideration in future work which we believe will make chemical VAEs even more accessible.",
"year": 2022,
"venue": "WIREs Computational Molecular Science",
"authors": [
"Daniel S. Wigh",
"J. Goodman",
"A. Lapkin"
],
"externalIds": {
"DOI": "10.1002/wcms.1603",
"CorpusId": 247002516
},
"url": "https://www.semanticscholar.org/paper/f5eb4985c3a888645224f3e4a3279c3dec411eb8",
"referenceCount": 114,
"citationCount": 122,
"influentialCitationCount": 1,
"isOpenAccess": true,
"fieldsOfStudy": null
},
{
"title": "SmileGNN: Drug–Drug Interaction Prediction Based on the SMILES and Graph Neural Network",
"abstract": "Concurrent use of multiple drugs can lead to unexpected adverse drug reactions. The interaction between drugs can be confirmed by routine in vitro and clinical trials. However, it is difficult to test the drug–drug interactions widely and effectively before the drugs enter the market. Therefore, the prediction of drug–drug interactions has become one of the research priorities in the biomedical field. In recent years, researchers have been using deep learning to predict drug–drug interactions by exploiting drug structural features and graph theory, and have achieved a series of achievements. A drug–drug interaction prediction model SmileGNN is proposed in this paper, which can be characterized by aggregating the structural features of drugs constructed by SMILES data and the topological features of drugs in knowledge graphs obtained by graph neural networks. The experimental results show that the model proposed in this paper combines a variety of data sources and has a better prediction performance compared with existing prediction models of drug–drug interactions. Five out of the top ten predicted new drug–drug interactions are verified from the latest database, which proves the credibility of SmileGNN.",
"year": 2022,
"venue": "Life",
"authors": [
"Xueting Han",
"Ruixia Xie",
"Xutao Li",
"Junyi Li"
],
"externalIds": {
"PubMedCentral": "8879716",
"DOI": "10.3390/life12020319",
"CorpusId": 236299083,
"PubMed": "35207606"
},
"url": "https://www.semanticscholar.org/paper/a537798de5e5b53c0e584f882d3a8ca5da143b77",
"referenceCount": 28,
"citationCount": 22,
"influentialCitationCount": 1,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "MolGPT: Molecular Generation Using a Transformer-Decoder Model",
"abstract": "Application of deep learning techniques for de novo generation of molecules, termed as inverse molecular design, has been gaining enormous traction in drug design. The representation of molecules in SMILES notation as a string of characters enables the usage of state of the art models in natural language processing, such as Transformers, for molecular design in general. Inspired by generative pre-training (GPT) models that have been shown to be successful in generating meaningful text, we train a transformer-decoder on the next token prediction task using masked self-attention for the generation of druglike molecules in this study. We show that our model, MolGPT, performs on par with other previously proposed modern machine learning frameworks for molecular generation in terms of generating valid, unique, and novel molecules. Furthermore, we demonstrate that the model can be trained conditionally to control multiple properties of the generated molecules. We also show that the model can be used to generate molecules with desired scaffolds as well as desired molecular properties by conditioning the generation on scaffold SMILES strings of desired scaffolds and property values. Using saliency maps, we highlight the interpretability of the generative process of the model.",
"year": 2021,
"venue": "Journal of Chemical Information and Modeling",
"authors": [
"Viraj Bagal",
"Rishal Aggarwal",
"P. K. Vinod",
"U. Priyakumar"
],
"externalIds": {
"DBLP": "journals/jcisd/BagalAVP22",
"DOI": "10.1021/acs.jcim.1c00600",
"CorpusId": 263484152,
"PubMed": "34694798"
},
"url": "https://www.semanticscholar.org/paper/77805d75199e7b9e580b4827f56a069ba0ddd13f",
"referenceCount": 53,
"citationCount": 181,
"influentialCitationCount": 13,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science",
"Medicine"
]
},
{
"title": "Mol2Context-vec: learning molecular representation from context awareness for drug discovery",
"abstract": "With the rapid development of proteomics and the rapid increase of target molecules for drug action, computer-aided drug design (CADD) has become a basic task in drug discovery. One of the key challenges in CADD is molecular representation. High-quality molecular expression with chemical intuition helps to promote many boundary problems of drug discovery. At present, molecular representation still faces several urgent problems, such as the polysemy of substructures and unsmooth information flow between atomic groups. In this research, we propose a deep contextualized Bi-LSTM architecture, Mol2Context-vec, which can integrate different levels of internal states to bring dynamic representations of molecular substructures. And the obtained molecular context representation can capture the interactions between any atomic groups, especially a pair of atomic groups that are topologically distant. Experiments show that Mol2Context-vec achieves state-of-the-art performance on multiple benchmark datasets. In addition, the visual interpretation of Mol2Context-vec is very close to the structural properties of chemical molecules as understood by humans. These advantages indicate that Mol2Context-vec can be used as a reliable and effective tool for molecular expression. Availability: The source code is available for download in https://github.com/lol88/Mol2Context-vec.",
"year": 2021,
"venue": "Briefings Bioinform.",
"authors": [
"Qiujie Lv",
"Guanxing Chen",
"Lu Zhao",
"Weihe Zhong",
"Calvin Yu-Chian Chen"
],
"externalIds": {
"DBLP": "journals/bib/LvCZZC21",
"DOI": "10.1093/bib/bbab317",
"CorpusId": 237290946,
"PubMed": "34428290"
},
"url": "https://www.semanticscholar.org/paper/0b7a4a4975ca48866bd2bc21bf7c99e033872550",
"referenceCount": 67,
"citationCount": 21,
"influentialCitationCount": 2,
"isOpenAccess": false,
"fieldsOfStudy": [
"Medicine",
"Computer Science"
]
},
{
"title": "GeoMol: Torsional Geometric Generation of Molecular 3D Conformer Ensembles",
"abstract": "Prediction of a molecule's 3D conformer ensemble from the molecular graph holds a key role in areas of cheminformatics and drug discovery. Existing generative models have several drawbacks including lack of modeling important molecular geometry elements (e.g. torsion angles), separate optimization stages prone to error accumulation, and the need for structure fine-tuning based on approximate classical force-fields or computationally expensive methods such as metadynamics with approximate quantum mechanics calculations at each geometry. We propose GeoMol--an end-to-end, non-autoregressive and SE(3)-invariant machine learning approach to generate distributions of low-energy molecular 3D conformers. Leveraging the power of message passing neural networks (MPNNs) to capture local and global graph information, we predict local atomic 3D structures and torsion angles, avoiding unnecessary over-parameterization of the geometric degrees of freedom (e.g. one angle per non-terminal bond). Such local predictions suffice both for the training loss computation, as well as for the full deterministic conformer assembly (at test time). We devise a non-adversarial optimal transport based loss function to promote diverse conformer generation. GeoMol predominantly outperforms popular open-source, commercial, or state-of-the-art machine learning (ML) models, while achieving significant speed-ups. We expect such differentiable 3D structure generators to significantly impact molecular modeling and related applications.",
"year": 2021,
"venue": "Neural Information Processing Systems",
"authors": [
"O. Ganea",
"L. Pattanaik",
"Connor W. Coley",
"R. Barzilay",
"K. Jensen",
"W. Green",
"T. Jaakkola"
],
"externalIds": {
"DBLP": "journals/corr/abs-2106-07802",
"ArXiv": "2106.07802",
"CorpusId": 235436208
},
"url": "https://www.semanticscholar.org/paper/47883928702efd21c23a4092e65cbcf9f29970aa",
"referenceCount": 57,
"citationCount": 103,
"influentialCitationCount": 16,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science",
"Physics"
]
},
{
"title": "Molecular representation learning with language models and domain-relevant auxiliary tasks",
"abstract": "We apply a Transformer architecture, specifically BERT, to learn flexible and high quality molecular representations for drug discovery problems. We study the impact of using different combinations of self-supervised tasks for pre-training, and present our results for the established Virtual Screening and QSAR benchmarks. We show that: i) The selection of appropriate self-supervised task(s) for pre-training has a significant impact on performance in subsequent downstream tasks such as Virtual Screening. ii) Using auxiliary tasks with more domain relevance for Chemistry, such as learning to predict calculated molecular properties, increases the fidelity of our learnt representations. iii) Finally, we show that molecular representations learnt by our model `MolBert' improve upon the current state of the art on the benchmark datasets.",
"year": 2020,
"venue": "arXiv.org",
"authors": [
"Benedek Fabian",
"T. Edlich",
"H. Gaspar",
"Marwin H. S. Segler",
"Joshua Meyers",
"Marco Fiscato",
"Mohamed Ahmed"
],
"externalIds": {
"DBLP": "journals/corr/abs-2011-13230",
"MAG": "3109892317",
"ArXiv": "2011.13230",
"CorpusId": 227209142
},
"url": "https://www.semanticscholar.org/paper/e4c5e81e6e337bb94af3eb719df5f029b40434fa",
"referenceCount": 44,
"citationCount": 106,
"influentialCitationCount": 12,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Artificial intelligence in drug discovery and development",
"abstract": null,
"year": 2020,
"venue": "Drug Discovery Today",
"authors": [
"Debleena Paul",
"G. Sanap",
"S. Shenoy",
"Dnyaneshwar Kalyane",
"K. Kalia",
"R. Tekade"
],
"externalIds": {
"PubMedCentral": "7577280",
"MAG": "3094492244",
"DOI": "10.1016/j.drudis.2020.10.010",
"CorpusId": 224819218,
"PubMed": "33099022"
},
"url": "https://www.semanticscholar.org/paper/0b7b22c5d2af53dc85ff264e920dbdfa1ad5e12d",
"referenceCount": 130,
"citationCount": 447,
"influentialCitationCount": 5,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine",
"Computer Science"
]
},
{
"title": "Language Models are Few-Shot Learners",
"abstract": "Recent work has demonstrated substantial gains on many NLP tasks and benchmarks by pre-training on a large corpus of text followed by fine-tuning on a specific task. While typically task-agnostic in architecture, this method still requires task-specific fine-tuning datasets of thousands or tens of thousands of examples. By contrast, humans can generally perform a new language task from only a few examples or from simple instructions - something which current NLP systems still largely struggle to do. Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting. For all tasks, GPT-3 is applied without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model. GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic. At the same time, we also identify some datasets where GPT-3's few-shot learning still struggles, as well as some datasets where GPT-3 faces methodological issues related to training on large web corpora. Finally, we find that GPT-3 can generate samples of news articles which human evaluators have difficulty distinguishing from articles written by humans. We discuss broader societal impacts of this finding and of GPT-3 in general.",
"year": 2020,
"venue": "Neural Information Processing Systems",
"authors": [
"Tom B. Brown",
"Benjamin Mann",
"Nick Ryder",
"Melanie Subbiah",
"J. Kaplan",
"Prafulla Dhariwal",
"Arvind Neelakantan",
"Pranav Shyam",
"Girish Sastry",
"Amanda Askell",
"Sandhini Agarwal",
"Ariel Herbert-Voss",
"Gretchen Krueger",
"T. Henighan",
"R. Child",
"A. Ramesh",
"Daniel M. Ziegler",
"Jeff Wu",
"Clemens Winter",
"Christopher Hesse",
"Mark Chen",
"Eric Sigler",
"Ma-teusz Litwin",
"Scott Gray",
"B. Chess",
"Jack Clark",
"Christopher Berner",
"Sam McCandlish",
"Alec Radford",
"I. Sutskever",
"Dario Amodei"
],
"externalIds": {
"ArXiv": "2005.14165",
"DBLP": "conf/nips/BrownMRSKDNSSAA20",
"MAG": "3030163527",
"CorpusId": 218971783
},
"url": "https://www.semanticscholar.org/paper/90abbc2cf38462b954ae1b772fac9532e2ccd8b0",
"referenceCount": 146,
"citationCount": 30859,
"influentialCitationCount": 3528,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "One molecular fingerprint to rule them all: drugs, biomolecules, and the metabolome",
"abstract": null,
"year": 2020,
"venue": "Journal of Cheminformatics",
"authors": [
"A. Capecchi",
"Daniel Probst",
"J. Reymond"
],
"externalIds": {
"PubMedCentral": "7291580",
"DBLP": "journals/jcheminf/CapecchiPR20",
"MAG": "3035302862",
"DOI": "10.1186/s13321-020-00445-4",
"CorpusId": 219589988,
"PubMed": "33431010"
},
"url": "https://www.semanticscholar.org/paper/5d7f9607472ff8606debd90cf92b51d14c532775",
"referenceCount": 52,
"citationCount": 196,
"influentialCitationCount": 2,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Medicine"
]
},
{
"title": "Estimated Research and Development Investment Needed to Bring a New Medicine to Market, 2009-2018.",
"abstract": "Importance\nThe mean cost of developing a new drug has been the subject of debate, with recent estimates ranging from $314 million to $2.8 billion.\n\n\nObjective\nTo estimate the research and development investment required to bring a new therapeutic agent to market, using publicly available data.\n\n\nDesign and Setting\nData were analyzed on new therapeutic agents approved by the US Food and Drug Administration (FDA) between 2009 and 2018 to estimate the research and development expenditure required to bring a new medicine to market. Data were accessed from the US Securities and Exchange Commission, Drugs@FDA database, and ClinicalTrials.gov, alongside published data on clinical trial success rates.\n\n\nExposures\nConduct of preclinical and clinical studies of new therapeutic agents.\n\n\nMain Outcomes and Measures\nMedian and mean research and development spending on new therapeutic agents approved by the FDA, capitalized at a real cost of capital rate (the required rate of return for an investor) of 10.5% per year, with bootstrapped CIs. All amounts were reported in 2018 US dollars.\n\n\nResults\nThe FDA approved 355 new drugs and biologics over the study period. Research and development expenditures were available for 63 (18%) products, developed by 47 different companies. After accounting for the costs of failed trials, the median capitalized research and development investment to bring a new drug to market was estimated at $985.3 million (95% CI, $683.6 million-$1228.9 million), and the mean investment was estimated at $1335.9 million (95% CI, $1042.5 million-$1637.5 million) in the base case analysis. Median estimates by therapeutic area (for areas with ≥5 drugs) ranged from $765.9 million (95% CI, $323.0 million-$1473.5 million) for nervous system agents to $2771.6 million (95% CI, $2051.8 million-$5366.2 million) for antineoplastic and immunomodulating agents. Data were mainly accessible for smaller firms, orphan drugs, products in certain therapeutic areas, first-in-class drugs, therapeutic agents that received accelerated approval, and products approved between 2014 and 2018. Results varied in sensitivity analyses using different estimates of clinical trial success rates, preclinical expenditures, and cost of capital.\n\n\nConclusions and Relevance\nThis study provides an estimate of research and development costs for new therapeutic agents based on publicly available data. Differences from previous studies may reflect the spectrum of products analyzed, the restricted availability of data in the public domain, and differences in underlying assumptions in the cost calculations.",
"year": 2020,
"venue": "Journal of the American Medical Association (JAMA)",
"authors": [
"O. Wouters",
"M. Mckee",
"J. Luyten"
],
"externalIds": {
"MAG": "3009999522",
"DOI": "10.1001/jama.2020.1166",
"CorpusId": 211834795,
"PubMed": "32125404"
},
"url": "https://www.semanticscholar.org/paper/898861ab4733194be5e6fd43449d36c700f53884",
"referenceCount": 34,
"citationCount": 784,
"influentialCitationCount": 59,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "5分で分かる!? 有名論文ナナメ読み:Jacob Devlin et al. : BERT : Pre-training of Deep Bidirectional Transformers for Language Understanding",
"abstract": null,
"year": 2020,
"venue": "",
"authors": [
"知秀 柴田"
],
"externalIds": {
"MAG": "3033156098",
"CorpusId": 226096901
},
"url": "https://www.semanticscholar.org/paper/43f2ad297941db230c089ba353efc3f281ab678c",
"referenceCount": 0,
"citationCount": 9806,
"influentialCitationCount": 1847,
"isOpenAccess": false,
"fieldsOfStudy": [
"Psychology"
]
},
{
"title": "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer",
"abstract": "Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new \"Colossal Clean Crawled Corpus\", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.",
"year": 2019,
"venue": "Journal of machine learning research",
"authors": [
"Colin Raffel",
"Noam M. Shazeer",
"Adam Roberts",
"Katherine Lee",
"Sharan Narang",
"Michael Matena",
"Yanqi Zhou",
"Wei Li",
"Peter J. Liu"
],
"externalIds": {
"MAG": "2981852735",
"DBLP": "journals/corr/abs-1910-10683",
"ArXiv": "1910.10683",
"CorpusId": 204838007
},
"url": "https://www.semanticscholar.org/paper/6c4b76232bb72897685d19b3d264c6ee3005bc2b",
"referenceCount": 134,
"citationCount": 15989,
"influentialCitationCount": 2031,
"isOpenAccess": false,
"fieldsOfStudy": [
"Mathematics",
"Computer Science"
]
},
{
"title": "Self-referencing embedded strings (SELFIES): A 100% robust molecular string representation",
"abstract": "The discovery of novel materials and functional molecules can help to solve some of society’s most urgent challenges, ranging from efficient energy harvesting and storage to uncovering novel pharmaceutical drug candidates. Traditionally matter engineering–generally denoted as inverse design–was based massively on human intuition and high-throughput virtual screening. The last few years have seen the emergence of significant interest in computer-inspired designs based on evolutionary or deep learning methods. The major challenge here is that the standard strings molecular representation SMILES shows substantial weaknesses in that task because large fractions of strings do not correspond to valid molecules. Here, we solve this problem at a fundamental level and introduce SELFIES (SELF-referencIng Embedded Strings), a string-based representation of molecules which is 100% robust. Every SELFIES string corresponds to a valid molecule, and SELFIES can represent every molecule. SELFIES can be directly applied in arbitrary machine learning models without the adaptation of the models; each of the generated molecule candidates is valid. In our experiments, the model’s internal memory stores two orders of magnitude more diverse molecules than a similar test with SMILES. Furthermore, as all molecules are valid, it allows for explanation and interpretation of the internal working of the generative models.",
"year": 2019,
"venue": "Machine Learning: Science and Technology",
"authors": [
"Mario Krenn",
"Florian Hase",
"AkshatKumar Nigam",
"Pascal Friederich",
"Alán Aspuru-Guzik"
],
"externalIds": {
"DBLP": "journals/mlst/KrennHNFA20",
"MAG": "3045928028",
"DOI": "10.1088/2632-2153/aba947",
"CorpusId": 212415210
},
"url": "https://www.semanticscholar.org/paper/8338a903d8078481ff8af777475f7394d00e9d57",
"referenceCount": 65,
"citationCount": 545,
"influentialCitationCount": 42,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Physics"
]
},
{
"title": "De novo generation of hit-like molecules from gene expression signatures using artificial intelligence",
"abstract": null,
"year": 2018,
"venue": "Nature Communications",
"authors": [
"O. Méndez-Lucio",
"B. Baillif",
"Djork-Arné Clevert",
"D. Rouquié",
"J. Wichard"
],
"externalIds": {
"MAG": "2998571806",
"PubMedCentral": "6941972",
"DOI": "10.1038/s41467-019-13807-w",
"CorpusId": 209546145,
"PubMed": "31900408"
},
"url": "https://www.semanticscholar.org/paper/d8498da01ce27e17f58adb73a2d782d6af44884d",
"referenceCount": 72,
"citationCount": 260,
"influentialCitationCount": 7,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "Fréchet ChemNet Distance: A Metric for Generative Models for Molecules in Drug Discovery",
"abstract": "The new wave of successful generative models in machine learning has increased the interest in deep learning driven de novo drug design. However, method comparison is difficult because of various flaws of the currently employed evaluation metrics. We propose an evaluation metric for generative models called Fréchet ChemNet distance (FCD). The advantage of the FCD over previous metrics is that it can detect whether generated molecules are diverse and have similar chemical and biological properties as real molecules.",
"year": 2018,
"venue": "Journal of Chemical Information and Modeling",
"authors": [
"Kristina Preuer",
"Philipp Renz",
"Thomas Unterthiner",
"Sepp Hochreiter",
"G. Klambauer"
],
"externalIds": {
"DBLP": "journals/jcisd/PreuerRUHK18",
"MAG": "2887447356",
"ArXiv": "1803.09518",
"DOI": "10.1021/acs.jcim.8b00234",
"CorpusId": 51892387,
"PubMed": "30118593"
},
"url": "https://www.semanticscholar.org/paper/7d878fe31b9b57f75071586d83cdec2e8b81e039",
"referenceCount": 29,
"citationCount": 277,
"influentialCitationCount": 32,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Medicine",
"Biology",
"Mathematics"
]
},
{
"title": "Mol2vec: Unsupervised Machine Learning Approach with Chemical Intuition",
"abstract": "Inspired by natural language processing techniques, we here introduce Mol2vec, which is an unsupervised machine learning approach to learn vector representations of molecular substructures. Like the Word2vec models, where vectors of closely related words are in close proximity in the vector space, Mol2vec learns vector representations of molecular substructures that point in similar directions for chemically related substructures. Compounds can finally be encoded as vectors by summing the vectors of the individual substructures and, for instance, be fed into supervised machine learning approaches to predict compound properties. The underlying substructure vector embeddings are obtained by training an unsupervised machine learning approach on a so-called corpus of compounds that consists of all available chemical matter. The resulting Mol2vec model is pretrained once, yields dense vector representations, and overcomes drawbacks of common compound feature representations such as sparseness and bit collisions. The prediction capabilities are demonstrated on several compound property and bioactivity data sets and compared with results obtained for Morgan fingerprints as a reference compound representation. Mol2vec can be easily combined with ProtVec, which employs the same Word2vec concept on protein sequences, resulting in a proteochemometric approach that is alignment-independent and thus can also be easily used for proteins with low sequence similarities.",
"year": 2018,
"venue": "Journal of Chemical Information and Modeling",
"authors": [
"Sabrina Jaeger",
"S. Fulle",
"S. Turk"
],
"externalIds": {
"MAG": "2777416523",
"DBLP": "journals/jcisd/JaegerFT18",
"DOI": "10.1021/acs.jcim.7b00616",
"CorpusId": 34512664,
"PubMed": "29268609"
},
"url": "https://www.semanticscholar.org/paper/88a99980f1f7eeac5f36be2e4601898988bdf937",
"referenceCount": 39,
"citationCount": 445,
"influentialCitationCount": 20,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Medicine"
]
},
{
"title": "De Novo Design of Bioactive Small Molecules by Artificial Intelligence",
"abstract": "Generative artificial intelligence offers a fresh view on molecular design. We present the first‐time prospective application of a deep learning model for designing new druglike compounds with desired activities. For this purpose, we trained a recurrent neural network to capture the constitution of a large set of known bioactive compounds represented as SMILES strings. By transfer learning, this general model was fine‐tuned on recognizing retinoid X and peroxisome proliferator‐activated receptor agonists. We synthesized five top‐ranking compounds designed by the generative model. Four of the compounds revealed nanomolar to low‐micromolar receptor modulatory activity in cell‐based assays. Apparently, the computational model intrinsically captured relevant chemical and biological knowledge without the need for explicit rules. The results of this study advocate generative artificial intelligence for prospective de novo molecular design, and demonstrate the potential of these methods for future medicinal chemistry.",
"year": 2018,
"venue": "Molecular Informatics",
"authors": [
"D. Merk",
"Lukas Friedrich",
"F. Grisoni",
"G. Schneider"
],
"externalIds": {
"PubMedCentral": "5838524",
"MAG": "2784270883",
"DOI": "10.1002/minf.201700153",
"CorpusId": 3833836,
"PubMed": "29319225"
},
"url": "https://www.semanticscholar.org/paper/ab1d44fe7ee9165974a2487b6d10ddaba6c26549",
"referenceCount": 23,
"citationCount": 262,
"influentialCitationCount": 2,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine",
"Computer Science"
]
},
{
"title": "Automating drug discovery",
"abstract": null,
"year": 2017,
"venue": "Nature reviews. Drug discovery",
"authors": [
"G. Schneider"
],
"externalIds": {
"MAG": "2775714759",
"DOI": "10.1038/nrd.2017.232",
"CorpusId": 24832185,
"PubMed": "29242609"
},
"url": "https://www.semanticscholar.org/paper/1256aaed3d89a81a2914e11b7b53757fbc75b2cf",
"referenceCount": 260,
"citationCount": 474,
"influentialCitationCount": 10,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Medicine"
]
},
{
"title": "How NOT To Evaluate Your Dialogue System: An Empirical Study of Unsupervised Evaluation Metrics for Dialogue Response Generation",
"abstract": "We investigate evaluation metrics for dialogue response generation systems where supervised labels, such as task completion, are not available. Recent works in response generation have adopted metrics from machine translation to compare a model's generated response to a single target response. We show that these metrics correlate very weakly with human judgements in the non-technical Twitter domain, and not at all in the technical Ubuntu domain. We provide quantitative and qualitative results highlighting specific weaknesses in existing metrics, and provide recommendations for future development of better automatic evaluation metrics for dialogue systems.",
"year": 2016,
"venue": "Conference on Empirical Methods in Natural Language Processing",
"authors": [
"Chia-Wei Liu",
"Ryan Lowe",
"Iulian Serban",
"Michael Noseworthy",
"Laurent Charlin",
"Joelle Pineau"
],
"externalIds": {
"MAG": "2963903950",
"DBLP": "conf/emnlp/LiuLSNCP16",
"ArXiv": "1603.08023",
"ACL": "D16-1230",
"DOI": "10.18653/v1/D16-1230",
"CorpusId": 9197196
},
"url": "https://www.semanticscholar.org/paper/129cbad01be98ee88a930e31898cb76be79c41c1",
"referenceCount": 48,
"citationCount": 1249,
"influentialCitationCount": 80,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "ZINC 15 – Ligand Discovery for Everyone",
"abstract": "Many questions about the biological activity and availability of small molecules remain inaccessible to investigators who could most benefit from their answers. To narrow the gap between chemoinformatics and biology, we have developed a suite of ligand annotation, purchasability, target, and biology association tools, incorporated into ZINC and meant for investigators who are not computer specialists. The new version contains over 120 million purchasable “drug-like” compounds – effectively all organic molecules that are for sale – a quarter of which are available for immediate delivery. ZINC connects purchasable compounds to high-value ones such as metabolites, drugs, natural products, and annotated compounds from the literature. Compounds may be accessed by the genes for which they are annotated as well as the major and minor target classes to which those genes belong. It offers new analysis tools that are easy for nonspecialists yet with few limitations for experts. ZINC retains its original 3D roots – all molecules are available in biologically relevant, ready-to-dock formats. ZINC is freely available at http://zinc15.docking.org.",
"year": 2015,
"venue": "Journal of Chemical Information and Modeling",
"authors": [
"T. Sterling",
"J. Irwin"
],
"externalIds": {
"PubMedCentral": "4658288",
"MAG": "1757990252",
"DBLP": "journals/jcisd/SterlingI15",
"DOI": "10.1021/acs.jcim.5b00559",
"CorpusId": 327319,
"PubMed": "26479676"
},
"url": "https://www.semanticscholar.org/paper/5972c3d8507359a6cff6ef17c4af206ec76b32bb",
"referenceCount": 109,
"citationCount": 2459,
"influentialCitationCount": 164,
"isOpenAccess": true,
"fieldsOfStudy": [
"Biology",
"Computer Science",
"Medicine"
]
},
{
"title": "Get Your Atoms in Order - An Open-Source Implementation of a Novel and Robust Molecular Canonicalization Algorithm",
"abstract": "Finding a canonical ordering of the atoms in a molecule is a prerequisite for generating a unique representation of the molecule. The canonicalization of a molecule is usually accomplished by applying some sort of graph relaxation algorithm, the most common of which is the Morgan algorithm. There are known issues with that algorithm that lead to noncanonical atom orderings as well as problems when it is applied to large molecules like proteins. Furthermore, each cheminformatics toolkit or software provides its own version of a canonical ordering, most based on unpublished algorithms, which also complicates the generation of a universal unique identifier for molecules. We present an alternative canonicalization approach that uses a standard stable-sorting algorithm instead of a Morgan-like index. Two new invariants that allow canonical ordering of molecules with dependent chirality as well as those with highly symmetrical cyclic graphs have been developed. The new approach proved to be robust and fast when tested on the 1.45 million compounds of the ChEMBL 20 data set in different scenarios like random renumbering of input atoms or SMILES round tripping. Our new algorithm is able to generate a canonical order of the atoms of protein molecules within a few milliseconds. The novel algorithm is implemented in the open-source cheminformatics toolkit RDKit. With this paper, we provide a reference Python implementation of the algorithm that could easily be integrated in any cheminformatics toolkit. This provides a first step toward a common standard for canonical atom ordering to generate a universal unique identifier for molecules other than InChI.",
"year": 2015,
"venue": "Journal of Chemical Information and Modeling",
"authors": [
"Nadine Schneider",
"R. Sayle",
"G. Landrum"
],
"externalIds": {
"DBLP": "journals/jcisd/SchneiderSL15",
"MAG": "2405035126",
"DOI": "10.1021/acs.jcim.5b00543",
"CorpusId": 206609033,
"PubMed": "26441310"
},
"url": "https://www.semanticscholar.org/paper/2b4b57de760e8796eb1d909d177b0f113db04441",
"referenceCount": 31,
"citationCount": 90,
"influentialCitationCount": 6,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science",
"Medicine"
]
},
{
"title": "ChEMBL web services: streamlining access to drug discovery data and utilities",
"abstract": "ChEMBL is now a well-established resource in the fields of drug discovery and medicinal chemistry research. The ChEMBL database curates and stores standardized bioactivity, molecule, target and drug data extracted from multiple sources, including the primary medicinal chemistry literature. Programmatic access to ChEMBL data has been improved by a recent update to the ChEMBL web services (version 2.0.x, https://www.ebi.ac.uk/chembl/api/data/docs), which exposes significantly more data from the underlying database and introduces new functionality. To complement the data-focused services, a utility service (version 1.0.x, https://www.ebi.ac.uk/chembl/api/utils/docs), which provides RESTful access to commonly used cheminformatics methods, has also been concurrently developed. The ChEMBL web services can be used together or independently to build applications and data processing workflows relevant to drug discovery and chemical biology.",
"year": 2015,
"venue": "Nucleic Acids Res.",
"authors": [
"M. Davies",
"M. Nowotka",
"G. Papadatos",
"Nathan Dedman",
"A. Gaulton",
"Francis Atkinson",
"L. Bellis",
"John P. Overington"
],
"externalIds": {
"MAG": "2155478691",
"PubMedCentral": "4489243",
"DBLP": "journals/nar/DaviesNPDGABO15",
"DOI": "10.1093/nar/gkv352",
"CorpusId": 6675645,
"PubMed": "25883136"
},
"url": "https://www.semanticscholar.org/paper/87c83e5a08df6ae710b1911d68896398528329f5",
"referenceCount": 16,
"citationCount": 462,
"influentialCitationCount": 37,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine",
"Biology",
"Computer Science"
]
},
{
"title": "Efficient Estimation of Word Representations in Vector Space",
"abstract": "We propose two novel model architectures for computing continuous vector\nrepresentations of words from very large data sets. The quality of these\nrepresentations is measured in a word similarity task, and the results are\ncompared to the previously best performing techniques based on different types\nof neural networks. We observe large improvements in accuracy at much lower\ncomputational cost, i.e. it takes less than a day to learn high quality word\nvectors from a 1.6 billion words data set. Furthermore, we show that these\nvectors provide state-of-the-art performance on our test set for measuring\nsyntactic and semantic word similarities.",
"year": 2013,
"venue": "International Conference on Learning Representations",
"authors": [
"Tomas Mikolov",
"Kai Chen",
"G. Corrado",
"J. Dean"
],
"externalIds": {
"MAG": "2950577311",
"DBLP": "journals/corr/abs-1301-3781",
"ArXiv": "1301.3781",
"CorpusId": 5959482
},
"url": "https://www.semanticscholar.org/paper/f6b51c8753a871dc94ff32152c00c01e94f90f09",
"referenceCount": 36,
"citationCount": 29657,
"influentialCitationCount": 4070,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Extended-Connectivity Fingerprints",
"abstract": "Extended-connectivity fingerprints (ECFPs) are a novel class of topological fingerprints for molecular characterization. Historically, topological fingerprints were developed for substructure and similarity searching. ECFPs were developed specifically for structure-activity modeling. ECFPs are circular fingerprints with a number of useful qualities: they can be very rapidly calculated; they are not predefined and can represent an essentially infinite number of different molecular features (including stereochemical information); their features represent the presence of particular substructures, allowing easier interpretation of analysis results; and the ECFP algorithm can be tailored to generate different types of circular fingerprints, optimized for different uses. While the use of ECFPs has been widely adopted and validated, a description of their implementation has not previously been presented in the literature.",
"year": 2010,
"venue": "Journal of Chemical Information and Modeling",
"authors": [
"David Rogers",
"M. Hahn"
],
"externalIds": {
"DBLP": "journals/jcisd/RogersH10",
"MAG": "1988037271",
"DOI": "10.1021/ci100050t",
"CorpusId": 5132461,
"PubMed": "20426451"
},
"url": "https://www.semanticscholar.org/paper/6420a334687d290d77c6b5ec99ca17f9d069df4a",
"referenceCount": 0,
"citationCount": 4794,
"influentialCitationCount": 212,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science",
"Medicine"
]
},
{
"title": "Levenshtein Distance: Information theory, Computer science, String (computer science), String metric, Damerau?Levenshtein distance, Spell checker, Hamming distance",
"abstract": "In information theory and computer science, the Levenshtein distance is a metric for measuring the amount of difference between two sequences (i.e., the so called edit distance). The Levenshtein distance between two strings is given by the minimum number of operations needed to transform one string into the other, where an operation is an insertion, deletion, or substitution of a single character. A generalization of the Levenshtein distance (Damerau?Levenshtein distance) allows the transposition of two characters as an operation. Some Translation Environment Tools, such as translation memory leveraging applications, use the Levenhstein algorithm to measure the edit distance between two fuzzy matching content segments.The metric is named after Vladimir Levenshtein, who considered this distance in 1965. It is often used in applications that need to determine how similar, or different, two strings are, such as spell checkers",
"year": 2009,
"venue": "",
"authors": [
"Frederic P. Miller",
"Agnes F. Vandome",
"John McBrewster"
],
"externalIds": {
"MAG": "2461708070",
"CorpusId": 64130157
},
"url": "https://www.semanticscholar.org/paper/2d7397588131b52dfc3c215b9cdd23ee2f9cf95c",
"referenceCount": 0,
"citationCount": 99,
"influentialCitationCount": 1,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Commercializing Successful Biomedical Technologies",
"abstract": null,
"year": 2008,
"venue": "",
"authors": [
"A. Meyers"
],
"externalIds": {
"MAG": "1595403923",
"DOI": "10.1057/JCB.2008.20",
"CorpusId": 153712283
},
"url": "https://www.semanticscholar.org/paper/1183fd9a4f7ca0109261abbb338214f592e5ec5e",
"referenceCount": 0,
"citationCount": 8,
"influentialCitationCount": 0,
"isOpenAccess": false,
"fieldsOfStudy": [
"Business"
]
},
{
"title": "DrugBank: a comprehensive resource for in silico drug discovery and exploration",
"abstract": "DrugBank is a unique bioinformatics/cheminformatics resource that combines detailed drug (i.e. chemical) data with comprehensive drug target (i.e. protein) information. The database contains >4100 drug entries including >800 FDA approved small molecule and biotech drugs as well as >3200 experimental drugs. Additionally, >14,000 protein or drug target sequences are linked to these drug entries. Each DrugCard entry contains >80 data fields with half of the information being devoted to drug/chemical data and the other half devoted to drug target or protein data. Many data fields are hyperlinked to other databases (KEGG, PubChem, ChEBI, PDB, Swiss-Prot and GenBank) and a variety of structure viewing applets. The database is fully searchable supporting extensive text, sequence, chemical structure and relational query searches. Potential applications of DrugBank include in silico drug target discovery, drug design, drug docking or screening, drug metabolism prediction, drug interaction prediction and general pharmaceutical education. DrugBank is available at http://redpoll.pharmacy.ualberta.ca/drugbank/.",
"year": 2005,
"venue": "Nucleic Acids Res.",
"authors": [
"D. Wishart",
"Craig Knox",
"Anchi Guo",
"S. Shrivastava",
"Murtaza Hassanali",
"P. Stothard",
"Zhan Chang",
"Jennifer Woolsey"
],
"externalIds": {
"DBLP": "journals/nar/WishartKGSHSCW06",
"MAG": "2170146596",
"PubMedCentral": "1347430",
"DOI": "10.1093/nar/gkj067",
"CorpusId": 856614,
"PubMed": "16381955"
},
"url": "https://www.semanticscholar.org/paper/69bbc118f14aad3a51209235ced268494779f2ef",
"referenceCount": 12,
"citationCount": 3265,
"influentialCitationCount": 294,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine",
"Computer Science",
"Biology"
]
},
{
"title": "METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments",
"abstract": "We describe METEOR, an automatic metric for machine translation evaluation that is based on a generalized concept of unigram matching between the machineproduced translation and human-produced reference translations. Unigrams can be matched based on their surface forms, stemmed forms, and meanings; furthermore, METEOR can be easily extended to include more advanced matching strategies. Once all generalized unigram matches between the two strings have been found, METEOR computes a score for this matching using a combination of unigram-precision, unigram-recall, and a measure of fragmentation that is designed to directly capture how well-ordered the matched words in the machine translation are in relation to the reference. We evaluate METEOR by measuring the correlation between the metric scores and human judgments of translation quality. We compute the Pearson R correlation value between its scores and human quality assessments of the LDC TIDES 2003 Arabic-to-English and Chinese-to-English datasets. We perform segment-bysegment correlation, and show that METEOR gets an R correlation value of 0.347 on the Arabic data and 0.331 on the Chinese data. This is shown to be an improvement on using simply unigramprecision, unigram-recall and their harmonic F1 combination. We also perform experiments to show the relative contributions of the various mapping modules.",
"year": 2005,
"venue": "IEEvaluation@ACL",
"authors": [
"Satanjeev Banerjee",
"A. Lavie"
],
"externalIds": {
"ACL": "W05-0909",
"MAG": "2123301721",
"DBLP": "conf/acl/BanerjeeL05",
"CorpusId": 7164502
},
"url": "https://www.semanticscholar.org/paper/7533d30329cfdbf04ee8ee82bfef792d08015ee5",
"referenceCount": 8,
"citationCount": 5351,
"influentialCitationCount": 856,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "ROUGE: A Package for Automatic Evaluation of Summaries",
"abstract": "ROUGE stands for Recall-Oriented Understudy for Gisting Evaluation. It includes measures to automatically determine the quality of a summary by comparing it to other (ideal) summaries created by humans. The measures count the number of overlapping units such as n-gram, word sequences, and word pairs between the computer-generated summary to be evaluated and the ideal summaries created by humans. This paper introduces four different ROUGE measures: ROUGE-N, ROUGE-L, ROUGE-W, and ROUGE-S included in the ROUGE summarization evaluation package and their evaluations. Three of them have been used in the Document Understanding Conference (DUC) 2004, a large-scale summarization evaluation sponsored by NIST.",
"year": 2004,
"venue": "Annual Meeting of the Association for Computational Linguistics",
"authors": [
"Chin-Yew Lin"
],
"externalIds": {
"MAG": "2154652894",
"ACL": "W04-1013",
"CorpusId": 964287
},
"url": "https://www.semanticscholar.org/paper/60b05f32c32519a809f21642ef1eb3eaf3848008",
"referenceCount": 13,
"citationCount": 13205,
"influentialCitationCount": 2401,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Automatic Evaluation of Machine Translation Quality Using Longest Common Subsequence and Skip-Bigram Statistics",
"abstract": "In this paper we describe two new objective automatic evaluation methods for machine translation. The first method is based on longest common subsequence between a candidate translation and a set of reference translations. Longest common subsequence takes into account sentence level structure similarity naturally and identifies longest co-occurring in-sequence n-grams automatically. The second method relaxes strict n-gram matching to skip-bigram matching. Skip-bigram is any pair of words in their sentence order. Skip-bigram cooccurrence statistics measure the overlap of skip-bigrams between a candidate translation and a set of reference translations. The empirical results show that both methods correlate with human judgments very well in both adequacy and fluency.",
"year": 2004,
"venue": "Annual Meeting of the Association for Computational Linguistics",
"authors": [
"Chin-Yew Lin",
"F. Och"
],
"externalIds": {
"ACL": "P04-1077",
"MAG": "2108325777",
"DBLP": "conf/acl/LinO04",
"DOI": "10.3115/1218955.1219032",
"CorpusId": 1586456
},
"url": "https://www.semanticscholar.org/paper/74d2ad28be32a5802a1b15d4e9a430db2234a3dd",
"referenceCount": 21,
"citationCount": 742,
"influentialCitationCount": 77,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Automatic Evaluation of Summaries Using N-gram Co-occurrence Statistics",
"abstract": "Following the recent adoption by the machine translation community of automatic evaluation using the BLEU/NIST scoring process, we conduct an in-depth study of a similar idea for evaluating summaries. The results show that automatic evaluation using unigram co-occurrences between summary pairs correlates surprising well with human evaluations, based on various statistical metrics; while direct application of the BLEU evaluation procedure does not always give good results.",
"year": 2003,
"venue": "North American Chapter of the Association for Computational Linguistics",
"authors": [
"Chin-Yew Lin",
"E. Hovy"
],
"externalIds": {
"DBLP": "conf/naacl/LinH03",
"MAG": "2150824314",
"ACL": "N03-1020",
"DOI": "10.3115/1073445.1073465",
"CorpusId": 16292125
},
"url": "https://www.semanticscholar.org/paper/c63bb976dc0d3a897f3b0920170a4c573ef904c6",
"referenceCount": 22,
"citationCount": 1835,
"influentialCitationCount": 254,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Reoptimization of MDL Keys for Use in Drug Discovery",
"abstract": "For a number of years MDL products have exposed both 166 bit and 960 bit keysets based on 2D descriptors. These keysets were originally constructed and optimized for substructure searching. We report on improvements in the performance of MDL keysets which are reoptimized for use in molecular similarity. Classification performance for a test data set of 957 compounds was increased from 0.65 for the 166 bit keyset and 0.67 for the 960 bit keyset to 0.71 for a surprisal S/N pruned keyset containing 208 bits and 0.71 for a genetic algorithm optimized keyset containing 548 bits. We present an overview of the underlying technology supporting the definition of descriptors and the encoding of these descriptors into keysets. This technology allows definition of descriptors as combinations of atom properties, bond properties, and atomic neighborhoods at various topological separations as well as supporting a number of custom descriptors. These descriptors can then be used to set one or more bits in a keyset. We constructed various keysets and optimized their performance in clustering bioactive substances. Performance was measured using methodology developed by Briem and Lessel. \"Directed pruning\" was carried out by eliminating bits from the keysets on the basis of random selection, values of the surprisal of the bit, or values of the surprisal S/N ratio of the bit. The random pruning experiment highlighted the insensitivity of keyset performance for keyset lengths of more than 1000 bits. Contrary to initial expectations, pruning on the basis of the surprisal values of the various bits resulted in keysets which underperformed those resulting from random pruning. In contrast, pruning on the basis of the surprisal S/N ratio was found to yield keysets which performed better than those resulting from random pruning. We also explored the use of genetic algorithms in the selection of optimal keysets. Once more the performance was only a weak function of keyset size, and the optimizations failed to identify a single globally optimal keyset. Instead multiple, equally optimal keysets could be produced which had relatively low overlap of the descriptors they encoded.",
"year": 2002,
"venue": "Journal of chemical information and computer sciences",
"authors": [
"J. L. Durant",
"B. A. Leland",
"D. Henry",
"J. Nourse"
],
"externalIds": {
"DBLP": "journals/jcisd/DurantLHN02",
"MAG": "2200017991",
"DOI": "10.1021/ci010132r",
"CorpusId": 22752474,
"PubMed": "12444722"
},
"url": "https://www.semanticscholar.org/paper/ad40b25e38314f39a82f193dc4806e6a1c2c6b69",
"referenceCount": 31,
"citationCount": 1192,
"influentialCitationCount": 43,
"isOpenAccess": true,
"fieldsOfStudy": [
"Mathematics",
"Computer Science",
"Medicine"
]
},
{
"title": "Bleu: a Method for Automatic Evaluation of Machine Translation",
"abstract": "Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations.",
"year": 2002,
"venue": "Annual Meeting of the Association for Computational Linguistics",
"authors": [
"K. Papineni",
"Salim Roukos",
"T. Ward",
"Wei-Jing Zhu"
],
"externalIds": {
"DBLP": "conf/acl/PapineniRWZ02",
"MAG": "2101105183",
"ACL": "P02-1040",
"DOI": "10.3115/1073083.1073135",
"CorpusId": 11080756
},
"url": "https://www.semanticscholar.org/paper/d7da009f457917aa381619facfa5ffae9329a6e9",
"referenceCount": 5,
"citationCount": 24976,
"influentialCitationCount": 5731,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "SMILES, a chemical language and information system. 1. Introduction to methodology and encoding rules",
"abstract": "18-24.",
"year": 1988,
"venue": "Journal of chemical information and computer sciences",
"authors": [
"D. Weininger"
],
"externalIds": {
"MAG": "1975147762",
"DBLP": "journals/jcisd/Weininger88",
"DOI": "10.1021/ci00057a005",
"CorpusId": 5445756
},
"url": "https://www.semanticscholar.org/paper/3f7983818b76a5f1b5daf9b605877ed401c8e73c",
"referenceCount": 18,
"citationCount": 5268,
"influentialCitationCount": 339,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "The Generation of a Unique Machine Description for Chemical Structures-A Technique Developed at Chemical Abstracts Service.",
"abstract": null,
"year": 1965,
"venue": "",
"authors": [
"H. L. Morgan"
],
"externalIds": {
"MAG": "2044834685",
"DOI": "10.1021/C160017A018",
"CorpusId": 62164893
},
"url": "https://www.semanticscholar.org/paper/69b316e545a7d7eefa9d9ce510cdd17601daaff0",
"referenceCount": 0,
"citationCount": 1195,
"influentialCitationCount": 35,
"isOpenAccess": false,
"fieldsOfStudy": [
"Chemistry"
]
},
{
"title": "Meta-MolNet: A Cross-Domain Benchmark for Few Examples Drug Discovery",
"abstract": null,
"year": 2024,
"venue": "IEEE Transactions on Neural Networks and Learning Systems",
"authors": [
"Qiujie Lv",
"Guanxing Chen",
"Ziduo Yang",
"Weihe Zhong",
"C. Chen"
],
"externalIds": {
"DOI": "10.1109/tnnls.2024.3359657",
"CorpusId": 267686294
},
"url": "https://www.semanticscholar.org/paper/6e04f8742ea8e87eec181ff3185fad9130bf852b",
"referenceCount": 0,
"citationCount": 3,
"influentialCitationCount": 0,
"isOpenAccess": false,
"fieldsOfStudy": null
},
{
"title": "Harnessing LLMs for Temporal Data - A Study on Explainable Financial Time Series Forecasting",
"abstract": ",",
"year": 2023,
"venue": "Conference on Empirical Methods in Natural Language Processing",
"authors": [
"Xinli Yu",
"Zheng Chen",
"Yanbin Lu"
],
"externalIds": {
"DBLP": "conf/emnlp/YuCL23",
"DOI": "10.18653/v1/2023.emnlp-industry.69",
"CorpusId": 265817515
},
"url": "https://www.semanticscholar.org/paper/59ff5490b905c1253d7fcd285c15925681ee6eb0",
"referenceCount": 69,
"citationCount": 6,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Text2Mol: Cross-Modal Molecule Retrieval with Natural Language Queries",
"abstract": "We propose a new task, Text2Mol, to retrieve molecules using natural language descriptions as queries. Natural language and molecules encode information in very different ways, which leads to the exciting but challenging problem of integrating these two very different modalities. Although some work has been done on text-based retrieval and structure-based retrieval, this new task requires integrating molecules and natural language more directly. Moreover, this can be viewed as an especially challenging cross-lingual retrieval problem by considering the molecules as a language with a very unique grammar. We construct a paired dataset of molecules and their corresponding text descriptions, which we use to learn an aligned common semantic embedding space for retrieval. We extend this to create a cross-modal attention-based model for explainability and reranking by interpreting the attentions as association rules. We also employ an ensemble approach to integrate our different architectures, which significantly improves results from 0.372 to 0.499 MRR. This new multimodal approach opens a new perspective on solving problems in chemistry literature understanding and molecular machine learning.",
"year": 2021,
"venue": "Conference on Empirical Methods in Natural Language Processing",
"authors": [
"Carl N. Edwards",
"Chengxiang Zhai",
"Heng Ji"
],
"externalIds": {
"DBLP": "conf/emnlp/EdwardsZJ21",
"ACL": "2021.emnlp-main.47",
"DOI": "10.18653/v1/2021.emnlp-main.47",
"CorpusId": 243865204
},
"url": "https://www.semanticscholar.org/paper/57651d65078818821234d13544ac1f29858dcd67",
"referenceCount": 70,
"citationCount": 93,
"influentialCitationCount": 24,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Large-scale self-supervised pretraining for molecular property prediction",
"abstract": null,
"year": 2020,
"venue": "Chemberta",
"authors": [],
"externalIds": null,
"url": null,
"referenceCount": null,
"citationCount": null,
"influentialCitationCount": null,
"isOpenAccess": null,
"fieldsOfStudy": null
},
{
"title": "Artificial Intelligence in Drug Discovery and Development",
"abstract": "Artificial Intelligence (AI) has recently been developed into a sizzling topic in the area of medical care industry. The biopharmaceutical industries are making efforts to approach AI to enhance drug discovery process, reduce research and development expenses, diminish failure rates in clinical trials and ultimately generate superior medicines. The accessibility of immense statistics in life sciences and a speedy development in machine learning algorithms led to an evolution of AI-based start-up companies focused on drug discovery over the recent years [1]. Numerous remarkable AIbiopharmaceutical alliance were declared in 2016-2017 that include Pfizer and IBM Watson, Sanofi Genzyme and Recursion Pharmaceuticals, AstraZeneca, Abbvie, Merck, Novartis, GSK and Exscientia, etc.",
"year": 2018,
"venue": "",
"authors": [
"Prashansa Agrawal"
],
"externalIds": {
"MAG": "2801769491",
"DOI": "10.4172/2329-6887.1000E173",
"CorpusId": 55630975
},
"url": "https://www.semanticscholar.org/paper/4e3cf1f761b8749afbac46ab949ed30896d3f44a",
"referenceCount": 11,
"citationCount": 215,
"influentialCitationCount": 4,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "CHAPTER 28 – Drug Discovery",
"abstract": null,
"year": 2007,
"venue": "",
"authors": [],
"externalIds": {
"MAG": "108696026",
"DOI": "10.1016/B978-012369417-1/50068-7",
"CorpusId": 81936782
},
"url": "https://www.semanticscholar.org/paper/62e1b0d85ceae9ff9d3fd7931078d2dda3aaab00",
"referenceCount": 0,
"citationCount": 4,
"influentialCitationCount": 0,
"isOpenAccess": false,
"fieldsOfStudy": [
"Biology"
]
},
{
"title": "An Elementary Mathematical Theory of Classification and Prediction",
"abstract": null,
"year": 1958,
"venue": "",
"authors": [],
"externalIds": null,
"url": null,
"referenceCount": null,
"citationCount": null,
"influentialCitationCount": null,
"isOpenAccess": null,
"fieldsOfStudy": null
},
{
"title": "Chatgpt sets record for fastest-growing user base - analyst note",
"abstract": null,
"year": null,
"venue": "www",
"authors": [],
"externalIds": null,
"url": null,
"referenceCount": null,
"citationCount": null,
"influentialCitationCount": null,
"isOpenAccess": null,
"fieldsOfStudy": null
},
{
"title": "Linear-time sequence modeling with selective state spaces (2023)",
"abstract": null,
"year": null,
"venue": "",
"authors": [],
"externalIds": null,
"url": null,
"referenceCount": null,
"citationCount": null,
"influentialCitationCount": null,
"isOpenAccess": null,
"fieldsOfStudy": null
},
{
"title": "Chatgpt continues to be one of the fastest-growing services ever",
"abstract": null,
"year": null,
"venue": "theverge",
"authors": [],
"externalIds": null,
"url": null,
"referenceCount": null,
"citationCount": null,
"influentialCitationCount": null,
"isOpenAccess": null,
"fieldsOfStudy": null
},
{
"title": "OpenAI et al.",
"abstract": null,
"year": null,
"venue": "Gpt-4",
"authors": [],
"externalIds": null,
"url": null,
"referenceCount": null,
"citationCount": null,
"influentialCitationCount": null,
"isOpenAccess": null,
"fieldsOfStudy": null
}
]
},
"GP-GPT: Large Language Model for Gene-Phenotype Mapping": {
"paper_title": "GP-GPT: Large Language Model for Gene-Phenotype Mapping",
"arxiv_id": "2409.09825",
"authors": [
"Yanjun Lyu",
"Zihao Wu",
"Lu Zhang",
"Jing Zhang",
"Yiwei Li",
"Wei Ruan",
"Zheng Liu",
"Xiao-Xing Yu",
"Chao-Yang Cao",
"Tong Chen",
"Minheng Chen",
"Zhuang Yan",
"Xiang Li",
"Rongjie Liu",
"Chao Huang",
"Wentao Li",
"Tianming Liu",
"Dajiang Zhu"
],
"year": 2024,
"venue": "",
"abstract": "Pre-trained large language models(LLMs) have attracted increasing attention in biomedical domains due to their success in natural language processing. However, the complex traits and heterogeneity of multi-sources genomics data pose significant challenges when adapting these models to the bioinformatics and biomedical field. To address these challenges, we present GP-GPT, the first specialized large language model for genetic-phenotype knowledge representation and genomics relation analysis. Our model is fine-tuned in two stages on a comprehensive corpus composed of over 3,000,000 terms in genomics, proteomics, and medical genetics, derived from multiple large-scale validated datasets and scientific publications. GP-GPT demonstrates proficiency in accurately retrieving medical genetics information and performing common genomics analysis tasks, such as genomics information retrieval and relationship determination. Comparative experiments across domain-specific tasks reveal that GP-GPT outperforms state-of-the-art LLMs, including Llama2, Llama3 and GPT-4. These results highlight GP-GPT's potential to enhance genetic disease relation research and facilitate accurate and efficient analysis in the fields of genomics and medical genetics. Our investigation demonstrated the subtle changes of bio-factor entities' representations in the GP-GPT, which suggested the opportunities for the application of LLMs to advancing gene-phenotype research.",
"references": [
{
"title": "Large language model based framework for automated extraction of genetic interactions from unstructured data",
"abstract": "Extracting biological interactions from published literature helps us understand complex biological systems, accelerate research, and support decision-making in drug or treatment development. Despite efforts to automate the extraction of biological relations using text mining tools and machine learning pipelines, manual curation continues to serve as the gold standard. However, the rapidly increasing volume of literature pertaining to biological relations poses challenges in its manual curation and refinement. These challenges are further compounded because only a small fraction of the published literature is relevant to biological relation extraction, and the embedded sentences of relevant sections have complex structures, which can lead to incorrect inference of relationships. To overcome these challenges, we propose GIX, an automated and robust Gene Interaction Extraction framework, based on pre-trained Large Language models fine-tuned through extensive evaluations on various gene/protein interaction corpora including LLL and RegulonDB. GIX identifies relevant publications with minimal keywords, optimises sentence selection to reduce computational overhead, simplifies sentence structure while preserving meaning, and provides a confidence factor indicating the reliability of extracted relations. GIX’s Stage-2 relation extraction method performed well on benchmark protein/gene interaction datasets, assessed using 10-fold cross-validation, surpassing state-of-the-art approaches. We demonstrated that the proposed method, although fully automated, performs as well as manual relation extraction, with enhanced robustness. We also observed GIX’s capability to augment existing datasets with new sentences, incorporating newly discovered biological terms and processes. Further, we demonstrated GIX’s real-world applicability in inferring E. coli gene circuits.",
"year": 2024,
"venue": "PLoS ONE",
"authors": [
"Jaskaran Gill",
"Madhu Chetty",
"Suryani Lim",
"Jennifer Hallinan"
],
"externalIds": {
"PubMedCentral": "11108146",
"DOI": "10.1371/journal.pone.0303231",
"CorpusId": 269947847,
"PubMed": "38771886"
},
"url": "https://www.semanticscholar.org/paper/e2accdea0eba27a1d5716bd83be1b3eb06c0cc0b",
"referenceCount": 56,
"citationCount": 2,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "Chameleon: Mixed-Modal Early-Fusion Foundation Models",
"abstract": "We present Chameleon, a family of early-fusion token-based mixed-modal models capable of understanding and generating images and text in any arbitrary sequence. We outline a stable training approach from inception, an alignment recipe, and an architectural parameterization tailored for the early-fusion, token-based, mixed-modal setting. The models are evaluated on a comprehensive range of tasks, including visual question answering, image captioning, text generation, image generation, and long-form mixed modal generation. Chameleon demonstrates broad and general capabilities, including state-of-the-art performance in image captioning tasks, outperforms Llama-2 in text-only tasks while being competitive with models such as Mixtral 8x7B and Gemini-Pro, and performs non-trivial image generation, all in a single model. It also matches or exceeds the performance of much larger models, including Gemini Pro and GPT-4V, according to human judgments on a new long-form mixed-modal generation evaluation, where either the prompt or outputs contain mixed sequences of both images and text. Chameleon marks a significant step forward in a unified modeling of full multimodal documents.",
"year": 2024,
"venue": "arXiv.org",
"authors": [
"Chameleon Team"
],
"externalIds": {
"DBLP": "journals/corr/abs-2405-09818",
"ArXiv": "2405.09818",
"DOI": "10.48550/arXiv.2405.09818",
"CorpusId": 269791516
},
"url": "https://www.semanticscholar.org/paper/32112b798f70faab00e14806f51d46058cf5e597",
"referenceCount": 60,
"citationCount": 33,
"influentialCitationCount": 6,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Multimodal Learning for Mapping the Genotype-Phenotype Dynamics",
"abstract": "How complex phenotypes emerge from intricate gene expression patterns is a fundamental question in biology. Quantitative characterization of this relationship, however, is challenging due to the vast combinatorial possibilities and dynamic interplay between genotype and phenotype landscapes. Integrating high-content genotyping approaches such as single-cell RNA sequencing and advanced learning methods such as language models offers an opportunity for dissecting this complex relationship. Here, we present a computational integrated genetics framework designed to analyze and interpret the high-dimensional landscape of genotypes and their associated phenotypes simultaneously. We applied this approach to develop a multimodal foundation model to explore the genotype-phenotype relationship manifold for human transcriptomics at the cellular level. Analyzing this joint manifold showed a refined resolution of cellular heterogeneity, enhanced precision in phenotype annotating, and uncovered potential cross-tissue biomarkers that are undetectable through conventional gene expression analysis alone. Moreover, our results revealed that the gene networks are characterized by scale-free patterns and show context-dependent gene-gene interactions, both of which result in significant variations in the topology of the gene network, particularly evident during aging. Finally, utilizing contextualized embeddings, we investigated gene polyfunctionality which illustrates the multifaceted roles that genes play in different biological processes, and demonstrated that for VWF gene in endothelial cells. Overall, this study advances our understanding of the dynamic interplay between gene expression and phenotypic manifestation and demonstrates the potential of integrated genetics in uncovering new dimensions of cellular function and complexity.",
"year": 2024,
"venue": "Research Square",
"authors": [
"Farhan Khodaee",
"Rohola Zandie",
"E. Edelman"
],
"externalIds": {
"PubMedCentral": "11118704",
"DOI": "10.21203/rs.3.rs-4355413/v1",
"CorpusId": 270000607,
"PubMed": "38798675"
},
"url": "https://www.semanticscholar.org/paper/ab4827f60ad81afcd6b7c4af4bb58fc23e2809c1",
"referenceCount": 0,
"citationCount": 1,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "Accurate structure prediction of biomolecular interactions with AlphaFold 3",
"abstract": null,
"year": 2024,
"venue": "Nature",
"authors": [
"Josh Abramson",
"Jonas Adler",
"Jack Dunger",
"Richard Evans",
"Tim Green",
"A. Pritzel",
"Olaf Ronneberger",
"Lindsay Willmore",
"Andrew J Ballard",
"Joshua Bambrick",
"Sebastian W Bodenstein",
"David A Evans",
"Chia-Chun Hung",
"Michael O’Neill",
"D. Reiman",
"Kathryn Tunyasuvunakool",
"Zachary Wu",
"Akvilė Žemgulytė",
"Eirini Arvaniti",
"Charles Beattie",
"Ottavia Bertolli",
"Alex Bridgland",
"Alexey Cherepanov",
"Miles Congreve",
"A. Cowen-Rivers",
"Andrew Cowie",
"Michael Figurnov",
"Fabian B Fuchs",
"Hannah Gladman",
"Rishub Jain",
"Yousuf A. Khan",
"Caroline M R Low",
"Kuba Perlin",
"Anna Potapenko",
"Pascal Savy",
"Sukhdeep Singh",
"A. Stecula",
"Ashok Thillaisundaram",
"Catherine Tong",
"Sergei Yakneen",
"Ellen D. Zhong",
"Michal Zielinski",
"Augustin Žídek",
"V. Bapst",
"Pushmeet Kohli",
"Max Jaderberg",
"D. Hassabis",
"J. Jumper"
],
"externalIds": {
"PubMedCentral": "11168924",
"DOI": "10.1038/s41586-024-07487-w",
"CorpusId": 269633210,
"PubMed": "38718835"
},
"url": "https://www.semanticscholar.org/paper/7572ba7f604ef95d7acdd657ebac458106bd35df",
"referenceCount": 67,
"citationCount": 558,
"influentialCitationCount": 21,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "Advancing Multimodal Medical Capabilities of Gemini",
"abstract": "Many clinical tasks require an understanding of specialized data, such as medical images and genomics, which is not typically found in general-purpose large multimodal models. Building upon Gemini's multimodal models, we develop several models within the new Med-Gemini family that inherit core capabilities of Gemini and are optimized for medical use via fine-tuning with 2D and 3D radiology, histopathology, ophthalmology, dermatology and genomic data. Med-Gemini-2D sets a new standard for AI-based chest X-ray (CXR) report generation based on expert evaluation, exceeding previous best results across two separate datasets by an absolute margin of 1% and 12%, where 57% and 96% of AI reports on normal cases, and 43% and 65% on abnormal cases, are evaluated as\"equivalent or better\"than the original radiologists' reports. We demonstrate the first ever large multimodal model-based report generation for 3D computed tomography (CT) volumes using Med-Gemini-3D, with 53% of AI reports considered clinically acceptable, although additional research is needed to meet expert radiologist reporting quality. Beyond report generation, Med-Gemini-2D surpasses the previous best performance in CXR visual question answering (VQA) and performs well in CXR classification and radiology VQA, exceeding SoTA or baselines on 17 of 20 tasks. In histopathology, ophthalmology, and dermatology image classification, Med-Gemini-2D surpasses baselines across 18 out of 20 tasks and approaches task-specific model performance. Beyond imaging, Med-Gemini-Polygenic outperforms the standard linear polygenic risk score-based approach for disease risk prediction and generalizes to genetically correlated diseases for which it has never been trained. Although further development and evaluation are necessary in the safety-critical medical domain, our results highlight the potential of Med-Gemini across a wide range of medical tasks.",
"year": 2024,
"venue": "arXiv.org",
"authors": [
"Lin Yang",
"Shawn Xu",
"Andrew Sellergren",
"Timo Kohlberger",
"Yuchen Zhou",
"Ira Ktena",
"A. Kiraly",
"Faruk Ahmed",
"F. Hormozdiari",
"Tiam Jaroensri",
"Eric Wang",
"Ellery Wulczyn",
"Fayaz Jamil",
"Theo Guidroz",
"Charles Lau",
"Siyuan Qiao",
"Yun Liu",
"Akshay Goel",
"Kendall Park",
"Arnav Agharwal",
"Nick George",
"Yang Wang",
"Ryutaro Tanno",
"D. G. Barrett",
"Wei-Hung Weng",
"S. Mahdavi",
"Khaled Saab",
"Tao Tu",
"Sreenivasa Raju Kalidindi",
"M. Etemadi",
"Jorge Cuadros",
"Gregory Sorensen",
"Yossi Matias",
"Katherine Chou",
"Greg C. Corrado",
"Joelle Barral",
"S. Shetty",
"David Fleet",
"S. Eslami",
"Daniel Tse",
"Shruthi Prabhakara",
"Cory Y. McLean",
"David Steiner",
"Rory Pilgrim",
"Christopher Kelly",
"Shekoofeh Azizi",
"Daniel Golden"
],
"externalIds": {
"DBLP": "journals/corr/abs-2405-03162",
"ArXiv": "2405.03162",
"DOI": "10.48550/arXiv.2405.03162",
"CorpusId": 269605403
},
"url": "https://www.semanticscholar.org/paper/cf4d2cc2270e9b48f5fc94ce26ee702697b9c79d",
"referenceCount": 0,
"citationCount": 18,
"influentialCitationCount": 2,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Bilingual Language Model for Protein Sequence and Structure",
"abstract": "Adapting large language models (LLMs) to protein sequences spawned the development of powerful protein language models (pLMs). Concurrently, AlphaFold2 broke through in protein structure prediction. Now we can systematically and comprehensively explore the dual nature of proteins that act and exist as three-dimensional (3D) machines and evolve as linear strings of one-dimensional (1D) sequences. Here, we leverage pLMs to simultaneously model both modalities by combining 1D sequences with 3D structure in a single model. We encode protein structures as token sequences using the 3Di-alphabet introduced by the 3D-alignment method Foldseek. This new foundation pLM extracts the features and patterns of the resulting “structure-sequence” representation. Toward this end, we built a non-redundant dataset from AlphaFoldDB and fine-tuned an existing pLM (ProtT5) to translate between 3Di and amino acid sequences. As a proof-of-concept for our novel approach, dubbed Protein structure-sequence T5 (ProstT5), we showed improved performance for subsequent prediction tasks, and for “inverse folding”, namely the generation of novel protein sequences adopting a given structural scaffold (“fold”). Our work showcased the potential of pLMs to tap into the information-rich protein structure revolution fueled by AlphaFold2. ProstT5 paves the way to develop new tools integrating the vast resource of 3D predictions, and opens new research avenues in the post-AlphaFold2 era. Our model is freely available for all at https://github.com/mheinzinger/ProstT5.",
"year": 2024,
"venue": "bioRxiv",
"authors": [
"M. Heinzinger",
"Konstantin Weissenow",
"Joaquin Gomez Sanchez",
"Adrian Henkel",
"Martin Steinegger",
"B. Rost"
],
"externalIds": {
"DOI": "10.1101/2023.07.23.550085",
"CorpusId": 260166745
},
"url": "https://www.semanticscholar.org/paper/138e7602a528b84964b2ba912e464315c1a34e40",
"referenceCount": 95,
"citationCount": 38,
"influentialCitationCount": 4,
"isOpenAccess": true,
"fieldsOfStudy": [
"Biology"
]
},
{
"title": "AI for Biomedicine in the Era of Large Language Models",
"abstract": "The capabilities of AI for biomedicine span a wide spectrum, from the atomic level, where it solves partial differential equations for quantum systems, to the molecular level, predicting chemical or protein structures, and further extending to societal predictions like infectious disease outbreaks. Recent advancements in large language models, exemplified by models like ChatGPT, have showcased significant prowess in natural language tasks, such as translating languages, constructing chatbots, and answering questions. When we consider biomedical data, we observe a resemblance to natural language in terms of sequences: biomedical literature and health records presented as text, biological sequences or sequencing data arranged in sequences, or sensor data like brain signals as time series. The question arises: Can we harness the potential of recent large language models to drive biomedical knowledge discoveries? In this survey, we will explore the application of large language models to three crucial categories of biomedical data: 1) textual data, 2) biological sequences, and 3) brain signals. Furthermore, we will delve into large language model challenges in biomedical research, including ensuring trustworthiness, achieving personalization, and adapting to multi-modal data representation",
"year": 2024,
"venue": "arXiv.org",
"authors": [
"Zhenyu Bi",
"Sajib Acharjee Dip",
"Daniel Hajialigol",
"Sindhura Kommu",
"Hanwen Liu",
"Meng Lu",
"Xuan Wang"
],
"externalIds": {
"DBLP": "journals/corr/abs-2403-15673",
"ArXiv": "2403.15673",
"DOI": "10.48550/arXiv.2403.15673",
"CorpusId": 268681084
},
"url": "https://www.semanticscholar.org/paper/3f68ce247d93fae97f62f68fce175f0c3fdb425b",
"referenceCount": 111,
"citationCount": 4,
"influentialCitationCount": 0,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "BioT5+: Towards Generalized Biological Understanding with IUPAC Integration and Multi-task Tuning",
"abstract": "Recent research trends in computational biology have increasingly focused on integrating text and bio-entity modeling, especially in the context of molecules and proteins. However, previous efforts like BioT5 faced challenges in generalizing across diverse tasks and lacked a nuanced understanding of molecular structures, particularly in their textual representations (e.g., IUPAC). This paper introduces BioT5+, an extension of the BioT5 framework, tailored to enhance biological research and drug discovery. BioT5+ incorporates several novel features: integration of IUPAC names for molecular understanding, inclusion of extensive bio-text and molecule data from sources like bioRxiv and PubChem, the multi-task instruction tuning for generality across tasks, and a numerical tokenization technique for improved processing of numerical data. These enhancements allow BioT5+ to bridge the gap between molecular representations and their textual descriptions, providing a more holistic understanding of biological entities, and largely improving the grounded reasoning of bio-text and bio-sequences. The model is pre-trained and fine-tuned with a large number of experiments, including \\emph{3 types of problems (classification, regression, generation), 15 kinds of tasks, and 21 total benchmark datasets}, demonstrating the remarkable performance and state-of-the-art results in most cases. BioT5+ stands out for its ability to capture intricate relationships in biological data, thereby contributing significantly to bioinformatics and computational biology. Our code is available at \\url{https://github.com/QizhiPei/BioT5}.",
"year": 2024,
"venue": "Annual Meeting of the Association for Computational Linguistics",
"authors": [
"Qizhi Pei",
"Lijun Wu",
"Kaiyuan Gao",
"Xiaozhuan Liang",
"Yin Fang",
"Jinhua Zhu",
"Shufang Xie",
"Tao Qin",
"Rui Yan"
],
"externalIds": {
"DBLP": "journals/corr/abs-2402-17810",
"ArXiv": "2402.17810",
"DOI": "10.48550/arXiv.2402.17810",
"CorpusId": 268041632
},
"url": "https://www.semanticscholar.org/paper/f740a2474b52675287166a003bd1313f8aabcd68",
"referenceCount": 89,
"citationCount": 13,
"influentialCitationCount": 1,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science",
"Biology"
]
},
{
"title": "Sora: A Review on Background, Technology, Limitations, and Opportunities of Large Vision Models",
"abstract": "Sora is a text-to-video generative AI model, released by OpenAI in February 2024. The model is trained to generate videos of realistic or imaginative scenes from text instructions and show potential in simulating the physical world. Based on public technical reports and reverse engineering, this paper presents a comprehensive review of the model's background, related technologies, applications, remaining challenges, and future directions of text-to-video AI models. We first trace Sora's development and investigate the underlying technologies used to build this\"world simulator\". Then, we describe in detail the applications and potential impact of Sora in multiple industries ranging from film-making and education to marketing. We discuss the main challenges and limitations that need to be addressed to widely deploy Sora, such as ensuring safe and unbiased video generation. Lastly, we discuss the future development of Sora and video generation models in general, and how advancements in the field could enable new ways of human-AI interaction, boosting productivity and creativity of video generation.",
"year": 2024,
"venue": "arXiv.org",
"authors": [
"Yixin Liu",
"Kai Zhang",
"Yuan Li",
"Zhiling Yan",
"Chujie Gao",
"Ruoxi Chen",
"Zhengqing Yuan",
"Yue Huang",
"Hanchi Sun",
"Jianfeng Gao",
"Lifang He",
"Lichao Sun"
],
"externalIds": {
"DBLP": "journals/corr/abs-2402-17177",
"ArXiv": "2402.17177",
"DOI": "10.48550/arXiv.2402.17177",
"CorpusId": 268032569
},
"url": "https://www.semanticscholar.org/paper/a6f7485dfdf45320e82d84bcfdc51bcd52dff18b",
"referenceCount": 192,
"citationCount": 100,
"influentialCitationCount": 5,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "scGPT: toward building a foundation model for single-cell multi-omics using generative AI.",
"abstract": null,
"year": 2024,
"venue": "Nature Methods",
"authors": [
"Haotian Cui",
"Chloe X. Wang",
"Hassaan Maan",
"Kuan Pang",
"Fengning Luo",
"Nan Duan",
"Bo Wang"
],
"externalIds": {
"DOI": "10.1038/s41592-024-02201-0",
"CorpusId": 268028472,
"PubMed": "38409223"
},
"url": "https://www.semanticscholar.org/paper/13dc81fce2c73de67dbe3829a32ec23d663cec89",
"referenceCount": 51,
"citationCount": 108,
"influentialCitationCount": 4,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "Surviving ChatGPT in healthcare",
"abstract": "At the dawn of of Artificial General Intelligence (AGI), the emergence of large language models such as ChatGPT show promise in revolutionizing healthcare by improving patient care, expanding medical access, and optimizing clinical processes. However, their integration into healthcare systems requires careful consideration of potential risks, such as inaccurate medical advice, patient privacy violations, the creation of falsified documents or images, overreliance on AGI in medical education, and the perpetuation of biases. It is crucial to implement proper oversight and regulation to address these risks, ensuring the safe and effective incorporation of AGI technologies into healthcare systems. By acknowledging and mitigating these challenges, AGI can be harnessed to enhance patient care, medical knowledge, and healthcare processes, ultimately benefiting society as a whole.",
"year": 2024,
"venue": "Frontiers in Radiology",
"authors": [
"Zheng Liu",
"Lu Zhang",
"Zihao Wu",
"Xiao-Xing Yu",
"Chao-Yang Cao",
"Haixing Dai",
"Ning Liu",
"Jun Liu",
"W. Liu",
"Quanzhen Li",
"Dinggang Shen",
"Xiang Li",
"Dajiang Zhu",
"Tianming Liu"
],
"externalIds": {
"PubMedCentral": "10920216",
"DOI": "10.3389/fradi.2023.1224682",
"CorpusId": 260140287,
"PubMed": "38464946"
},
"url": "https://www.semanticscholar.org/paper/563e28205659432494240e06ec5a1d7aeb5f0275",
"referenceCount": 76,
"citationCount": 7,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "Large Language Models for Robotics: Opportunities, Challenges, and Perspectives",
"abstract": "Large language models (LLMs) have undergone significant expansion and have been increasingly integrated across various domains. Notably, in the realm of robot task planning, LLMs harness their advanced reasoning and language comprehension capabilities to formulate precise and efficient action plans based on natural language instructions. However, for embodied tasks, where robots interact with complex environments, text-only LLMs often face challenges due to a lack of compatibility with robotic visual perception. This study provides a comprehensive overview of the emerging integration of LLMs and multimodal LLMs into various robotic tasks. Additionally, we propose a framework that utilizes multimodal GPT-4V to enhance embodied task planning through the combination of natural language instructions and robot visual perceptions. Our results, based on diverse datasets, indicate that GPT-4V effectively enhances robot performance in embodied tasks. This extensive survey and evaluation of LLMs and multimodal LLMs across a variety of robotic tasks enriches the understanding of LLM-centric embodied intelligence and provides forward-looking insights toward bridging the gap in Human-Robot-Environment interaction.",
"year": 2024,
"venue": "arXiv.org",
"authors": [
"Jiaqi Wang",
"Zihao Wu",
"Yiwei Li",
"Hanqi Jiang",
"Peng Shu",
"Enze Shi",
"Huawen Hu",
"Chong-Yi Ma",
"Yi-Hsueh Liu",
"Xuhui Wang",
"Yincheng Yao",
"Xuan Liu",
"Huaqin Zhao",
"Zheng Liu",
"Haixing Dai",
"Lin Zhao",
"Bao Ge",
"Xiang Li",
"Tianming Liu",
"Shu Zhang"
],
"externalIds": {
"DBLP": "journals/corr/abs-2401-04334",
"ArXiv": "2401.04334",
"DOI": "10.48550/arXiv.2401.04334",
"CorpusId": 266899905
},
"url": "https://www.semanticscholar.org/paper/8296eef3797afd1515021ff568a694412c38101b",
"referenceCount": 118,
"citationCount": 33,
"influentialCitationCount": 2,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Understanding LLMs: A Comprehensive Overview from Training to Inference",
"abstract": "The introduction of ChatGPT has led to a significant increase in the utilization of Large Language Models (LLMs) for addressing downstream tasks. There's an increasing focus on cost-efficient training and deployment within this context. Low-cost training and deployment of LLMs represent the future development trend. This paper reviews the evolution of large language model training techniques and inference deployment technologies aligned with this emerging trend. The discussion on training includes various aspects, including data preprocessing, training architecture, pre-training tasks, parallel training, and relevant content related to model fine-tuning. On the inference side, the paper covers topics such as model compression, parallel computation, memory scheduling, and structural optimization. It also explores LLMs' utilization and provides insights into their future development.",
"year": 2024,
"venue": "arXiv.org",
"authors": [
"Yi-Hsueh Liu",
"Haoyang He",
"Tianle Han",
"Xu Zhang",
"Mengyuan Liu",
"Jiaming Tian",
"Yutong Zhang",
"Jiaqi Wang",
"Xiaohui Gao",
"Tianyang Zhong",
"Yi Pan",
"Shaochen Xu",
"Zihao Wu",
"Zheng Liu",
"Xin Zhang",
"Shu Zhang",
"Xintao Hu",
"Tuo Zhang",
"Ning Qiang",
"Tianming Liu",
"Bao Ge"
],
"externalIds": {
"DBLP": "journals/corr/abs-2401-02038",
"ArXiv": "2401.02038",
"DOI": "10.48550/arXiv.2401.02038",
"CorpusId": 266755678
},
"url": "https://www.semanticscholar.org/paper/efc5e94635a850ede9c1f8dbce65d5dc536f3bfb",
"referenceCount": 203,
"citationCount": 23,
"influentialCitationCount": 0,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Multimodality of AI for Education: Towards Artificial General Intelligence",
"abstract": "This paper presents a comprehensive examination of how multimodal artificial intelligence (AI) approaches are paving the way towards the realization of Artificial General Intelligence (AGI) in educational contexts. It scrutinizes the evolution and integration of AI in educational systems, emphasizing the crucial role of multimodality, which encompasses auditory, visual, kinesthetic, and linguistic modes of learning. This research delves deeply into the key facets of AGI, including cognitive frameworks, advanced knowledge representation, adaptive learning mechanisms, strategic planning, sophisticated language processing, and the integration of diverse multimodal data sources. It critically assesses AGI's transformative potential in reshaping educational paradigms, focusing on enhancing teaching and learning effectiveness, filling gaps in existing methodologies, and addressing ethical considerations and responsible usage of AGI in educational settings. The paper also discusses the implications of multimodal AI's role in education, offering insights into future directions and challenges in AGI development. This exploration aims to provide a nuanced understanding of the intersection between AI, multimodality, and education, setting a foundation for future research and development in AGI.",
"year": 2023,
"venue": "arXiv.org",
"authors": [
"Gyeong-Geon Lee",
"Lehong Shi",
"Ehsan Latif",
"Yizhu Gao",
"Arne Bewersdorff",
"Matthew Nyaaba",
"Shuchen Guo",
"Zihao Wu",
"Zheng Liu",
"Hui Wang",
"Gengchen Mai",
"Tiaming Liu",
"Xiaoming Zhai"
],
"externalIds": {
"DBLP": "journals/corr/abs-2312-06037",
"ArXiv": "2312.06037",
"DOI": "10.48550/arXiv.2312.06037",
"CorpusId": 266162896
},
"url": "https://www.semanticscholar.org/paper/00b89844abecc9fb9ea687bedcd42c44421dfa23",
"referenceCount": 252,
"citationCount": 20,
"influentialCitationCount": 0,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Disease2Vec: Encoding Alzheimer’s progression via disease embedding tree",
"abstract": null,
"year": 2023,
"venue": "Pharmacological Research",
"authors": [
"Luwen Zhang",
"Li Wang",
"Tianming Liu",
"Dajiang Zhu"
],
"externalIds": {
"PubMedCentral": "11334056",
"DOI": "10.1016/j.phrs.2023.107038",
"CorpusId": 266202279,
"PubMed": "38072216"
},
"url": "https://www.semanticscholar.org/paper/1862ee09f789e576d011d26c0466473357a26525",
"referenceCount": 49,
"citationCount": 3,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "Holistic Evaluation of GPT-4V for Biomedical Imaging",
"abstract": "In this paper, we present a large-scale evaluation probing GPT-4V's capabilities and limitations for biomedical image analysis. GPT-4V represents a breakthrough in artificial general intelligence (AGI) for computer vision, with applications in the biomedical domain. We assess GPT-4V's performance across 16 medical imaging categories, including radiology, oncology, ophthalmology, pathology, and more. Tasks include modality recognition, anatomy localization, disease diagnosis, report generation, and lesion detection. The extensive experiments provide insights into GPT-4V's strengths and weaknesses. Results show GPT-4V's proficiency in modality and anatomy recognition but difficulty with disease diagnosis and localization. GPT-4V excels at diagnostic report generation, indicating strong image captioning skills. While promising for biomedical imaging AI, GPT-4V requires further enhancement and validation before clinical deployment. We emphasize responsible development and testing for trustworthy integration of biomedical AGI. This rigorous evaluation of GPT-4V on diverse medical images advances understanding of multimodal large language models (LLMs) and guides future work toward impactful healthcare applications.",
"year": 2023,
"venue": "arXiv.org",
"authors": [
"Zheng Liu",
"Hanqi Jiang",
"Tianyang Zhong",
"Zihao Wu",
"Chong-Yi Ma",
"Yiwei Li",
"Xiao-Xing Yu",
"Yutong Zhang",
"Yi Pan",
"Peng Shu",
"Yanjun Lyu",
"Lu Zhang",
"Junjie Yao",
"Peixin Dong",
"Chao-Yang Cao",
"Zhe Xiao",
"Jiaqi Wang",
"Huan Zhao",
"Shaochen Xu",
"Yaonai Wei",
"Jingyuan Chen",
"Haixing Dai",
"Peilong Wang",
"Haoyang He",
"Zewei Wang",
"Xinyu Wang",
"Xu Zhang",
"Lin Zhao",
"Yi-Hsueh Liu",
"Kai Zhang",
"Li Yan",
"Lichao Sun",
"Jun Liu",
"Ning Qiang",
"Bao Ge",
"Xiaoyan Cai",
"Shijie Zhao",
"Xintao Hu",
"Yi Yuan",
"Gang Li",
"Shu Zhang",
"Xin Zhang",
"Xi Jiang",
"Tuo Zhang",
"Dinggang Shen",
"Quanzheng Li",
"Wei Liu",
"Xiang Li",
"Dajiang Zhu",
"Tianming Liu"
],
"externalIds": {
"ArXiv": "2312.05256",
"DBLP": "journals/corr/abs-2312-05256",
"DOI": "10.48550/arXiv.2312.05256",
"CorpusId": 266162722
},
"url": "https://www.semanticscholar.org/paper/3004ffec00059d9c268aa396fff1ac93ec17157d",
"referenceCount": 0,
"citationCount": 13,
"influentialCitationCount": 1,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science",
"Engineering"
]
},
{
"title": "Structure Mapping Generative Adversarial Network for Multi-View Information Mapping Pattern Mining",
"abstract": "Multi-view learning is dedicated to integrating information from different views and improving the generalization performance of models. However, in most current works, learning under different views has significant independency, overlooking common information mapping patterns that exist between these views. This paper proposes a Structure Mapping Generative adversarial network (SM-GAN) framework, which utilizes the consistency and complementarity of multi-view data from the innovative perspective of information mapping. Specifically, based on network-structured multi-view data, a structural information mapping model is proposed to capture hierarchical interaction patterns among views. Subsequently, three different types of graph convolutional operations are designed in SM-GAN based on the model. Compared with regular GAN, we add a structural information mapping module between the encoder and decoder wthin the generator, completing the structural information mapping from the micro-view to the macro-view. This paper conducted sufficient validation experiments using public imaging genetics data in Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. It is shown that SM-GAN outperforms baseline and advanced methods in multi-label classification and evolution prediction tasks.",
"year": 2023,
"venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence",
"authors": [
"Xia-an Bi",
"YangJun Huang",
"Zicheng Yang",
"Ke Chen",
"Zhaoxu Xing",
"Luyun Xu",
"Xiang Li",
"Zheng Liu",
"Tianming Liu"
],
"externalIds": {
"DBLP": "journals/pami/BiHYCXXLLL24",
"DOI": "10.1109/TPAMI.2023.3330795",
"CorpusId": 265042298,
"PubMed": "37930908"
},
"url": "https://www.semanticscholar.org/paper/9b28882daf514a8fbda80b4aabca8633e863fbd4",
"referenceCount": 60,
"citationCount": 1,
"influentialCitationCount": 0,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science",
"Medicine"
]
},
{
"title": "BioT5: Enriching Cross-modal Integration in Biology with Chemical Knowledge and Natural Language Associations",
"abstract": "Recent advancements in biological research leverage the integration of molecules, proteins, and natural language to enhance drug discovery. However, current models exhibit several limitations, such as the generation of invalid molecular SMILES, underutilization of contextual information, and equal treatment of structured and unstructured knowledge. To address these issues, we propose $\\mathbf{BioT5}$, a comprehensive pre-training framework that enriches cross-modal integration in biology with chemical knowledge and natural language associations. $\\mathbf{BioT5}$ utilizes SELFIES for $100%$ robust molecular representations and extracts knowledge from the surrounding context of bio-entities in unstructured biological literature. Furthermore, $\\mathbf{BioT5}$ distinguishes between structured and unstructured knowledge, leading to more effective utilization of information. After fine-tuning, BioT5 shows superior performance across a wide range of tasks, demonstrating its strong capability of capturing underlying relations and properties of bio-entities. Our code is available at $\\href{https://github.com/QizhiPei/BioT5}{Github}$.",
"year": 2023,
"venue": "Conference on Empirical Methods in Natural Language Processing",
"authors": [
"Qizhi Pei",
"Wei Zhang",
"Jinhua Zhu",
"Kehan Wu",
"Kaiyuan Gao",
"Lijun Wu",
"Yingce Xia",
"Rui Yan"
],
"externalIds": {
"DBLP": "conf/emnlp/PeiZZWGWXY23",
"ArXiv": "2310.07276",
"DOI": "10.48550/arXiv.2310.07276",
"CorpusId": 263834780
},
"url": "https://www.semanticscholar.org/paper/c3382fd533b9dd7f8ed7ba7766159079bc1d3935",
"referenceCount": 101,
"citationCount": 36,
"influentialCitationCount": 6,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Biology"
]
},
{
"title": "GraphLLM: Boosting Graph Reasoning Ability of Large Language Model",
"abstract": "The advancement of Large Language Models (LLMs) has remarkably pushed the boundaries towards artificial general intelligence (AGI), with their exceptional ability on understanding diverse types of information, including but not limited to images and audio. Despite this progress, a critical gap remains in empowering LLMs to proficiently understand and reason on graph data. Recent studies underscore LLMs' underwhelming performance on fundamental graph reasoning tasks. In this paper, we endeavor to unearth the obstacles that impede LLMs in graph reasoning, pinpointing the common practice of converting graphs into natural language descriptions (Graph2Text) as a fundamental bottleneck. To overcome this impediment, we introduce GraphLLM, a pioneering end-to-end approach that synergistically integrates graph learning models with LLMs. This synergy equips LLMs with the ability to proficiently interpret and reason on graph data, harnessing the superior expressive power of graph learning models. Our empirical evaluations across four fundamental graph reasoning tasks validate the effectiveness of GraphLLM. The results exhibit a substantial average accuracy enhancement of 54.44%, alongside a noteworthy context reduction of 96.45% across various graph reasoning tasks.",
"year": 2023,
"venue": "arXiv.org",
"authors": [
"Ziwei Chai",
"Tianjie Zhang",
"Liang Wu",
"Kaiqiao Han",
"Xiaohai Hu",
"Xuanwen Huang",
"Yang Yang"
],
"externalIds": {
"DBLP": "journals/corr/abs-2310-05845",
"ArXiv": "2310.05845",
"DOI": "10.48550/arXiv.2310.05845",
"CorpusId": 263830019
},
"url": "https://www.semanticscholar.org/paper/062fab31d30478b57457c8b7a94d7467f5bd770c",
"referenceCount": 38,
"citationCount": 36,
"influentialCitationCount": 3,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Beyond Text: A Deep Dive into Large Language Models' Ability on Understanding Graph Data",
"abstract": "Large language models (LLMs) have achieved impressive performance on many natural language processing tasks. However, their capabilities on graph-structured data remain relatively unexplored. In this paper, we conduct a series of experiments benchmarking leading LLMs on diverse graph prediction tasks spanning node, edge, and graph levels. We aim to assess whether LLMs can effectively process graph data and leverage topological structures to enhance performance, compared to specialized graph neural networks. Through varied prompt formatting and task/dataset selection, we analyze how well LLMs can interpret and utilize graph structures. By comparing LLMs' performance with specialized graph models, we offer insights into the strengths and limitations of employing LLMs for graph analytics. Our findings provide insights into LLMs' capabilities and suggest avenues for further exploration in applying them to graph analytics.",
"year": 2023,
"venue": "arXiv.org",
"authors": [
"Yuntong Hu",
"Zhengwu Zhang",
"Liang Zhao"
],
"externalIds": {
"DBLP": "journals/corr/abs-2310-04944",
"ArXiv": "2310.04944",
"DOI": "10.48550/arXiv.2310.04944",
"CorpusId": 263831119
},
"url": "https://www.semanticscholar.org/paper/4ae7c4decd1df71c466f19d66d69b555945098c4",
"referenceCount": 48,
"citationCount": 16,
"influentialCitationCount": 1,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "The Nucleotide Transformer: Building and Evaluating Robust Foundation Models for Human Genomics",
"abstract": "Closing the gap between measurable genetic information and observable traits is a longstand-ing challenge in genomics. Yet, the prediction of molecular phenotypes from DNA sequences alone remains limited and inaccurate, often driven by the scarcity of annotated data and the inability to transfer learnings between prediction tasks. Here, we present an extensive study of foundation models pre-trained on DNA sequences, named the Nucleotide Transformer, rang-ing from 50M up to 2.5B parameters and integrating information from 3,202 diverse human genomes, as well as 850 genomes selected across diverse phyla, including both model and non-model organisms. These transformer models yield transferable, context-specific representations of nucleotide sequences, which allow for accurate molecular phenotype prediction even in low-data settings. We show that the developed models can be fine-tuned at low cost and despite low available data regime to solve a variety of genomics applications. Despite no supervision, the transformer models learned to focus attention on key genomic elements, including those that regulate gene expression, such as enhancers. Lastly, we demonstrate that utilizing model rep-resentations can improve the prioritization of functional genetic variants. The training and ap-plication of foundational models in genomics explored in this study provide a widely applicable stepping stone to bridge the gap of accurate molecular phenotype prediction from DNA sequence. Code and weights available at: https://github.com/instadeepai/nucleotide-transformer in Jax and https://huggingface.co/InstaDeepAI in Pytorch. Example notebooks to apply these models to any downstream task are available on HuggingFace.",
"year": 2023,
"venue": "bioRxiv",
"authors": [
"Hugo Dalla-Torre",
"Liam Gonzalez",
"Javier Mendoza Revilla",
"Nicolás López Carranza",
"Adam Henryk Grzywaczewski",
"Francesco Oteri",
"Christian Dallago",
"Evan Trop",
"Hassan Sirelkhatim",
"Guillaume Richard",
"Marcin J. Skwark",
"Karim Beguir",
"Marie Lopez",
"Thomas Pierrot"
],
"externalIds": {
"DOI": "10.1101/2023.01.11.523679",
"CorpusId": 255943445
},
"url": "https://www.semanticscholar.org/paper/b06e1a2c84fb3bff03b10283bc863f007f5483b6",
"referenceCount": 76,
"citationCount": 117,
"influentialCitationCount": 13,
"isOpenAccess": true,
"fieldsOfStudy": [
"Biology"
]
},
{
"title": "RadOnc-GPT: A Large Language Model for Radiation Oncology",
"abstract": "This paper presents RadOnc-GPT, a large language model specialized for radiation oncology through advanced tuning methods. RadOnc-GPT was finetuned on a large dataset of radiation oncology patient records and clinical notes from the Mayo Clinic in Arizona. The model employs instruction tuning on three key tasks - generating radiotherapy treatment regimens, determining optimal radiation modalities, and providing diagnostic descriptions/ICD codes based on patient diagnostic details. Evaluations conducted by comparing RadOnc-GPT outputs to general large language model outputs showed that RadOnc-GPT generated outputs with significantly improved clarity, specificity, and clinical relevance. The study demonstrated the potential of using large language models fine-tuned using domain-specific knowledge like RadOnc-GPT to achieve transformational capabilities in highly specialized healthcare fields such as radiation oncology.",
"year": 2023,
"venue": "arXiv.org",
"authors": [
"Zheng Liu",
"Peilong Wang",
"Yiwei Li",
"J. Holmes",
"Peng Shu",
"Lian-Cheng Zhang",
"Chenbin Liu",
"Ninghao Liu",
"Dajiang Zhu",
"Xiang Li",
"Quanzheng Li",
"Samir H. Patel",
"Terence T. Sio",
"Tianming Liu",
"W. Liu"
],
"externalIds": {
"ArXiv": "2309.10160",
"DBLP": "journals/corr/abs-2309-10160",
"DOI": "10.48550/arXiv.2309.10160",
"CorpusId": 262054007
},
"url": "https://www.semanticscholar.org/paper/5ac14b8867686fb603e95033e94255a9e1efde27",
"referenceCount": 60,
"citationCount": 13,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Physics"
]
},
{
"title": "Towards Artificial General Intelligence (AGI) in the Internet of Things (IoT): Opportunities and Challenges",
"abstract": "Artificial General Intelligence (AGI), possessing the capacity to comprehend, learn, and execute tasks with human cognitive abilities, engenders significant anticipation and intrigue across scientific, commercial, and societal arenas. This fascination extends particularly to the Internet of Things (IoT), a landscape characterized by the interconnection of countless devices, sensors, and systems, collectively gathering and sharing data to enable intelligent decision-making and automation. This research embarks on an exploration of the opportunities and challenges towards achieving AGI in the context of the IoT. Specifically, it starts by outlining the fundamental principles of IoT and the critical role of Artificial Intelligence (AI) in IoT systems. Subsequently, it delves into AGI fundamentals, culminating in the formulation of a conceptual framework for AGI's seamless integration within IoT. The application spectrum for AGI-infused IoT is broad, encompassing domains ranging from smart grids, residential environments, manufacturing, and transportation to environmental monitoring, agriculture, healthcare, and education. However, adapting AGI to resource-constrained IoT settings necessitates dedicated research efforts. Furthermore, the paper addresses constraints imposed by limited computing resources, intricacies associated with large-scale IoT communication, as well as the critical concerns pertaining to security and privacy.",
"year": 2023,
"venue": "arXiv.org",
"authors": [
"Fei Dou",
"Jin Ye",
"Geng Yuan",
"Qin Lu",
"Wei Niu",
"Haijian Sun",
"Le Guan",
"Guoyu Lu",
"Gengchen Mai",
"Ninghao Liu",
"Jin Lu",
"Zheng Liu",
"Zihao Wu",
"Chenjiao Tan",
"Shaochen Xu",
"Xianqiao Wang",
"Guoming Li",
"Lilong Chai",
"Sheng Li",
"Jin Sun",
"Hongyue Sun",
"Yunli Shao",
"Changying Li",
"Tianming Liu",
"Wenzhan Song"
],
"externalIds": {
"DBLP": "journals/corr/abs-2309-07438",
"ArXiv": "2309.07438",
"DOI": "10.48550/arXiv.2309.07438",
"CorpusId": 261822539
},
"url": "https://www.semanticscholar.org/paper/576a3159d6c3f646d6fda6d047dfece4ea941fdd",
"referenceCount": 523,
"citationCount": 13,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Radiology-Llama2: Best-in-Class Large Language Model for Radiology",
"abstract": "This paper introduces Radiology-Llama2, a large language model specialized for radiology through a process known as instruction tuning. Radiology-Llama2 is based on the Llama2 architecture and further trained on a large dataset of radiology reports to generate coherent and clinically useful impressions from radiological findings. Quantitative evaluations using ROUGE metrics on the MIMIC-CXR and OpenI datasets demonstrate that Radiology-Llama2 achieves state-of-the-art performance compared to other generative language models, with a Rouge-1 score of 0.4834 on MIMIC-CXR and 0.4185 on OpenI. Additional assessments by radiology experts highlight the model's strengths in understandability, coherence, relevance, conciseness, and clinical utility. The work illustrates the potential of localized language models designed and tuned for specialized domains like radiology. When properly evaluated and deployed, such models can transform fields like radiology by automating rote tasks and enhancing human expertise.",
"year": 2023,
"venue": "arXiv.org",
"authors": [
"Zheng Liu",
"Yiwei Li",
"Peng Shu",
"Aoxiao Zhong",
"Longtao Yang",
"Chao Ju",
"Zihao Wu",
"Chong-Yi Ma",
"Jie Luo",
"Cheng Chen",
"Sekeun Kim",
"Jiang Hu",
"Haixing Dai",
"Lin Zhao",
"Dajiang Zhu",
"Jun Liu",
"W. Liu",
"Dinggang Shen",
"Tianming Liu",
"Quanzheng Li",
"Xiang Li"
],
"externalIds": {
"DBLP": "journals/corr/abs-2309-06419",
"ArXiv": "2309.06419",
"DOI": "10.48550/arXiv.2309.06419",
"CorpusId": 261696494
},
"url": "https://www.semanticscholar.org/paper/420d6754315ac5db8a040386245cd15b9fe5b459",
"referenceCount": 66,
"citationCount": 24,
"influentialCitationCount": 1,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Llama 2: Open Foundation and Fine-Tuned Chat Models",
"abstract": "In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for dialogue use cases. Our models outperform open-source chat models on most benchmarks we tested, and based on our human evaluations for helpfulness and safety, may be a suitable substitute for closed-source models. We provide a detailed description of our approach to fine-tuning and safety improvements of Llama 2-Chat in order to enable the community to build on our work and contribute to the responsible development of LLMs.",
"year": 2023,
"venue": "arXiv.org",
"authors": [
"Hugo Touvron",
"Louis Martin",
"Kevin R. Stone",
"Peter Albert",
"Amjad Almahairi",
"Yasmine Babaei",
"Nikolay Bashlykov",
"Soumya Batra",
"Prajjwal Bhargava",
"Shruti Bhosale",
"D. Bikel",
"Lukas Blecher",
"Cristian Cantón Ferrer",
"Moya Chen",
"Guillem Cucurull",
"David Esiobu",
"Jude Fernandes",
"Jeremy Fu",
"Wenyin Fu",
"Brian Fuller",
"Cynthia Gao",
"Vedanuj Goswami",
"Naman Goyal",
"A. Hartshorn",
"Saghar Hosseini",
"Rui Hou",
"Hakan Inan",
"Marcin Kardas",
"Viktor Kerkez",
"Madian Khabsa",
"Isabel M. Kloumann",
"A. Korenev",
"Punit Singh Koura",
"Marie-Anne Lachaux",
"Thibaut Lavril",
"Jenya Lee",
"Diana Liskovich",
"Yinghai Lu",
"Yuning Mao",
"Xavier Martinet",
"Todor Mihaylov",
"Pushkar Mishra",
"Igor Molybog",
"Yixin Nie",
"Andrew Poulton",
"Jeremy Reizenstein",
"Rashi Rungta",
"Kalyan Saladi",
"Alan Schelten",
"Ruan Silva",
"Eric Michael Smith",
"R. Subramanian",
"Xia Tan",
"Binh Tang",
"Ross Taylor",
"Adina Williams",
"Jian Xiang Kuan",
"Puxin Xu",
"Zhengxu Yan",
"Iliyan Zarov",
"Yuchen Zhang",
"Angela Fan",
"Melanie Kambadur",
"Sharan Narang",
"Aurelien Rodriguez",
"Robert Stojnic",
"Sergey Edunov",
"Thomas Scialom"
],
"externalIds": {
"ArXiv": "2307.09288",
"DBLP": "journals/corr/abs-2307-09288",
"CorpusId": 259950998
},
"url": "https://www.semanticscholar.org/paper/104b0bb1da562d53cbda87aec79ef6a2827d191a",
"referenceCount": 131,
"citationCount": 7147,
"influentialCitationCount": 1094,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Exploring Multimodal Approaches for Alzheimer's Disease Detection Using Patient Speech Transcript and Audio Data",
"abstract": "Alzheimer's disease (AD) is a common form of dementia that severely impacts patient health. As AD impairs the patient's language understanding and expression ability, the speech of AD patients can serve as an indicator of this disease. This study investigates various methods for detecting AD using patients' speech and transcripts data from the DementiaBank Pitt database. The proposed approach involves pre-trained language models and Graph Neural Network (GNN) that constructs a graph from the speech transcript, and extracts features using GNN for AD detection. Data augmentation techniques, including synonym replacement, GPT-based augmenter, and so on, were used to address the small dataset size. Audio data was also introduced, and WavLM model was used to extract audio features. These features were then fused with text features using various methods. Finally, a contrastive learning approach was attempted by converting speech transcripts back to audio and using it for contrastive learning with the original audio. We conducted intensive experiments and analysis on the above methods. Our findings shed light on the challenges and potential solutions in AD detection using speech and audio data.",
"year": 2023,
"venue": "arXiv.org",
"authors": [
"Hongmin Cai",
"Xiaoke Huang",
"Zheng Liu",
"Wenxiong Liao",
"Haixing Dai",
"Zihao Wu",
"Dajiang Zhu",
"Hui Ren",
"Quanzheng Li",
"Tianming Liu",
"Xiang Li"
],
"externalIds": {
"ArXiv": "2307.02514",
"DBLP": "journals/corr/abs-2307-02514",
"DOI": "10.48550/arXiv.2307.02514",
"CorpusId": 259360421
},
"url": "https://www.semanticscholar.org/paper/0590ec99d2b36b8922139078ac1a91fd62eeda61",
"referenceCount": 38,
"citationCount": 11,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Engineering",
"Computer Science"
]
},
{
"title": "HyenaDNA: Long-Range Genomic Sequence Modeling at Single Nucleotide Resolution",
"abstract": "Genomic (DNA) sequences encode an enormous amount of information for gene regulation and protein synthesis. Similar to natural language models, researchers have proposed foundation models in genomics to learn generalizable features from unlabeled genome data that can then be fine-tuned for downstream tasks such as identifying regulatory elements. Due to the quadratic scaling of attention, previous Transformer-based genomic models have used 512 to 4k tokens as context (<0.001% of the human genome), significantly limiting the modeling of long-range interactions in DNA. In addition, these methods rely on tokenizers to aggregate meaningful DNA units, losing single nucleotide resolution where subtle genetic variations can completely alter protein function via single nucleotide polymorphisms (SNPs). Recently, Hyena, a large language model based on implicit convolutions was shown to match attention in quality while allowing longer context lengths and lower time complexity. Leveraging Hyenas new long-range capabilities, we present HyenaDNA, a genomic foundation model pretrained on the human reference genome with context lengths of up to 1 million tokens at the single nucleotide-level, an up to 500x increase over previous dense attention-based models. HyenaDNA scales sub-quadratically in sequence length (training up to 160x faster than Transformer), uses single nucleotide tokens, and has full global context at each layer. We explore what longer context enables - including the first use of in-context learning in genomics for simple adaptation to novel tasks without updating pretrained model weights. On fine-tuned benchmarks from the Nucleotide Transformer, HyenaDNA reaches state-of-the-art (SotA) on 12 of 17 datasets using a model with orders of magnitude less parameters and pretraining data. On the GenomicBenchmarks, HyenaDNA surpasses SotA on all 8 datasets on average by +9 accuracy points.",
"year": 2023,
"venue": "Neural Information Processing Systems",
"authors": [
"Eric D Nguyen",
"Michael Poli",
"Marjan Faizi",
"A. Thomas",
"Callum Birch-Sykes",
"Michael Wornow",
"Aman Patel",
"Clayton M. Rabideau",
"Stefano Massaroli",
"Y. Bengio",
"Stefano Ermon",
"S. Baccus",
"Christopher Ré"
],
"externalIds": {
"DBLP": "conf/nips/NguyenPFTWBMPRB23",
"ArXiv": "2306.15794",
"DOI": "10.48550/arXiv.2306.15794",
"CorpusId": 259274952,
"PubMed": "37426456"
},
"url": "https://www.semanticscholar.org/paper/bfd2b76998a0521c12903ef5ced517adf70ad2ba",
"referenceCount": 59,
"citationCount": 112,
"influentialCitationCount": 22,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Biology",
"Medicine"
]
},
{
"title": "Large Scale Foundation Model on Single-cell Transcriptomics",
"abstract": "Large-scale pretrained models have become foundation models leading to breakthroughs in natural language processing and related fields. Developing foundation models in life science for deciphering the “languages” of cells and facilitating biomedical research is promising yet challenging. We developed a large-scale pretrained model scFoundation with 100M parameters for this purpose. scFoundation was trained on over 50 million human single-cell transcriptomics data, which contain high-throughput observations on the complex molecular features in all known types of cells. scFoundation is currently the largest model in terms of the size of trainable parameters, dimensionality of genes and the number of cells used in the pre-training. Experiments showed that scFoundation can serve as a foundation model for single-cell transcriptomics and achieve state-of-the-art performances in a diverse array of downstream tasks, such as gene expression enhancement, tissue drug response prediction, single-cell drug response classification, and single-cell perturbation prediction.",
"year": 2023,
"venue": "bioRxiv",
"authors": [
"Minsheng Hao",
"Jing Gong",
"Xin Zeng",
"Chiming Liu",
"Yucheng Guo",
"Xingyi Cheng",
"Taifeng Wang",
"Jianzhu Ma",
"Leo T. Song",
"Xuegong Zhang"
],
"externalIds": {
"DOI": "10.1101/2023.05.29.542705",
"CorpusId": 259025739,
"PubMed": "38844628"
},
"url": "https://www.semanticscholar.org/paper/c520d8a888355f7abb7728b2e2510fe7bc63f814",
"referenceCount": 94,
"citationCount": 59,
"influentialCitationCount": 1,
"isOpenAccess": true,
"fieldsOfStudy": [
"Biology",
"Medicine"
]
},
{
"title": "AD-AutoGPT: An Autonomous GPT for Alzheimer's Disease Infodemiology",
"abstract": "In this pioneering study, inspired by AutoGPT, the state-of-the-art open-source application based on the GPT-4 large language model, we develop a novel tool called AD-AutoGPT which can conduct data collection, processing, and analysis about complex health narratives of Alzheimer's Disease in an autonomous manner via users' textual prompts. We collated comprehensive data from a variety of news sources, including the Alzheimer's Association, BBC, Mayo Clinic, and the National Institute on Aging since June 2022, leading to the autonomous execution of robust trend analyses, intertopic distance maps visualization, and identification of salient terms pertinent to Alzheimer's Disease. This approach has yielded not only a quantifiable metric of relevant discourse but also valuable insights into public focus on Alzheimer's Disease. This application of AD-AutoGPT in public health signifies the transformative potential of AI in facilitating a data-rich understanding of complex health narratives like Alzheimer's Disease in an autonomous manner, setting the groundwork for future AI-driven investigations in global health landscapes.",
"year": 2023,
"venue": "arXiv.org",
"authors": [
"Haixing Dai",
"Yiwei Li",
"Zheng Liu",
"Lin Zhao",
"Zihao Wu",
"Suhang Song",
"Ye Shen",
"Dajiang Zhu",
"Xiang Li",
"Sheng Li",
"Xiaobai Yao",
"Lu Shi",
"Quanzheng Li",
"Zhuo Chen",
"D. Zhang",
"Gengchen Mai",
"Tianming Liu"
],
"externalIds": {
"ArXiv": "2306.10095",
"DBLP": "journals/corr/abs-2306-10095",
"DOI": "10.48550/arXiv.2306.10095",
"CorpusId": 259203752
},
"url": "https://www.semanticscholar.org/paper/7d5657c78f3fee9756061c6a82db44db9d413e0b",
"referenceCount": 55,
"citationCount": 22,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Artificial General Intelligence for Medical Imaging",
"abstract": "In this review, we explore the potential applications of Artificial General Intelligence (AGI) models in healthcare, focusing on foundational Large Language Models (LLMs), Large Vision Models, and Large Multimodal Models. We emphasize the importance of integrating clinical expertise, domain knowledge, and multimodal capabilities into AGI models. In addition, we lay out key roadmaps that guide the development and deployment of healthcare AGI models. Throughout the review, we provide critical perspectives on the potential challenges and pitfalls associated with deploying large-scale AGI models in the medical field. This comprehensive review aims to offer insights into the future implications of AGI in medical imaging, healthcare and beyond.",
"year": 2023,
"venue": "arXiv.org",
"authors": [
"Xiang Li",
"Lu Zhang",
"Zihao Wu",
"Zheng Liu",
"Lin Zhao",
"Yixuan Yuan",
"Jun Liu",
"Gang Li",
"Dajiang Zhu",
"Pingkuan Yan",
"Quanzheng Li",
"W. Liu",
"Tianming Liu",
"Dinggang Shen"
],
"externalIds": {
"DBLP": "journals/corr/abs-2306-05480",
"ArXiv": "2306.05480",
"DOI": "10.48550/arXiv.2306.05480",
"CorpusId": 259129468
},
"url": "https://www.semanticscholar.org/paper/d818f40ea693a335e02f32dab520351d271c58bf",
"referenceCount": 165,
"citationCount": 26,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Transfer learning enables predictions in network biology",
"abstract": null,
"year": 2023,
"venue": "Nature",
"authors": [
"Christina V. Theodoris",
"Ling Xiao",
"Anant Chopra",
"M. Chaffin",
"Zeina R. Al Sayed",
"M. C. Hill",
"Helene Mantineo",
"Elizabeth M Brydon",
"Zexian Zeng",
"X. S. Liu",
"P. Ellinor"
],
"externalIds": {
"DOI": "10.1038/s41586-023-06139-9",
"CorpusId": 259002047,
"PubMed": "37258680"
},
"url": "https://www.semanticscholar.org/paper/7d1e59ce254bea5228da634dbe7c5c4160df6f98",
"referenceCount": 158,
"citationCount": 234,
"influentialCitationCount": 25,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "QLoRA: Efficient Finetuning of Quantized LLMs",
"abstract": "We present QLoRA, an efficient finetuning approach that reduces memory usage enough to finetune a 65B parameter model on a single 48GB GPU while preserving full 16-bit finetuning task performance. QLoRA backpropagates gradients through a frozen, 4-bit quantized pretrained language model into Low Rank Adapters~(LoRA). Our best model family, which we name Guanaco, outperforms all previous openly released models on the Vicuna benchmark, reaching 99.3% of the performance level of ChatGPT while only requiring 24 hours of finetuning on a single GPU. QLoRA introduces a number of innovations to save memory without sacrificing performance: (a) 4-bit NormalFloat (NF4), a new data type that is information theoretically optimal for normally distributed weights (b) double quantization to reduce the average memory footprint by quantizing the quantization constants, and (c) paged optimziers to manage memory spikes. We use QLoRA to finetune more than 1,000 models, providing a detailed analysis of instruction following and chatbot performance across 8 instruction datasets, multiple model types (LLaMA, T5), and model scales that would be infeasible to run with regular finetuning (e.g. 33B and 65B parameter models). Our results show that QLoRA finetuning on a small high-quality dataset leads to state-of-the-art results, even when using smaller models than the previous SoTA. We provide a detailed analysis of chatbot performance based on both human and GPT-4 evaluations showing that GPT-4 evaluations are a cheap and reasonable alternative to human evaluation. Furthermore, we find that current chatbot benchmarks are not trustworthy to accurately evaluate the performance levels of chatbots. A lemon-picked analysis demonstrates where Guanaco fails compared to ChatGPT. We release all of our models and code, including CUDA kernels for 4-bit training.",
"year": 2023,
"venue": "Neural Information Processing Systems",
"authors": [
"Tim Dettmers",
"Artidoro Pagnoni",
"Ari Holtzman",
"Luke Zettlemoyer"
],
"externalIds": {
"ArXiv": "2305.14314",
"DBLP": "conf/nips/DettmersPHZ23",
"DOI": "10.48550/arXiv.2305.14314",
"CorpusId": 258841328
},
"url": "https://www.semanticscholar.org/paper/32ac52069e562d4f900afee70bdca63f53461481",
"referenceCount": 73,
"citationCount": 1385,
"influentialCitationCount": 172,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Community Graph Convolution Neural Network for Alzheimer's Disease Classification and Pathogenetic Factors Identification.",
"abstract": "As a complex neural network system, the brain regions and genes collaborate to effectively store and transmit information. We abstract the collaboration correlations as the brain region gene community network (BG-CN) and present a new deep learning approach, such as the community graph convolutional neural network (Com-GCN), for investigating the transmission of information within and between communities. The results can be used for diagnosing and extracting causal factors for Alzheimer's disease (AD). First, an affinity aggregation model for BG-CN is developed to describe intercommunity and intracommunity information transmission. Second, we design the Com-GCN architecture with intercommunity convolution and intracommunity convolution operations based on the affinity aggregation model. Through sufficient experimental validation on the AD neuroimaging initiative (ADNI) dataset, the design of Com-GCN matches the physiological mechanism better and improves the interpretability and classification performance. Furthermore, Com-GCN can identify lesioned brain regions and disease-causing genes, which may assist precision medicine and drug design in AD and serve as a valuable reference for other neurological disorders.",
"year": 2023,
"venue": "IEEE Transactions on Neural Networks and Learning Systems",
"authors": [
"Xia-an Bi",
"Ke Chen",
"Siyu Jiang",
"Sheng Luo",
"W. Zhou",
"Zhaoxu Xing",
"Luyun Xu",
"Zheng Liu",
"Tianming Liu"
],
"externalIds": {
"DOI": "10.1109/TNNLS.2023.3269446",
"CorpusId": 258807857,
"PubMed": "37204952"
},
"url": "https://www.semanticscholar.org/paper/e0172192d7e813bcac0b997a9c6acd4cf663237e",
"referenceCount": 0,
"citationCount": 14,
"influentialCitationCount": 0,
"isOpenAccess": false,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "Interpretability at Scale: Identifying Causal Mechanisms in Alpaca",
"abstract": "Obtaining human-interpretable explanations of large, general-purpose language models is an urgent goal for AI safety. However, it is just as important that our interpretability methods are faithful to the causal dynamics underlying model behavior and able to robustly generalize to unseen inputs. Distributed Alignment Search (DAS) is a powerful gradient descent method grounded in a theory of causal abstraction that has uncovered perfect alignments between interpretable symbolic algorithms and small deep learning models fine-tuned for specific tasks. In the present paper, we scale DAS significantly by replacing the remaining brute-force search steps with learned parameters -- an approach we call Boundless DAS. This enables us to efficiently search for interpretable causal structure in large language models while they follow instructions. We apply Boundless DAS to the Alpaca model (7B parameters), which, off the shelf, solves a simple numerical reasoning problem. With Boundless DAS, we discover that Alpaca does this by implementing a causal model with two interpretable boolean variables. Furthermore, we find that the alignment of neural representations with these variables is robust to changes in inputs and instructions. These findings mark a first step toward faithfully understanding the inner-workings of our ever-growing and most widely deployed language models. Our tool is extensible to larger LLMs and is released publicly at `https://github.com/stanfordnlp/pyvene`.",
"year": 2023,
"venue": "Neural Information Processing Systems",
"authors": [
"Zhengxuan Wu",
"Atticus Geiger",
"Christopher Potts",
"Noah D. Goodman"
],
"externalIds": {
"DBLP": "journals/corr/abs-2305-08809",
"ArXiv": "2305.08809",
"DOI": "10.48550/arXiv.2305.08809",
"CorpusId": 258685390
},
"url": "https://www.semanticscholar.org/paper/3f7fa58806614a1f38ae760c25c1305e3a87d4ce",
"referenceCount": 61,
"citationCount": 60,
"influentialCitationCount": 3,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Instruction-ViT: Multi-Modal Prompts for Instruction Learning in ViT",
"abstract": "Prompts have been proven to play a crucial role in large language models, and in recent years, vision models have also been using prompts to improve scalability for multiple downstream tasks. In this paper, we focus on adapting prompt design based on instruction tuning into a visual transformer model for image classification which we called Instruction-ViT. The key idea is to implement multi-modal prompts (text or image prompt) related to category information to guide the fine-tuning of the model. Based on the experiments of several image captionining tasks, the performance and domain adaptability were improved. Our work provided an innovative strategy to fuse multi-modal prompts with better performance and faster adaptability for visual classification models.",
"year": 2023,
"venue": "arXiv.org",
"authors": [
"Zhe Xiao",
"Yuzhong Chen",
"Lu Zhang",
"Jun Yao",
"Zihao Wu",
"Xiao-Xing Yu",
"Yirong Pan",
"Lin Zhao",
"Chonghe Ma",
"Xinyu Liu",
"W. Liu",
"Xiang Li",
"Yixuan Yuan",
"Dinggang Shen",
"Dajiang Zhu",
"Tianming Liu",
"Xi Jiang"
],
"externalIds": {
"DBLP": "journals/corr/abs-2305-00201",
"ArXiv": "2305.00201",
"DOI": "10.48550/arXiv.2305.00201",
"CorpusId": 258426716
},
"url": "https://www.semanticscholar.org/paper/a677938545f63ad44c87d09f85dd231980a8476f",
"referenceCount": 48,
"citationCount": 13,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Prompt Engineering for Healthcare: Methodologies and Applications",
"abstract": "Prompt engineering is a critical technique in the field of natural language processing that involves designing and optimizing the prompts used to input information into models, aiming to enhance their performance on specific tasks. With the recent advancements in large language models, prompt engineering has shown significant superiority across various domains and has become increasingly important in the healthcare domain. However, there is a lack of comprehensive reviews specifically focusing on prompt engineering in the medical field. This review will introduce the latest advances in prompt engineering in the field of natural language processing for the medical field. First, we will provide the development of prompt engineering and emphasize its significant contributions to healthcare natural language processing applications such as question-answering systems, text summarization, and machine translation. With the continuous improvement of general large language models, the importance of prompt engineering in the healthcare domain is becoming increasingly prominent. The aim of this article is to provide useful resources and bridges for healthcare natural language processing researchers to better explore the application of prompt engineering in this field. We hope that this review can provide new ideas and inspire for research and application in medical natural language processing.",
"year": 2023,
"venue": "arXiv.org",
"authors": [
"Jiaqi Wang",
"Enze Shi",
"Sigang Yu",
"Zihao Wu",
"Chong Ma",
"Haixing Dai",
"Qiushi Yang",
"Yanqing Kang",
"Jinru Wu",
"Huawen Hu",
"Chenxi Yue",
"Haiyang Zhang",
"Yi-Hsueh Liu",
"Xiang Li",
"Bao Ge",
"Dajiang Zhu",
"Yixuan Yuan",
"Dinggang Shen",
"Tianming Liu",
"Shu Zhang"
],
"externalIds": {
"ArXiv": "2304.14670",
"DBLP": "journals/corr/abs-2304-14670",
"DOI": "10.48550/arXiv.2304.14670",
"CorpusId": 258418353
},
"url": "https://www.semanticscholar.org/paper/385376b8aa48c25403f17d6206db7c09b67e1314",
"referenceCount": 140,
"citationCount": 65,
"influentialCitationCount": 4,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Differentiating ChatGPT-Generated and Human-Written Medical Texts: Quantitative Study",
"abstract": "Background Large language models, such as ChatGPT, are capable of generating grammatically perfect and human-like text content, and a large number of ChatGPT-generated texts have appeared on the internet. However, medical texts, such as clinical notes and diagnoses, require rigorous validation, and erroneous medical content generated by ChatGPT could potentially lead to disinformation that poses significant harm to health care and the general public. Objective This study is among the first on responsible artificial intelligence–generated content in medicine. We focus on analyzing the differences between medical texts written by human experts and those generated by ChatGPT and designing machine learning workflows to effectively detect and differentiate medical texts generated by ChatGPT. Methods We first constructed a suite of data sets containing medical texts written by human experts and generated by ChatGPT. We analyzed the linguistic features of these 2 types of content and uncovered differences in vocabulary, parts-of-speech, dependency, sentiment, perplexity, and other aspects. Finally, we designed and implemented machine learning methods to detect medical text generated by ChatGPT. The data and code used in this paper are published on GitHub. Results Medical texts written by humans were more concrete, more diverse, and typically contained more useful information, while medical texts generated by ChatGPT paid more attention to fluency and logic and usually expressed general terminologies rather than effective information specific to the context of the problem. A bidirectional encoder representations from transformers–based model effectively detected medical texts generated by ChatGPT, and the F1 score exceeded 95%. Conclusions Although text generated by ChatGPT is grammatically perfect and human-like, the linguistic characteristics of generated medical texts were different from those written by human experts. Medical text generated by ChatGPT could be effectively detected by the proposed machine learning algorithms. This study provides a pathway toward trustworthy and accountable use of large language models in medicine.",
"year": 2023,
"venue": "JMIR Medical Education",
"authors": [
"Wenxiong Liao",
"Zheng Liu",
"Haixing Dai",
"Shaochen Xu",
"Zihao Wu",
"Yiyang Zhang",
"Xiaoke Huang",
"Dajiang Zhu",
"Hongmin Cai",
"Tianming Liu",
"Xiang Li"
],
"externalIds": {
"DBLP": "journals/corr/abs-2304-11567",
"PubMedCentral": "10784984",
"ArXiv": "2304.11567",
"DOI": "10.2196/48904",
"CorpusId": 258298336,
"PubMed": "38153785"
},
"url": "https://www.semanticscholar.org/paper/286756b2b02d6a7bc49a7ad66686f30831f26c25",
"referenceCount": 61,
"citationCount": 39,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Medicine"
]
},
{
"title": "GeneGPT: Augmenting Large Language Models with Domain Tools for Improved Access to Biomedical Information",
"abstract": "While large language models (LLMs) have been successfully applied to various tasks, they still face challenges with hallucinations. Augmenting LLMs with domain-specific tools such as database utilities can facilitate easier and more precise access to specialized knowledge. In this paper, we present GeneGPT, a novel method for teaching LLMs to use the Web APIs of the National Center for Biotechnology Information (NCBI) for answering genomics questions. Specifically, we prompt Codex to solve the GeneTuring tests with NCBI Web APIs by in-context learning and an augmented decoding algorithm that can detect and execute API calls. Experimental results show that GeneGPT achieves state-of-the-art performance on eight tasks in the GeneTuring benchmark with an average score of 0.83, largely surpassing retrieval-augmented LLMs such as the new Bing (0.44), biomedical LLMs such as BioMedLM (0.08) and BioGPT (0.04), as well as GPT-3 (0.16) and ChatGPT (0.12). Our further analyses suggest that: (1) API demonstrations have good cross-task generalizability and are more useful than documentations for in-context learning; (2) GeneGPT can generalize to longer chains of API calls and answer multi-hop questions in GeneHop, a novel dataset introduced in this work; (3) Different types of errors are enriched in different tasks, providing valuable insights for future improvements.",
"year": 2023,
"venue": "Bioinform.",
"authors": [
"Qiao Jin",
"Yifan Yang",
"Qingyu Chen",
"Zhiyong Lu"
],
"externalIds": {
"PubMedCentral": "10153281",
"ArXiv": "2304.09667",
"DBLP": "journals/corr/abs-2304-09667",
"DOI": "10.1093/bioinformatics/btae075",
"CorpusId": 258298113,
"PubMed": "38341654"
},
"url": "https://www.semanticscholar.org/paper/05e003a34148d4663734d3f39deefa0979d2a0e6",
"referenceCount": 41,
"citationCount": 95,
"influentialCitationCount": 5,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Biology",
"Medicine"
]
},
{
"title": "An Iterative Optimizing Framework for Radiology Report Summarization With ChatGPT",
"abstract": "The “Impression” section of a radiology report is a critical basis for communication between radiologists and other physicians. Typically written by radiologists, this part is derived from the “Findings” section, which can be laborious and error-prone. Although deep-learning-based models, such as bidirectional encoder representation from transformers (BERT), have achieved promising results in automatic impression generation (AIG), such models often require substantial amounts of medical data and have poor generalization performance. Recently, large language models (LLMs) like Chat Generative Pre-trained Transformer (ChatGPT) have shown strong generalization capabilities and performance, but their performance in specific domains, such as radiology, remains under-investigated and potentially limited. To address this limitation, we propose ImpressionGPT, leveraging the contextual learning capabilities of LLMs through our dynamic prompt and iterative optimization algorithm to accomplish the AIG task. ImpressionGPT initially employs a small amount of domain-specific data to create a dynamic prompt, extracting contextual semantic information closely related to the test data. Subsequently, the iterative optimization algorithm automatically evaluates the output of LLMs and provides optimization suggestions, continuously refining the output results. The proposed ImpressionGPT model achieves superior performance of AIG task on both the Medical Information Mart for Intensive Care - Chest X-ray database (MIMIC-CXR) and Open Access Biomedical Image Search Engine (OpenI) datasets without requiring additional training data or fine-tuning the LLMs. This work presents a paradigm for localizing LLMs that can be applied in a wide range of similar application scenarios, bridging the gap between general-purpose LLMs and the specific language processing needs of various domains.",
"year": 2023,
"venue": "IEEE Transactions on Artificial Intelligence",
"authors": [
"Chong Ma",
"Zihao Wu",
"Jiaqi Wang",
"Shaochen Xu",
"Yaonai Wei",
"Zheng Liu",
"Lei Guo",
"Xiaoya Cai",
"Shu Zhang",
"Tuo Zhang",
"Dajiang Zhu",
"Dinggang Shen",
"Tianming Liu",
"Xiang Li"
],
"externalIds": {
"ArXiv": "2304.08448",
"DBLP": "journals/tai/MaWWXWLZJGCZZZSLL24",
"DOI": "10.1109/TAI.2024.3364586",
"CorpusId": 258180358
},
"url": "https://www.semanticscholar.org/paper/848909fbae167f21589bfc7a54fbf27e306b883c",
"referenceCount": 48,
"citationCount": 72,
"influentialCitationCount": 2,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Biological Factor Regulatory Neural Network",
"abstract": "Genes are fundamental for analyzing biological systems and many recent works proposed to utilize gene expression for various biological tasks by deep learning models. Despite their promising performance, it is hard for deep neural networks to provide biological insights for humans due to their black-box nature. Recently, some works integrated biological knowledge with neural networks to improve the transparency and performance of their models. However, these methods can only incorporate partial biological knowledge, leading to suboptimal performance. In this paper, we propose the Biological Factor Regulatory Neural Network (BFReg-NN), a generic framework to model relations among biological factors in cell systems. BFReg-NN starts from gene expression data and is capable of merging most existing biological knowledge into the model, including the regulatory relations among genes or proteins (e.g., gene regulatory networks (GRN), protein-protein interaction networks (PPI)) and the hierarchical relations among genes, proteins and pathways (e.g., several genes/proteins are contained in a pathway). Moreover, BFReg-NN also has the ability to provide new biologically meaningful insights because of its white-box characteristics. Experimental results on different gene expression-based tasks verify the superiority of BFReg-NN compared with baselines. Our case studies also show that the key insights found by BFReg-NN are consistent with the biological literature.",
"year": 2023,
"venue": "arXiv.org",
"authors": [
"Xinnan Dai",
"Caihua Shan",
"Jie Zheng",
"Xiaoxiao Li",
"Dongsheng Li"
],
"externalIds": {
"DBLP": "journals/corr/abs-2304-04982",
"ArXiv": "2304.04982",
"DOI": "10.48550/arXiv.2304.04982",
"CorpusId": 258059932
},
"url": "https://www.semanticscholar.org/paper/5eb0c101cbb75b61cb86158183a84ea3689a0117",
"referenceCount": 52,
"citationCount": 1,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Biology"
]
},
{
"title": "Evaluating large language models on a highly-specialized topic, radiation oncology physics",
"abstract": "Purpose We present the first study to investigate Large Language Models (LLMs) in answering radiation oncology physics questions. Because popular exams like AP Physics, LSAT, and GRE have large test-taker populations and ample test preparation resources in circulation, they may not allow for accurately assessing the true potential of LLMs. This paper proposes evaluating LLMs on a highly-specialized topic, radiation oncology physics, which may be more pertinent to scientific and medical communities in addition to being a valuable benchmark of LLMs. Methods We developed an exam consisting of 100 radiation oncology physics questions based on our expertise. Four LLMs, ChatGPT (GPT-3.5), ChatGPT (GPT-4), Bard (LaMDA), and BLOOMZ, were evaluated against medical physicists and non-experts. The performance of ChatGPT (GPT-4) was further explored by being asked to explain first, then answer. The deductive reasoning capability of ChatGPT (GPT-4) was evaluated using a novel approach (substituting the correct answer with “None of the above choices is the correct answer.”). A majority vote analysis was used to approximate how well each group could score when working together. Results ChatGPT GPT-4 outperformed all other LLMs and medical physicists, on average, with improved accuracy when prompted to explain before answering. ChatGPT (GPT-3.5 and GPT-4) showed a high level of consistency in its answer choices across a number of trials, whether correct or incorrect, a characteristic that was not observed in the human test groups or Bard (LaMDA). In evaluating deductive reasoning ability, ChatGPT (GPT-4) demonstrated surprising accuracy, suggesting the potential presence of an emergent ability. Finally, although ChatGPT (GPT-4) performed well overall, its intrinsic properties did not allow for further improvement when scoring based on a majority vote across trials. In contrast, a team of medical physicists were able to greatly outperform ChatGPT (GPT-4) using a majority vote. Conclusion This study suggests a great potential for LLMs to work alongside radiation oncology experts as highly knowledgeable assistants.",
"year": 2023,
"venue": "Frontiers in Oncology",
"authors": [
"J. Holmes",
"Zheng Liu",
"Lian-Cheng Zhang",
"Yuzhen Ding",
"T. Sio",
"L. Mcgee",
"J. Ashman",
"Xiang Li",
"Tianming Liu",
"Jiajian Shen",
"W. Liu"
],
"externalIds": {
"ArXiv": "2304.01938",
"PubMedCentral": "10388568",
"DBLP": "journals/corr/abs-2304-01938",
"DOI": "10.3389/fonc.2023.1219326",
"CorpusId": 257921233,
"PubMed": "37529688"
},
"url": "https://www.semanticscholar.org/paper/9ec42d155e2014e86ab49adcf76fd40a41a867ea",
"referenceCount": 76,
"citationCount": 86,
"influentialCitationCount": 4,
"isOpenAccess": true,
"fieldsOfStudy": [
"Physics",
"Computer Science",
"Medicine"
]
},
{
"title": "When Brain-inspired AI Meets AGI",
"abstract": "Artificial General Intelligence (AGI) has been a long-standing goal of humanity, with the aim of creating machines capable of performing any intellectual task that humans can do. To achieve this, AGI researchers draw inspiration from the human brain and seek to replicate its principles in intelligent machines. Brain-inspired artificial intelligence is a field that has emerged from this endeavor, combining insights from neuroscience, psychology, and computer science to develop more efficient and powerful AI systems. In this article, we provide a comprehensive overview of brain-inspired AI from the perspective of AGI. We begin with the current progress in brain-inspired AI and its extensive connection with AGI. We then cover the important characteristics for both human intelligence and AGI (e.g., scaling, multimodality, and reasoning). We discuss important technologies toward achieving AGI in current AI systems, such as in-context learning and prompt tuning. We also investigate the evolution of AGI systems from both algorithmic and infrastructural perspectives. Finally, we explore the limitations and future of AGI.",
"year": 2023,
"venue": "Meta-Radiology",
"authors": [
"Lin Zhao",
"Lu Zhang",
"Zihao Wu",
"Yuzhong Chen",
"Haixing Dai",
"Xiao-Xing Yu",
"Zheng Liu",
"Tuo Zhang",
"Xintao Hu",
"Xi Jiang",
"Xiang Li",
"Dajiang Zhu",
"Dinggang Shen",
"Tianming Liu"
],
"externalIds": {
"ArXiv": "2303.15935",
"DBLP": "journals/corr/abs-2303-15935",
"DOI": "10.48550/arXiv.2303.15935",
"CorpusId": 257771872
},
"url": "https://www.semanticscholar.org/paper/3cf78486fcee84f3f6898f0aaf194b7c28b92459",
"referenceCount": 130,
"citationCount": 63,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "GPT-4 Technical Report",
"abstract": "We report the development of GPT-4, a large-scale, multimodal model which can accept image and text inputs and produce text outputs. While less capable than humans in many real-world scenarios, GPT-4 exhibits human-level performance on various professional and academic benchmarks, including passing a simulated bar exam with a score around the top 10% of test takers. GPT-4 is a Transformer-based model pre-trained to predict the next token in a document. The post-training alignment process results in improved performance on measures of factuality and adherence to desired behavior. A core component of this project was developing infrastructure and optimization methods that behave predictably across a wide range of scales. This allowed us to accurately predict some aspects of GPT-4's performance based on models trained with no more than 1/1,000th the compute of GPT-4.",
"year": 2023,
"venue": "",
"authors": [
"OpenAI Josh Achiam",
"Steven Adler",
"Sandhini Agarwal",
"Lama Ahmad",
"Ilge Akkaya",
"Florencia Leoni Aleman",
"Diogo Almeida",
"Janko Altenschmidt",
"Sam Altman",
"Shyamal Anadkat",
"Red Avila",
"Igor Babuschkin",
"S. Balaji",
"Valerie Balcom",
"Paul Baltescu",
"Haim-ing Bao",
"Mo Bavarian",
"Jeff Belgum",
"Irwan Bello",
"Jake Berdine",
"Gabriel Bernadett-Shapiro",
"Christopher Berner",
"Lenny Bogdonoff",
"Oleg Boiko",
"Madelaine Boyd",
"Anna-Luisa Brakman",
"Greg Brockman",
"Tim Brooks",
"Miles Brundage",
"Kevin Button",
"Trevor Cai",
"Rosie Campbell",
"Andrew Cann",
"Brittany Carey",
"Chelsea Carlson",
"Rory Carmichael",
"Brooke Chan",
"Che Chang",
"Fotis Chantzis",
"Derek Chen",
"Sully Chen",
"Ruby Chen",
"Jason Chen",
"Mark Chen",
"B. Chess",
"Chester Cho",
"Casey Chu",
"Hyung Won Chung",
"Dave Cummings",
"Jeremiah Currier",
"Yunxing Dai",
"Cory Decareaux",
"Thomas Degry",
"Noah Deutsch",
"Damien Deville",
"Arka Dhar",
"David Dohan",
"Steve Dowling",
"Sheila Dunning",
"Adrien Ecoffet",
"Atty Eleti",
"Tyna Eloundou",
"David Farhi",
"Liam Fedus",
"Niko Felix",
"Sim'on Posada Fishman",
"Juston Forte",
"Is-abella Fulford",
"Leo Gao",
"Elie Georges",
"C. Gibson",
"Vik Goel",
"Tarun Gogineni",
"Gabriel Goh",
"Raphael Gontijo-Lopes",
"Jonathan Gordon",
"Morgan Grafstein",
"Scott Gray",
"Ryan Greene",
"Joshua Gross",
"S. Gu",
"Yufei Guo",
"Chris Hallacy",
"Jesse Han",
"Jeff Harris",
"Yuchen He",
"Mike Heaton",
"Johannes Heidecke",
"Chris Hesse",
"Alan Hickey",
"Wade Hickey",
"Peter Hoeschele",
"Brandon Houghton",
"Kenny Hsu",
"Shengli Hu",
"Xin Hu",
"Joost Huizinga",
"Shantanu Jain",
"Shawn Jain",
"Joanne Jang",
"Angela Jiang",
"Roger Jiang",
"Haozhun Jin",
"Denny Jin",
"Shino Jomoto",
"B. Jonn",
"Heewoo Jun",
"Tomer Kaftan",
"Lukasz Kaiser",
"Ali Kamali",
"I. Kanitscheider",
"N. Keskar",
"Tabarak Khan",
"Logan Kilpatrick",
"Jong Wook Kim",
"Christina Kim",
"Yongjik Kim",
"Hendrik Kirchner",
"J. Kiros",
"Matthew Knight",
"Daniel Kokotajlo",
"Lukasz Kondraciuk",
"A. Kondrich",
"Aris Konstantinidis",
"Kyle Kosic",
"Gretchen Krueger",
"Vishal Kuo",
"Michael Lampe",
"Ikai Lan",
"Teddy Lee",
"J. Leike",
"Jade Leung",
"Daniel Levy",
"Chak Ming Li",
"Rachel Lim",
"Molly Lin",
"Stephanie Lin",
"Ma-teusz Litwin",
"Theresa Lopez",
"Ryan Lowe",
"Patricia Lue",
"A. Makanju",
"Kim Malfacini",
"Sam Manning",
"Todor Markov",
"Yaniv Markovski",
"Bianca Martin",
"Katie Mayer",
"Andrew Mayne",
"Bob McGrew",
"S. McKinney",
"C. McLeavey",
"Paul McMillan",
"Jake McNeil",
"David Medina",
"Aalok Mehta",
"Jacob Menick",
"Luke Metz",
"Andrey Mishchenko",
"Pamela Mishkin",
"Vinnie Monaco",
"Evan Morikawa",
"Daniel P. Mossing",
"Tong Mu",
"Mira Murati",
"O. Murk",
"David M'ely",
"Ashvin Nair",
"Reiichiro Nakano",
"Rajeev Nayak",
"Arvind Neelakantan",
"Richard Ngo",
"Hyeonwoo Noh",
"Ouyang Long",
"Cullen O'Keefe",
"J. Pachocki",
"Alex Paino",
"Joe Palermo",
"Ashley Pantuliano",
"Giambattista Parascandolo",
"Joel Parish",
"Emy Parparita",
"Alexandre Passos",
"Mikhail Pavlov",
"Andrew Peng",
"Adam Perelman",
"Filipe de Avila Belbute Peres",
"Michael Petrov",
"Henrique Pondé de Oliveira Pinto",
"Michael Pokorny",
"Michelle Pokrass",
"Vitchyr H. Pong",
"Tolly Powell",
"Alethea Power",
"Boris Power",
"Elizabeth Proehl",
"Raul Puri",
"Alec Radford",
"Jack W. Rae",
"Aditya Ramesh",
"Cameron Raymond",
"Francis Real",
"Kendra Rimbach",
"Carl Ross",
"Bob Rotsted",
"Henri Roussez",
"Nick Ryder",
"M. Saltarelli",
"Ted Sanders",
"Shibani Santurkar",
"Girish Sastry",
"Heather Schmidt",
"David Schnurr",
"John Schulman",
"Daniel Selsam",
"Kyla Sheppard",
"Toki Sherbakov",
"Jessica Shieh",
"Sarah Shoker",
"Pranav Shyam",
"Szymon Sidor",
"Eric Sigler",
"Maddie Simens",
"Jordan Sitkin",
"Katarina Slama",
"Ian Sohl",
"Benjamin D. Sokolowsky",
"Yang Song",
"Natalie Staudacher",
"F. Such",
"Natalie Summers",
"I. Sutskever",
"Jie Tang",
"N. Tezak",
"Madeleine Thompson",
"Phil Tillet",
"Amin Tootoonchian",
"Elizabeth Tseng",
"Preston Tuggle",
"Nick Turley",
"Jerry Tworek",
"Juan Felipe Cer'on Uribe",
"Andrea Vallone",
"Arun Vijayvergiya",
"Chelsea Voss",
"Carroll L. Wainwright",
"Justin Jay Wang",
"Alvin Wang",
"Ben Wang",
"Jonathan Ward",
"Jason Wei",
"CJ Weinmann",
"Akila Welihinda",
"P. Welinder",
"Jiayi Weng",
"Lilian Weng",
"Matt Wiethoff",
"Dave Willner",
"Clemens Winter",
"Samuel Wolrich",
"Hannah Wong",
"Lauren Workman",
"Sherwin Wu",
"Jeff Wu",
"Michael Wu",
"Kai Xiao",
"Tao Xu",
"Sarah Yoo",
"Kevin Yu",
"Qim-ing Yuan",
"Wojciech Zaremba",
"Rowan Zellers",
"Chong Zhang",
"Marvin Zhang",
"Shengjia Zhao",
"Tianhao Zheng",
"Juntang Zhuang",
"William Zhuk",
"Barret Zoph"
],
"externalIds": {
"ArXiv": "2303.08774",
"CorpusId": 257532815
},
"url": "https://www.semanticscholar.org/paper/163b4d6a79a5b19af88b8585456363340d9efd04",
"referenceCount": 0,
"citationCount": 7060,
"influentialCitationCount": 1037,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "GeneTuring tests GPT models in genomics",
"abstract": "Generative Pre-trained Transformers (GPT) are powerful language models that have great potential to transform biomedical research. However, they are known to suffer from artificial hallucinations and provide false answers that are seemingly correct in some situations. We developed GeneTuring, a comprehensive QA database with 600 questions in genomics, and manually scored 10,800 answers returned by six GPT models, including GPT-3, ChatGPT, and New Bing. New Bing has the best overall performance and significantly reduces the level of AI hallucination compared to other models, thanks to its ability to recognize its incapacity in answering questions. We argue that improving incapacity awareness is equally important as improving model accuracy to address AI hallucination.",
"year": 2023,
"venue": "bioRxiv",
"authors": [
"Wenpin Hou",
"Zhicheng Ji"
],
"externalIds": {
"PubMedCentral": "10054955",
"DOI": "10.1101/2023.03.11.532238",
"CorpusId": 257535768,
"PubMed": "36993670"
},
"url": "https://www.semanticscholar.org/paper/6470b561d3426714847fd9201c8ea4ab8585fb96",
"referenceCount": 20,
"citationCount": 26,
"influentialCitationCount": 2,
"isOpenAccess": true,
"fieldsOfStudy": [
"Biology",
"Medicine"
]
},
{
"title": "ProtST: Multi-Modality Learning of Protein Sequences and Biomedical Texts",
"abstract": "Current protein language models (PLMs) learn protein representations mainly based on their sequences, thereby well capturing co-evolutionary information, but they are unable to explicitly acquire protein functions, which is the end goal of protein representation learning. Fortunately, for many proteins, their textual property descriptions are available, where their various functions are also described. Motivated by this fact, we first build the ProtDescribe dataset to augment protein sequences with text descriptions of their functions and other important properties. Based on this dataset, we propose the ProtST framework to enhance Protein Sequence pre-training and understanding by biomedical Texts. During pre-training, we design three types of tasks, i.e., unimodal mask prediction, multimodal representation alignment and multimodal mask prediction, to enhance a PLM with protein property information with different granularities and, at the same time, preserve the PLM's original representation power. On downstream tasks, ProtST enables both supervised learning and zero-shot prediction. We verify the superiority of ProtST-induced PLMs over previous ones on diverse representation learning benchmarks. Under the zero-shot setting, we show the effectiveness of ProtST on zero-shot protein classification, and ProtST also enables functional protein retrieval from a large-scale database without any function annotation.",
"year": 2023,
"venue": "International Conference on Machine Learning",
"authors": [
"Minghao Xu",
"Xinyu Yuan",
"Santiago Miret",
"Jian Tang"
],
"externalIds": {
"DBLP": "conf/icml/XuYM023",
"ArXiv": "2301.12040",
"DOI": "10.48550/arXiv.2301.12040",
"CorpusId": 256390530
},
"url": "https://www.semanticscholar.org/paper/4ea1f64c13280ef13f506eef4b3dd2395d1cf171",
"referenceCount": 66,
"citationCount": 54,
"influentialCitationCount": 9,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Biology"
]
},
{
"title": "Context Matters: A Strategy to Pre-train Language Model for Science Education",
"abstract": "This study aims at improving the performance of scoring student responses in science education automatically. BERT-based language models have shown significant superiority over traditional NLP models in various language-related tasks. However, science writing of students, including argumentation and explanation, is domain-specific. In addition, the language used by students is different from the language in journals and Wikipedia, which are training sources of BERT and its existing variants. All these suggest that a domain-specific model pre-trained using science education data may improve model performance. However, the ideal type of data to contextualize pre-trained language model and improve the performance in automatically scoring student written responses remains unclear. Therefore, we employ different data in this study to contextualize both BERT and SciBERT models and compare their performance on automatic scoring of assessment tasks for scientific argumentation. We use three datasets to pre-train the model: 1) journal articles in science education, 2) a large dataset of students' written responses (sample size over 50,000), and 3) a small dataset of students' written responses of scientific argumentation tasks. Our experimental results show that in-domain training corpora constructed from science questions and responses improve language model performance on a wide variety of downstream tasks. Our study confirms the effectiveness of continual pre-training on domain-specific data in the education domain and demonstrates a generalizable strategy for automating science education tasks with high accuracy. We plan to release our data and SciEdBERT models for public use and community engagement.",
"year": 2023,
"venue": "International Conference on Artificial Intelligence in Education",
"authors": [
"Zhengliang Liu",
"Xinyue He",
"Lei Liu",
"Tianming Liu",
"Xiaoming Zhai"
],
"externalIds": {
"DBLP": "journals/corr/abs-2301-12031",
"ArXiv": "2301.12031",
"DOI": "10.2139/ssrn.4339205",
"CorpusId": 256349068
},
"url": "https://www.semanticscholar.org/paper/71dc990592911c454714e6fbe680dadf0cae1e45",
"referenceCount": 17,
"citationCount": 29,
"influentialCitationCount": 4,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "GenSLMs: Genome-scale language models reveal SARS-CoV-2 evolutionary dynamics",
"abstract": "We seek to transform how new and emergent variants of pandemic-causing viruses, specifically SARS-CoV-2, are identified and classified. By adapting large language models (LLMs) for genomic data, we build genome-scale language models (GenSLMs) which can learn the evolutionary landscape of SARS-CoV-2 genomes. By pre-training on over 110 million prokaryotic gene sequences and fine-tuning a SARS-CoV-2-specific model on 1.5 million genomes, we show that GenSLMs can accurately and rapidly identify variants of concern. Thus, to our knowledge, GenSLMs represents one of the first whole-genome scale foundation models which can generalize to other prediction tasks. We demonstrate scaling of GenSLMs on GPU-based supercomputers and AI-hardware accelerators utilizing 1.63 Zettaflops in training runs with a sustained performance of 121 PFLOPS in mixed precision and peak of 850 PFLOPS. We present initial scientific insights from examining GenSLMs in tracking evolutionary dynamics of SARS-CoV-2, paving the path to realizing this on large biological data.",
"year": 2022,
"venue": "bioRxiv",
"authors": [
"Max Zvyagin",
"Alexander Brace",
"Kyle Hippe",
"Yuntian Deng",
"Bin Zhang",
"Cindy Orozco Bohorquez",
"Austin R. Clyde",
"B. Kale",
"Danilo Perez-Rivera",
"Heng Ma",
"Carla M. Mann",
"Michael W. Irvin",
"J. G. Pauloski",
"Logan T. Ward",
"Valerie Hayot",
"M. Emani",
"Sam Foreman",
"Zhen Xie",
"Diangen Lin",
"Maulik Shukla",
"Weili Nie",
"Josh Romero",
"Christian Dallago",
"Arash Vahdat",
"Chaowei Xiao",
"Tom Gibbs",
"Ian T. Foster",
"James J. Davis",
"M. Papka",
"T. Brettin",
"Rick L. Stevens",
"Anima Anandkumar",
"V. Vishwanath",
"Arvind Ramanathan"
],
"externalIds": {
"DBLP": "journals/ijhpca/ZvyaginBHDZBCKPMMIOVPWHEFXLSNRDV23",
"PubMedCentral": "9709791",
"DOI": "10.1101/2022.10.10.511571",
"CorpusId": 252899108,
"PubMed": "36451881"
},
"url": "https://www.semanticscholar.org/paper/62a45ab7b676f3877d41f66f6c9ddf1ec44a1c5f",
"referenceCount": 80,
"citationCount": 55,
"influentialCitationCount": 4,
"isOpenAccess": true,
"fieldsOfStudy": [
"Biology",
"Medicine",
"Computer Science"
]
},
{
"title": "BLOOM: A 176B-Parameter Open-Access Multilingual Language Model",
"abstract": "Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.",
"year": 2022,
"venue": "arXiv.org",
"authors": [
"Teven Le Scao",
"Angela Fan",
"Christopher Akiki",
"Ellie Pavlick",
"Suzana Ili'c",
"Daniel Hesslow",
"Roman Castagn'e",
"A. Luccioni",
"François Yvon",
"Matthias Gallé",
"J. Tow",
"Alexander M. Rush",
"Stella Biderman",
"Albert Webson",
"Pawan Sasanka Ammanamanchi",
"Thomas Wang",
"Benoît Sagot",
"Niklas Muennighoff",
"Albert Villanova del Moral",
"Olatunji Ruwase",
"Rachel Bawden",
"Stas Bekman",
"Angelina McMillan-Major",
"Iz Beltagy",
"Huu Nguyen",
"Lucile Saulnier",
"Samson Tan",
"Pedro Ortiz Suarez",
"Victor Sanh",
"Hugo Laurenccon",
"Yacine Jernite",
"Julien Launay",
"Margaret Mitchell",
"Colin Raffel",
"Aaron Gokaslan",
"Adi Simhi",
"Aitor Soroa Etxabe",
"Alham Fikri Aji",
"Amit Alfassy",
"Anna Rogers",
"Ariel Kreisberg Nitzav",
"Canwen Xu",
"Chenghao Mou",
"Chris C. Emezue",
"Christopher Klamm",
"Colin Leong",
"Daniel Alexander van Strien",
"David Ifeoluwa Adelani",
"Dragomir R. Radev",
"E. G. Ponferrada",
"Efrat Levkovizh",
"Ethan Kim",
"Eyal Natan",
"F. Toni",
"Gérard Dupont",
"Germán Kruszewski",
"Giada Pistilli",
"Hady ElSahar",
"Hamza Benyamina",
"H. Tran",
"Ian Yu",
"Idris Abdulmumin",
"Isaac Johnson",
"Itziar Gonzalez-Dios",
"Javier de la Rosa",
"Jenny Chim",
"Jesse Dodge",
"Jian Zhu",
"Jonathan Chang",
"Jorg Frohberg",
"Josephine Tobing",
"J. Bhattacharjee",
"Khalid Almubarak",
"Kimbo Chen",
"Kyle Lo",
"Leandro von Werra",
"Leon Weber",
"Long Phan",
"Loubna Ben Allal",
"Ludovic Tanguy",
"Manan Dey",
"M. Muñoz",
"Maraim Masoud",
"Mar'ia Grandury",
"Mario vSavsko",
"Max Huang",
"Maximin Coavoux",
"Mayank Singh",
"Mike Tian-Jian Jiang",
"Minh Chien Vu",
"M. A. Jauhar",
"Mustafa Ghaleb",
"Nishant Subramani",
"Nora Kassner",
"Nurulaqilla Khamis",
"Olivier Nguyen",
"Omar Espejel",
"Ona de Gibert",
"Paulo Villegas",
"Peter Henderson",
"Pierre Colombo",
"Priscilla Amuok",
"Quentin Lhoest",
"Rheza Harliman",
"Rishi Bommasani",
"R. L'opez",
"Rui Ribeiro",
"Salomey Osei",
"S. Pyysalo",
"Sebastian Nagel",
"Shamik Bose",
"Shamsuddeen Hassan Muhammad",
"Shanya Sharma",
"S. Longpre",
"Somaieh Nikpoor",
"S. Silberberg",
"S. Pai",
"S. Zink",
"Tiago Timponi Torrent",
"Timo Schick",
"Tristan Thrush",
"V. Danchev",
"Vassilina Nikoulina",
"Veronika Laippala",
"Violette Lepercq",
"V. Prabhu",
"Zaid Alyafeai",
"Zeerak Talat",
"Arun Raja",
"Benjamin Heinzerling",
"Chenglei Si",
"Elizabeth Salesky",
"Sabrina J. Mielke",
"Wilson Y. Lee",
"Abheesht Sharma",
"Andrea Santilli",
"Antoine Chaffin",
"Arnaud Stiegler",
"Debajyoti Datta",
"Eliza Szczechla",
"Gunjan Chhablani",
"Han Wang",
"Harshit Pandey",
"Hendrik Strobelt",
"Jason Alan Fries",
"Jos Rozen",
"Leo Gao",
"Lintang Sutawika",
"M Saiful Bari",
"Maged S. Al-Shaibani",
"Matteo Manica",
"Nihal V. Nayak",
"Ryan Teehan",
"Samuel Albanie",
"Sheng Shen",
"Srulik Ben-David",
"Stephen H. Bach",
"Taewoon Kim",
"T. Bers",
"Thibault Févry",
"Trishala Neeraj",
"Urmish Thakker",
"Vikas Raunak",
"Xiang Tang",
"Zheng-Xin Yong",
"Zhiqing Sun",
"Shaked Brody",
"Y. Uri",
"Hadar Tojarieh",
"Adam Roberts",
"Hyung Won Chung",
"Jaesung Tae",
"Jason Phang",
"Ofir Press",
"Conglong Li",
"D. Narayanan",
"Hatim Bourfoune",
"J. Casper",
"Jeff Rasley",
"Max Ryabinin",
"Mayank Mishra",
"Minjia Zhang",
"M. Shoeybi",
"Myriam Peyrounette",
"N. Patry",
"Nouamane Tazi",
"Omar Sanseviero",
"Patrick von Platen",
"Pierre Cornette",
"Pierre Franccois Lavall'ee",
"R. Lacroix",
"Samyam Rajbhandari",
"Sanchit Gandhi",
"Shaden Smith",
"S. Requena",
"Suraj Patil",
"Tim Dettmers",
"Ahmed Baruwa",
"Amanpreet Singh",
"Anastasia Cheveleva",
"Anne-Laure Ligozat",
"Arjun Subramonian",
"Aur'elie N'ev'eol",
"Charles Lovering",
"Daniel H Garrette",
"D. Tunuguntla",
"Ehud Reiter",
"Ekaterina Taktasheva",
"E. Voloshina",
"Eli Bogdanov",
"Genta Indra Winata",
"Hailey Schoelkopf",
"Jan-Christoph Kalo",
"Jekaterina Novikova",
"J. Forde",
"Xiangru Tang",
"Jungo Kasai",
"Ken Kawamura",
"Liam Hazan",
"Marine Carpuat",
"Miruna Clinciu",
"Najoung Kim",
"Newton Cheng",
"O. Serikov",
"Omer Antverg",
"Oskar van der Wal",
"Rui Zhang",
"Ruochen Zhang",
"Sebastian Gehrmann",
"Shachar Mirkin",
"S. Pais",
"Tatiana Shavrina",
"Thomas Scialom",
"Tian Yun",
"Tomasz Limisiewicz",
"Verena Rieser",
"Vitaly Protasov",
"V. Mikhailov",
"Yada Pruksachatkun",
"Yonatan Belinkov",
"Zachary Bamberger",
"Zdenvek Kasner",
"Zdeněk Kasner",
"A. Pestana",
"A. Feizpour",
"Ammar Khan",
"Amy Faranak",
"A. Santos",
"Anthony Hevia",
"Antigona Unldreaj",
"Arash Aghagol",
"Arezoo Abdollahi",
"A. Tammour",
"A. HajiHosseini",
"Bahareh Behroozi",
"Benjamin Ayoade Ajibade",
"B. Saxena",
"Carlos Muñoz Ferrandis",
"Danish Contractor",
"D. Lansky",
"Davis David",
"Douwe Kiela",
"D. A. Nguyen",
"Edward Tan",
"Emi Baylor",
"Ezinwanne Ozoani",
"F. Mirza",
"Frankline Ononiwu",
"Habib Rezanejad",
"H.A. Jones",
"Indrani Bhattacharya",
"Irene Solaiman",
"Irina Sedenko",
"I. Nejadgholi",
"J. Passmore",
"Joshua Seltzer",
"Julio Bonis Sanz",
"Karen Fort",
"Lívia Dutra",
"Mairon Samagaio",
"Maraim Elbadri",
"Margot Mieskes",
"Marissa Gerchick",
"Martha Akinlolu",
"Michael McKenna",
"Mike Qiu",
"M. Ghauri",
"Mykola Burynok",
"Nafis Abrar",
"Nazneen Rajani",
"Nour Elkott",
"N. Fahmy",
"Olanrewaju Samuel",
"Ran An",
"R. Kromann",
"Ryan Hao",
"S. Alizadeh",
"Sarmad Shubber",
"Silas L. Wang",
"Sourav Roy",
"S. Viguier",
"Thanh-Cong Le",
"Tobi Oyebade",
"T. Le",
"Yoyo Yang",
"Zach Nguyen",
"Abhinav Ramesh Kashyap",
"Alfredo Palasciano",
"A. Callahan",
"Anima Shukla",
"Antonio Miranda-Escalada",
"A. Singh",
"Benjamin Beilharz",
"Bo Wang",
"C. Brito",
"Chenxi Zhou",
"Chirag Jain",
"Chuxin Xu",
"Clémentine Fourrier",
"Daniel Le'on Perin'an",
"Daniel Molano",
"Dian Yu",
"Enrique Manjavacas",
"Fabio Barth",
"Florian Fuhrimann",
"Gabriel Altay",
"Giyaseddin Bayrak",
"Gully Burns",
"Helena U. Vrabec",
"I. Bello",
"Isha Dash",
"J. Kang",
"John Giorgi",
"Jonas Golde",
"J. Posada",
"Karthi Sivaraman",
"Lokesh Bulchandani",
"Lu Liu",
"Luisa Shinzato",
"Madeleine Hahn de Bykhovetz",
"Maiko Takeuchi",
"Marc Pàmies",
"M. A. Castillo",
"Marianna Nezhurina",
"Mario Sanger",
"M. Samwald",
"Michael Cullan",
"Michael Weinberg",
"M. Wolf",
"Mina Mihaljcic",
"Minna Liu",
"M. Freidank",
"Myungsun Kang",
"Natasha Seelam",
"N. Dahlberg",
"N. Broad",
"N. Muellner",
"Pascale Fung",
"Patricia Haller",
"Patrick Haller",
"R. Eisenberg",
"Robert Martin",
"Rodrigo Canalli",
"Rosaline Su",
"Ruisi Su",
"Samuel Cahyawijaya",
"Samuele Garda",
"Shlok S Deshmukh",
"Shubhanshu Mishra",
"Sid Kiblawi",
"Simon Ott",
"Sinee Sang-aroonsiri",
"Srishti Kumar",
"Stefan Schweter",
"S. Bharati",
"Tanmay Laud",
"Théo Gigant",
"Tomoya Kainuma",
"Wojciech Kusa",
"Yanis Labrak",
"Yashasvi Bajaj",
"Y. Venkatraman",
"Yifan Xu",
"Ying Xu",
"Yu Xu",
"Z. Tan",
"Zhongli Xie",
"Zifan Ye",
"M. Bras",
"Younes Belkada",
"Thomas Wolf"
],
"externalIds": {
"DBLP": "journals/corr/abs-2211-05100",
"ArXiv": "2211.05100",
"DOI": "10.48550/arXiv.2211.05100",
"CorpusId": 253420279
},
"url": "https://www.semanticscholar.org/paper/964bd39b546f0f6625ff3b9ef1083f797807ef2e",
"referenceCount": 171,
"citationCount": 1861,
"influentialCitationCount": 196,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "BioGPT: Generative Pre-trained Transformer for Biomedical Text Generation and Mining",
"abstract": "Pre-trained language models have attracted increasing attention in the biomedical domain, inspired by their great success in the general natural language domain. Among the two main branches of pre-trained language models in the general language domain, i.e. BERT (and its variants) and GPT (and its variants), the first one has been extensively studied in the biomedical domain, such as BioBERT and PubMedBERT. While they have achieved great success on a variety of discriminative downstream biomedical tasks, the lack of generation ability constrains their application scope. In this paper, we propose BioGPT, a domain-specific generative Transformer language model pre-trained on large-scale biomedical literature. We evaluate BioGPT on six biomedical natural language processing tasks and demonstrate that our model outperforms previous models on most tasks. Especially, we get 44.98%, 38.42% and 40.76% F1 score on BC5CDR, KD-DTI and DDI end-to-end relation extraction tasks, respectively, and 78.2% accuracy on PubMedQA, creating a new record. Our case study on text generation further demonstrates the advantage of BioGPT on biomedical literature to generate fluent descriptions for biomedical terms.",
"year": 2022,
"venue": "Briefings Bioinform.",
"authors": [
"Renqian Luo",
"Liai Sun",
"Yingce Xia",
"Tao Qin",
"Sheng Zhang",
"Hoifung Poon",
"Tie-Yan Liu"
],
"externalIds": {
"ArXiv": "2210.10341",
"DBLP": "journals/bib/LuoSXQZPL22",
"DOI": "10.1093/bib/bbac409",
"CorpusId": 252542956,
"PubMed": "36156661"
},
"url": "https://www.semanticscholar.org/paper/44279244407a64431810f982be6d0c7da4429dd7",
"referenceCount": 59,
"citationCount": 549,
"influentialCitationCount": 52,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Medicine"
]
},
{
"title": "Annotation of biologically relevant ligands in UniProtKB using ChEBI",
"abstract": "Motivation To provide high quality, computationally tractable annotation of binding sites for biologically relevant (cognate) ligands in UniProtKB using the chemical ontology ChEBI (Chemical Entities of Biological Interest), to better support efforts to study and predict functionally relevant interactions between proteins and small molecule ligands. Results We structured the data model for cognate ligand binding site annotations in UniProtKB and performed a complete reannotation of all cognate ligand binding sites using stable unique identifiers from ChEBI, which we now use as the reference vocabulary for all such annotations. We developed improved search and query facilities for cognate ligands in the UniProt website, REST API and SPARQL endpoint that leverage the chemical structure data, nomenclature, and classification that ChEBI provides. Availability Binding site annotations for cognate ligands described using ChEBI are available for UniProtKB protein sequence records in several formats (text, XML, and RDF), and are freely available to query and download through the UniProt website (www.uniprot.org), REST API (www.uniprot.org/help/api), SPARQL endpoint (sparql.uniprot.org/), and FTP site (https://ftp.uniprot.org/pub/databases/uniprot/). Contact alan.bridge@sib.swiss Supplementary information Supplementary Table 1.",
"year": 2022,
"venue": "bioRxiv",
"authors": [
"E. Coudert",
"S. Gehant",
"Eduoard de Castro",
"Monica Pozzato",
"Delphine Baratin",
"T. Neto",
"Christian J. A. Sigrist",
"Nicole Redaschi",
"A. Bridge"
],
"externalIds": {
"DBLP": "journals/bioinformatics/CoudertGCPBNSRBBAAAABBNB23",
"PubMedCentral": "9825770",
"DOI": "10.1093/bioinformatics/btac793",
"CorpusId": 251815490,
"PubMed": "36484697"
},
"url": "https://www.semanticscholar.org/paper/9241c2abeca22f3f46b6c0798509d94fd9323377",
"referenceCount": 36,
"citationCount": 102,
"influentialCitationCount": 4,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Biology",
"Medicine"
]
},
{
"title": "BioRED: a rich biomedical relation extraction dataset",
"abstract": "Abstract Automated relation extraction (RE) from biomedical literature is critical for many downstream text mining applications in both research and real-world settings. However, most existing benchmarking datasets for biomedical RE only focus on relations of a single type (e.g. protein–protein interactions) at the sentence level, greatly limiting the development of RE systems in biomedicine. In this work, we first review commonly used named entity recognition (NER) and RE datasets. Then, we present a first-of-its-kind biomedical relation extraction dataset (BioRED) with multiple entity types (e.g. gene/protein, disease, chemical) and relation pairs (e.g. gene–disease; chemical–chemical) at the document level, on a set of 600 PubMed abstracts. Furthermore, we label each relation as describing either a novel finding or previously known background knowledge, enabling automated algorithms to differentiate between novel and background information. We assess the utility of BioRED by benchmarking several existing state-of-the-art methods, including Bidirectional Encoder Representations from Transformers (BERT)-based models, on the NER and RE tasks. Our results show that while existing approaches can reach high performance on the NER task (F-score of 89.3%), there is much room for improvement for the RE task, especially when extracting novel relations (F-score of 47.7%). Our experiments also demonstrate that such a rich dataset can successfully facilitate the development of more accurate, efficient and robust RE systems for biomedicine. Availability: The BioRED dataset and annotation guidelines are freely available at https://ftp.ncbi.nlm.nih.gov/pub/lu/BioRED/.",
"year": 2022,
"venue": "Briefings Bioinform.",
"authors": [
"Ling Luo",
"Po-Ting Lai",
"Chih-Hsuan Wei",
"C. Arighi",
"Zhiyong Lu"
],
"externalIds": {
"DBLP": "journals/bib/LuoLWAL22",
"PubMedCentral": "9487702",
"ArXiv": "2204.04263",
"DOI": "10.1093/bib/bbac282",
"CorpusId": 249921231,
"PubMed": "35849818"
},
"url": "https://www.semanticscholar.org/paper/175dcaf0e3a8a6c702e13f1d1d656ff31484b66a",
"referenceCount": 79,
"citationCount": 86,
"influentialCitationCount": 10,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Medicine"
]
},
{
"title": "Interpretable RNA Foundation Model from Unannotated Data for Highly Accurate RNA Structure and Function Predictions",
"abstract": "Non-coding RNA structure and function are essential to understanding various biological processes, such as cell signaling, gene expression, and post-transcriptional regulations. These are all among the core problems in the RNA field. With the rapid growth of sequencing technology, we have accumulated a massive amount of unannotated RNA sequences. On the other hand, expensive experimental observatory results in only limited numbers of annotated data and 3D structures. Hence, it is still challenging to design computational methods for predicting their structures and functions. The lack of annotated data and systematic study causes inferior performance. To resolve the issue, we propose a novel RNA foundation model (RNA-FM) to take advantage of all the 23 million non-coding RNA sequences through self-supervised learning. Within this approach, we discover that the pre-trained RNA-FM could infer sequential and evolutionary information of non-coding RNAs without using any labels. Furthermore, we demonstrate RNA-FM’s effectiveness by applying it to the downstream secondary/3D structure prediction, SARS-CoV-2 genome structure and evolution prediction, protein-RNA binding preference modeling, and gene expression regulation modeling. The comprehensive experiments show that the proposed method improves the RNA structural and functional modelling results significantly and consistently. Despite only being trained with unlabelled data, RNA-FM can serve as the foundational model for the field.",
"year": 2022,
"venue": "bioRxiv",
"authors": [
"Jiayang Chen",
"Zhihang Hu",
"Siqi Sun",
"Qingxiong Tan",
"Yixuan Wang",
"Qinze Yu",
"Licheng Zong",
"Liang Hong",
"Jin Xiao",
"Irwin King",
"Yu Li"
],
"externalIds": {
"ArXiv": "2204.00300",
"DOI": "10.1101/2022.08.06.503062",
"CorpusId": 247922548
},
"url": "https://www.semanticscholar.org/paper/07264347e959913a6ea37953d9c0e30ed4efb3ba",
"referenceCount": 70,
"citationCount": 69,
"influentialCitationCount": 17,
"isOpenAccess": true,
"fieldsOfStudy": [
"Biology"
]
},
{
"title": "TBGA: a large-scale Gene-Disease Association dataset for Biomedical Relation Extraction",
"abstract": null,
"year": 2022,
"venue": "BMC Bioinformatics",
"authors": [
"S. Marchesin",
"G. Silvello"
],
"externalIds": {
"PubMedCentral": "8973894",
"DBLP": "journals/bmcbi/MarchesinS22",
"DOI": "10.1186/s12859-022-04646-6",
"CorpusId": 247846651,
"PubMed": "35361129"
},
"url": "https://www.semanticscholar.org/paper/36812ad807a493aac6143d1f8fc7aea16992de94",
"referenceCount": 59,
"citationCount": 19,
"influentialCitationCount": 2,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine",
"Computer Science"
]
},
{
"title": "Training language models to follow instructions with human feedback",
"abstract": "Making language models bigger does not inherently make them better at following a user's intent. For example, large language models can generate outputs that are untruthful, toxic, or simply not helpful to the user. In other words, these models are not aligned with their users. In this paper, we show an avenue for aligning language models with user intent on a wide range of tasks by fine-tuning with human feedback. Starting with a set of labeler-written prompts and prompts submitted through the OpenAI API, we collect a dataset of labeler demonstrations of the desired model behavior, which we use to fine-tune GPT-3 using supervised learning. We then collect a dataset of rankings of model outputs, which we use to further fine-tune this supervised model using reinforcement learning from human feedback. We call the resulting models InstructGPT. In human evaluations on our prompt distribution, outputs from the 1.3B parameter InstructGPT model are preferred to outputs from the 175B GPT-3, despite having 100x fewer parameters. Moreover, InstructGPT models show improvements in truthfulness and reductions in toxic output generation while having minimal performance regressions on public NLP datasets. Even though InstructGPT still makes simple mistakes, our results show that fine-tuning with human feedback is a promising direction for aligning language models with human intent.",
"year": 2022,
"venue": "Neural Information Processing Systems",
"authors": [
"Long Ouyang",
"Jeff Wu",
"Xu Jiang",
"Diogo Almeida",
"Carroll L. Wainwright",
"Pamela Mishkin",
"Chong Zhang",
"Sandhini Agarwal",
"Katarina Slama",
"Alex Ray",
"John Schulman",
"Jacob Hilton",
"Fraser Kelton",
"Luke E. Miller",
"Maddie Simens",
"Amanda Askell",
"P. Welinder",
"P. Christiano",
"J. Leike",
"Ryan J. Lowe"
],
"externalIds": {
"DBLP": "conf/nips/Ouyang0JAWMZASR22",
"ArXiv": "2203.02155",
"CorpusId": 246426909
},
"url": "https://www.semanticscholar.org/paper/d766bffc357127e0dc86dd69561d5aeb520d6f4c",
"referenceCount": 83,
"citationCount": 8501,
"influentialCitationCount": 1117,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Informative RNA base embedding for RNA structural alignment and clustering by deep representation learning",
"abstract": "Abstract Effective embedding is actively conducted by applying deep learning to biomolecular information. Obtaining better embeddings enhances the quality of downstream analyses, such as DNA sequence motif detection and protein function prediction. In this study, we adopt a pre-training algorithm for the effective embedding of RNA bases to acquire semantically rich representations and apply this algorithm to two fundamental RNA sequence problems: structural alignment and clustering. By using the pre-training algorithm to embed the four bases of RNA in a position-dependent manner using a large number of RNA sequences from various RNA families, a context-sensitive embedding representation is obtained. As a result, not only base information but also secondary structure and context information of RNA sequences are embedded for each base. We call this ‘informative base embedding’ and use it to achieve accuracies superior to those of existing state-of-the-art methods on RNA structural alignment and RNA family clustering tasks. Furthermore, upon performing RNA sequence alignment by combining this informative base embedding with a simple Needleman–Wunsch alignment algorithm, we succeed in calculating structural alignments with a time complexity of O(n2) instead of the O(n6) time complexity of the naive implementation of Sankoff-style algorithm for input RNA sequence of length n.",
"year": 2022,
"venue": "NAR Genomics and Bioinformatics",
"authors": [
"Manato Akiyama",
"Y. Sakakibara"
],
"externalIds": {
"PubMedCentral": "8862729",
"DOI": "10.1093/nargab/lqac012",
"CorpusId": 247064566,
"PubMed": "35211670"
},
"url": "https://www.semanticscholar.org/paper/6538421d45c27cbb9b655d86d1974d7886f8895e",
"referenceCount": 42,
"citationCount": 30,
"influentialCitationCount": 1,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "Interpreting Language Models Through Knowledge Graph Extraction",
"abstract": "Transformer-based language models trained on large text corpora have enjoyed immense popularity in the natural language processing community and are commonly used as a starting point for downstream tasks. While these models are undeniably useful, it is a challenge to quantify their performance beyond traditional accuracy metrics. In this paper, we compare BERT-based language models through snapshots of acquired knowledge at sequential stages of the training process. Structured relationships from training corpora may be uncovered through querying a masked language model with probing tasks. We present a methodology to unveil a knowledge acquisition timeline by generating knowledge graph extracts from cloze\"fill-in-the-blank\"statements at various stages of RoBERTa's early training. We extend this analysis to a comparison of pretrained variations of BERT models (DistilBERT, BERT-base, RoBERTa). This work proposes a quantitative framework to compare language models through knowledge graph extraction (GED, Graph2Vec) and showcases a part-of-speech analysis (POSOR) to identify the linguistic strengths of each model variant. Using these metrics, machine learning practitioners can compare models, diagnose their models' behavioral strengths and weaknesses, and identify new targeted datasets to improve model performance.",
"year": 2021,
"venue": "arXiv.org",
"authors": [
"Vinitra Swamy",
"Angelika Romanou",
"Martin Jaggi"
],
"externalIds": {
"ArXiv": "2111.08546",
"DBLP": "journals/corr/abs-2111-08546",
"CorpusId": 244129987
},
"url": "https://www.semanticscholar.org/paper/a0118fc91478bde959d41c4e2231f1767915d207",
"referenceCount": 44,
"citationCount": 18,
"influentialCitationCount": 0,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "HumanNet v3: an improved database of human gene networks for disease research",
"abstract": "Abstract Network medicine has proven useful for dissecting genetic organization of complex human diseases. We have previously published HumanNet, an integrated network of human genes for disease studies. Since the release of the last version of HumanNet, many large-scale protein–protein interaction datasets have accumulated in public depositories. Additionally, the numbers of research papers and functional annotations for gene–phenotype associations have increased significantly. Therefore, updating HumanNet is a timely task for further improvement of network-based research into diseases. Here, we present HumanNet v3 (https://www.inetbio.org/humannet/, covering 99.8% of human protein coding genes) constructed by means of the expanded data with improved network inference algorithms. HumanNet v3 supports a three-tier model: HumanNet-PI (a protein–protein physical interaction network), HumanNet-FN (a functional gene network), and HumanNet-XC (a functional network extended by co-citation). Users can select a suitable tier of HumanNet for their study purpose. We showed that on disease gene predictions, HumanNet v3 outperforms both the previous HumanNet version and other integrated human gene networks. Furthermore, we demonstrated that HumanNet provides a feasible approach for selecting host genes likely to be associated with COVID-19.",
"year": 2021,
"venue": "Nucleic Acids Res.",
"authors": [
"Chan Yeong Kim",
"S. Baek",
"Junha Cha",
"Sunmo Yang",
"Eiru Kim",
"E. Marcotte",
"Traver Hart",
"Insuk Lee"
],
"externalIds": {
"PubMedCentral": "8728227",
"DBLP": "journals/nar/KimBCYKMHL22",
"DOI": "10.1093/nar/gkab1048",
"CorpusId": 243846739,
"PubMed": "34747468"
},
"url": "https://www.semanticscholar.org/paper/6790960bfe5fa7daa97542ae7c9aa575e876660f",
"referenceCount": 39,
"citationCount": 68,
"influentialCitationCount": 5,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine",
"Computer Science"
]
},
{
"title": "Improved prediction of protein-protein interactions using AlphaFold2",
"abstract": null,
"year": 2021,
"venue": "Nature Communications",
"authors": [
"P. Bryant",
"G. Pozzati",
"A. Elofsson"
],
"externalIds": {
"PubMedCentral": "8913741",
"DOI": "10.1038/s41467-022-28865-w",
"CorpusId": 237550334,
"PubMed": "35273146"
},
"url": "https://www.semanticscholar.org/paper/dfabd934d91231c536712271a8627e8eaa84ade7",
"referenceCount": 59,
"citationCount": 487,
"influentialCitationCount": 32,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "Biologically informed deep neural network for prostate cancer discovery",
"abstract": null,
"year": 2021,
"venue": "Nature",
"authors": [
"Haitham A. Elmarakeby",
"Justin H Hwang",
"Rand Arafeh",
"J. Crowdis",
"Sydney Gang",
"David Liu",
"S. AlDubayan",
"K. Salari",
"Steven Kregel",
"Camden Richter",
"Taylor E. Arnoff",
"Jihye Park",
"W. Hahn",
"Eliezer M Van Allen"
],
"externalIds": {
"PubMedCentral": "8514339",
"DBLP": "journals/nature/ElmarakebyHACGL21",
"DOI": "10.1038/s41586-021-03922-4",
"CorpusId": 237607862,
"PubMed": "34552244"
},
"url": "https://www.semanticscholar.org/paper/2b24bb6cbfa2b3118b01a47aebf58308f4eb8ad5",
"referenceCount": 57,
"citationCount": 198,
"influentialCitationCount": 11,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Medicine"
]
},
{
"title": "LoRA: Low-Rank Adaptation of Large Language Models",
"abstract": "An important paradigm of natural language processing consists of large-scale pre-training on general domain data and adaptation to particular tasks or domains. As we pre-train larger models, full fine-tuning, which retrains all model parameters, becomes less feasible. Using GPT-3 175B as an example -- deploying independent instances of fine-tuned models, each with 175B parameters, is prohibitively expensive. We propose Low-Rank Adaptation, or LoRA, which freezes the pre-trained model weights and injects trainable rank decomposition matrices into each layer of the Transformer architecture, greatly reducing the number of trainable parameters for downstream tasks. Compared to GPT-3 175B fine-tuned with Adam, LoRA can reduce the number of trainable parameters by 10,000 times and the GPU memory requirement by 3 times. LoRA performs on-par or better than fine-tuning in model quality on RoBERTa, DeBERTa, GPT-2, and GPT-3, despite having fewer trainable parameters, a higher training throughput, and, unlike adapters, no additional inference latency. We also provide an empirical investigation into rank-deficiency in language model adaptation, which sheds light on the efficacy of LoRA. We release a package that facilitates the integration of LoRA with PyTorch models and provide our implementations and model checkpoints for RoBERTa, DeBERTa, and GPT-2 at https://github.com/microsoft/LoRA.",
"year": 2021,
"venue": "International Conference on Learning Representations",
"authors": [
"J. E. Hu",
"Yelong Shen",
"Phillip Wallis",
"Zeyuan Allen-Zhu",
"Yuanzhi Li",
"Shean Wang",
"Weizhu Chen"
],
"externalIds": {
"DBLP": "conf/iclr/HuSWALWWC22",
"ArXiv": "2106.09685",
"CorpusId": 235458009
},
"url": "https://www.semanticscholar.org/paper/a8ca46b171467ceb2d7652fbfb67fe701ad86092",
"referenceCount": 65,
"citationCount": 5654,
"influentialCitationCount": 1004,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "HUMANNET—A Two-Tiered Deep Neural Network Architecture for Self-Occluding Humanoid Pose Reconstruction",
"abstract": "Majority of current research focuses on a single static object reconstruction from a given pointcloud. However, the existing approaches are not applicable to real world applications such as dynamic and morphing scene reconstruction. To solve this, we propose a novel two-tiered deep neural network architecture, which is capable of reconstructing self-obstructed human-like morphing shapes from a depth frame in conjunction with cameras intrinsic parameters. The tests were performed using on custom dataset generated using a combination of AMASS and MoVi datasets. The proposed network achieved Jaccards’ Index of 0.7907 for the first tier, which is used to extract region of interest from the point cloud. The second tier of the network has achieved Earth Mover’s distance of 0.0256 and Chamfer distance of 0.276, indicating good experimental results. Further, subjective reconstruction results inspection shows strong predictive capabilities of the network, with the solution being able to reconstruct limb positions from very few object details.",
"year": 2021,
"venue": "Italian National Conference on Sensors",
"authors": [
"A. Kulikajevas",
"R. Maskeliūnas",
"R. Damaševičius",
"R. Scherer"
],
"externalIds": {
"PubMedCentral": "8229438",
"DBLP": "journals/sensors/KulikajevasMDS21",
"DOI": "10.3390/s21123945",
"CorpusId": 235642496,
"PubMed": "34201039"
},
"url": "https://www.semanticscholar.org/paper/f995127537275fb5764e8c20510c70ce1842c4e7",
"referenceCount": 63,
"citationCount": 11,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine",
"Computer Science"
]
},
{
"title": "SciFive: a text-to-text transformer model for biomedical literature",
"abstract": "In this report, we introduce SciFive, a domain-specific T5 model that has been pre-trained on large biomedical corpora. Our model outperforms the current SOTA methods (i.e. BERT, BioBERT, Base T5) on tasks in named entity relation, relation extraction, natural language inference, and question-answering. We show that text-generation methods have significant potential in a broad array of biomedical NLP tasks, particularly those requiring longer, more complex outputs. Our results support the exploration of more difficult text generation tasks and the development of new methods in this area",
"year": 2021,
"venue": "arXiv.org",
"authors": [
"Long Phan",
"J. Anibal",
"H. Tran",
"Shaurya Chanana",
"Erol Bahadroglu",
"Alec Peltekian",
"G. Altan-Bonnet"
],
"externalIds": {
"DBLP": "journals/corr/abs-2106-03598",
"ArXiv": "2106.03598",
"CorpusId": 235358786
},
"url": "https://www.semanticscholar.org/paper/6003d268e9b5230dbc3e320497b50329d6186816",
"referenceCount": 23,
"citationCount": 134,
"influentialCitationCount": 25,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Effective gene expression prediction from sequence by integrating long-range interactions",
"abstract": null,
"year": 2021,
"venue": "Nature Methods",
"authors": [
"Žiga Avsec",
"Vikram Agarwal",
"D. Visentin",
"J. Ledsam",
"A. Grabska-Barwinska",
"Kyle R. Taylor",
"Yannis Assael",
"J. Jumper",
"Pushmeet Kohli",
"David R. Kelley"
],
"externalIds": {
"PubMedCentral": "8490152",
"DOI": "10.1038/s41592-021-01252-x",
"CorpusId": 233245955,
"PubMed": "34608324"
},
"url": "https://www.semanticscholar.org/paper/e12e837cb2e9baeaefdcab06fe1c75add8f46389",
"referenceCount": 54,
"citationCount": 521,
"influentialCitationCount": 68,
"isOpenAccess": true,
"fieldsOfStudy": [
"Biology",
"Medicine"
]
},
{
"title": "Learning Transferable Visual Models From Natural Language Supervision",
"abstract": "State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP.",
"year": 2021,
"venue": "International Conference on Machine Learning",
"authors": [
"Alec Radford",
"Jong Wook Kim",
"Chris Hallacy",
"A. Ramesh",
"Gabriel Goh",
"Sandhini Agarwal",
"Girish Sastry",
"Amanda Askell",
"Pamela Mishkin",
"Jack Clark",
"Gretchen Krueger",
"I. Sutskever"
],
"externalIds": {
"ArXiv": "2103.00020",
"DBLP": "conf/icml/RadfordKHRGASAM21",
"CorpusId": 231591445
},
"url": "https://www.semanticscholar.org/paper/6f870f7f02a8c59c3e23f407f3ef00dd1dcf8fc4",
"referenceCount": 220,
"citationCount": 18891,
"influentialCitationCount": 5011,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Making Pre-trained Language Models Better Few-shot Learners",
"abstract": "The recent GPT-3 model (Brown et al., 2020) achieves remarkable few-shot performance solely by leveraging a natural-language prompt and a few task demonstrations as input context. Inspired by their findings, we study few-shot learning in a more practical scenario, where we use smaller language models for which fine-tuning is computationally efficient. We present LM-BFF—better few-shot fine-tuning of language models—a suite of simple and complementary techniques for fine-tuning language models on a small number of annotated examples. Our approach includes (1) prompt-based fine-tuning together with a novel pipeline for automating prompt generation; and (2) a refined strategy for dynamically and selectively incorporating demonstrations into each context. Finally, we present a systematic evaluation for analyzing few-shot performance on a range of NLP tasks, including classification and regression. Our experiments demonstrate that our methods combine to dramatically outperform standard fine-tuning procedures in this low resource setting, achieving up to 30% absolute improvement, and 11% on average across all tasks. Our approach makes minimal assumptions on task resources and domain expertise, and hence constitutes a strong task-agnostic method for few-shot learning.",
"year": 2021,
"venue": "Annual Meeting of the Association for Computational Linguistics",
"authors": [
"Tianyu Gao",
"Adam Fisch",
"Danqi Chen"
],
"externalIds": {
"DBLP": "journals/corr/abs-2012-15723",
"ACL": "2021.acl-long.295",
"ArXiv": "2012.15723",
"DOI": "10.18653/v1/2021.acl-long.295",
"CorpusId": 229923710
},
"url": "https://www.semanticscholar.org/paper/85e7d63f75c0916bd350a229e040c5fbb1472e7a",
"referenceCount": 60,
"citationCount": 1655,
"influentialCitationCount": 219,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "The complex relationship between genotype, pathology and phenotype in familial dementia",
"abstract": null,
"year": 2020,
"venue": "Neurobiology of Disease",
"authors": [
"J. Kwok",
"C. Loy",
"C. Dobson-Stone",
"G. Halliday"
],
"externalIds": {
"MAG": "3084680571",
"DOI": "10.1016/j.nbd.2020.105082",
"CorpusId": 221618977,
"PubMed": "32927063"
},
"url": "https://www.semanticscholar.org/paper/8c6de8cf0ea22d76512ddb0c9a2eac1e8a0abd33",
"referenceCount": 124,
"citationCount": 10,
"influentialCitationCount": 1,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine",
"Biology"
]
},
{
"title": "Recent Advances in Network-based Methods for Disease Gene Prediction",
"abstract": "Disease-gene association through genome-wide association study (GWAS) is an arduous task for researchers. Investigating single nucleotide polymorphisms that correlate with specific diseases needs statistical analysis of associations. Considering the huge number of possible mutations, in addition to its high cost, another important drawback of GWAS analysis is the large number of false positives. Thus, researchers search for more evidence to cross-check their results through different sources. To provide the researchers with alternative and complementary low-cost disease-gene association evidence, computational approaches come into play. Since molecular networks are able to capture complex interplay among molecules in diseases, they become one of the most extensively used data for disease-gene association prediction. In this survey, we aim to provide a comprehensive and up-to-date review of network-based methods for disease gene prediction. We also conduct an empirical analysis on 14 state-of-the-art methods. To summarize, we first elucidate the task definition for disease gene prediction. Secondly, we categorize existing network-based efforts into network diffusion methods, traditional machine learning methods with handcrafted graph features and graph representation learning methods. Thirdly, an empirical analysis is conducted to evaluate the performance of the selected methods across seven diseases. We also provide distinguishing findings about the discussed methods based on our empirical analysis. Finally, we highlight potential research directions for future studies on disease gene prediction.",
"year": 2020,
"venue": "Briefings Bioinform.",
"authors": [
"S. Ata",
"Min Wu",
"Yuan Fang",
"Le Ou-Yang",
"C. Kwoh",
"Xiaoli Li"
],
"externalIds": {
"ArXiv": "2007.10848",
"DBLP": "journals/bib/AtaWFOKL21",
"MAG": "3045143113",
"DOI": "10.1093/bib/bbaa303",
"CorpusId": 220665489,
"PubMed": "33276376"
},
"url": "https://www.semanticscholar.org/paper/41da11e78994d1dc4feaf09b3bc46b1f05cdf228",
"referenceCount": 125,
"citationCount": 49,
"influentialCitationCount": 3,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Medicine",
"Biology"
]
},
{
"title": "Language Models are Few-Shot Learners",
"abstract": "Recent work has demonstrated substantial gains on many NLP tasks and benchmarks by pre-training on a large corpus of text followed by fine-tuning on a specific task. While typically task-agnostic in architecture, this method still requires task-specific fine-tuning datasets of thousands or tens of thousands of examples. By contrast, humans can generally perform a new language task from only a few examples or from simple instructions - something which current NLP systems still largely struggle to do. Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting. For all tasks, GPT-3 is applied without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model. GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic. At the same time, we also identify some datasets where GPT-3's few-shot learning still struggles, as well as some datasets where GPT-3 faces methodological issues related to training on large web corpora. Finally, we find that GPT-3 can generate samples of news articles which human evaluators have difficulty distinguishing from articles written by humans. We discuss broader societal impacts of this finding and of GPT-3 in general.",
"year": 2020,
"venue": "Neural Information Processing Systems",
"authors": [
"Tom B. Brown",
"Benjamin Mann",
"Nick Ryder",
"Melanie Subbiah",
"J. Kaplan",
"Prafulla Dhariwal",
"Arvind Neelakantan",
"Pranav Shyam",
"Girish Sastry",
"Amanda Askell",
"Sandhini Agarwal",
"Ariel Herbert-Voss",
"Gretchen Krueger",
"T. Henighan",
"R. Child",
"A. Ramesh",
"Daniel M. Ziegler",
"Jeff Wu",
"Clemens Winter",
"Christopher Hesse",
"Mark Chen",
"Eric Sigler",
"Ma-teusz Litwin",
"Scott Gray",
"B. Chess",
"Jack Clark",
"Christopher Berner",
"Sam McCandlish",
"Alec Radford",
"I. Sutskever",
"Dario Amodei"
],
"externalIds": {
"ArXiv": "2005.14165",
"DBLP": "conf/nips/BrownMRSKDNSSAA20",
"MAG": "3030163527",
"CorpusId": 218971783
},
"url": "https://www.semanticscholar.org/paper/90abbc2cf38462b954ae1b772fac9532e2ccd8b0",
"referenceCount": 146,
"citationCount": 30859,
"influentialCitationCount": 3528,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "The DisGeNET knowledge platform for disease genomics: 2019 update",
"abstract": "Abstract One of the most pressing challenges in genomic medicine is to understand the role played by genetic variation in health and disease. Thanks to the exploration of genomic variants at large scale, hundreds of thousands of disease-associated loci have been uncovered. However, the identification of variants of clinical relevance is a significant challenge that requires comprehensive interrogation of previous knowledge and linkage to new experimental results. To assist in this complex task, we created DisGeNET (http://www.disgenet.org/), a knowledge management platform integrating and standardizing data about disease associated genes and variants from multiple sources, including the scientific literature. DisGeNET covers the full spectrum of human diseases as well as normal and abnormal traits. The current release covers more than 24 000 diseases and traits, 17 000 genes and 117 000 genomic variants. The latest developments of DisGeNET include new sources of data, novel data attributes and prioritization metrics, a redesigned web interface and recently launched APIs. Thanks to the data standardization, the combination of expert curated information with data automatically mined from the scientific literature, and a suite of tools for accessing its publicly available data, DisGeNET is an interoperable resource supporting a variety of applications in genomic medicine and drug R&D.",
"year": 2019,
"venue": "Nucleic Acids Res.",
"authors": [
"J. González",
"J. Ramírez-Anguita",
"Josep Saüch-Pitarch",
"Francesco Ronzano",
"Emilio Centeno",
"F. Sanz",
"L. Furlong"
],
"externalIds": {
"PubMedCentral": "7145631",
"DBLP": "journals/nar/GonzalezRSRCSF20",
"MAG": "2983166786",
"DOI": "10.1093/nar/gkz1021",
"CorpusId": 207904552,
"PubMed": "31680165"
},
"url": "https://www.semanticscholar.org/paper/152ac06268f9eb385bfc3f3d9a43e89c81db4c5b",
"referenceCount": 35,
"citationCount": 1644,
"influentialCitationCount": 133,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Medicine",
"Biology"
]
},
{
"title": "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer",
"abstract": "Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new \"Colossal Clean Crawled Corpus\", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.",
"year": 2019,
"venue": "Journal of machine learning research",
"authors": [
"Colin Raffel",
"Noam M. Shazeer",
"Adam Roberts",
"Katherine Lee",
"Sharan Narang",
"Michael Matena",
"Yanqi Zhou",
"Wei Li",
"Peter J. Liu"
],
"externalIds": {
"MAG": "2981852735",
"DBLP": "journals/corr/abs-1910-10683",
"ArXiv": "1910.10683",
"CorpusId": 204838007
},
"url": "https://www.semanticscholar.org/paper/6c4b76232bb72897685d19b3d264c6ee3005bc2b",
"referenceCount": 134,
"citationCount": 15989,
"influentialCitationCount": 2031,
"isOpenAccess": false,
"fieldsOfStudy": [
"Mathematics",
"Computer Science"
]
},
{
"title": "A global overview of pleiotropy and genetic architecture in complex traits",
"abstract": null,
"year": 2019,
"venue": "Nature Genetics",
"authors": [
"Kyoko Watanabe",
"Sven Stringer",
"O. Frei",
"M. Umiċeviċ Mirkov",
"C. D. de Leeuw",
"T. Polderman",
"S. van der Sluis",
"O. Andreassen",
"B. Neale",
"D. Posthuma"
],
"externalIds": {
"MAG": "2969996045",
"DOI": "10.1038/s41588-019-0481-0",
"CorpusId": 91312947,
"PubMed": "31427789"
},
"url": "https://www.semanticscholar.org/paper/04856bfb2bd67f5ae0a17d314b686d8163b94c36",
"referenceCount": 42,
"citationCount": 815,
"influentialCitationCount": 48,
"isOpenAccess": true,
"fieldsOfStudy": [
"Biology",
"Medicine"
]
},
{
"title": "Exploring genetic interaction manifolds constructed from rich single-cell phenotypes",
"abstract": "Manifold destiny Mapping of genetic interactions (GIs) is usually based on cell fitness as the phenotypic readout, which obscures the mechanistic origin of interactions. Norman et al. developed a framework for mapping and understanding GIs. This approach leverages high-dimensional single-cell RNA sequencing data gathered from CRISPR-mediated, pooled perturbation screens. Diverse transcriptomic phenotypes construct a “manifold” representing all possible states of the cell. Each perturbation and GI projects the cell state to a particular position on this manifold, enabling unbiased ordering of genes in pathways and systematic classifications of GIs. Science, this issue p. 786 Rich phenotyping with single-cell RNA sequencing reveals principles and mechanisms of genetic interactions in mammalian cells. How cellular and organismal complexity emerges from combinatorial expression of genes is a central question in biology. High-content phenotyping approaches such as Perturb-seq (single-cell RNA-sequencing pooled CRISPR screens) present an opportunity for exploring such genetic interactions (GIs) at scale. Here, we present an analytical framework for interpreting high-dimensional landscapes of cell states (manifolds) constructed from transcriptional phenotypes. We applied this approach to Perturb-seq profiling of strong GIs mined from a growth-based, gain-of-function GI map. Exploration of this manifold enabled ordering of regulatory pathways, principled classification of GIs (e.g., identifying suppressors), and mechanistic elucidation of synergistic interactions, including an unexpected synergy between CBL and CNN1 driving erythroid differentiation. Finally, we applied recommender system machine learning to predict interactions, facilitating exploration of vastly larger GI manifolds.",
"year": 2019,
"venue": "Science",
"authors": [
"Thomas M. Norman",
"Max A. Horlbeck",
"J. Replogle",
"A. Ge",
"Albert Xu",
"M. Jost",
"Luke A. Gilbert",
"J. Weissman"
],
"externalIds": {
"MAG": "2969093152",
"DOI": "10.1126/science.aax4438",
"CorpusId": 199507313,
"PubMed": "31395745"
},
"url": "https://www.semanticscholar.org/paper/73ba88115d9f51e7b0d325cd76d33ce096de9c5f",
"referenceCount": 47,
"citationCount": 189,
"influentialCitationCount": 20,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Medicine"
]
},
{
"title": "Publicly Available Clinical BERT Embeddings",
"abstract": "Contextual word embedding models such as ELMo and BERT have dramatically improved performance for many natural language processing (NLP) tasks in recent months. However, these models have been minimally explored on specialty corpora, such as clinical text; moreover, in the clinical domain, no publicly-available pre-trained BERT models yet exist. In this work, we address this need by exploring and releasing BERT models for clinical text: one for generic clinical text and another for discharge summaries specifically. We demonstrate that using a domain-specific model yields performance improvements on 3/5 clinical NLP tasks, establishing a new state-of-the-art on the MedNLI dataset. We find that these domain-specific models are not as performant on 2 clinical de-identification tasks, and argue that this is a natural consequence of the differences between de-identified source text and synthetically non de-identified task text.",
"year": 2019,
"venue": "Proceedings of the 2nd Clinical Natural Language Processing Workshop",
"authors": [
"Emily Alsentzer",
"John R. Murphy",
"Willie Boag",
"W. Weng",
"Di Jin",
"Tristan Naumann",
"Matthew B. A. McDermott"
],
"externalIds": {
"MAG": "2925863688",
"DBLP": "journals/corr/abs-1904-03323",
"ArXiv": "1904.03323",
"ACL": "W19-1909",
"DOI": "10.18653/v1/W19-1909",
"CorpusId": 102352093
},
"url": "https://www.semanticscholar.org/paper/2a567ebd78939d0861d788f0fedff8d40ae62bf2",
"referenceCount": 26,
"citationCount": 1681,
"influentialCitationCount": 248,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "SciBERT: A Pretrained Language Model for Scientific Text",
"abstract": "Obtaining large-scale annotated data for NLP tasks in the scientific domain is challenging and expensive. We release SciBERT, a pretrained language model based on BERT (Devlin et. al., 2018) to address the lack of high-quality, large-scale labeled scientific data. SciBERT leverages unsupervised pretraining on a large multi-domain corpus of scientific publications to improve performance on downstream scientific NLP tasks. We evaluate on a suite of tasks including sequence tagging, sentence classification and dependency parsing, with datasets from a variety of scientific domains. We demonstrate statistically significant improvements over BERT and achieve new state-of-the-art results on several of these tasks. The code and pretrained models are available at https://github.com/allenai/scibert/.",
"year": 2019,
"venue": "Conference on Empirical Methods in Natural Language Processing",
"authors": [
"Iz Beltagy",
"Kyle Lo",
"Arman Cohan"
],
"externalIds": {
"ACL": "D19-1371",
"DBLP": "conf/emnlp/BeltagyLC19",
"MAG": "2973154071",
"ArXiv": "1903.10676",
"DOI": "10.18653/v1/D19-1371",
"CorpusId": 202558505
},
"url": "https://www.semanticscholar.org/paper/156d217b0a911af97fa1b5a71dc909ccef7a8028",
"referenceCount": 32,
"citationCount": 2542,
"influentialCitationCount": 462,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Genome-wide analysis of insomnia in 1,331,010 individuals identifies new risk loci and functional pathways",
"abstract": null,
"year": 2019,
"venue": "Nature Genetics",
"authors": [
"P. Jansen",
"Kyoko Watanabe",
"Sven Stringer",
"N. Skene",
"J. Bryois",
"Anke R. Hammerschlag",
"C. D. de Leeuw",
"J. Benjamins",
"A. B. Muñoz-Manchado",
"M. Nagel",
"J. Savage",
"H. Tiemeier",
"T. White",
"J. Tung",
"D. Hinds",
"V. Vacic",
"Xin Wang",
"P. Sullivan",
"S. van der Sluis",
"T. Polderman",
"A. Smit",
"J. Hjerling-Leffler",
"E. V. van Someren",
"D. Posthuma"
],
"externalIds": {
"MAG": "2915402738",
"DOI": "10.1038/s41588-018-0333-3",
"CorpusId": 71144650,
"PubMed": "30804565"
},
"url": "https://www.semanticscholar.org/paper/0ad3e738abd756c70ab046e6f0c2847e5f312b55",
"referenceCount": 73,
"citationCount": 529,
"influentialCitationCount": 43,
"isOpenAccess": true,
"fieldsOfStudy": [
"Biology",
"Medicine"
]
},
{
"title": "Parameter-Efficient Transfer Learning for NLP",
"abstract": "Fine-tuning large pre-trained models is an effective transfer mechanism in NLP. However, in the presence of many downstream tasks, fine-tuning is parameter inefficient: an entire new model is required for every task. As an alternative, we propose transfer with adapter modules. Adapter modules yield a compact and extensible model; they add only a few trainable parameters per task, and new tasks can be added without revisiting previous ones. The parameters of the original network remain fixed, yielding a high degree of parameter sharing. To demonstrate adapter's effectiveness, we transfer the recently proposed BERT Transformer model to 26 diverse text classification tasks, including the GLUE benchmark. Adapters attain near state-of-the-art performance, whilst adding only a few parameters per task. On GLUE, we attain within 0.4% of the performance of full fine-tuning, adding only 3.6% parameters per task. By contrast, fine-tuning trains 100% of the parameters per task.",
"year": 2019,
"venue": "International Conference on Machine Learning",
"authors": [
"N. Houlsby",
"A. Giurgiu",
"Stanislaw Jastrzebski",
"Bruna Morrone",
"Quentin de Laroussilhe",
"Andrea Gesmundo",
"Mona Attariyan",
"S. Gelly"
],
"externalIds": {
"DBLP": "journals/corr/abs-1902-00751",
"ArXiv": "1902.00751",
"MAG": "2964303773",
"CorpusId": 59599816
},
"url": "https://www.semanticscholar.org/paper/29ddc1f43f28af7c846515e32cc167bc66886d0c",
"referenceCount": 57,
"citationCount": 3286,
"influentialCitationCount": 580,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science",
"Mathematics"
]
},
{
"title": "HumanNet v2: human gene networks for disease research",
"abstract": "Abstract Human gene networks have proven useful in many aspects of disease research, with numerous network-based strategies developed for generating hypotheses about gene-disease-drug associations. The ability to predict and organize genes most relevant to a specific disease has proven especially important. We previously developed a human functional gene network, HumanNet, by integrating diverse types of omics data using Bayesian statistics framework and demonstrated its ability to retrieve disease genes. Here, we present HumanNet v2 (http://www.inetbio.org/humannet), a database of human gene networks, which was updated by incorporating new data types, extending data sources and improving network inference algorithms. HumanNet now comprises a hierarchy of human gene networks, allowing for more flexible incorporation of network information into studies. HumanNet performs well in ranking disease-linked gene sets with minimal literature-dependent biases. We observe that incorporating model organisms’ protein–protein interactions does not markedly improve disease gene predictions, suggesting that many of the disease gene associations are now captured directly in human-derived datasets. With an improved interactive user interface for disease network analysis, we expect HumanNet will be a useful resource for network medicine.",
"year": 2018,
"venue": "Nucleic Acids Res.",
"authors": [
"Sohyun Hwang",
"Chan Yeong Kim",
"Sunmo Yang",
"Eiru Kim",
"Traver Hart",
"E. Marcotte",
"Insuk Lee"
],
"externalIds": {
"MAG": "2899728356",
"DBLP": "journals/nar/HwangKYKHML19",
"PubMedCentral": "6323914",
"DOI": "10.1093/nar/gky1126",
"CorpusId": 53285163,
"PubMed": "30418591"
},
"url": "https://www.semanticscholar.org/paper/0f156fd56d28e5a9af1442efa76009d3c7f50eb8",
"referenceCount": 48,
"citationCount": 158,
"influentialCitationCount": 10,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Medicine",
"Biology"
]
},
{
"title": "Using deep learning to model the hierarchical structure and function of a cell",
"abstract": null,
"year": 2018,
"venue": "Nature Methods",
"authors": [
"Jianzhu Ma",
"M. Yu",
"Samson H. Fong",
"K. Ono",
"Eric Sage",
"Barry Demchak",
"R. Sharan",
"T. Ideker"
],
"externalIds": {
"MAG": "2790179710",
"PubMedCentral": "5882547",
"DOI": "10.1038/nmeth.4627",
"CorpusId": 3783024,
"PubMed": "29505029"
},
"url": "https://www.semanticscholar.org/paper/0a063d1ec630a7cecaad253c34f190cc2db2241e",
"referenceCount": 55,
"citationCount": 304,
"influentialCitationCount": 12,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine",
"Computer Science"
]
},
{
"title": "NetSig: network-based discovery from cancer genomes",
"abstract": null,
"year": 2017,
"venue": "Nature Methods",
"authors": [
"H. Horn",
"M. Lawrence",
"C. Chouinard",
"Y. Shrestha",
"J. X. Hu",
"Elizabeth Worstell",
"E. Shea",
"N. Ilić",
"Eejung Kim",
"A. Kamburov",
"Alireza Kashani",
"W. Hahn",
"Joshua D. Campbell",
"J. Boehm",
"G. Getz",
"K. Lage"
],
"externalIds": {
"PubMedCentral": "5985961",
"MAG": "2772435237",
"DOI": "10.1038/nmeth.4514",
"CorpusId": 3353267,
"PubMed": "29200198"
},
"url": "https://www.semanticscholar.org/paper/cd077eae35e787f45a312eeecc0f5ec77d734abc",
"referenceCount": 66,
"citationCount": 86,
"influentialCitationCount": 5,
"isOpenAccess": true,
"fieldsOfStudy": [
"Biology",
"Medicine"
]
},
{
"title": "Neuropathology of genetic synucleinopathies with parkinsonism: Review of the literature",
"abstract": "Clinical–pathological studies remain the gold‐standard for the diagnosis of Parkinson's disease (PD). However, mounting data from genetic PD autopsies challenge the diagnosis of PD based on Lewy body pathology. Most of the confirmed genetic risks for PD show heterogenous neuropathology, even within kindreds, which may or may not include Lewy body pathology. We review the literature of genetic PD autopsies from cases with molecularly confirmed PD or parkinsonism and summarize main findings on SNCA (n = 25), Parkin (n = 20, 17 bi‐allelic and 3 heterozygotes), PINK1 (n = 5, 1 bi‐allelic and 4 heterozygotes), DJ‐1 (n = 1), LRRK2 (n = 55), GBA (n = 10 Gaucher disease patients with parkinsonism), DNAJC13, GCH1, ATP13A2, PLA2G6 (n = 8 patients, 2 with PD), MPAN (n = 2), FBXO7, RAB39B, and ATXN2 (SCA2), as well as on 22q deletion syndrome (n = 3). Findings from autopsies of heterozygous mutation carriers of genes that are traditionally considered recessively inherited are also discussed. Lewy bodies may be present in syndromes clinically distinctive from PD (eg, MPAN‐related neurodegeneration) and absent in patients with clinical PD syndrome (eg, LRRK2‐PD or Parkin‐PD). Therefore, the authors can conclude that the presence of Lewy bodies are not specific to the diagnosis of PD and that PD can be diagnosed even in the absence of Lewy body pathology. Interventions that reduce alpha‐synuclein load may be more justified in SNCA‐PD or GBA‐PD than in other genetic forms of PD. The number of reported genetic PD autopsies remains small, and there are limited genotype‐clinical‐pathological‐phenotype studies. Therefore, larger series of autopsies from genetic PD patients are required. © 2017 International Parkinson and Movement Disorder Society",
"year": 2017,
"venue": "Movement Disorders",
"authors": [
"S. Schneider",
"R. Alcalay"
],
"externalIds": {
"MAG": "2767687660",
"DOI": "10.1002/mds.27193",
"CorpusId": 31013612,
"PubMed": "29124790"
},
"url": "https://www.semanticscholar.org/paper/5a8d9e9dc4df2c664d4d2eb493bf98e64628eabc",
"referenceCount": 179,
"citationCount": 242,
"influentialCitationCount": 13,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "eDGAR: a database of Disease-Gene Associations with annotated Relationships among genes",
"abstract": null,
"year": 2017,
"venue": "BMC Genomics",
"authors": [
"G. Babbi",
"P. Martelli",
"Giuseppe Profiti",
"Samuele Bovo",
"Castrense Savojardo",
"R. Casadio"
],
"externalIds": {
"MAG": "2744684750",
"PubMedCentral": "5558190",
"DOI": "10.1186/s12864-017-3911-3",
"CorpusId": 3721337,
"PubMed": "28812536"
},
"url": "https://www.semanticscholar.org/paper/ea84101a0f0d0f15aa448bf50164f5134dd90201",
"referenceCount": 40,
"citationCount": 56,
"influentialCitationCount": 1,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine",
"Biology"
]
},
{
"title": "10 Years of GWAS Discovery: Biology, Function, and Translation.",
"abstract": null,
"year": 2017,
"venue": "American Journal of Human Genetics",
"authors": [
"P. Visscher",
"N. Wray",
"Qian Zhang",
"P. Sklar",
"M. McCarthy",
"M. Brown",
"Jian Yang"
],
"externalIds": {
"MAG": "2725988230",
"DOI": "10.1016/j.ajhg.2017.06.005",
"CorpusId": 13731116,
"PubMed": "28686856"
},
"url": "https://www.semanticscholar.org/paper/0b256820d1e7d321e227dc41cedf3bb0b0c5d8f4",
"referenceCount": 153,
"citationCount": 2699,
"influentialCitationCount": 108,
"isOpenAccess": true,
"fieldsOfStudy": [
"Biology",
"Medicine"
]
},
{
"title": "Attention is All you Need",
"abstract": "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.",
"year": 2017,
"venue": "Neural Information Processing Systems",
"authors": [
"Ashish Vaswani",
"Noam M. Shazeer",
"Niki Parmar",
"Jakob Uszkoreit",
"Llion Jones",
"Aidan N. Gomez",
"Lukasz Kaiser",
"Illia Polosukhin"
],
"externalIds": {
"MAG": "2963403868",
"DBLP": "conf/nips/VaswaniSPUJGKP17",
"ArXiv": "1706.03762",
"CorpusId": 13756489
},
"url": "https://www.semanticscholar.org/paper/204e3073870fae3d05bcbc2f6a8e263d9b72e776",
"referenceCount": 41,
"citationCount": 105006,
"influentialCitationCount": 15361,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "GWAB: a web server for the network-based boosting of human genome-wide association data",
"abstract": "Abstract During the last decade, genome-wide association studies (GWAS) have represented a major approach to dissect complex human genetic diseases. Due in part to limited statistical power, most studies identify only small numbers of candidate genes that pass the conventional significance thresholds (e.g. P ≤ 5 × 10−8). This limitation can be partly overcome by increasing the sample size, but this comes at a higher cost. Alternatively, weak association signals can be boosted by incorporating independent data. Previously, we demonstrated the feasibility of boosting GWAS disease associations using gene networks. Here, we present a web server, GWAB (www.inetbio.org/gwab), for the network-based boosting of human GWAS data. Using GWAS summary statistics (P-values) for SNPs along with reference genes for a disease of interest, GWAB reprioritizes candidate disease genes by integrating the GWAS and network data. We found that GWAB could more effectively retrieve disease-associated reference genes than GWAS could alone. As an example, we describe GWAB-boosted candidate genes for coronary artery disease and supporting data in the literature. These results highlight the inherent value in sub-threshold GWAS associations, which are often not publicly released. GWAB offers a feasible general approach to boost such associations for human disease genetics.",
"year": 2017,
"venue": "Nucleic Acids Res.",
"authors": [
"J. Shim",
"Changbae Bang",
"Sunmo Yang",
"Tak Lee",
"Sohyun Hwang",
"Chan Yeong Kim",
"U. M. Singh-Blom",
"E. Marcotte",
"Insuk Lee"
],
"externalIds": {
"MAG": "2609040378",
"DBLP": "journals/nar/ShimBYLHKSML17",
"PubMedCentral": "5793838",
"DOI": "10.1093/nar/gkx284",
"CorpusId": 742551,
"PubMed": "28449091"
},
"url": "https://www.semanticscholar.org/paper/fdbe04e58040bc5effaa6a7737e845b661745810",
"referenceCount": 42,
"citationCount": 28,
"influentialCitationCount": 2,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Biology",
"Medicine"
]
},
{
"title": "The new NHGRI-EBI Catalog of published genome-wide association studies (GWAS Catalog)",
"abstract": "The NHGRI-EBI GWAS Catalog has provided data from published genome-wide association studies since 2008. In 2015, the database was redesigned and relocated to EMBL-EBI. The new infrastructure includes a new graphical user interface (www.ebi.ac.uk/gwas/), ontology supported search functionality and an improved curation interface. These developments have improved the data release frequency by increasing automation of curation and providing scaling improvements. The range of available Catalog data has also been extended with structured ancestry and recruitment information added for all studies. The infrastructure improvements also support scaling for larger arrays, exome and sequencing studies, allowing the Catalog to adapt to the needs of evolving study design, genotyping technologies and user needs in the future.",
"year": 2016,
"venue": "Nucleic Acids Res.",
"authors": [
"J. MacArthur",
"Emily Bowler",
"M. Cerezo",
"Laurent Gil",
"Peggy Hall",
"Emma Hastings",
"Heather Junkins",
"A. McMahon",
"Annalisa Milano",
"Joannella Morales",
"Z. M. Pendlington",
"Danielle Welter",
"Tony Burdett",
"L. Hindorff",
"P. Flicek",
"Fiona Cunningham",
"H. Parkinson"
],
"externalIds": {
"PubMedCentral": "5210590",
"DBLP": "journals/nar/MacArthurBCGHHJ17",
"MAG": "2559028527",
"DOI": "10.1093/nar/gkw1133",
"CorpusId": 2106393,
"PubMed": "27899670"
},
"url": "https://www.semanticscholar.org/paper/947feba9136e877f50968b62fd2e6c6381376580",
"referenceCount": 23,
"citationCount": 1902,
"influentialCitationCount": 134,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Medicine",
"Biology"
]
},
{
"title": "MUFFINN: cancer gene discovery via network analysis of somatic mutation data",
"abstract": null,
"year": 2016,
"venue": "Genome Biology",
"authors": [
"A-Ra Cho",
"J. Shim",
"Eiru Kim",
"F. Supek",
"Ben Lehner",
"Insuk Lee"
],
"externalIds": {
"MAG": "2442835468",
"PubMedCentral": "4918128",
"DOI": "10.1186/s13059-016-0989-x",
"CorpusId": 3158756,
"PubMed": "27333808"
},
"url": "https://www.semanticscholar.org/paper/8b1fbd5e27abbc7cffb393e774754a52860fb184",
"referenceCount": 187,
"citationCount": 132,
"influentialCitationCount": 18,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine",
"Biology"
]
},
{
"title": "Translation of Genotype to Phenotype by a Hierarchy of Cell Subsystems",
"abstract": null,
"year": 2016,
"venue": "Cell Systems",
"authors": [
"M. Yu",
"Michael Kramer",
"Janusz Dutkowski",
"R. Srivas",
"Katherine Licon",
"J. Kreisberg",
"C. Ng",
"N. Krogan",
"R. Sharan",
"T. Ideker"
],
"externalIds": {
"PubMedCentral": "4772745",
"MAG": "2283795047",
"DOI": "10.1016/j.cels.2016.02.003",
"CorpusId": 7778274,
"PubMed": "26949740"
},
"url": "https://www.semanticscholar.org/paper/fa91bd530e136ed03a9469f371bce8179d1b4b8a",
"referenceCount": 99,
"citationCount": 72,
"influentialCitationCount": 2,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine",
"Biology"
]
},
{
"title": "Genetic studies of quantitative MCI and AD phenotypes in ADNI: Progress, opportunities, and plans",
"abstract": null,
"year": 2015,
"venue": "Alzheimer's & Dementia",
"authors": [
"A. Saykin",
"Li Shen",
"Xiaohui Yao",
"Sungeun Kim",
"K. Nho",
"S. Risacher",
"V. Ramanan",
"T. Foroud",
"K. Faber",
"Nadeem Sarwar",
"L. Munsie",
"Xiaolan Hu",
"H. Soares",
"S. Potkin",
"P. Thompson",
"J. Kauwe",
"R. Kaddurah-Daouk",
"R. Green",
"A. Toga",
"M. Weiner"
],
"externalIds": {
"MAG": "2674531482",
"DOI": "10.1016/j.jalz.2015.05.009",
"CorpusId": 3704930,
"PubMed": "26194313"
},
"url": "https://www.semanticscholar.org/paper/50462abe3f7c05ab684966adc9f47b9e09b5d93c",
"referenceCount": 180,
"citationCount": 244,
"influentialCitationCount": 2,
"isOpenAccess": true,
"fieldsOfStudy": [
"Psychology",
"Biology",
"Medicine"
]
},
{
"title": "Understanding multicellular function and disease with human tissue-specific networks",
"abstract": null,
"year": 2015,
"venue": "Nature Genetics",
"authors": [
"C. Greene",
"Arjun Krishnan",
"A. Wong",
"E. Ricciotti",
"R. A. Zelaya",
"Daniel S. Himmelstein",
"Ran Zhang",
"Boris M. Hartmann",
"E. Zaslavsky",
"S. Sealfon",
"D. Chasman",
"G. FitzGerald",
"K. Dolinski",
"T. Grosser",
"O. Troyanskaya"
],
"externalIds": {
"PubMedCentral": "4828725",
"MAG": "1981409633",
"DOI": "10.1038/ng.3259",
"CorpusId": 205349775,
"PubMed": "25915600"
},
"url": "https://www.semanticscholar.org/paper/310ac755fb551b93dc72f03d650931cf0eff71df",
"referenceCount": 138,
"citationCount": 721,
"influentialCitationCount": 54,
"isOpenAccess": true,
"fieldsOfStudy": [
"Biology",
"Medicine"
]
},
{
"title": "OMIM.org: Online Mendelian Inheritance in Man (OMIM®), an online catalog of human genes and genetic disorders",
"abstract": "Online Mendelian Inheritance in Man, OMIM®, is a comprehensive, authoritative and timely research resource of curated descriptions of human genes and phenotypes and the relationships between them. The new official website for OMIM, OMIM.org (http://omim.org), was launched in January 2011. OMIM is based on the published peer-reviewed biomedical literature and is used by overlapping and diverse communities of clinicians, molecular biologists and genome scientists, as well as by students and teachers of these disciplines. Genes and phenotypes are described in separate entries and are given unique, stable six-digit identifiers (MIM numbers). OMIM entries have a structured free-text format that provides the flexibility necessary to describe the complex and nuanced relationships between genes and genetic phenotypes in an efficient manner. OMIM also has a derivative table of genes and genetic phenotypes, the Morbid Map. OMIM.org has enhanced search capabilities such as genome coordinate searching and thesaurus-enhanced search term options. Phenotypic series have been created to facilitate viewing genetic heterogeneity of phenotypes. Clinical synopsis features are enhanced with UMLS, Human Phenotype Ontology and Elements of Morphology terms and image links. All OMIM data are available for FTP download and through an API. MIMmatch is a novel outreach feature to disseminate updates and encourage collaboration.",
"year": 2014,
"venue": "Nucleic Acids Res.",
"authors": [
"J. Amberger",
"Carol A. Bocchini",
"F. Schiettecatte",
"A. F. Scott",
"A. Hamosh"
],
"externalIds": {
"MAG": "2162151166",
"PubMedCentral": "4383985",
"DBLP": "journals/nar/AmbergerBSSH15",
"DOI": "10.1093/nar/gku1205",
"CorpusId": 10233595,
"PubMed": "25428349"
},
"url": "https://www.semanticscholar.org/paper/28a46457e8674b38076c763d67af6b7eaec28dee",
"referenceCount": 14,
"citationCount": 1856,
"influentialCitationCount": 106,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine",
"Computer Science",
"Biology"
]
},
{
"title": "FASTKD2 is associated with memory and hippocampal structure in older adults",
"abstract": null,
"year": 2014,
"venue": "Molecular Psychiatry",
"authors": [
"V. Ramanan",
"K. Nho",
"Li Shen",
"S. Risacher",
"Sungeun Kim",
"B. McDonald",
"M. Farlow",
"T. Foroud",
"Sujuan Gao",
"H. Soininen",
"I. Kloszewska",
"P. Mecocci",
"M. Tsolaki",
"B. Vellas",
"S. Lovestone",
"P. Aisen",
"R. Petersen",
"C. Jack",
"L. Shaw",
"J. Trojanowski",
"M. Weiner",
"R. Green",
"A. Toga",
"P. D. De Jager",
"Lei Yu",
"David A. Bennett",
"A. Saykin"
],
"externalIds": {
"PubMedCentral": "4427556",
"MAG": "2009600093",
"DOI": "10.1038/mp.2014.142",
"CorpusId": 18512092,
"PubMed": "25385369"
},
"url": "https://www.semanticscholar.org/paper/bd1ac9a05b6f415279095307314d2a477c6cd79f",
"referenceCount": 80,
"citationCount": 38,
"influentialCitationCount": 1,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine",
"Biology",
"Psychology"
]
},
{
"title": "Autosomal-dominant Alzheimer's disease: a review and proposal for the prevention of Alzheimer's disease",
"abstract": null,
"year": 2011,
"venue": "Alzheimer's Research & Therapy",
"authors": [
"R. Bateman",
"P. Aisen",
"B. de Strooper",
"Nick C Fox",
"C. Lemere",
"J. Ringman",
"S. Salloway",
"R. Sperling",
"M. Windisch",
"C. Xiong"
],
"externalIds": {
"PubMedCentral": "3109410",
"MAG": "2953777485",
"DOI": "10.1186/alzrt59",
"CorpusId": 4471317,
"PubMed": "21211070"
},
"url": "https://www.semanticscholar.org/paper/91881a4bfbf4c3d8aa931ffd476397a0793c922e",
"referenceCount": 101,
"citationCount": 480,
"influentialCitationCount": 8,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "Imaging genetics—days of future past",
"abstract": null,
"year": 2010,
"venue": "NeuroImage",
"authors": [
"K. Bigos",
"D. Weinberger"
],
"externalIds": {
"DBLP": "journals/neuroimage/BigosW10",
"MAG": "1966777205",
"DOI": "10.1016/j.neuroimage.2010.01.035",
"CorpusId": 12611812,
"PubMed": "20080192"
},
"url": "https://www.semanticscholar.org/paper/2342874bd2bd3a8483e28b8b29bbc0799b956c0c",
"referenceCount": 43,
"citationCount": 118,
"influentialCitationCount": 3,
"isOpenAccess": false,
"fieldsOfStudy": [
"Medicine",
"Psychology",
"Computer Science"
]
},
{
"title": "Overview of the protein-protein interaction annotation extraction task of BioCreative II",
"abstract": null,
"year": 2008,
"venue": "Genome Biology",
"authors": [
"Martin Krallinger",
"F. Leitner",
"Carlos Rodríguez-Penagos",
"A. Valencia"
],
"externalIds": {
"PubMedCentral": "2559988",
"MAG": "2064030835",
"DOI": "10.1186/gb-2008-9-s2-s4",
"CorpusId": 14555187,
"PubMed": "18834495"
},
"url": "https://www.semanticscholar.org/paper/78d0078bfc398008175faa2066ba833bca6883d7",
"referenceCount": 39,
"citationCount": 301,
"influentialCitationCount": 18,
"isOpenAccess": true,
"fieldsOfStudy": [
"Biology",
"Medicine"
]
},
{
"title": "The NCBI dbGaP database of genotypes and phenotypes",
"abstract": null,
"year": 2007,
"venue": "Nature Genetics",
"authors": [
"M. Mailman",
"M. Feolo",
"Y. Jin",
"Masato Kimura",
"K. Tryka",
"Rinat Bagoutdinov",
"Luning Hao",
"A. Kiang",
"J. Paschall",
"Lon Phan",
"N. Popova",
"Stephanie Pretel",
"Lora Ziyabari",
"Moira Lee",
"Yu Shao",
"Zhen Y. Wang",
"K. Sirotkin",
"Minghong Ward",
"Michael Kholodov",
"Kerry Zbicz",
"J. Beck",
"Michael Kimelman",
"S. Shevelev",
"Don Preuss",
"E. Yaschenko",
"Alan Graeff",
"J. Ostell",
"S. Sherry"
],
"externalIds": {
"MAG": "2102729801",
"DOI": "10.1038/ng1007-1181",
"CorpusId": 850759,
"PubMed": "17898773"
},
"url": "https://www.semanticscholar.org/paper/a03d8f591bc3c2dbdecbd9d515e0469953a3f7ef",
"referenceCount": 11,
"citationCount": 1046,
"influentialCitationCount": 28,
"isOpenAccess": true,
"fieldsOfStudy": [
"Biology",
"Medicine"
]
},
{
"title": "BioInfer: a corpus for information extraction in the biomedical domain",
"abstract": null,
"year": 2007,
"venue": "BMC Bioinformatics",
"authors": [
"S. Pyysalo",
"Filip Ginter",
"Juho Heimonen",
"Jari Björne",
"J. Boberg",
"Jouni Järvinen",
"T. Salakoski"
],
"externalIds": {
"DBLP": "journals/bmcbi/PyysaloGHBBJS07",
"PubMedCentral": "1808065",
"MAG": "1850865022",
"DOI": "10.1186/1471-2105-8-50",
"CorpusId": 8410430,
"PubMed": "17291334"
},
"url": "https://www.semanticscholar.org/paper/cb891d6e79057ac655ed852eef677f1ab07359fb",
"referenceCount": 46,
"citationCount": 509,
"influentialCitationCount": 54,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine",
"Computer Science"
]
},
{
"title": "ROUGE: A Package for Automatic Evaluation of Summaries",
"abstract": "ROUGE stands for Recall-Oriented Understudy for Gisting Evaluation. It includes measures to automatically determine the quality of a summary by comparing it to other (ideal) summaries created by humans. The measures count the number of overlapping units such as n-gram, word sequences, and word pairs between the computer-generated summary to be evaluated and the ideal summaries created by humans. This paper introduces four different ROUGE measures: ROUGE-N, ROUGE-L, ROUGE-W, and ROUGE-S included in the ROUGE summarization evaluation package and their evaluations. Three of them have been used in the Document Understanding Conference (DUC) 2004, a large-scale summarization evaluation sponsored by NIST.",
"year": 2004,
"venue": "Annual Meeting of the Association for Computational Linguistics",
"authors": [
"Chin-Yew Lin"
],
"externalIds": {
"MAG": "2154652894",
"ACL": "W04-1013",
"CorpusId": 964287
},
"url": "https://www.semanticscholar.org/paper/60b05f32c32519a809f21642ef1eb3eaf3848008",
"referenceCount": 13,
"citationCount": 13205,
"influentialCitationCount": 2401,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Positional effects of presenilin-1 mutations on tau phosphorylation in cortical plaques",
"abstract": null,
"year": 2004,
"venue": "Neurobiology of Disease",
"authors": [
"C. Shepherd",
"Gillian C. Gregory",
"J. Vickers",
"W. Brooks",
"J. Kwok",
"P. Schofield",
"J. Kril",
"G. Halliday"
],
"externalIds": {
"MAG": "2011554609",
"DOI": "10.1016/j.nbd.2003.10.008",
"CorpusId": 39500799,
"PubMed": "14751776"
},
"url": "https://www.semanticscholar.org/paper/9fcbe792a7ae3d445b0b855dfe163f9a9e1a8e12",
"referenceCount": 30,
"citationCount": 41,
"influentialCitationCount": 1,
"isOpenAccess": false,
"fieldsOfStudy": [
"Medicine",
"Biology"
]
},
{
"title": "From gene networks to gene function.",
"abstract": "We propose a novel method to identify functionally related genes based on comparisons of neighborhoods in gene networks. This method does not rely on gene sequence or protein structure homologies, and it can be applied to any organism and a wide variety of experimental data sets. The character of the predicted gene relationships depends on the underlying networks;they concern biological processes rather than the molecular function. We used the method to analyze gene networks derived from genome-wide chromatin immunoprecipitation experiments, a large-scale gene deletion study, and from the genomic positions of consensus binding sites for transcription factors of the yeast Saccharomyces cerevisiae. We identified 816 functional relationships between 159 genes and show that these relationships correspond to protein-protein interactions, co-occurrence in the same protein complexes, and/or co-occurrence in abstracts of scientific articles. Our results suggest functions for seven previously uncharacterized yeast genes: KIN3 and YMR269W may be involved in biological processes related to cell growth and/or maintenance, whereas IES6, YEL008W, YEL033W, YHL029C, YMR010W, and YMR031W-A are likely to have metabolic functions.",
"year": 2003,
"venue": "Genome Research",
"authors": [
"T. Schlitt",
"Kimmo Palin",
"J. Rung",
"S. Dietmann",
"M. Lappe",
"E. Ukkonen",
"A. Brazma"
],
"externalIds": {
"MAG": "2058766645",
"DOI": "10.1101/GR.1111403",
"CorpusId": 9126810,
"PubMed": "14656964"
},
"url": "https://www.semanticscholar.org/paper/aac61dbdad235c440e11cf6944e91f1fbdb653a7",
"referenceCount": 50,
"citationCount": 164,
"influentialCitationCount": 3,
"isOpenAccess": true,
"fieldsOfStudy": [
"Biology",
"Medicine"
]
},
{
"title": "GENIA corpus - a semantically annotated corpus for bio-textmining",
"abstract": "MOTIVATION\nNatural language processing (NLP) methods are regarded as being useful to raise the potential of text mining from biological literature. The lack of an extensively annotated corpus of this literature, however, causes a major bottleneck for applying NLP techniques. GENIA corpus is being developed to provide reference materials to let NLP techniques work for bio-textmining.\n\n\nRESULTS\nGENIA corpus version 3.0 consisting of 2000 MEDLINE abstracts has been released with more than 400,000 words and almost 100,000 annotations for biological terms.",
"year": 2003,
"venue": "Intelligent Systems in Molecular Biology",
"authors": [
"Jin-Dong Kim",
"Tomoko Ohta",
"Yuka Tateisi",
"Junichi Tsujii"
],
"externalIds": {
"MAG": "2163107094",
"DBLP": "conf/ismb/KimOTT03",
"DOI": "10.1093/BIOINFORMATICS/BTG1023",
"CorpusId": 11522524,
"PubMed": "12855455"
},
"url": "https://www.semanticscholar.org/paper/da6c3fdf8ef9aae979a5dd156e074ba6691b2e2c",
"referenceCount": 2,
"citationCount": 1269,
"influentialCitationCount": 137,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Medicine"
]
},
{
"title": "The Human Phenome Project",
"abstract": null,
"year": 2003,
"venue": "Nature Genetics",
"authors": [
"N. Freimer",
"C. Sabatti"
],
"externalIds": {
"MAG": "2014022579",
"DOI": "10.1038/ng0503-15",
"CorpusId": 31510391,
"PubMed": "12721547"
},
"url": "https://www.semanticscholar.org/paper/1bf5b0939b61bf34e39678248464497095ebd836",
"referenceCount": 29,
"citationCount": 398,
"influentialCitationCount": 9,
"isOpenAccess": false,
"fieldsOfStudy": [
"Biology",
"Medicine"
]
},
{
"title": "Bleu: a Method for Automatic Evaluation of Machine Translation",
"abstract": "Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations.",
"year": 2002,
"venue": "Annual Meeting of the Association for Computational Linguistics",
"authors": [
"K. Papineni",
"Salim Roukos",
"T. Ward",
"Wei-Jing Zhu"
],
"externalIds": {
"DBLP": "conf/acl/PapineniRWZ02",
"MAG": "2101105183",
"ACL": "P02-1040",
"DOI": "10.3115/1073083.1073135",
"CorpusId": 11080756
},
"url": "https://www.semanticscholar.org/paper/d7da009f457917aa381619facfa5ffae9329a6e9",
"referenceCount": 5,
"citationCount": 24976,
"influentialCitationCount": 5731,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Novel amyloid precursor protein mutation in an Iowa family with dementia and severe cerebral amyloid angiopathy",
"abstract": "Several mutations in the amyloid precursor protein (APP) gene have been found to associate with pathologic deposition of the β‐amyloid peptide (Aβ) in neuritic plaques or in the walls of cerebral vessels. We report a mutation at a novel site in APP in a three‐generation Iowa family with autosomal dominant dementia beginning in the sixth or seventh decade of life. The proband and an affected brother had progressive aphasic dementia, leukoencephalopathy, and occipital calcifications. Neuropathological examination of the proband revealed severe cerebral amyloid angiopathy, widespread neurofibrillary tangles, and unusually extensive distribution of Aβ40 in plaques. The affected brothers shared a missense mutation in APP, resulting in substitution of asparagine for aspartic acid at position 694. This site corresponds to residue 23 of Aβ, thus differing from familial Alzheimer's disease mutations, which occur outside the Aβ sequence. Restriction enzyme analysis of DNA from 94 unrelated patients with sporadic cerebral amyloid angiopathy‐related hemorrhage found no other instances of this mutation. These results suggest a novel site within Aβ that may promote its deposition and toxicity.",
"year": 2001,
"venue": "Annals of Neurology",
"authors": [
"T. Grabowski",
"H. Cho",
"J. Vonsattel",
"G. Rebeck",
"S. Greenberg"
],
"externalIds": {
"MAG": "2076453964",
"DOI": "10.1002/ana.1009",
"CorpusId": 24951589,
"PubMed": "11409420"
},
"url": "https://www.semanticscholar.org/paper/4d46aeb4fa59d5900ad0a61384568783078fe21f",
"referenceCount": 57,
"citationCount": 516,
"influentialCitationCount": 21,
"isOpenAccess": false,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "Diffuse Lewy body disease",
"abstract": "Diffuse Lewy body disease (DLBD) has been studied from various viewpoints and, although clinical diagnostic criteria for DLBD have been proposed, diagnosis remains difficult. DLBD has been reported to be the second most common form of dementia in the aged, following Alzheimer‐type dementia. It has, however, been clinically under‐diagnosed. Therefore, the search for diagnostic markers for DLBD must continue. Very recently, ‘dementia with Lewy bodies’ (DLB) was proposed as a generic term for various forms of dementia with Lewy bodies, including DLBD and similar disorders. Cortical Lewy bodies are the most important pathologic marker for diagnosis of DLBD. At present, however, the mechanism responsible for cortical Lewy body formation has yet to be disclosed.",
"year": 2000,
"venue": "Neuropathology (Kyoto. 1993)",
"authors": [
"Kenji Kosaka"
],
"externalIds": {
"MAG": "2463624932",
"DOI": "10.1046/j.1440-1789.2000.00301.x",
"CorpusId": 2398037,
"PubMed": "11037193"
},
"url": "https://www.semanticscholar.org/paper/5757ddb31c887f82b5ee1abe0249c497dfec6976",
"referenceCount": 45,
"citationCount": 107,
"influentialCitationCount": 7,
"isOpenAccess": false,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "Online Mendelian Inheritance in Man.",
"abstract": null,
"year": 1998,
"venue": "Anesthesiology",
"authors": [
"J. Oyston"
],
"externalIds": {
"MAG": "1965708122",
"DOI": "10.1097/00000542-199809000-00060",
"CorpusId": 35098083,
"PubMed": "9743436"
},
"url": "https://www.semanticscholar.org/paper/357496abc8b475c2d441a3f24f0375d41e133f4d",
"referenceCount": 0,
"citationCount": 1014,
"influentialCitationCount": 14,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "Summary of ChatGPT/GPT-4 Research and Perspective Towards the Future of Large Language Models",
"abstract": "This paper presents a comprehensive survey of ChatGPT and GPT-4, state-of-the-art large language models (LLM) from the GPT series, and their prospective applications across diverse domains. Indeed, key innova-tions such as large-scale pre-training that captures knowledge across the entire world wide web, instruction fine-tuning and Reinforcement Learning from Human Feedback (RLHF) have played significant roles in enhancing LLMs’ adaptability and performance. We performed an in-depth analysis of 194 relevant papers on arXiv, encompassing trend analysis, word cloud representation",
"year": 2023,
"venue": "arXiv.org",
"authors": [
"Yi-Hsueh Liu",
"Tianle Han",
"Siyuan Ma",
"Jiayue Zhang",
"Yuanyuan Yang",
"Jiaming Tian",
"Haoyang He",
"Antong Li",
"Mengshen He",
"Zheng Liu",
"Zihao Wu",
"Dajiang Zhu",
"Xiang Li",
"Ning Qiang",
"Dinggang Shen",
"Tianming Liu",
"Bao Ge"
],
"externalIds": {
"DBLP": "journals/corr/abs-2304-01852",
"DOI": "10.48550/arXiv.2304.01852",
"CorpusId": 263893278
},
"url": "https://www.semanticscholar.org/paper/6c5a1079d9705c0ee022cef77207daa20ce2cde5",
"referenceCount": 107,
"citationCount": 202,
"influentialCitationCount": 9,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Tailoring Large Language Models to Radiology: A Preliminary Approach to LLM Adaptation for a Highly Specialized Domain",
"abstract": null,
"year": 2023,
"venue": "MLMI@MICCAI",
"authors": [
"Zheng Liu",
"Aoxiao Zhong",
"Yiwei Li",
"Longtao Yang",
"Chao Ju",
"Zihao Wu",
"Chong-Yi Ma",
"Peng Shu",
"Cheng Chen",
"Sekeun Kim",
"Haixing Dai",
"Lin Zhao",
"Dajiang Zhu",
"Jun Liu",
"Wei Liu",
"Dinggang Shen",
"Quanzheng Li",
"Tianming Liu",
"Xiang Li"
],
"externalIds": {
"DBLP": "conf/mlmi-ws/LiuZLYJWMSCKDZZLLSLLL23",
"DOI": "10.1007/978-3-031-45673-2_46",
"CorpusId": 264307793
},
"url": "https://www.semanticscholar.org/paper/2a9a48893032d2663b7a8b225cee4d45aaeb373b",
"referenceCount": 0,
"citationCount": 19,
"influentialCitationCount": 0,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "ChatAug: Leveraging ChatGPT for Text Data Augmentation",
"abstract": "—Text data augmentation is an effective strategy for overcoming the challenge of limited sample sizes in many natural language processing (NLP) tasks. This challenge is especially prominent in the few-shot learning scenario, where the data in the target domain is generally much scarcer and of lowered quality. A natural and widely-used strategy to mitigate such challenges is to perform data augmentation on the training data to better capture the data invariance and increase the sample size. However, current text data augmentation methods either can not ensure the correct labeling of the generated data (lacking faithfulness) or can not ensure sufficient diversity in the generated data (lacking completeness), or both. Inspired by the recent success of large language models, especially the development of ChatGPT, which demonstrated improved language comprehension abilities, in this work, we propose a text data augmentation approach based on ChatGPT (named ChatAug). ChatGPT is trained on data with unparalleled linguistic richness and employs a reinforcement training process with large-scale human feedback, which endows the model with affinity to the naturalness of human language. Our text data augmentation approach ChatAug rephrases each sentence in the training samples into multiple conceptually similar but semantically different samples. The augmented samples can then be used in downstream model training. Experiment results on few-shot learning text classification tasks show the superior performance of the proposed ChatAug approach over state-of-the-art text data augmentation methods in terms of testing accuracy and distribution of the augmented samples.",
"year": 2023,
"venue": "arXiv.org",
"authors": [
"Haixing Dai",
"Zheng Liu",
"Wenxiong Liao",
"Xiaoke Huang",
"Zihao Wu",
"Lin Zhao",
"Wei Liu",
"Ninghao Liu",
"Sheng Li",
"Dajiang Zhu",
"Hongmin Cai",
"Quanzheng Li",
"Dinggang Shen",
"Tianming Liu",
"Xiang Li"
],
"externalIds": {
"DBLP": "journals/corr/abs-2302-13007",
"DOI": "10.48550/arXiv.2302.13007",
"CorpusId": 257219780
},
"url": "https://www.semanticscholar.org/paper/ae7f67c705c12391f7198527a0b962340ac8d39c",
"referenceCount": 90,
"citationCount": 109,
"influentialCitationCount": 8,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Crosslingual Generalization through Multitask Finetuning",
"abstract": "Multitask prompted finetuning (MTF) has been shown to help large language models generalize to new tasks in a zero-shot setting, but so far explorations of MTF have focused on English data and models. We apply MTF to the pretrained multilingual BLOOM and mT5 model families to produce finetuned variants called BLOOMZ and mT0. We find finetuning large multilingual language models on English tasks with English prompts allows for task genrealization to non-English languages that appear only in the pretraining corpus. Finetuning on multilingual tasks with English prompts further improves performance on English and non-English tasks leading to various state-of-the-art zero-shot results. We also investigate finetuning on multilingual tasks with prompts that have been machine-translated from English to match the language of each dataset. We find training on these machine-translated prompts leads to better performance on human-written prompts in the respective languages. Surprisingly, we find models are capable of zero-shot generalization to tasks in languages they have never intentionally seen. We conjecture that the models are learning higher-level capabilities that are both task- and language-agnostic. In addition, we introduce xP3, a composite of supervised datasets in 46 languages with English and machine-translated prompts. Our code, datasets and models are freely available at https://github.com/ bigscience-workshop/xmtf.",
"year": 2023,
"venue": "Annual Meeting of the Association for Computational Linguistics",
"authors": [
"Niklas Muennighoff",
"Thomas Wang",
"Lintang Sutawika",
"Adam Roberts",
"Stella Biderman",
"Teven Le Scao",
"M Saiful Bari",
"Sheng Shen",
"Zheng-Xin Yong",
"Hailey Schoelkopf",
"Xiangru Tang",
"Dragomir R. Radev",
"Alham Fikri Aji",
"Khalid Almubarak",
"Samuel Albanie",
"Zaid Alyafeai",
"Albert Webson",
"Edward Raff",
"Colin Raffel"
],
"externalIds": {
"ACL": "2023.acl-long.891",
"DBLP": "conf/acl/MuennighoffWSRB23",
"DOI": "10.18653/v1/2023.acl-long.891",
"CorpusId": 253264914
},
"url": "https://www.semanticscholar.org/paper/4972b88f8f324a4fa18e921f62a9857af2b5fc7b",
"referenceCount": 0,
"citationCount": 348,
"influentialCitationCount": 33,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "BiomedGPT: A Unified and Generalist Biomedical Generative Pre-trained Transformer for Vision, Language, and Multimodal Tasks",
"abstract": "In this paper, we introduce a unified and generalist Biomed ical G enerative P re-trained T ransformer ( BiomedGPT ) model, which leverages self-supervision on large and diverse datasets to accept multi-modal inputs and perform a range of downstream tasks. Our experiments demonstrate that BiomedGPT delivers expansive and inclusive representations of biomedical data, outperforming the majority of preceding state-of-the-art models across five distinct tasks with 20 public datasets spanning over 15 unique biomedical modalities. Through the ablation study, we also showcase the efficacy of our multi-modal and multi-task pretraining approach in transferring knowledge to previously unseen data. Overall, our work presents a significant step forward in developing unified and generalist models for biomedicine, with far-reaching implications for improving healthcare outcomes.",
"year": 2023,
"venue": "arXiv.org",
"authors": [
"Kai Zhang",
"Jun Yu",
"Zhiling Yan",
"Yixin Liu",
"Eashan Adhikarla",
"S. Fu",
"Xun Chen",
"Chen Chen",
"Yuyin Zhou",
"Xiang Li",
"Lifang He",
"Brian D. Davison",
"Quanzheng Li",
"Yong Chen",
"Hongfang Liu",
"Lichao Sun"
],
"externalIds": {
"DBLP": "journals/corr/abs-2305-17100",
"DOI": "10.48550/arXiv.2305.17100",
"CorpusId": 271854755
},
"url": "https://www.semanticscholar.org/paper/31a7d8c4a5ab6bab522494b57270249105c8748e",
"referenceCount": 146,
"citationCount": 12,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Multimodal Approaches for Alzheimer's Detection Using Patients' Speech and Transcript",
"abstract": null,
"year": 2023,
"venue": "BI",
"authors": [
"Hongmin Cai",
"Xiaoke Huang",
"Zheng Liu",
"Wenxiong Liao",
"Haixing Dai",
"Zihao Wu",
"Dajiang Zhu",
"Hui Ren",
"Quanzheng Li",
"Tianming Liu",
"Xiang Li"
],
"externalIds": {
"DBLP": "conf/brain/CaiHLLDWZRLLL23",
"DOI": "10.1007/978-3-031-43075-6_34",
"CorpusId": 261894334
},
"url": "https://www.semanticscholar.org/paper/672a2eef4141970617e5e49ad7613de1e5b04163",
"referenceCount": 0,
"citationCount": 4,
"influentialCitationCount": 0,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Embedding Human Brain Function via Transformer",
"abstract": null,
"year": 2022,
"venue": "International Conference on Medical Image Computing and Computer-Assisted Intervention",
"authors": [
"Lin Zhao",
"Zihao Wu",
"Haixing Dai",
"Zheng-Long Liu",
"Tuo Zhang",
"Dajiang Zhu",
"Tianming Liu"
],
"externalIds": {
"DBLP": "conf/miccai/ZhaoWDLZZL22",
"DOI": "10.1007/978-3-031-16431-6_35",
"CorpusId": 252369176
},
"url": "https://www.semanticscholar.org/paper/6b8a412b47c33f131c69ff2476f18e1d658adfc9",
"referenceCount": 0,
"citationCount": 9,
"influentialCitationCount": 0,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Genome annotation",
"abstract": null,
"year": 2013,
"venue": "",
"authors": [
"Qi Sun",
"Nnnnnnnnnnnnnnnnn Accaaacgtacaa"
],
"externalIds": {
"DOI": "10.1016/s0981-9428(01)01242-6",
"CorpusId": 4967574
},
"url": "https://www.semanticscholar.org/paper/b54c4d6bbdee0ad47bafa74e41d4053cf6f42219",
"referenceCount": 88,
"citationCount": 63,
"influentialCitationCount": 7,
"isOpenAccess": true,
"fieldsOfStudy": null
},
{
"title": "National Institute on Aging–Alzheimer’s Association guidelines for the neuropathologic assessment of Alzheimer’s disease: a practical approach",
"abstract": null,
"year": 2011,
"venue": "Acta Neuropathologica",
"authors": [
"T. Montine",
"C. Phelps",
"T. Beach",
"E. Bigio",
"N. Cairns",
"D. Dickson",
"C. Duyckaerts",
"M. Frosch",
"E. Masliah",
"S. Mirra",
"P. Nelson",
"J. Schneider",
"D. Thal",
"J. Trojanowski",
"H. Vinters",
"B. Hyman"
],
"externalIds": {
"MAG": "2128228199",
"DOI": "10.1007/s00401-011-0910-3",
"CorpusId": 3073661,
"PubMed": "22101365"
},
"url": "https://www.semanticscholar.org/paper/7c6b7f2a5f319db0fd34261f77e81b7a8472b523",
"referenceCount": 127,
"citationCount": 2160,
"influentialCitationCount": 67,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "Meta-analysis methods.",
"abstract": null,
"year": 2008,
"venue": "Advances in Genetics",
"authors": [
"T. Trikalinos",
"G. Salanti",
"E. Zintzaras",
"J. Ioannidis"
],
"externalIds": {
"MAG": "1556717463",
"DOI": "10.1016/S0065-2660(07)00413-0",
"CorpusId": 12241454,
"PubMed": "18358326"
},
"url": "https://www.semanticscholar.org/paper/991129f5478e0e313df1f6121201609b6ef7d9a9",
"referenceCount": 116,
"citationCount": 137,
"influentialCitationCount": 4,
"isOpenAccess": false,
"fieldsOfStudy": [
"Biology",
"Medicine"
]
},
{
"title": "アルツハイマー病の早期診断に向けて-米国 Alzheimer's Disease Neuroimaging Initiative の取り組み",
"abstract": null,
"year": 2006,
"venue": "",
"authors": [
"岩坪威"
],
"externalIds": {
"MAG": "2398224616",
"CorpusId": 77153623
},
"url": "https://www.semanticscholar.org/paper/952aa9768e4cb67dd8ea5104b22745b340aa4bc1",
"referenceCount": 0,
"citationCount": 1577,
"influentialCitationCount": 133,
"isOpenAccess": false,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "Pathological and clinical heterogeneity of presenilin 1 gene mutations.",
"abstract": "The presenilins are two closely related genes which implication in familial Alzheimer's disease (FAD) is well known. Presenilin 1 gene (PS1) mutations cause heterogeneous disorders and a bibliographical review of atypical PS1-FAD cases allows us to describe a great diversity of neuropathological and clinical variations and conclude that most of them do not strongly depend on the genetic location of the mutation so other genetic or epigenetic factors may be involved.",
"year": 2004,
"venue": "Journal of Alzheimer's Disease",
"authors": [
"M. Menéndez"
],
"externalIds": {
"MAG": "239184292",
"DOI": "10.3233/JAD-2004-6503",
"CorpusId": 11225359,
"PubMed": "15505368"
},
"url": "https://www.semanticscholar.org/paper/0a062fb5f925a898f0c4cedbc99bf5f53269b0a3",
"referenceCount": 26,
"citationCount": 37,
"influentialCitationCount": 0,
"isOpenAccess": false,
"fieldsOfStudy": [
"Medicine",
"Psychology"
]
},
{
"title": "The gene ontology knowledge-base in 2023",
"abstract": null,
"year": null,
"venue": "Genetics",
"authors": [],
"externalIds": null,
"url": null,
"referenceCount": null,
"citationCount": null,
"influentialCitationCount": null,
"isOpenAccess": null,
"fieldsOfStudy": null
},
{
"title": "Stanford CRFM — crfm",
"abstract": null,
"year": null,
"venue": "",
"authors": [],
"externalIds": null,
"url": null,
"referenceCount": null,
"citationCount": null,
"influentialCitationCount": null,
"isOpenAccess": null,
"fieldsOfStudy": null
},
{
"title": "Falcon-40b: an open large language model with state-of-the-art performance",
"abstract": null,
"year": null,
"venue": "falconllm. tii. ae",
"authors": [],
"externalIds": null,
"url": null,
"referenceCount": null,
"citationCount": null,
"influentialCitationCount": null,
"isOpenAccess": null,
"fieldsOfStudy": null
},
{
"title": "National Center for Biotechnology Information Bethesda (MD): National Library of Medicine (US)",
"abstract": null,
"year": null,
"venue": "Gene",
"authors": [],
"externalIds": null,
"url": null,
"referenceCount": null,
"citationCount": null,
"influentialCitationCount": null,
"isOpenAccess": null,
"fieldsOfStudy": null
}
]
},
"Assessing the Utility of Large Language Models for Phenotype-Driven Gene Prioritization in Rare Genetic Disorder Diagnosis.": {
"paper_title": "Assessing the Utility of Large Language Models for Phenotype-Driven Gene Prioritization in Rare Genetic Disorder Diagnosis.",
"arxiv_id": "2403.14801",
"authors": [
"Junyoung Kim",
"Jingye Yang",
"Kai Wang",
"Chunhua Weng",
"Cong Liu"
],
"year": 2024,
"venue": "arXiv.org",
"abstract": "Phenotype-driven gene prioritization is a critical process in the diagnosis of rare genetic disorders for identifying and ranking potential disease-causing genes based on observed physical traits or phenotypes. While traditional approaches rely on curated knowledge graphs with phenotype-gene relations, recent advancements in large language models have opened doors to the potential of AI predictions through extensive training on diverse corpora and complex models. This study conducted a comprehensive evaluation of five large language models, including two Generative Pre-trained Transformers series, and three Llama2 series, assessing their performance across three key metrics: task completeness, gene prediction accuracy, and adherence to required output structures. Various experiments explored combinations of models, prompts, input types, and task difficulty levels. Our findings reveal that even the best-performing LLM, GPT-4, achieved an accuracy of 16.0%, which still lags behind traditional bioinformatics tools. Prediction accuracy increased with the parameter/model size. A similar increasing trend was observed for the task completion rate, with complicated prompts more likely to increase task completeness in models smaller than GPT-4. However, complicated prompts are more likely to decrease the structure compliance rate, but no prompt effects on GPT-4. Compared to HPO term-based input, LLM was also able to achieve better than random prediction accuracy by taking free-text input, but slightly lower than with the HPO input. Bias analysis showed that certain genes, such as MECP2, CDKL5, and SCN1A, are more likely to be top-ranked, potentially explaining the variances observed across different datasets. This study provides valuable insights into the integration of LLMs within genomic analysis, contributing to the ongoing discussion on the utilization of advanced LLMs in clinical workflows.",
"references": [
{
"title": "A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT",
"abstract": "Prompt engineering is an increasingly important skill set needed to converse effectively with large language models (LLMs), such as ChatGPT. Prompts are instructions given to an LLM to enforce rules, automate processes, and ensure specific qualities (and quantities) of generated output. Prompts are also a form of programming that can customize the outputs and interactions with an LLM. This paper describes a catalog of prompt engineering techniques presented in pattern form that have been applied to solve common problems when conversing with LLMs. Prompt patterns are a knowledge transfer method analogous to software patterns since they provide reusable solutions to common problems faced in a particular context, i.e., output generation and interaction when working with LLMs. This paper provides the following contributions to research on prompt engineering that apply LLMs to automate software development tasks. First, it provides a framework for documenting patterns for structuring prompts to solve a range of problems so that they can be adapted to different domains. Second, it presents a catalog of patterns that have been applied successfully to improve the outputs of LLM conversations. Third, it explains how prompts can be built from multiple patterns and illustrates prompt patterns that benefit from combination with other prompt patterns.",
"year": 2023,
"venue": "arXiv.org",
"authors": [
"Jules White",
"Quchen Fu",
"Sam Hays",
"Michael Sandborn",
"Carlos Olea",
"Henry Gilbert",
"Ashraf Elnashar",
"Jesse Spencer-Smith",
"Douglas C. Schmidt"
],
"externalIds": {
"DBLP": "journals/corr/abs-2302-11382",
"ArXiv": "2302.11382",
"DOI": "10.48550/arXiv.2302.11382",
"CorpusId": 257079092
},
"url": "https://www.semanticscholar.org/paper/08b85bce712168998004ee80ce4e475390413c74",
"referenceCount": 37,
"citationCount": 676,
"influentialCitationCount": 48,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Zero-Shot Information Extraction via Chatting with ChatGPT",
"abstract": "Zero-shot information extraction (IE) aims to build IE systems from the unannotated text. It is challenging due to involving little human intervention. Challenging but worthwhile, zero-shot IE reduces the time and effort that data labeling takes. Recent efforts on large language models (LLMs, e.g., GPT-3, ChatGPT) show promising performance on zero-shot settings, thus inspiring us to explore prompt-based methods. In this work, we ask whether strong IE models can be constructed by directly prompting LLMs. Specifically, we transform the zero-shot IE task into a multi-turn question-answering problem with a two-stage framework (ChatIE). With the power of ChatGPT, we extensively evaluate our framework on three IE tasks: entity-relation triple extract, named entity recognition, and event extraction. Empirical results on six datasets across two languages show that ChatIE achieves impressive performance and even surpasses some full-shot models on several datasets (e.g., NYT11-HRL). We believe that our work could shed light on building IE models with limited resources.",
"year": 2023,
"venue": "arXiv.org",
"authors": [
"Xiang Wei",
"Xingyu Cui",
"Ning Cheng",
"Xiaobin Wang",
"Xin Zhang",
"Shen Huang",
"Pengjun Xie",
"Jinan Xu",
"Yufeng Chen",
"Meishan Zhang",
"Yong Jiang",
"Wenjuan Han"
],
"externalIds": {
"ArXiv": "2302.10205",
"DBLP": "journals/corr/abs-2302-10205",
"DOI": "10.48550/arXiv.2302.10205",
"CorpusId": 257050669
},
"url": "https://www.semanticscholar.org/paper/f4cba0db34aa0c389cec267ca1f3ba5255ea2645",
"referenceCount": 43,
"citationCount": 237,
"influentialCitationCount": 19,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
}
]
},
"Thought Graph: Generating Thought Process for Biological Reasoning": {
"paper_title": "Thought Graph: Generating Thought Process for Biological Reasoning",
"arxiv_id": "2403.07144",
"authors": [
"Chi-Yang Hsu",
"K. Cox",
"Jiawei Xu",
"Zhen Tan",
"Tianhua Zhai",
"Mengzhou Hu",
"Dexter Pratt",
"Tianlong Chen",
"Ziniu Hu",
"Ying Ding"
],
"year": 2024,
"venue": "The Web Conference",
"abstract": "We present the Thought Graph as a novel framework to support complex reasoning and use gene set analysis as an example to uncover semantic relationships between biological processes. Our framework stands out for its ability to provide a deeper understanding of gene sets, significantly surpassing GSEA by 40.28% and LLM baselines by 5.38% based on cosine similarity to human annotations. Our analysis further provides insights into future directions of biological processes naming, and implications for bioinformatics and precision medicine.",
"references": [
{
"title": "Can Knowledge Graphs Reduce Hallucinations in LLMs? : A Survey",
"abstract": "The contemporary LLMs are prone to producing hallucinations, stemming mainly from the knowledge gaps within the models. To address this critical limitation, researchers employ diverse strategies to augment the LLMs by incorporating external knowledge, aiming to reduce hallucinations and enhance reasoning accuracy. Among these strategies, leveraging knowledge graphs as a source of external information has demonstrated promising results. In this survey, we comprehensively review these knowledge-graph-based augmentation techniques in LLMs, focusing on their efficacy in mitigating hallucinations. We systematically categorize these methods into three overarching groups, offering methodological comparisons and performance evaluations. Lastly, this survey explores the current trends and challenges associated with these techniques and outlines potential avenues for future research in this emerging field.",
"year": 2023,
"venue": "North American Chapter of the Association for Computational Linguistics",
"authors": [
"Garima Agrawal",
"Tharindu Kumarage",
"Zeyad Alghami",
"Huanmin Liu"
],
"externalIds": {
"ACL": "2024.naacl-long.219",
"DBLP": "journals/corr/abs-2311-07914",
"ArXiv": "2311.07914",
"DOI": "10.48550/arXiv.2311.07914",
"CorpusId": 265158225
},
"url": "https://www.semanticscholar.org/paper/ec67d5f0e236f23c6b48b926f9e25db52194dd71",
"referenceCount": 128,
"citationCount": 33,
"influentialCitationCount": 0,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Evaluation of large language models for discovery of gene set function",
"abstract": "Gene set analysis is a mainstay of functional genomics, but it relies on curated databases of gene functions that are incomplete. Here we evaluate five Large Language Models (LLMs) for their ability to discover the common biological functions represented by a gene set, substantiated by supporting rationale, citations and a confidence assessment. Benchmarking against canonical gene sets from the Gene Ontology, GPT-4 confidently recovered the curated name or a more general concept (73% of cases), while benchmarking against random gene sets correctly yielded zero confidence. Gemini-Pro and Mixtral-Instruct showed ability in naming but were falsely confident for random sets, whereas Llama2–70b had poor performance overall. In gene sets derived from ‘omics data, GPT-4 identified novel functions not reported by classical functional enrichment (32% of cases), which independent review indicated were largely verifiable and not hallucinations. The ability to rapidly synthesize common gene functions positions LLMs as valuable ‘omics assistants.",
"year": 2023,
"venue": "Research Square",
"authors": [
"Mengzhou Hu",
"Sahar Alkhairy",
"Ingoo Lee",
"Rudolf T. Pillich",
"Robin Bachelder",
"T. Ideker",
"Dexter Pratt"
],
"externalIds": {
"DBLP": "journals/corr/abs-2309-04019",
"PubMedCentral": "10508824",
"ArXiv": "2309.04019",
"DOI": "10.21203/rs.3.rs-3270331/v1",
"CorpusId": 263606100,
"PubMed": "37790547"
},
"url": "https://www.semanticscholar.org/paper/e10eeddc2672909c2ae4ec98db23e9eb446b4a22",
"referenceCount": 102,
"citationCount": 10,
"influentialCitationCount": 1,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine",
"Biology",
"Computer Science"
]
},
{
"title": "Graph of Thoughts: Solving Elaborate Problems with Large Language Models",
"abstract": "We introduce Graph of Thoughts (GoT): a framework that\nadvances prompting capabilities in large language models\n(LLMs) beyond those offered by paradigms such as \nChain-of-Thought or Tree of Thoughts (ToT). The key idea and \nprimary advantage of GoT is the ability to model the information \ngenerated by an LLM as an arbitrary graph, where units of \ninformation (\"LLM thoughts\") are vertices, and edges correspond\nto dependencies between these vertices. This approach enables \ncombining arbitrary LLM thoughts into synergistic outcomes, \ndistilling the essence of whole networks of thoughts,\nor enhancing thoughts using feedback loops. We illustrate\nthat GoT offers advantages over state of the art on different\ntasks, for example increasing the quality of sorting by 62%\nover ToT, while simultaneously reducing costs by >31%.\nWe ensure that GoT is extensible with new thought \ntransformations and thus can be used to spearhead new prompting\nschemes. This work brings the LLM reasoning closer to human \nthinking or brain mechanisms such as recurrence, both\nof which form complex networks",
"year": 2023,
"venue": "AAAI Conference on Artificial Intelligence",
"authors": [
"Maciej Besta",
"Nils Blach",
"Aleš Kubíček",
"Robert Gerstenberger",
"Lukas Gianinazzi",
"Joanna Gajda",
"Tomasz Lehmann",
"Michal Podstawski",
"H. Niewiadomski",
"P. Nyczyk",
"Torsten Hoefler"
],
"externalIds": {
"DBLP": "conf/aaai/BestaBKGPGGLNNH24",
"ArXiv": "2308.09687",
"DOI": "10.1609/aaai.v38i16.29720",
"CorpusId": 261030303
},
"url": "https://www.semanticscholar.org/paper/aade40af0d85b0b4fe15c97f6222d5c2e4d6d9b3",
"referenceCount": 88,
"citationCount": 313,
"influentialCitationCount": 16,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Building a knowledge graph to enable precision medicine",
"abstract": null,
"year": 2022,
"venue": "bioRxiv",
"authors": [
"P. Chandak",
"Kexin Huang",
"M. Zitnik"
],
"externalIds": {
"PubMedCentral": "9893183",
"DOI": "10.1038/s41597-023-01960-3",
"CorpusId": 248518473,
"PubMed": "36732524"
},
"url": "https://www.semanticscholar.org/paper/5cc58bcfb9bf39d4114eab88fca36eb0ce36afd9",
"referenceCount": 126,
"citationCount": 168,
"influentialCitationCount": 12,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine",
"Biology"
]
},
{
"title": "Self-Alignment Pretraining for Biomedical Entity Representations",
"abstract": "Despite the widespread success of self-supervised learning via masked language models (MLM), accurately capturing fine-grained semantic relationships in the biomedical domain remains a challenge. This is of paramount importance for entity-level tasks such as entity linking where the ability to model entity relations (especially synonymy) is pivotal. To address this challenge, we propose SapBERT, a pretraining scheme that self-aligns the representation space of biomedical entities. We design a scalable metric learning framework that can leverage UMLS, a massive collection of biomedical ontologies with 4M+ concepts. In contrast with previous pipeline-based hybrid systems, SapBERT offers an elegant one-model-for-all solution to the problem of medical entity linking (MEL), achieving a new state-of-the-art (SOTA) on six MEL benchmarking datasets. In the scientific domain, we achieve SOTA even without task-specific supervision. With substantial improvement over various domain-specific pretrained MLMs such as BioBERT, SciBERTand and PubMedBERT, our pretraining scheme proves to be both effective and robust.",
"year": 2020,
"venue": "North American Chapter of the Association for Computational Linguistics",
"authors": [
"Fangyu Liu",
"Ehsan Shareghi",
"Zaiqiao Meng",
"Marco Basaldella",
"Nigel Collier"
],
"externalIds": {
"ACL": "2021.naacl-main.334",
"MAG": "3094258447",
"ArXiv": "2010.11784",
"DBLP": "journals/corr/abs-2010-11784",
"DOI": "10.18653/V1/2021.NAACL-MAIN.334",
"CorpusId": 225039747
},
"url": "https://www.semanticscholar.org/paper/615204452304331004532c5800399ef55d58b4c7",
"referenceCount": 58,
"citationCount": 260,
"influentialCitationCount": 77,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "From the Cover: Gene set enrichment analysis: A knowledge-based approach for interpreting genome-wide expression profiles",
"abstract": "Although genomewide RNA expression analysis has become a routine tool in biomedical research, extracting biological insight from such information remains a major challenge. Here, we describe a powerful analytical method called Gene Set Enrichment Analysis (GSEA) for interpreting gene expression data. The method derives its power by focusing on gene sets, that is, groups of genes that share common biological function, chromosomal location, or regulation. We demonstrate how GSEA yields insights into several cancer-related data sets, including leukemia and lung cancer. Notably, where single-gene analysis finds little similarity between two independent studies of patient survival in lung cancer, GSEA reveals many biological pathways in common. The GSEA method is embodied in a freely available software package, together with an initial database of 1,325 biologically defined gene sets.",
"year": 2005,
"venue": "",
"authors": [
"A. Subramanian",
"P. Tamayo",
"V. Mootha",
"Sayan Mukherjee",
"B. Ebert",
"Michael A. Gillette",
"A. Paulovich",
"S. Pomeroy",
"T. Golub",
"E. Lander",
"J. Mesirov"
],
"externalIds": {
"MAG": "2782512734",
"DOI": "10.1073/PNAS.0506580102",
"CorpusId": 2374637
},
"url": "https://www.semanticscholar.org/paper/a7c1642509397ccebaa98bf5ec67b8b6ba219cc8",
"referenceCount": 0,
"citationCount": 38187,
"influentialCitationCount": 4784,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Gene Ontology: tool for the unification of biology",
"abstract": null,
"year": 2000,
"venue": "Nature Genetics",
"authors": [
"M. Ashburner",
"C. Ball",
"J. Blake",
"D. Botstein",
"Heather L. Butler",
"J. Cherry",
"A. P. Davis",
"K. Dolinski",
"S. Dwight",
"J. Eppig",
"M. Harris",
"D. Hill",
"L. Issel-Tarver",
"A. Kasarskis",
"S. Lewis",
"J. Matese",
"J. Richardson",
"M. Ringwald",
"G. Rubin"
],
"externalIds": {
"MAG": "2103017472",
"DOI": "10.1038/75556",
"CorpusId": 10718909,
"PubMed": "10802651"
},
"url": "https://www.semanticscholar.org/paper/a411f6a0e6473137ac1a538f7cee65722fa3584f",
"referenceCount": 35,
"citationCount": 37578,
"influentialCitationCount": 1751,
"isOpenAccess": true,
"fieldsOfStudy": [
"Biology",
"Medicine"
]
},
{
"title": "2023. MindMap: Knowledge Graph Prompting Sparks Graph of Thoughts in Large Language Models",
"abstract": null,
"year": null,
"venue": "",
"authors": [],
"externalIds": null,
"url": null,
"referenceCount": null,
"citationCount": null,
"influentialCitationCount": null,
"isOpenAccess": null,
"fieldsOfStudy": null
},
{
"title": "2023. Self-ConsistencyImprovesChain of Thought Reasoning in Language Models",
"abstract": null,
"year": null,
"venue": "arXiv",
"authors": [],
"externalIds": null,
"url": null,
"referenceCount": null,
"citationCount": null,
"influentialCitationCount": null,
"isOpenAccess": null,
"fieldsOfStudy": null
},
{
"title": "2023.Chain-of-ThoughtPromptingElicitsReasoninginLargeLanguageModels",
"abstract": null,
"year": null,
"venue": "",
"authors": [],
"externalIds": null,
"url": null,
"referenceCount": null,
"citationCount": null,
"influentialCitationCount": null,
"isOpenAccess": null,
"fieldsOfStudy": null
}
]
},
"Large language models in bioinformatics: applications and perspectives": {
"paper_title": "Large language models in bioinformatics: applications and perspectives",
"arxiv_id": "2401.04155",
"authors": [
"Jiajia Liu",
"Mengyuan Yang",
"Yankai Yu",
"Haixia Xu",
"Kang Li",
"Xiaobo Zhou"
],
"year": 2024,
"venue": "arXiv.org",
"abstract": "Large language models (LLMs) are a class of artificial intelligence models based on deep learning, which have great performance in various tasks, especially in natural language processing (NLP). Large language models typically consist of artificial neural networks with numerous parameters, trained on large amounts of unlabeled input using self-supervised or semi-supervised learning. However, their potential for solving bioinformatics problems may even exceed their proficiency in modeling human language. In this review, we will present a summary of the prominent large language models used in natural language processing, such as BERT and GPT, and focus on exploring the applications of large language models at different omics levels in bioinformatics, mainly including applications of large language models in genomics, transcriptomics, proteomics, drug discovery and single cell analysis. Finally, this review summarizes the potential and prospects of large language models in solving bioinformatic problems.",
"references": []
},
"Northeast Materials Database (NEMAD): Enabling Discovery of High Transition Temperature Magnetic Compounds": {
"paper_title": "Northeast Materials Database (NEMAD): Enabling Discovery of High Transition Temperature Magnetic Compounds",
"arxiv_id": "2409.15675",
"authors": [
"Suman Itani",
"Yibo Zhang",
"Jiadong Zang"
],
"year": 2024,
"venue": "",
"abstract": "The discovery of novel magnetic materials with greater operating temperature ranges and optimized performance is essential for advanced applications. Current data-driven approaches are challenging and limited due to the lack of accurate, comprehensive, and feature-rich databases. This study aims to address this challenge by introducing a new approach that uses Large Language Models (LLMs) to create a comprehensive, experiment-based, magnetic materials database named the Northeast Materials Database (NEMAD), which consists of 26,706 magnetic materials (www.nemad.org). The database incorporates chemical composition, magnetic phase transition temperatures, structural details, and magnetic properties. Enabled by NEMAD, machine learning models were developed to classify materials and predict transition temperatures. Our classification model achieved an accuracy of 90% in categorizing materials as ferromagnetic (FM), antiferromagnetic (AFM), and non-magnetic (NM). The regression models predict Curie (N\\'eel) temperature with a coefficient of determination (R2) of 0.86 (0.85) and a mean absolute error (MAE) of 62K (32K). These models identified 62 (19) FM (AFM) candidates with a predicted Curie (N\\'eel) temperature above 500K (100K) from the Materials Project. This work shows the feasibility of combining LLMs for automated data extraction and machine learning models in accelerating the discovery of magnetic materials.",
"references": [
{
"title": "Enhancing magnetocaloric material discovery: A machine learning approach using an autogenerated database by large language models",
"abstract": "Magnetic cooling based on the magnetocaloric effect is a promising solid-state refrigeration technology for a wide range of applications in different temperature ranges. Previous studies have mostly focused on near room temperature (300 K) and cryogenic temperature (<10 K) ranges, while important applications such as hydrogen liquefaction call for efficient magnetic refrigerants for the intermediate temperature range of 10–100 K. For efficient use in this range, new magnetocaloric materials with matching Curie temperatures need to be discovered, while conventional experimental approaches are typically time-consuming and expensive. Here, we report a computational material discovery pipeline based on a materials database containing more than 6000 entries auto-generated by extracting reported material properties from the literature using a large language model. We then use this database to train a machine learning model that can efficiently predict the magnetocaloric properties of materials based on their chemical composition. We further verify the magnetocaloric properties of the predicted compounds using ab initio atomistic spin dynamics simulations to complete the computational material discovery. Using this approach, we identify 11 new promising magnetocaloric materials for the target temperature range. Our work demonstrates the potential of combining large language models, machine learning, and ab initio simulations to efficiently discover new functional materials.",
"year": 2024,
"venue": "AIP Advances",
"authors": [
"Jiaoyue Yuan",
"Runqing Yang",
"L. Patra",
"Bolin Liao"
],
"externalIds": {
"ArXiv": "2403.02553",
"DOI": "10.1063/5.0206855",
"CorpusId": 268247901
},
"url": "https://www.semanticscholar.org/paper/2ebe931a101bfbabc91f08a1f1b518defed83c2e",
"referenceCount": 67,
"citationCount": 1,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Physics"
]
},
{
"title": "Structured information extraction from scientific text with large language models",
"abstract": null,
"year": 2024,
"venue": "Nature Communications",
"authors": [
"John Dagdelen",
"Alex Dunn",
"Sanghoon Lee",
"Nicholas Walker",
"Andrew S. Rosen",
"G. Ceder",
"Kristin A. Persson",
"Anubhav Jain"
],
"externalIds": {
"PubMedCentral": "10869356",
"DOI": "10.1038/s41467-024-45563-x",
"CorpusId": 267700596,
"PubMed": "38360817"
},
"url": "https://www.semanticscholar.org/paper/da54bda4ef0ba0f013ac639fb6a3dea1e4745ad4",
"referenceCount": 64,
"citationCount": 43,
"influentialCitationCount": 1,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "GPTArticleExtractor: An automated workflow for magnetic material database construction",
"abstract": null,
"year": 2024,
"venue": "Journal of Magnetism and Magnetic Materials",
"authors": [
"Yibo Zhang",
"Suman Itani",
"Kamal Khanal",
"Emmanuel Okyere",
"Gavin Smith",
"Koichiro Takahashi",
"Jiadong Zang"
],
"externalIds": {
"ArXiv": "2401.05875",
"DOI": "10.1016/j.jmmm.2024.172001",
"CorpusId": 266933500
},
"url": "https://www.semanticscholar.org/paper/6ee676bd807f16ea02849e51c2ee70ea43095706",
"referenceCount": 38,
"citationCount": 1,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Physics"
]
},
{
"title": "Unleashing the Power of Artificial Intelligence in Materials Design",
"abstract": "The integration of artificial intelligence (AI) algorithms in materials design is revolutionizing the field of materials engineering thanks to their power to predict material properties, design de novo materials with enhanced features, and discover new mechanisms beyond intuition. In addition, they can be used to infer complex design principles and identify high-quality candidates more rapidly than trial-and-error experimentation. From this perspective, herein we describe how these tools can enable the acceleration and enrichment of each stage of the discovery cycle of novel materials with optimized properties. We begin by outlining the state-of-the-art AI models in materials design, including machine learning (ML), deep learning, and materials informatics tools. These methodologies enable the extraction of meaningful information from vast amounts of data, enabling researchers to uncover complex correlations and patterns within material properties, structures, and compositions. Next, a comprehensive overview of AI-driven materials design is provided and its potential future prospects are highlighted. By leveraging such AI algorithms, researchers can efficiently search and analyze databases containing a wide range of material properties, enabling the identification of promising candidates for specific applications. This capability has profound implications across various industries, from drug development to energy storage, where materials performance is crucial. Ultimately, AI-based approaches are poised to revolutionize our understanding and design of materials, ushering in a new era of accelerated innovation and advancement.",
"year": 2023,
"venue": "Materials",
"authors": [
"Silvia Badini",
"S. Regondi",
"R. Pugliese"
],
"externalIds": {
"PubMedCentral": "10488647",
"DOI": "10.3390/ma16175927",
"CorpusId": 261411411,
"PubMed": "37687620"
},
"url": "https://www.semanticscholar.org/paper/d4d84bc4e0f633d47c9740c8a3cf13014ed7a23d",
"referenceCount": 143,
"citationCount": 21,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "Physics-Informed Machine-Learning Prediction of Curie Temperatures and Its Promise for Guiding the Discovery of Functional Magnetic Materials",
"abstract": null,
"year": 2023,
"venue": "Chemistry of Materials",
"authors": [
"Prashant Singh",
"Tyler J. Del Rose",
"A. Palasyuk",
"Y. Mudryk"
],
"externalIds": {
"DOI": "10.1021/acs.chemmater.3c00892",
"CorpusId": 260602946
},
"url": "https://www.semanticscholar.org/paper/fea0b4e71d73df01839f7c03ef7c5b83d71bded2",
"referenceCount": 38,
"citationCount": 9,
"influentialCitationCount": 0,
"isOpenAccess": false,
"fieldsOfStudy": null
},
{
"title": "Machine learning predictions of high-Curie-temperature materials",
"abstract": "Technologies that function at room temperature often require magnets with a high Curie temperature, TC, and can be improved with better materials. Discovering magnetic materials with a substantial TC is challenging because of the large number of candidates and the cost of fabricating and testing them. Using the two largest known datasets of experimental Curie temperatures, we develop machine-learning models to make rapid TC predictions solely based on the chemical composition of a material. We train a random-forest model and a k-NN one and predict on an initial dataset of over 2500 materials and then validate the model on a new dataset containing over 3000 entries. The accuracy is compared for multiple compounds' representations (“descriptors”) and regression approaches. A random-forest model provides the most accurate predictions and is not improved by dimensionality reduction or by using more complex descriptors based on atomic properties. A random-forest model trained on a combination of both datasets shows that cobalt-rich and iron-rich materials have the highest Curie temperatures for all binary and ternary compounds. An analysis of the model reveals systematic error that causes the model to over-predict low-TC materials and under-predict high-TC materials. For exhaustive searches to find new high-TC materials, analysis of the learning rate suggests either that much more data is needed or that more efficient descriptors are necessary.",
"year": 2023,
"venue": "Applied Physics Letters",
"authors": [
"Joshua F. Belot",
"V. Taufour",
"S. Sanvito",
"G. Hart"
],
"externalIds": {
"ArXiv": "2307.06879",
"DOI": "10.1063/5.0156377",
"CorpusId": 259847238
},
"url": "https://www.semanticscholar.org/paper/e889fc0dd34b6d63800eec924dfe550ea04d6cde",
"referenceCount": 44,
"citationCount": 6,
"influentialCitationCount": 1,
"isOpenAccess": true,
"fieldsOfStudy": [
"Physics"
]
},
{
"title": "Extracting accurate materials data from research papers with conversational language models and prompt engineering",
"abstract": null,
"year": 2023,
"venue": "Nature Communications",
"authors": [
"M. Polak",
"Dane Morgan"
],
"externalIds": {
"DBLP": "journals/corr/abs-2303-05352",
"PubMedCentral": "10882009",
"ArXiv": "2303.05352",
"DOI": "10.1038/s41467-024-45914-8",
"CorpusId": 257427054,
"PubMed": "38383556"
},
"url": "https://www.semanticscholar.org/paper/edfb5696b5431bd20eb57964c083e9118e153e97",
"referenceCount": 51,
"citationCount": 62,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Physics",
"Medicine"
]
},
{
"title": "Machine learning-based Curie temperature prediction for magnetic 14:2:1 phases",
"abstract": "The TM14RE2B-based phases (TM = transition metal, RE = rare earth metal; hereafter called 14:2:1) enable permanent magnets with outstanding magnetic properties. Novel chemical compositions that represent new 14:2:1 phases necessitate that they do not demagnetize at application-specific operating temperatures. Therefore, an accurate knowledge of the Curie temperature ( T c) is important. For magnetic 14:2:1 phases, we present a machine learning model that predicts T c by using merely chemical compositional features. Hyperparameter tuning on bagging and boosting models, as well as averaging predictions from individual models using the voting regressor, enables a low mean-absolute-error of 16 K on an unseen test set. The training set and a test set have been constructed by randomly splitting, in an 80:20 ratio, of a database that contains 449 phases (270 compositionally unique) mapped with their T c, taken from distinct publications. The model correctly identifies the relative importance of key substitutional elements that influence T c, especially in an Fe base such as Co, Mn, and Al. This paper is expected to serve as a basis for accurate Curie temperature predictions in the sought-after 14:2:1 permanent magnet family, particularly for transition metal substitution of within 20% in an Fe or Co base.",
"year": 2023,
"venue": "AIP Advances",
"authors": [
"A. Choudhary",
"A. Kini",
"Dominic Hohs",
"A. Jansche",
"T. Bernthaler",
"O. Csiszár",
"D. Goll",
"G. Schneider"
],
"externalIds": {
"DOI": "10.1063/5.0116650",
"CorpusId": 257480211
},
"url": "https://www.semanticscholar.org/paper/849962fb97004bf068df0ef0b7901387f7e88736",
"referenceCount": 28,
"citationCount": 3,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": null
},
{
"title": "A rule-free workflow for the automated generation of databases from scientific literature",
"abstract": null,
"year": 2023,
"venue": "npj Computational Materials",
"authors": [
"Luke P J Gilligan",
"M. Cobelli",
"V. Taufour",
"S. Sanvito"
],
"externalIds": {
"PubMedCentral": "11041762",
"ArXiv": "2301.11689",
"DOI": "10.1038/s41524-023-01171-9",
"CorpusId": 256358431,
"PubMed": "38666056"
},
"url": "https://www.semanticscholar.org/paper/1cfe781523b4b3469d822bc52a0721b1c70cee5f",
"referenceCount": 78,
"citationCount": 10,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Physics",
"Medicine"
]
},
{
"title": "Machine Learning Study of the Magnetic Ordering in 2D Materials.",
"abstract": "Magnetic materials have been applied in a large variety of technologies, from data storage to quantum devices. The development of two-dimensional (2D) materials has opened new arenas for magnetic compounds, even when classical theories discourage their examination. Here we propose a machine-learning-based strategy to predict and understand magnetic ordering in 2D materials. This strategy couples the prediction of the existence of magnetism in 2D materials using a random forest and the Shapley additive explanations method with material maps defined by atomic features predicting the magnetic ordering (ferromagnetic or antiferromagnetic). While the random forest model predicts magnetism with an accuracy of 86%, the material maps obtained by the sure independence screening and sparsifying method have an accuracy of ∼90% in predicting the magnetic ordering. Our model indicates that 3d transition metals, halides, and structural clusters with regular transition-metal sublattices have a positive contribution in the total weight deciding the existence of magnetism in 2D compounds. This behavior is associated with the competition between crystal field and exchange splitting. The machine learning model also indicates that the atomic spin orbit coupling (SOC) is a determinant feature for the identification of the patterns separating ferro- from antiferromagnetic order. The proposed strategy is used to identify novel 2D magnetic compounds that, together with the fundamental trends in the chemical and structural space, pave novel routes for experimental exploration.",
"year": 2022,
"venue": "ACS Applied Materials and Interfaces",
"authors": [
"C. M. Acosta",
"Elton Ogoshi",
"J. A. Souza",
"G. Dalpian"
],
"externalIds": {
"ArXiv": "2201.12630",
"DOI": "10.1021/acsami.1c21558",
"CorpusId": 246430809,
"PubMed": "35133125"
},
"url": "https://www.semanticscholar.org/paper/cbc8ffc81564078be67a342998a39b42171d1bf3",
"referenceCount": 86,
"citationCount": 37,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Physics",
"Medicine"
]
},
{
"title": "On-the-fly interpretable machine learning for rapid discovery of two-dimensional ferromagnets with high Curie temperature",
"abstract": null,
"year": 2021,
"venue": "Chem",
"authors": [
"Shuaihua Lu",
"Qionghua Zhou",
"Yilv Guo",
"Jinlan Wang"
],
"externalIds": {
"DOI": "10.1016/j.chempr.2021.11.009",
"CorpusId": 244893363
},
"url": "https://www.semanticscholar.org/paper/32a622126c1cd863520be4af26e532e4762f01de",
"referenceCount": 58,
"citationCount": 40,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": null
},
{
"title": "Magnetic and superconducting phase diagrams and transition temperatures predicted using text mining and machine learning",
"abstract": null,
"year": 2020,
"venue": "npj Computational Materials",
"authors": [
"Callum J Court",
"J. Cole"
],
"externalIds": {
"MAG": "3008287297",
"DOI": "10.1038/s41524-020-0287-8",
"CorpusId": 212681680
},
"url": "https://www.semanticscholar.org/paper/084e843cd1cc0ffc9c90e0db534df03fb5535674",
"referenceCount": 62,
"citationCount": 69,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Engineering"
]
},
{
"title": "Perspective and Prospects for Rare Earth Permanent Magnets",
"abstract": null,
"year": 2020,
"venue": "Engineering",
"authors": [
"J. Coey"
],
"externalIds": {
"MAG": "2951990673",
"DOI": "10.1016/J.ENG.2018.11.034",
"CorpusId": 197447861
},
"url": "https://www.semanticscholar.org/paper/c83f9749d530d2ef092c4062a60f806187fb3dcd",
"referenceCount": 92,
"citationCount": 413,
"influentialCitationCount": 3,
"isOpenAccess": true,
"fieldsOfStudy": [
"Engineering"
]
},
{
"title": "A regression-based model evaluation of the Curie temperature of transition-metal rare-earth compounds",
"abstract": "The Curie temperature (TC) of RT binary compounds consisting of 3d transition-metal (T ) and 4f rare-earth elements (R) is analyzed systematically by a developed machine learning technique called kernel regression-based model evaluation. Twenty-one descriptive variables were designed assuming completely obtained information of the TC. Multiple kernel regression analyses with different kernel types: cosine, linear, Gaussian, polynomial, and Laplacian kernels were implemented and examined. All possible descriptive variable combinations were generated to construct the corresponding prediction models. As a result, by appropriate combinations between descriptive variable sets and kernel formulations, we demonstrate that a number of kernel regression models can accurately reproduce the TC of the RT compounds. The relevance of descriptive variables for predicting TC are systematically investigated. The results indicate that the rare-earth concentration is the most relevant variable in the TC phenomenon. We demonstrate that the regression-based model selection technique can be applied to learn the relationship between the descriptive variables and the actuation mechanism of the corresponding physical phenomenon, i.e., TC in the present case.",
"year": 2019,
"venue": "Journal of Physics: Conference Series",
"authors": [
"Duong-Nguyen Nguyen",
"T. Pham",
"Viet-Cuong Nguyen",
"A. Nguyen",
"H. Kino",
"T. Miyake",
"H. Dam"
],
"externalIds": {
"MAG": "2986015109",
"DOI": "10.1088/1742-6596/1290/1/012009",
"CorpusId": 210257991
},
"url": "https://www.semanticscholar.org/paper/3fbca7421c92ff6106823c6d8081a257733352b2",
"referenceCount": 30,
"citationCount": 13,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Physics",
"Materials Science"
]
},
{
"title": "An accelerating approach of designing ferromagnetic materials via machine learning modeling of magnetic ground state and Curie temperature",
"abstract": "ABSTRACT Magnetic materials have a plethora of applications from information technologies to energy harvesting. However, their functionalities are often limited by the magnetic ordering temperature. In this work, we performed random forest on the magnetic ground state and the Curie temperature (TC ) to classify ferromagnetic and antiferromagnetic compounds and to predict the TC of the ferromagnets. The resulting accuracy is about 87% for classification and 91% for regression. When the trained model is applied to magnetic intermetallic materials in Materials Project, the accuracy is comparable. Our work paves the way to accelerate the discovery of new magnetic compounds for technological applications. GRAPHICAL ABSTRACT",
"year": 2019,
"venue": "Materials Research Letters",
"authors": [
"Teng Long",
"N. Fortunato",
"Yixuan Zhang",
"O. Gutfleisch",
"Hongbin Zhang"
],
"externalIds": {
"MAG": "3119520457",
"ArXiv": "1908.00926",
"DOI": "10.1080/21663831.2020.1863876",
"CorpusId": 199405615
},
"url": "https://www.semanticscholar.org/paper/7703b7e617867ce527a2cce733c7b89d41615c87",
"referenceCount": 47,
"citationCount": 28,
"influentialCitationCount": 1,
"isOpenAccess": true,
"fieldsOfStudy": [
"Materials Science",
"Physics"
]
},
{
"title": "Permanent Magnetism",
"abstract": null,
"year": 2019,
"venue": "",
"authors": [
"R. Skomski",
"J. Coey"
],
"externalIds": {
"DOI": "10.1201/9780203743829",
"CorpusId": 241707820
},
"url": "https://www.semanticscholar.org/paper/6c0edeb9679002302d254204d7ab2e5aa9730db6",
"referenceCount": 0,
"citationCount": 55,
"influentialCitationCount": 3,
"isOpenAccess": false,
"fieldsOfStudy": null
},
{
"title": "Data‐Driven Materials Science: Status, Challenges, and Perspectives",
"abstract": "Data‐driven science is heralded as a new paradigm in materials science. In this field, data is the new resource, and knowledge is extracted from materials datasets that are too big or complex for traditional human reasoning—typically with the intent to discover new or improved materials or materials phenomena. Multiple factors, including the open science movement, national funding, and progress in information technology, have fueled its development. Such related tools as materials databases, machine learning, and high‐throughput methods are now established as parts of the materials research toolset. However, there are a variety of challenges that impede progress in data‐driven materials science: data veracity, integration of experimental and computational data, data longevity, standardization, and the gap between industrial interests and academic efforts. In this perspective article, the historical development and current state of data‐driven materials science, building from the early evolution of open science to the rapid expansion of materials data infrastructures are discussed. Key successes and challenges so far are also reviewed, providing a perspective on the future development of the field.",
"year": 2019,
"venue": "Advancement of science",
"authors": [
"Lauri Himanen",
"A. Geurts",
"A. Foster",
"P. Rinke"
],
"externalIds": {
"MAG": "2972219719",
"PubMedCentral": "6839624",
"ArXiv": "1907.05644",
"DOI": "10.1002/advs.201900808",
"CorpusId": 196470911,
"PubMed": "31728276"
},
"url": "https://www.semanticscholar.org/paper/abc0a9eb3ae901ece2f532f504c336fbb6ba81ca",
"referenceCount": 232,
"citationCount": 455,
"influentialCitationCount": 2,
"isOpenAccess": true,
"fieldsOfStudy": [
"Physics",
"Medicine"
]
},
{
"title": "Predicting the Curie temperature of ferromagnets using machine learning",
"abstract": "The magnetic properties of a material are determined by a subtle balance between the various interactions at play, a fact that makes the design of new magnets a daunting task. High-throughput electronic structure theory may help to explore the vast chemical space available and offers a design tool to the experimental synthesis. This method efficiently predicts the elementary magnetic properties of a compound and its thermodynamical stability, but it is blind to information concerning the magnetic critical temperature. Here we introduce a range of machine-learning models to predict the Curie temperature, $T_\\mathrm{C}$, of ferromagnets. The models are constructed by using experimental data for about 2,500 known magnets and consider the chemical composition of a compound as the only feature determining $T_\\mathrm{C}$. Thus, we are able to establish a one-to-one relation between the chemical composition and the critical temperature. We show that the best model can predict $T_\\mathrm{C}$'s with an accuracy of about 50K. Most importantly our model is able to extrapolate the predictions to regions of the chemical space, where only a little fraction of the data was considered for training. This is demonstrated by tracing the $T_\\mathrm{C}$ of binary intermetallic alloys along their composition space and for the Al-Co-Fe ternary system.",
"year": 2019,
"venue": "PHYSICAL REVIEW MATERIALS",
"authors": [
"J. Nelson",
"S. Sanvito"
],
"externalIds": {
"MAG": "2949096148",
"ArXiv": "1906.08534",
"DOI": "10.1103/PhysRevMaterials.3.104405",
"CorpusId": 195218553
},
"url": "https://www.semanticscholar.org/paper/595409f261649220f7eff80c8f788033ed093793",
"referenceCount": 67,
"citationCount": 51,
"influentialCitationCount": 5,
"isOpenAccess": true,
"fieldsOfStudy": [
"Physics",
"Materials Science"
]
},
{
"title": "Heavy rare earth free, free rare earth and rare earth free magnets - vision and reality.",
"abstract": null,
"year": 2018,
"venue": "IEEE International Magnetics Conference",
"authors": [
"Oliver Gutfleisch",
"Konstantin P. Skokov"
],
"externalIds": {
"MAG": "2793190280",
"DOI": "10.1016/J.SCRIPTAMAT.2018.01.032",
"CorpusId": 53081101
},
"url": "https://www.semanticscholar.org/paper/494bb6fd7696ee59ccf146b4ad2f9246a926c575",
"referenceCount": 56,
"citationCount": 158,
"influentialCitationCount": 3,
"isOpenAccess": false,
"fieldsOfStudy": [
"Materials Science"
]
},
{
"title": "Machine learning for molecular and materials science",
"abstract": null,
"year": 2018,
"venue": "Nature",
"authors": [
"K. Butler",
"D. Davies",
"H. Cartwright",
"O. Isayev",
"A. Walsh"
],
"externalIds": {
"MAG": "2884430236",
"DOI": "10.1038/s41586-018-0337-2",
"CorpusId": 50780992,
"PubMed": "30046072"
},
"url": "https://www.semanticscholar.org/paper/c292e473b3825eeb9db03c70b2e1c033aea190d5",
"referenceCount": 119,
"citationCount": 2531,
"influentialCitationCount": 19,
"isOpenAccess": false,
"fieldsOfStudy": [
"Chemistry",
"Materials Science",
"Medicine"
]
},
{
"title": "Data-Driven Materials Investigations: The Next Frontier in Understanding and Predicting Fatigue Behavior",
"abstract": null,
"year": 2018,
"venue": "",
"authors": [
"A. Spear",
"S. Kalidindi",
"B. Meredig",
"A. Kontsos",
"J. Graverend"
],
"externalIds": {
"MAG": "2800381178",
"DOI": "10.1007/S11837-018-2894-0",
"CorpusId": 139743399
},
"url": "https://www.semanticscholar.org/paper/0548f1dbec5aff5f44cd3f9a8b806e4fa96a33c6",
"referenceCount": 36,
"citationCount": 30,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Billion-Scale Similarity Search with GPUs",
"abstract": "Similarity search finds application in database systems handling complex data such as images or videos, which are typically represented by high-dimensional features and require specific indexing structures. This paper tackles the problem of better utilizing GPUs for this task. While GPUs excel at data parallel tasks such as distance computation, prior approaches in this domain are bottlenecked by algorithms that expose less parallelism, such as $k$k-min selection, or make poor use of the memory hierarchy. We propose a novel design for $k$k-selection. We apply it in different similarity search scenarios, by optimizing brute-force, approximate and compressed-domain search based on product quantization. In all these setups, we outperform the state of the art by large margins. Our implementation operates at up to 55 percent of theoretical peak performance, enabling a nearest neighbor implementation that is 8.5 × faster than prior GPU state of the art. It enables the construction of a high accuracy $k$k-NN graph on 95 million images from the Yfcc100M dataset in 35 minutes, and of a graph connecting 1 billion vectors in less than 12 hours on 4 Maxwell Titan X GPUs. We have open-sourced our approach for the sake of comparison and reproducibility.",
"year": 2017,
"venue": "IEEE Transactions on Big Data",
"authors": [
"Jeff Johnson",
"Matthijs Douze",
"H. Jégou"
],
"externalIds": {
"MAG": "2998702515",
"DBLP": "journals/tbd/JohnsonDJ21",
"ArXiv": "1702.08734",
"DOI": "10.1109/tbdata.2019.2921572",
"CorpusId": 926364
},
"url": "https://www.semanticscholar.org/paper/2cbb8de53759e75411bc528518947a3094fbce3a",
"referenceCount": 49,
"citationCount": 3186,
"influentialCitationCount": 381,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "MAGNDATA: towards a database of magnetic structures. I. The commensurate case",
"abstract": "A free web page under the name MAGNDATA, which provides detailed quantitative information on more than 400 published magnetic structures, has been made available at the Bilbao Crystallographic Server (http://www.cryst.ehu.es). It includes both commensurate and incommensurate structures. In the first article in this series, the information available on commensurate magnetic structures was presented [Gallego, Perez-Mato, Elcoro, Tasci, Hanson, Momma, Aroyo & Madariaga (2016). J. Appl. Cryst. 49, 1750–1776]. In this second article, the subset of the database devoted to incommensurate magnetic structures is discussed. These structures are described using magnetic superspace groups, i.e. a direct extension of the non-magnetic superspace groups, which is the standard approach in the description of aperiodic crystals. The use of magnetic superspace symmetry ensures a robust and unambiguous description of both atomic positions and magnetic moments within a common unique formalism. The point-group symmetry of each structure is derived from its magnetic superspace group, and any macroscopic tensor property of interest governed by this point-group symmetry can be retrieved through direct links to other programs of the Bilbao Crystallographic Server. The fact that incommensurate magnetic structures are often reported with ambiguous or incomplete information has made it impossible to include in this collection a good number of the published structures which were initially considered. However, as a proof of concept, the published data of about 30 structures have been re-interpreted and transformed, and together with ten structures where the superspace formalism was directly employed, they form this section of MAGNDATA. The relevant symmetry of most of the structures could be identified with an epikernel or isotropy subgroup of one irreducible representation of the space group of the parent phase, but in some cases several irreducible representations are active. Any entry of the collection can be visualized using the online tools available on the Bilbao server or can be retrieved as a magCIF file, a file format under development by the International Union of Crystallography. These CIF-like files are supported by visualization programs like Jmol and by analysis programs like JANA and ISODISTORT.",
"year": 2016,
"venue": "",
"authors": [
"S. V. Gallego",
"J. Perez-Mato",
"L. Elcoro",
"E. Tasci",
"R. M. Hanson",
"K. Momma",
"M. Aroyo",
"G. Madariaga"
],
"externalIds": {
"MAG": "2527045065",
"DOI": "10.1107/S1600576716012863",
"CorpusId": 102384908
},
"url": "https://www.semanticscholar.org/paper/02099d60623517598e756b8c7b4eb03ec8d131c8",
"referenceCount": 182,
"citationCount": 72,
"influentialCitationCount": 5,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "A General-Purpose Machine Learning Framework for Predicting Properties of Inorganic Materials",
"abstract": null,
"year": 2016,
"venue": "",
"authors": [
"Logan T. Ward",
"Ankit Agrawal",
"A. Choudhary",
"C. Wolverton"
],
"externalIds": {
"ArXiv": "1606.09551",
"MAG": "2464725281",
"DOI": "10.1038/npjcompumats.2016.28",
"CorpusId": 33517367
},
"url": "https://www.semanticscholar.org/paper/a10bc90b3c97a4abe86c73cfb2a8490a9b44373f",
"referenceCount": 68,
"citationCount": 1080,
"influentialCitationCount": 26,
"isOpenAccess": true,
"fieldsOfStudy": [
"Materials Science",
"Physics",
"Computer Science"
]
},
{
"title": "Computational predictions of energy materials using density functional theory",
"abstract": null,
"year": 2016,
"venue": "",
"authors": [
"Anubhav Jain",
"Yongwoo Shin",
"K. Persson"
],
"externalIds": {
"MAG": "2624679515",
"DOI": "10.1038/NATREVMATS.2015.4",
"CorpusId": 53000648
},
"url": "https://www.semanticscholar.org/paper/f8eba72b95a16d88b69447a2eb430e73776e6c5d",
"referenceCount": 149,
"citationCount": 573,
"influentialCitationCount": 2,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Criteria for Predicting the Formation of Single-Phase High-Entropy Alloys",
"abstract": "High entropy alloys constitute a new class of materials whose very existence poses fundamental questions. Originally thought to be stabilized by the large entropy of mixing, these alloys have attracted attention due to their potential applications, yet no model capable of robustly predicting which combinations of elements will form a single-phase currently exists. Here we propose a model that, through the use of high-throughput computation of the enthalpies of formation of binary compounds, is able to confirm all known high-entropy alloys while rejecting similar alloys that are known to form multiple phases. Despite the increasing entropy, our model predicts that the number of potential single-phase multicomponent alloys decreases with an increasing number of components: out of more than two million possible 7-component alloys considered, fewer than twenty single-phase alloys are likely.",
"year": 2015,
"venue": "",
"authors": [
"M. C. Troparevsky",
"James R. Morris",
"P. Kent",
"A. Lupini",
"G. M. Stocks"
],
"externalIds": {
"MAG": "2333442782",
"DOI": "10.1103/PHYSREVX.5.011041",
"CorpusId": 123823682
},
"url": "https://www.semanticscholar.org/paper/221483526c25da5fc68eb7fd4124b43ef8cfedb4",
"referenceCount": 45,
"citationCount": 315,
"influentialCitationCount": 9,
"isOpenAccess": true,
"fieldsOfStudy": [
"Materials Science"
]
},
{
"title": "Materials Design and Discovery with High-Throughput Density Functional Theory: The Open Quantum Materials Database (OQMD)",
"abstract": null,
"year": 2013,
"venue": "",
"authors": [
"J. Saal",
"S. Kirklin",
"Muratahan Aykol",
"B. Meredig",
"C. Wolverton"
],
"externalIds": {
"MAG": "1976492731",
"DOI": "10.1007/S11837-013-0755-4",
"CorpusId": 135497159
},
"url": "https://www.semanticscholar.org/paper/789ae8da65d2b26be12a8a2dcba51073cea41182",
"referenceCount": 97,
"citationCount": 1639,
"influentialCitationCount": 36,
"isOpenAccess": false,
"fieldsOfStudy": [
"Materials Science"
]
},
{
"title": "Commentary: The Materials Project: A materials genome approach to accelerating materials innovation",
"abstract": "Accelerating the discovery of advanced materials is essential for human welfare and sustainable, clean energy. In this paper, we introduce the Materials Project (www.materialsproject.org), a core program of the Materials Genome Initiative that uses high-throughput computing to uncover the properties of all known inorganic materials. This open dataset can be accessed through multiple channels for both interactive exploration and data mining. The Materials Project also seeks to create open-source platforms for developing robust, sophisticated materials analyses. Future efforts will enable users to perform ‘‘rapid-prototyping’’ of new materials in silico, and provide researchers with new avenues for cost-effective, data-driven materials design. © 2013 Author(s). All article content, except where otherwise noted, is licensed under a Creative Commons Attribution 3.0 Unported License.",
"year": 2013,
"venue": "",
"authors": [
"Anubhav Jain",
"S. Ong",
"G. Hautier",
"Wei Chen",
"W. Richards",
"S. Dacek",
"S. Cholia",
"D. Gunter",
"D. Skinner",
"G. Ceder",
"K. Persson"
],
"externalIds": {
"MAG": "1992985800",
"DOI": "10.1063/1.4812323",
"CorpusId": 94929253
},
"url": "https://www.semanticscholar.org/paper/98476502ec663771d41d2f8c948fd176257f17bd",
"referenceCount": 61,
"citationCount": 7843,
"influentialCitationCount": 231,
"isOpenAccess": true,
"fieldsOfStudy": [
"Materials Science"
]
},
{
"title": "Python Materials Genomics (pymatgen): A robust, open-source python library for materials analysis",
"abstract": null,
"year": 2012,
"venue": "",
"authors": [
"S. Ong",
"W. Richards",
"Anubhav Jain",
"G. Hautier",
"M. Kocher",
"S. Cholia",
"D. Gunter",
"V. Chevrier",
"K. Persson",
"G. Ceder"
],
"externalIds": {
"MAG": "2015197254",
"DOI": "10.1016/J.COMMATSCI.2012.10.028",
"CorpusId": 40344783
},
"url": "https://www.semanticscholar.org/paper/89b3d4a4eddd8ab0498fca7aa5a0a65c7b680c98",
"referenceCount": 55,
"citationCount": 2725,
"influentialCitationCount": 48,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "AFLOW: An automatic framework for high-throughput materials discovery",
"abstract": null,
"year": 2012,
"venue": "",
"authors": [
"S. Curtarolo",
"W. Setyawan",
"G. Hart",
"M. Jahnátek",
"R. Chepulskii",
"Richard H. Taylor",
"Shidong Wang",
"Junkai Xue",
"Kesong Yang",
"O. Levy",
"M. Mehl",
"H. Stokes",
"D. Demchenko",
"D. Morgan"
],
"externalIds": {
"MAG": "2134329894",
"ArXiv": "1308.5715",
"DOI": "10.1016/j.commatsci.2012.02.005",
"CorpusId": 15974265
},
"url": "https://www.semanticscholar.org/paper/021d0c485b602d0a85abcfba69bde334cb8083a8",
"referenceCount": 116,
"citationCount": 1003,
"influentialCitationCount": 13,
"isOpenAccess": true,
"fieldsOfStudy": [
"Materials Science",
"Physics"
]
},
{
"title": "AFLOWLIB.ORG: A distributed materials properties repository from high-throughput ab initio calculations",
"abstract": null,
"year": 2012,
"venue": "",
"authors": [
"S. Curtarolo",
"W. Setyawan",
"Shidong Wang",
"Junkai Xue",
"Kesong Yang",
"Richard H. Taylor",
"G. Hart",
"S. Sanvito",
"M. Nardelli",
"N. Mingo",
"O. Levy"
],
"externalIds": {
"MAG": "2117363206",
"DOI": "10.1016/J.COMMATSCI.2012.02.002",
"CorpusId": 16864443
},
"url": "https://www.semanticscholar.org/paper/00d85607f350cee1840af5765ab022f1b9527657",
"referenceCount": 41,
"citationCount": 850,
"influentialCitationCount": 10,
"isOpenAccess": false,
"fieldsOfStudy": [
"Materials Science"
]
},
{
"title": "Inorganic Materials Database for Exploring the Nature of Material",
"abstract": "An inorganic materials database system, AtomWork, has been developed and released on the Internet. It includes the phase diagram, crystal structure, X-ray powder diffraction, and property data of more than 80,000 inorganic materials extracted from scientific literature. The feature of this database is that the information of the synthesis, identification, and property of materials is organically linked, which enables the data reported in different papers to be grouped and compared at four different levels: chemical system, compound, substance, and material. The database can provide users with a comprehensive overview of substances and necessary information to understand the relationships among chemical component, structure, and property.",
"year": 2011,
"venue": "",
"authors": [
"Yibin Xu",
"M. Yamazaki",
"P. Villars"
],
"externalIds": {
"MAG": "1967256595",
"DOI": "10.1143/JJAP.50.11RH02",
"CorpusId": 11469084
},
"url": "https://www.semanticscholar.org/paper/f8e4dea35d1332b1f1168674548c8f29e242bacb",
"referenceCount": 14,
"citationCount": 141,
"influentialCitationCount": 4,
"isOpenAccess": false,
"fieldsOfStudy": [
"Physics"
]
},
{
"title": "Scikit-learn: Machine Learning in Python",
"abstract": "Scikit-learn is a Python module integrating a wide range of state-of-the-art machine learning algorithms for medium-scale supervised and unsupervised problems. This package focuses on bringing machine learning to non-specialists using a general-purpose high-level language. Emphasis is put on ease of use, performance, documentation, and API consistency. It has minimal dependencies and is distributed under the simplified BSD license, encouraging its use in both academic and commercial settings. Source code, binaries, and documentation can be downloaded from http://scikit-learn.sourceforge.net.",
"year": 2011,
"venue": "Journal of machine learning research",
"authors": [
"Fabian Pedregosa",
"G. Varoquaux",
"Alexandre Gramfort",
"V. Michel",
"B. Thirion",
"O. Grisel",
"Mathieu Blondel",
"Gilles Louppe",
"P. Prettenhofer",
"Ron Weiss",
"Ron J. Weiss",
"J. Vanderplas",
"Alexandre Passos",
"D. Cournapeau",
"M. Brucher",
"M. Perrot",
"E. Duchesnay"
],
"externalIds": {
"MAG": "2950511172",
"DBLP": "journals/corr/abs-1201-0490",
"ArXiv": "1201.0490",
"DOI": "10.5555/1953048.2078195",
"CorpusId": 10659969
},
"url": "https://www.semanticscholar.org/paper/168f28ac3c8c7ea63bf7ed25f2288e8b67e2fe74",
"referenceCount": 18,
"citationCount": 69137,
"influentialCitationCount": 5373,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "A systematic analysis of performance measures for classification tasks",
"abstract": null,
"year": 2009,
"venue": "Information Processing & Management",
"authors": [
"Marina Sokolova",
"G. Lapalme"
],
"externalIds": {
"MAG": "2170505850",
"DBLP": "journals/ipm/SokolovaL09",
"DOI": "10.1016/j.ipm.2009.03.002",
"CorpusId": 14454728
},
"url": "https://www.semanticscholar.org/paper/c11d0ba3e581e691f1bee0022dc0807ff4c428f2",
"referenceCount": 47,
"citationCount": 4492,
"influentialCitationCount": 240,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science",
"Mathematics"
]
},
{
"title": "Insights into Current Limitations of Density Functional Theory",
"abstract": "Density functional theory of electronic structure is widely and successfully applied in simulations throughout engineering and sciences. However, for many predicted properties, there are spectacular failures that can be traced to the delocalization error and static correlation error of commonly used approximations. These errors can be characterized and understood through the perspective of fractional charges and fractional spins introduced recently. Reducing these errors will open new frontiers for applications of density functional theory.",
"year": 2008,
"venue": "Science",
"authors": [
"A. Cohen",
"P. Mori-Sánchez",
"Weitao Yang"
],
"externalIds": {
"MAG": "2084053963",
"DOI": "10.1126/science.1158722",
"CorpusId": 11502274,
"PubMed": "18687952"
},
"url": "https://www.semanticscholar.org/paper/d92fe87c520454e0743524a94eb81456602f9ba8",
"referenceCount": 29,
"citationCount": 1983,
"influentialCitationCount": 23,
"isOpenAccess": false,
"fieldsOfStudy": [
"Chemistry",
"Medicine"
]
},
{
"title": "Computational complexity of interacting electrons and fundamental limitations of density functional theory",
"abstract": null,
"year": 2007,
"venue": "",
"authors": [
"N. Schuch",
"F. Verstraete"
],
"externalIds": {
"ArXiv": "0712.0483",
"MAG": "2153671025",
"DOI": "10.1038/nphys1370",
"CorpusId": 67775117
},
"url": "https://www.semanticscholar.org/paper/e8343473958a66b2e38baa872a3b6a9fa2943eee",
"referenceCount": 19,
"citationCount": 212,
"influentialCitationCount": 12,
"isOpenAccess": true,
"fieldsOfStudy": [
"Physics"
]
},
{
"title": "Exchange interactions, spin waves, and transition temperatures in itinerant magnets",
"abstract": "This contribution reviews an ab initio two-step procedure to determine exchange interactions, spin-wave spectra, and thermodynamic properties of itinerant magnets. In the first step, the self-consistent electronic structure of a system is calculated for a collinear spin structure at zero temperature. In the second step, parameters of an effective classical Heisenberg Hamiltonian are determined using the magnetic force theorem and the one-electron Green functions. The Heisenberg Hamiltonian and methods of statistical physics are employed in subsequent evaluation of magnon dispersion laws, spin-wave stiffness constants, and Curie/Néel temperatures. The applicability of the developed scheme is illustrated by selected properties of various systems such as transition and rare-earth metals, disordered alloys including diluted magnetic semiconductors, ultrathin films, and surfaces. A comparison to other ab initio approaches is presented as well.",
"year": 2006,
"venue": "",
"authors": [
"I. Turek",
"J. Kudrnovský",
"V. Drchal",
"P. Bruno"
],
"externalIds": {
"MAG": "2132755882",
"DOI": "10.1080/14786430500504048",
"CorpusId": 15280187
},
"url": "https://www.semanticscholar.org/paper/df6ffaa7acbba7255b7d6e949b9b8101e4ae3b96",
"referenceCount": 179,
"citationCount": 114,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Physics"
]
},
{
"title": "Spin Symmetry Requirements in Density Functional Theory: The Proper Way to Predict Magnetic Coupling Constants in Molecules and Solids",
"abstract": null,
"year": 2006,
"venue": "",
"authors": [
"F. Illas",
"I. P. R. Moreira",
"J. M. Bofill",
"M. Filatov"
],
"externalIds": {
"MAG": "2108333486",
"DOI": "10.1007/S00214-006-0104-6",
"CorpusId": 52242018
},
"url": "https://www.semanticscholar.org/paper/df142f428109ecdd71d34fe27fa5bb3f3dc3a2f5",
"referenceCount": 93,
"citationCount": 69,
"influentialCitationCount": 0,
"isOpenAccess": false,
"fieldsOfStudy": [
"Chemistry"
]
},
{
"title": "Electronic structure and volume magnetostriction of rare-earth metals and compounds",
"abstract": null,
"year": 2005,
"venue": "",
"authors": [
"I. Turek",
"J. Rusz",
"M. Divis"
],
"externalIds": {
"MAG": "1974466999",
"DOI": "10.1016/J.JMMM.2004.11.260",
"CorpusId": 119422191
},
"url": "https://www.semanticscholar.org/paper/233c72cf4f566a70341af2d7c7f3564fcc9d7d43",
"referenceCount": 28,
"citationCount": 21,
"influentialCitationCount": 0,
"isOpenAccess": false,
"fieldsOfStudy": [
"Materials Science"
]
},
{
"title": "Handbook of Magnetic Materials",
"abstract": null,
"year": 2003,
"venue": "",
"authors": [
"K. Buschow"
],
"externalIds": {
"MAG": "382109670",
"DOI": "10.1016/s1567-2719(09)01808-3",
"CorpusId": 92729076
},
"url": "https://www.semanticscholar.org/paper/f11369fbbf358befec69d0f09dd3debc523d8262",
"referenceCount": 0,
"citationCount": 774,
"influentialCitationCount": 49,
"isOpenAccess": false,
"fieldsOfStudy": [
"Chemistry"
]
},
{
"title": "Magnetic Interactions in Transition Metal Oxides with Orbital Degrees of Freedom",
"abstract": "We review the frustrated magnetic interactions in spin‐orbital models which describe superexchange in transition metal oxides with orbital degeneracy, and analyze the reasons for the symmetry breaking in cubic perovskites. The superexchange in eg systems is dominated by orbital interactions responsible for the orbital ordering, and the A‐type antiferromagnetic ordering follows at lower temperatures. Instead, a generic tendency towards dimerization, found already in the degenerate Hubbard model, occurs in t2g systems. In this case the quantum orbital fluctuations may stabilize orbital liquid states along one directions even in some undoped t2g systems, leading to the C‐type antiferromagnetic order. The orbital liquid in manganites is triggered by doping. The present understanding of the spectroscopic parameters provides reliable information on the magnetic interactions, as shown on the example of magnons in ferromagnetic cubic and bilayer manganites.",
"year": 2002,
"venue": "",
"authors": [
"A. Oleś"
],
"externalIds": {
"MAG": "1499499237",
"ArXiv": "cond-mat/0201455",
"DOI": "10.1063/1.1639588",
"CorpusId": 117240263
},
"url": "https://www.semanticscholar.org/paper/c0c15e6a5bb266d1d4bd9935bceb7708abb4955b",
"referenceCount": 17,
"citationCount": 1,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Physics"
]
},
{
"title": "Molecular field analysis for melt-spun amorphous Fe100−xGdx alloys (18≦X≦60)",
"abstract": null,
"year": 2000,
"venue": "",
"authors": [
"K. Yano"
],
"externalIds": {
"MAG": "2012417338",
"DOI": "10.1016/S0304-8853(99)00517-X",
"CorpusId": 122187436
},
"url": "https://www.semanticscholar.org/paper/f6ee90233b096e17a35f584c0ea7a01503bc3f72",
"referenceCount": 10,
"citationCount": 28,
"influentialCitationCount": 1,
"isOpenAccess": false,
"fieldsOfStudy": [
"Materials Science"
]
},
{
"title": "Mean field analysis of the magnetic properties of amorphous transition-metal--rare-earth alloys",
"abstract": "A generalized form of mean field theory and procedures for applying it to analyze experimental data on the temperature dependence of the saturation magnetization or the effective gyromagnetic factor of n‐component amorphous transition‐metal–rare‐earth alloys are outlined. This analysis yields the spin values of each component and the effective exchange interaction energies between them, from which other magnetic properties such as the subnetwork magnetizations and the macroscopic exchange stiffness can be calculated. Examples of application of this mean field procedure are given.",
"year": 1978,
"venue": "",
"authors": [
"A. Gangulee",
"R. Kobliska"
],
"externalIds": {
"MAG": "1981270584",
"DOI": "10.1063/1.325523",
"CorpusId": 120015804
},
"url": "https://www.semanticscholar.org/paper/92597caa71a1ef4983dae7a832a8c720a36440e9",
"referenceCount": 24,
"citationCount": 45,
"influentialCitationCount": 0,
"isOpenAccess": false,
"fieldsOfStudy": [
"Physics"
]
},
{
"title": "Magnetic properties of amorphous alloy films of Fe with Gd, Tb, Dy, Ho, or Er",
"abstract": "Amorphous rare‐earth RE(Gd, Tb, Dy, Ho, Er) ‐Fe films prepared by cosputtering were studied. The compositional analysis was obtained from known deposition profiles, x‐ray microanalysis, and the stripe‐width measurements. The structural variation with composition change was investigated by electron diffraction and dark‐field microscopy. The Curie temperature Tc, the compensation temperature Tcomp, the coercive force Hc, the uniaxial anisotropy energy Ku and the static domain properties such as the stripe width Ws, the wall energy σw, and the exchange stiffness constant A were investigated. The systematic variation of Tc and Tcomp associated with a variation of composition and RE species could be described by the Heiman et al. model. The static domain properties could be interpreted in terms of the wall energy model and the mean field approximation of the exchange stiffness constant A.",
"year": 1978,
"venue": "",
"authors": [
"Y. Mimura",
"N. Imamura",
"Toshihiko Kobayashi",
"A. Okada",
"Y. Kushiro"
],
"externalIds": {
"MAG": "1981299134",
"DOI": "10.1063/1.325008",
"CorpusId": 119742994
},
"url": "https://www.semanticscholar.org/paper/c5d3698c6be9638fc5e63a6fecf2d8170388a622",
"referenceCount": 26,
"citationCount": 161,
"influentialCitationCount": 1,
"isOpenAccess": false,
"fieldsOfStudy": [
"Physics"
]
},
{
"title": "Bibliography of Magnetic Materials and Tabulation of Magnetic Transition Temperatures",
"abstract": null,
"year": 1972,
"venue": "",
"authors": [
"T. F. Connolly",
"E. Copenhaver"
],
"externalIds": {
"MAG": "1492140285",
"DOI": "10.1007/978-1-4684-1396-0",
"CorpusId": 93382366
},
"url": "https://www.semanticscholar.org/paper/e174e2170bda40d20277234e3076f81f8756ee95",
"referenceCount": 0,
"citationCount": 56,
"influentialCitationCount": 1,
"isOpenAccess": false,
"fieldsOfStudy": [
"Chemistry"
]
},
{
"title": "Self-Consistent Equations Including Exchange and Correlation Effects",
"abstract": "From a theory of Hohenberg and Kohn, approximation methods for treating an inhomogeneous system of interacting electrons are developed. These methods are exact for systems of slowly varying or high density. For the ground state, they lead to self-consistent equations analogous to the Hartree and Hartree-Fock equations, respectively. In these equations the exchange and correlation portions of the chemical potential of a uniform electron gas appear as additional effective potentials. (The exchange portion of our effective potential differs from that due to Slater by a factor of $\\frac{2}{3}$.) Electronic systems at finite temperatures and in magnetic fields are also treated by similar methods. An appendix deals with a further correction for systems with short-wavelength density oscillations.",
"year": 1965,
"venue": "",
"authors": [
"W. Kohn",
"L. Sham"
],
"externalIds": {
"MAG": "2230728100",
"DOI": "10.1103/PHYSREV.140.A1133",
"CorpusId": 55364462
},
"url": "https://www.semanticscholar.org/paper/a459da147adfd6e843493214a88deef65748a88c",
"referenceCount": 0,
"citationCount": 42845,
"influentialCitationCount": 1896,
"isOpenAccess": true,
"fieldsOfStudy": [
"Physics"
]
},
{
"title": "Magnetic Oxides and Other Compounds",
"abstract": null,
"year": 2021,
"venue": "Handbook of Magnetism and Magnetic Materials",
"authors": [
"J. Coey"
],
"externalIds": {
"MAG": "3198570342",
"DOI": "10.1007/978-3-030-63101-7_17-1",
"CorpusId": 239744938
},
"url": "https://www.semanticscholar.org/paper/9316892fb2b58a39554ca6ab43afd7aec86b77b5",
"referenceCount": 108,
"citationCount": 2,
"influentialCitationCount": 0,
"isOpenAccess": false,
"fieldsOfStudy": null
},
{
"title": "Descriptor : Auto-generated materials database of Curie and Néel temperatures via semi-supervised relationship extraction",
"abstract": "Large auto-generated databases of magnetic materials properties have the potential for great utility in materials science research. This article presents an auto-generated database of 39,822 records containing chemical compounds and their associated Curie and Néel magnetic phase transition temperatures. The database was produced using natural language processing and semi-supervised quaternary relationship extraction, applied to a corpus of 68,078 chemistry and physics articles. Evaluation of the database shows an estimated overall precision of 73%. Therein, records processed with the text-mining toolkit, ChemDataExtractor, were assisted by a modified Snowball algorithm, whose original binary relationship extraction capabilities were extended to quaternary relationship extraction. Consequently, its machine learning component can now train with ≤ 500 seeds, rather than the 4,000 originally used. Data processed with the modified Snowball algorithm affords 82% precision. Database records are available in MongoDB, CSV and JSON formats which can easily be read using Python, R, Java and MatLab. This makes the database easy to query for tackling big-data materials science initiatives and provides a basis for magnetic materials discovery.",
"year": 2018,
"venue": "",
"authors": [
"Callum J Court",
"J. Cole"
],
"externalIds": {
"CorpusId": 49303862
},
"url": "https://www.semanticscholar.org/paper/825546fd7a3c942b51174460e2549aba09ecaa8b",
"referenceCount": 15,
"citationCount": 106,
"influentialCitationCount": 4,
"isOpenAccess": false,
"fieldsOfStudy": null
},
{
"title": "Machine learning in materials informatics: recent applications and prospects",
"abstract": null,
"year": 2017,
"venue": "npj Comput. Mater.",
"authors": [],
"externalIds": null,
"url": null,
"referenceCount": null,
"citationCount": null,
"influentialCitationCount": null,
"isOpenAccess": null,
"fieldsOfStudy": null
},
{
"title": "Magnetism in Condensed Matter",
"abstract": "1. Introduction 2. Isolated magnetic moments 3. Environments 4. Interactions 5. Order and magnetic structures 6. Order and broken symmetry 7. Magnetism in metals 8. Competing interactions and low dimensionality Appendix A: Units in electromagnetism Appendix B: Electromagnetism Appendix C: Quantum and atomic physics Appendix D: Energy in magnetism and demagnetism Appendix E: Statistical mechanics Appendix F: List of symbols Index",
"year": 2001,
"venue": "",
"authors": [
"S. Blundell"
],
"externalIds": {
"MAG": "1966861047",
"DOI": "10.1119/1.1522704",
"CorpusId": 120183772
},
"url": "https://www.semanticscholar.org/paper/68ff4c833d98a2d657f7f77a3db107560bf18647",
"referenceCount": 0,
"citationCount": 1237,
"influentialCitationCount": 135,
"isOpenAccess": false,
"fieldsOfStudy": [
"Physics"
]
}
]
},
"ChemDFM: A Large Language Foundation Model for Chemistry": {
"paper_title": "ChemDFM: A Large Language Foundation Model for Chemistry",
"arxiv_id": "2401.14818",
"authors": [
"Zihan Zhao",
"Da Ma",
"Lu Chen",
"Liangtai Sun",
"Zihao Li",
"Hongshen Xu",
"Zichen Zhu",
"Su Zhu",
"Shuai Fan",
"Guodong Shen",
"Xin Chen",
"Kai Yu"
],
"year": 2024,
"venue": "",
"abstract": "Artificial intelligence (AI) has played an increasingly important role in chemical research. However, most models currently used in chemistry are specialist models that require training and tuning for specific tasks. A more generic and efficient solution would be an AI model that could address many tasks and support free-form dialogue in the broad field of chemistry. In its utmost form, such a generalist AI chemist could be referred to as Chemical General Intelligence. Large language models (LLMs) have recently logged tremendous success in the general domain of natural language processing, showing emerging task generalization and free-form dialogue capabilities. However, domain knowledge of chemistry is largely missing when training general-domain LLMs. The lack of such knowledge greatly hinders the performance of generalist LLMs in the field of chemistry. To this end, we develop ChemDFM, a pioneering LLM for chemistry trained on 34B tokens from chemical literature and textbooks, and fine-tuned using 2.7M instructions. As a result, it can understand and reason with chemical knowledge in free-form dialogue. Quantitative evaluations show that ChemDFM significantly surpasses most representative open-source LLMs. It outperforms GPT-4 on a great portion of chemical tasks, despite the substantial size difference. We have open-sourced the inference codes, evaluation datasets, and model weights of ChemDFM on Huggingface (https://huggingface.co/AI4Chem/ChemLLM-7B-Chat).",
"references": [
{
"title": "UniMoT: Unified Molecule-Text Language Model with Discrete Token Representation",
"abstract": "The remarkable success of Large Language Models (LLMs) across diverse tasks has driven the research community to extend their capabilities to molecular applications. However, most molecular LLMs employ adapter-based architectures that do not treat molecule and text modalities equally and lack a supervision signal for the molecule modality. To address these issues, we introduce UniMoT, a Unified Molecule-Text LLM adopting a tokenizer-based architecture that expands the vocabulary of LLM with molecule tokens. Specifically, we introduce a Vector Quantization-driven tokenizer that incorporates a Q-Former to bridge the modality gap between molecule and text. This tokenizer transforms molecules into sequences of molecule tokens with causal dependency, encapsulating high-level molecular and textual information. Equipped with this tokenizer, UniMoT can unify molecule and text modalities under a shared token representation and an autoregressive training paradigm, enabling it to interpret molecules as a foreign language and generate them as text. Following a four-stage training scheme, UniMoT emerges as a multi-modal generalist capable of performing both molecule-to-text and text-to-molecule tasks. Extensive experiments demonstrate that UniMoT achieves state-of-the-art performance across a wide range of molecule comprehension and generation tasks.",
"year": 2024,
"venue": "arXiv.org",
"authors": [
"Juzheng Zhang",
"Yatao Bian",
"Yongqiang Chen",
"Quanming Yao"
],
"externalIds": {
"DBLP": "journals/corr/abs-2408-00863",
"ArXiv": "2408.00863",
"DOI": "10.48550/arXiv.2408.00863",
"CorpusId": 271693464
},
"url": "https://www.semanticscholar.org/paper/03b0e16a5cb30f0e350e8b5d5bc153e7fb2db037",
"referenceCount": 57,
"citationCount": 1,
"influentialCitationCount": 0,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Retrosynthesis prediction with an iterative string editing model",
"abstract": null,
"year": 2024,
"venue": "Nature Communications",
"authors": [
"Yuqiang Han",
"Xiaoyang Xu",
"Chang-Yu Hsieh",
"Keyan Ding",
"Hongxia Xu",
"Renjun Xu",
"Tingjun Hou",
"Qiang Zhang",
"Huajun Chen"
],
"externalIds": {
"PubMedCentral": "11289138",
"DOI": "10.1038/s41467-024-50617-1",
"CorpusId": 271569488,
"PubMed": "39080274"
},
"url": "https://www.semanticscholar.org/paper/c2b05a2d462dca39609462c9fae0f56a535d4984",
"referenceCount": 66,
"citationCount": 1,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "Large Language Models for Inorganic Synthesis Predictions.",
"abstract": "We evaluate the effectiveness of pretrained and fine-tuned large language models (LLMs) for predicting the synthesizability of inorganic compounds and the selection of precursors needed to perform inorganic synthesis. The predictions of fine-tuned LLMs are comparable to─and sometimes better than─recent bespoke machine learning models for these tasks but require only minimal user expertise, cost, and time to develop. Therefore, this strategy can serve both as an effective and strong baseline for future machine learning studies of various chemical applications and as a practical tool for experimental chemists.",
"year": 2024,
"venue": "Journal of the American Chemical Society",
"authors": [
"Seongmin Kim",
"Yousung Jung",
"Joshua Schrier"
],
"externalIds": {
"DOI": "10.1021/jacs.4c05840",
"CorpusId": 271112001,
"PubMed": "38991051"
},
"url": "https://www.semanticscholar.org/paper/57e0525c06165c461b22df3a1ce683a119bcbfa7",
"referenceCount": 43,
"citationCount": 2,
"influentialCitationCount": 0,
"isOpenAccess": false,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "GeoMFormer: A General Architecture for Geometric Molecular Representation Learning",
"abstract": "Molecular modeling, a central topic in quantum mechanics, aims to accurately calculate the properties and simulate the behaviors of molecular systems. The molecular model is governed by physical laws, which impose geometric constraints such as invariance and equivariance to coordinate rotation and translation. While numerous deep learning approaches have been developed to learn molecular representations under these constraints, most of them are built upon heuristic and costly modules. We argue that there is a strong need for a general and flexible framework for learning both invariant and equivariant features. In this work, we introduce a novel Transformer-based molecular model called GeoMFormer to achieve this goal. Using the standard Transformer modules, two separate streams are developed to maintain and learn invariant and equivariant representations. Carefully designed cross-attention modules bridge the two streams, allowing information fusion and enhancing geometric modeling in each stream. As a general and flexible architecture, we show that many previous architectures can be viewed as special instantiations of GeoMFormer. Extensive experiments are conducted to demonstrate the power of GeoMFormer. All empirical results show that GeoMFormer achieves strong performance on both invariant and equivariant tasks of different types and scales. Code and models will be made publicly available at https://github.com/c-tl/GeoMFormer.",
"year": 2024,
"venue": "International Conference on Machine Learning",
"authors": [
"Tianlang Chen",
"Shengjie Luo",
"Di He",
"Shuxin Zheng",
"Tie-Yan Liu",
"Liwei Wang"
],
"externalIds": {
"DBLP": "journals/corr/abs-2406-16853",
"ArXiv": "2406.16853",
"DOI": "10.48550/arXiv.2406.16853",
"CorpusId": 270703756
},
"url": "https://www.semanticscholar.org/paper/8751f5b1394f38b19893f668c907c5c74703c853",
"referenceCount": 77,
"citationCount": 2,
"influentialCitationCount": 0,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science",
"Physics",
"Biology"
]
},
{
"title": "Machine learning-aided generative molecular design",
"abstract": null,
"year": 2024,
"venue": "Nat. Mac. Intell.",
"authors": [
"Yuanqi Du",
"Arian R. Jamasb",
"Jeff Guo",
"Tianfan Fu",
"Charles Harris",
"Yingheng Wang",
"Chenru Duan",
"Pietro Liò",
"P. Schwaller",
"Tom L. Blundell"
],
"externalIds": {
"DBLP": "journals/natmi/DuJGFHWDLSB24",
"DOI": "10.1038/s42256-024-00843-5",
"CorpusId": 270606382
},
"url": "https://www.semanticscholar.org/paper/87c7a6ef2083371e9a06f5057d7a2bc1a689dcaa",
"referenceCount": 91,
"citationCount": 6,
"influentialCitationCount": 0,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "LlaSMol: Advancing Large Language Models for Chemistry with a Large-Scale, Comprehensive, High-Quality Instruction Tuning Dataset",
"abstract": "Chemistry plays a crucial role in many domains, such as drug discovery and material science. While large language models (LLMs) such as GPT-4 exhibit remarkable capabilities on natural language processing tasks, existing research indicates that their performance on chemistry tasks is discouragingly low. In this paper, however, we demonstrate that our developed LLMs can achieve very strong results on a comprehensive set of chemistry tasks, outperforming the most advanced GPT-4 and Claude 3 Opus by a substantial margin. To accomplish this, we propose SMolInstruct, a large-scale, comprehensive, and high-quality dataset for instruction tuning. It contains 14 selected chemistry tasks and over three million samples, laying a solid foundation for training and evaluating LLMs for chemistry. Using SMolInstruct, we fine-tune a set of open-source LLMs, among which, we find that Mistral serves as the best base model for chemistry tasks. Our analysis further demonstrates the critical role of the proposed dataset in driving the performance improvements.",
"year": 2024,
"venue": "arXiv.org",
"authors": [
"Botao Yu",
"Frazier N. Baker",
"Ziqi Chen",
"Xia Ning",
"Huan Sun"
],
"externalIds": {
"DBLP": "journals/corr/abs-2402-09391",
"ArXiv": "2402.09391",
"DOI": "10.48550/arXiv.2402.09391",
"CorpusId": 267657622
},
"url": "https://www.semanticscholar.org/paper/1823b8aecd62ccfca0cb6caa8e2a1159754afc5e",
"referenceCount": 74,
"citationCount": 15,
"influentialCitationCount": 2,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "A Sober Look at LLMs for Material Discovery: Are They Actually Good for Bayesian Optimization Over Molecules?",
"abstract": "Automation is one of the cornerstones of contemporary material discovery. Bayesian optimization (BO) is an essential part of such workflows, enabling scientists to leverage prior domain knowledge into efficient exploration of a large molecular space. While such prior knowledge can take many forms, there has been significant fanfare around the ancillary scientific knowledge encapsulated in large language models (LLMs). However, existing work thus far has only explored LLMs for heuristic materials searches. Indeed, recent work obtains the uncertainty estimate -- an integral part of BO -- from point-estimated, non-Bayesian LLMs. In this work, we study the question of whether LLMs are actually useful to accelerate principled Bayesian optimization in the molecular space. We take a sober, dispassionate stance in answering this question. This is done by carefully (i) viewing LLMs as fixed feature extractors for standard but principled BO surrogate models and by (ii) leveraging parameter-efficient finetuning methods and Bayesian neural networks to obtain the posterior of the LLM surrogate. Our extensive experiments with real-world chemistry problems show that LLMs can be useful for BO over molecules, but only if they have been pretrained or finetuned with domain-specific data.",
"year": 2024,
"venue": "International Conference on Machine Learning",
"authors": [
"Agustinus Kristiadi",
"Felix Strieth-Kalthoff",
"Marta Skreta",
"Pascal Poupart",
"Alán Aspuru-Guzik",
"Geoff Pleiss"
],
"externalIds": {
"DBLP": "conf/icml/KristiadiSSPAP24",
"ArXiv": "2402.05015",
"DOI": "10.48550/arXiv.2402.05015",
"CorpusId": 267523287
},
"url": "https://www.semanticscholar.org/paper/53acc58af6b51fb1168153729dd879915189893d",
"referenceCount": 90,
"citationCount": 7,
"influentialCitationCount": 0,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Shaping the Water-Harvesting Behavior of Metal-Organic Frameworks Aided by Fine-Tuned GPT Models.",
"abstract": "We construct a data set of metal-organic framework (MOF) linkers and employ a fine-tuned GPT assistant to propose MOF linker designs by mutating and modifying the existing linker structures. This strategy allows the GPT model to learn the intricate language of chemistry in molecular representations, thereby achieving an enhanced accuracy in generating linker structures compared with its base models. Aiming to highlight the significance of linker design strategies in advancing the discovery of water-harvesting MOFs, we conducted a systematic MOF variant expansion upon state-of-the-art MOF-303 utilizing a multidimensional approach that integrates linker extension with multivariate tuning strategies. We synthesized a series of isoreticular aluminum MOFs, termed Long-Arm MOFs (LAMOF-1 to LAMOF-10), featuring linkers that bear various combinations of heteroatoms in their five-membered ring moiety, replacing pyrazole with either thiophene, furan, or thiazole rings or a combination of two. Beyond their consistent and robust architecture, as demonstrated by permanent porosity and thermal stability, the LAMOF series offers a generalizable synthesis strategy. Importantly, these 10 LAMOFs establish new benchmarks for water uptake (up to 0.64 g g-1) and operational humidity ranges (between 13 and 53%), thereby expanding the diversity of water-harvesting MOFs.",
"year": 2023,
"venue": "Journal of the American Chemical Society",
"authors": [
"Zhiling Zheng",
"Ali H Alawadhi",
"Saumil Chheda",
"S. E. Neumann",
"Nakul Rampal",
"Shengchao Liu",
"Hamilton Nguyen",
"Yen-Hsu Lin",
"Zichao Rong",
"J. Siepmann",
"Laura Gagliardi",
"A. Anandkumar",
"C. Borgs",
"J. Chayes",
"O. Yaghi"
],
"externalIds": {
"DOI": "10.1021/jacs.3c12086",
"CorpusId": 266227950,
"PubMed": "38090755"
},
"url": "https://www.semanticscholar.org/paper/e5d29dfa58027f5d0c3b045d6cb946584651ce15",
"referenceCount": 65,
"citationCount": 21,
"influentialCitationCount": 0,
"isOpenAccess": false,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "Accelerated chemical science with AI",
"abstract": "In light of the pressing need for practical materials and molecular solutions to renewable energy and health problems, to name just two examples, one wonders how to accelerate research and development in the chemical sciences, so as to address the time it takes to bring materials from initial discovery to commercialization. Artificial intelligence (AI)-based techniques, in particular, are having a transformative and accelerating impact on many if not most, technological domains. To shed light on these questions, the authors and participants gathered in person for the ASLLA Symposium on the theme of ‘Accelerated Chemical Science with AI’ at Gangneung, Republic of Korea. We present the findings, ideas, comments, and often contentious opinions expressed during four panel discussions related to the respective general topics: ‘Data’, ‘New applications’, ‘Machine learning algorithms’, and ‘Education’. All discussions were recorded, transcribed into text using Open AI's Whisper, and summarized using LG AI Research's EXAONE LLM, followed by revision by all authors. For the broader benefit of current researchers, educators in higher education, and academic bodies such as associations, publishers, librarians, and companies, we provide chemistry-specific recommendations and summarize the resulting conclusions.",
"year": 2023,
"venue": "Digital Discovery",
"authors": [
"S. Back",
"Alán Aspuru-Guzik",
"Michele Ceriotti",
"G. Gryn’ova",
"B. Grzybowski",
"Geun Ho Gu",
"Jason E. Hein",
"K. Hippalgaonkar",
"Rodrigo Hormazabal",
"Yousung Jung",
"Seonah Kim",
"Woo Youn Kim",
"Mohamed Moosavi",
"Juhwan Noh",
"Changyoung Park",
"Joshua Schrier",
"Philipp Schwaller",
"Koji Tsuda",
"T. Vegge",
"Anatole von Lilienfeld",
"A. Walsh"
],
"externalIds": {
"PubMedCentral": "10793638",
"DOI": "10.1039/d3dd00213f",
"CorpusId": 266079330,
"PubMed": "38239898"
},
"url": "https://www.semanticscholar.org/paper/7ab04d6150f52d6dd917fd76c4069af5664f5841",
"referenceCount": 0,
"citationCount": 10,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "Autonomous chemical research with large language models",
"abstract": null,
"year": 2023,
"venue": "The Naturalist",
"authors": [
"Daniil A. Boiko",
"R. MacKnight",
"Ben Kline",
"Gabe Gomes"
],
"externalIds": {
"PubMedCentral": "10733136",
"DBLP": "journals/nature/BoikoMKG23",
"DOI": "10.1038/s41586-023-06792-0",
"CorpusId": 266432059,
"PubMed": "38123806"
},
"url": "https://www.semanticscholar.org/paper/6fe3779fe5f2e9402abdd08ad8db41a0f13a99eb",
"referenceCount": 19,
"citationCount": 138,
"influentialCitationCount": 6,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Medicine"
]
},
{
"title": "InstructMol: Multi-Modal Integration for Building a Versatile and Reliable Molecular Assistant in Drug Discovery",
"abstract": "The rapid evolution of artificial intelligence in drug discovery encounters challenges with generalization and extensive training, yet Large Language Models (LLMs) offer promise in reshaping interactions with complex molecular data. Our novel contribution, InstructMol, a multi-modal LLM, effectively aligns molecular structures with natural language via an instruction-tuning approach, utilizing a two-stage training strategy that adeptly combines limited domain-specific data with molecular and textual information. InstructMol showcases substantial performance improvements in drug discovery-related molecular tasks, surpassing leading LLMs and significantly reducing the gap with specialized models, thereby establishing a robust foundation for a versatile and dependable drug discovery assistant.",
"year": 2023,
"venue": "arXiv.org",
"authors": [
"He Cao",
"Zijing Liu",
"Xingyu Lu",
"Yuan Yao",
"Yu Li"
],
"externalIds": {
"DBLP": "journals/corr/abs-2311-16208",
"ArXiv": "2311.16208",
"DOI": "10.48550/arXiv.2311.16208",
"CorpusId": 265466509
},
"url": "https://www.semanticscholar.org/paper/2b3554a8fea6f123fc04bd3e120f2293f227e1b2",
"referenceCount": 85,
"citationCount": 31,
"influentialCitationCount": 6,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science",
"Biology"
]
},
{
"title": "A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions",
"abstract": "The emergence of large language models (LLMs) has marked a significant breakthrough in natural language processing (NLP), leading to remarkable advancements in text understanding and generation. Nevertheless, alongside these strides, LLMs exhibit a critical tendency to produce hallucinations, resulting in content that is inconsistent with real-world facts or user inputs. This phenomenon poses substantial challenges to their practical deployment and raises concerns over the reliability of LLMs in real-world scenarios, which attracts increasing attention to detect and mitigate these hallucinations. In this survey, we aim to provide a thorough and in-depth overview of recent advances in the field of LLM hallucinations. We begin with an innovative taxonomy of LLM hallucinations, then delve into the factors contributing to hallucinations. Subsequently, we present a comprehensive overview of hallucination detection methods and benchmarks. Additionally, representative approaches designed to mitigate hallucinations are introduced accordingly. Finally, we analyze the challenges that highlight the current limitations and formulate open questions, aiming to delineate pathways for future research on hallucinations in LLMs.",
"year": 2023,
"venue": "arXiv.org",
"authors": [
"Lei Huang",
"Weijiang Yu",
"Weitao Ma",
"Weihong Zhong",
"Zhangyin Feng",
"Haotian Wang",
"Qianglong Chen",
"Weihua Peng",
"Xiaocheng Feng",
"Bing Qin",
"Ting Liu"
],
"externalIds": {
"ArXiv": "2311.05232",
"DBLP": "journals/corr/abs-2311-05232",
"DOI": "10.48550/arXiv.2311.05232",
"CorpusId": 265067168
},
"url": "https://www.semanticscholar.org/paper/1e909e2a8cdacdcdff125ebcc566f37cb869a1c8",
"referenceCount": 281,
"citationCount": 206,
"influentialCitationCount": 11,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Large language models for chemistry robotics",
"abstract": null,
"year": 2023,
"venue": "Autonomous Robots",
"authors": [
"Naruki Yoshikawa",
"Marta Skreta",
"Kourosh Darvish",
"Sebastian Arellano-Rubach",
"Zhi Ji",
"L. B. Kristensen",
"A. Z. Li",
"Yuchi Zhao",
"Haoping Xu",
"Artur Kuramshin",
"Alán Aspuru-Guzik",
"F. Shkurti",
"Animesh Garg"
],
"externalIds": {
"DBLP": "journals/arobots/YoshikawaSDAJKLZXKASG23",
"DOI": "10.1007/s10514-023-10136-2",
"CorpusId": 264498447
},
"url": "https://www.semanticscholar.org/paper/8e3d701b767213062edda831e7deef010e48c9fd",
"referenceCount": 97,
"citationCount": 30,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Prompt engineering of GPT-4 for chemical research: what can/cannot be done?",
"abstract": "ABSTRACT This paper evaluates the capabilities and limitations of the Generative Pre-trained Transformer 4 (GPT-4) in chemical research. Although GPT-4 exhibits remarkable proficiencies, it is evident that the quality of input data significantly affects its performance. We explore GPT-4’s potential in chemical tasks, such as foundational chemistry knowledge, cheminformatics, data analysis, problem prediction, and proposal abilities. While the language model partially outperformed traditional methods, such as black-box optimization, it fell short against specialized algorithms, highlighting the need for their combined use. The paper shares the prompts given to GPT-4 and its responses, providing a resource for prompt engineering within the community, and concludes with a discussion on the future of chemical research using large language models. GRAPHICAL ABSTRACT IMPACT STATEMENT This paper comprehensively reveals the advantages and limitations of GPT-4 in chemical research, such as expert knowledge, data analysis, prediction, suggestion, and autonomous experimentation.",
"year": 2023,
"venue": "Science and Technology of Advanced Materials: Methods",
"authors": [
"Kan Hatakeyama‐Sato",
"Naoki Yamane",
"Yasuhiko Igarashi",
"Y. Nabae",
"T. Hayakawa"
],
"externalIds": {
"DOI": "10.1080/27660400.2023.2260300",
"CorpusId": 262138540
},
"url": "https://www.semanticscholar.org/paper/c17c3c31e104e04a2f33f87d1a38909082df81c9",
"referenceCount": 74,
"citationCount": 20,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": null
},
{
"title": "MeSesamol, a bio-based and versatile polar aprotic solvent for organic synthesis and depolymerization",
"abstract": null,
"year": 2023,
"venue": "Chemical Engineering Journal",
"authors": [
"Gyula Dargó",
"David Kis",
"Martin Gede",
"Sushil Kumar",
"J. Kupai",
"G. Szekely"
],
"externalIds": {
"DOI": "10.1016/j.cej.2023.144365",
"CorpusId": 259593457
},
"url": "https://www.semanticscholar.org/paper/31e1a0c1768da6964cbe6a7f77d9c8b85ded985f",
"referenceCount": 67,
"citationCount": 6,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": null
},
{
"title": "DARWIN Series: Domain Specific Large Language Models for Natural Science",
"abstract": "Emerging tools bring forth fresh approaches to work, and the field of natural science is no different. In natural science, traditional manual, serial, and labour-intensive work is being augmented by automated, parallel, and iterative processes driven by artificial intelligence-based experimental automation and more. To add new capabilities in natural science, enabling the acceleration and enrichment of automation of the discovery process, we present DARWIN, a series of tailored LLMs for natural science, mainly in physics, chemistry, and material science. This series relies on open-source LLM, incorporating structured and unstructured scientific knowledge from public datasets and literature. We fine-tuned the models using over 60,000 instruction data points, emphasizing factual correctness. During the fine-tuning, we introduce the Scientific Instruction Generation (SIG) model, automating instruction generation from scientific texts. This eliminates the need for manual extraction or domain-specific knowledge graphs and efficiently injects scientific knowledge into the model. We also explore multi-task training strategies, revealing interconnections between scientific tasks. DARWIN series not only achieves state-of-the-art results on various scientific tasks but also diminishes reliance on closed-source AI models. Our research showcases the ability of LLM in the scientific domain, with the overarching goal of fostering prosperity within the broader AI for science community.",
"year": 2023,
"venue": "arXiv.org",
"authors": [
"Tong Xie",
"Yuwei Wan",
"Wei Huang",
"Zhenyu Yin",
"Yixuan Liu",
"Shaozhou Wang",
"Qingyuan Linghu",
"Chunyu Kit",
"Clara Grazian",
"Wenjie Zhang",
"Imran Razzak",
"B. Hoex"
],
"externalIds": {
"ArXiv": "2308.13565",
"DBLP": "journals/corr/abs-2308-13565",
"DOI": "10.48550/arXiv.2308.13565",
"CorpusId": 267913092
},
"url": "https://www.semanticscholar.org/paper/4c6fb350e7769cb730a15c62927b6e9b563d0157",
"referenceCount": 49,
"citationCount": 24,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Physics"
]
},
{
"title": "Total Syntheses of Polycyclic Diterpenes Phomopsene, Methyl Phomopsenonate, and iso-Phomopsene via Reorganization of C-C Single Bonds.",
"abstract": "The first total syntheses of polycyclic diterpenes phomopsene (1), methyl phomopsenonate (2), and iso-phomopsene (3) have been accomplished through the unusual cascade reorganization of C-C single bonds. This approach features: (i) a synergistic Nazarov cyclization/double ring expansions in one-step, developed by authors, to rapid and stereospecific construction of the 5/5/5/5 tetraquinane scaffold bearing contiguous quaternary centers and (ii) a one-pot strategic ring expansion through Beckmann fragmentation/recombination to efficiently assemble the requisite 5/5/6/5 tetracyclic skeleton of the target molecules 1-3. This work enables us to determine that the correct structure of iso-phomopsene is, in fact, the C7 epimer of the originally assigned structure. Finally, the absolute configurations of three target molecules were confirmed through enantioselective synthesis.",
"year": 2023,
"venue": "Journal of the American Chemical Society",
"authors": [
"Junqiang Yin",
"Yun-Peng Wang",
"Jun Xue",
"F. Zhou",
"Xing-Qian Shan",
"Rong Zhu",
"Kun Fang",
"Lei Shi",
"Shu‐Yu Zhang",
"Si-Hua Hou",
"W. Xia",
"Y. Tu"
],
"externalIds": {
"DOI": "10.1021/jacs.3c07044",
"CorpusId": 261062846,
"PubMed": "37605370"
},
"url": "https://www.semanticscholar.org/paper/51f3e91c7832eb45ab8c42b3d02fefc153550f0f",
"referenceCount": 35,
"citationCount": 3,
"influentialCitationCount": 0,
"isOpenAccess": false,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "Axially chiral styrene-based organocatalysts and their application in asymmetric cascade Michael/cyclization reaction",
"abstract": "An axially chiral styrene-based organocatalyst, featuring a combination of axially chiral styrene-based structure and a pyrrole ring, has been designed and synthesized. This catalyst demonstrates remarkable capabilities in producing a wide range of densely substituted spirooxindoles that feature an alkyne-substituted quaternary stereogenic center. These spirooxindoles are generated through mild cascade Michael/cyclization reactions, resulting in high conversion rates and exceptional enantioselectivity. Our catalytic model, based on experiments, X-ray structure analysis and DFT calculations suggests that chiral matched π–π interactions and multiple H-bonds between the organocatalyst and substrates play significant roles in controlling the stereoselectivity of the reaction.",
"year": 2023,
"venue": "Chemical Science",
"authors": [
"Yu Hao",
"Zi‐Hao Li",
"Zhi-Gang Ma",
"Ru-Xin Liu",
"Rui Ge",
"Quanzhe Li",
"Tong-Mei Ding",
"Shuyue Zhang"
],
"externalIds": {
"PubMedCentral": "10498726",
"DOI": "10.1039/d3sc02705h",
"CorpusId": 260984924,
"PubMed": "37712017"
},
"url": "https://www.semanticscholar.org/paper/001b4134853730e985300aead1b656122ff7d598",
"referenceCount": 0,
"citationCount": 1,
"influentialCitationCount": 1,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education",
"abstract": "EduChat (https://www.educhat.top/) is a large-scale language model (LLM)-based chatbot system in the education domain. Its goal is to support personalized, fair, and compassionate intelligent education, serving teachers, students, and parents. Guided by theories from psychology and education, it further strengthens educational functions such as open question answering, essay assessment, Socratic teaching, and emotional support based on the existing basic LLMs. Particularly, we learn domain-specific knowledge by pre-training on the educational corpus and stimulate various skills with tool use by fine-tuning on designed system prompts and instructions. Currently, EduChat is available online as an open-source project, with its code, data, and model parameters available on platforms (e.g., GitHub https://github.com/icalk-nlp/EduChat, Hugging Face https://huggingface.co/ecnu-icalk ). We also prepare a demonstration of its capabilities online (https://vimeo.com/851004454). This initiative aims to promote research and applications of LLMs for intelligent education.",
"year": 2023,
"venue": "arXiv.org",
"authors": [
"Yuhao Dan",
"Zhikai Lei",
"Yiyang Gu",
"Yong Li",
"Jia-Peng Yin",
"Jiaju Lin",
"Linhao Ye",
"Zhiyan Tie",
"Yougen Zhou",
"Yilei Wang",
"Aimin Zhou",
"Zeyang Zhou",
"Qin Chen",
"Jie Zhou",
"Liang He",
"Xipeng Qiu"
],
"externalIds": {
"DBLP": "journals/corr/abs-2308-02773",
"ArXiv": "2308.02773",
"DOI": "10.48550/arXiv.2308.02773",
"CorpusId": 260681803
},
"url": "https://www.semanticscholar.org/paper/c2f9006993d9d84d48eb894aab3ba60f946d0e15",
"referenceCount": 25,
"citationCount": 59,
"influentialCitationCount": 2,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "DeepSpeed-Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-like Models at All Scales",
"abstract": "ChatGPT-like models have revolutionized various applications in artificial intelligence, from summarization and coding to translation, matching or even surpassing human performance. However, the current landscape lacks an accessible, efficient, and cost-effective end-to-end RLHF (Reinforcement Learning with Human Feedback) training pipeline for these powerful models, particularly when training at the scale of billions of parameters. This paper introduces DeepSpeed-Chat, a novel system that democratizes RLHF training, making it accessible to the AI community. DeepSpeed-Chat offers three key capabilities: an easy-to-use training and inference experience for ChatGPT-like models, a DeepSpeed-RLHF pipeline that replicates the training pipeline from InstructGPT, and a robust DeepSpeed-RLHF system that combines various optimizations for training and inference in a unified way. The system delivers unparalleled efficiency and scalability, enabling training of models with hundreds of billions of parameters in record time and at a fraction of the cost. With this development, DeepSpeed-Chat paves the way for broader access to advanced RLHF training, even for data scientists with limited resources, thereby fostering innovation and further development in the field of AI.",
"year": 2023,
"venue": "arXiv.org",
"authors": [
"Z. Yao",
"Reza Yazdani Aminabadi",
"Olatunji Ruwase",
"Samyam Rajbhandari",
"Xiaoxia Wu",
"A. A. Awan",
"Jeff Rasley",
"Minjia Zhang",
"Conglong Li",
"Connor Holmes",
"Zhongzhu Zhou",
"Michael Wyatt",
"Molly Smith",
"L. Kurilenko",
"Heyang Qin",
"Masahiro Tanaka",
"Shuai Che",
"S. Song",
"Yuxiong He"
],
"externalIds": {
"ArXiv": "2308.01320",
"DBLP": "journals/corr/abs-2308-01320",
"DOI": "10.48550/arXiv.2308.01320",
"CorpusId": 260438723
},
"url": "https://www.semanticscholar.org/paper/dd278797cae2d4ca9725d98a4e0f73b637990381",
"referenceCount": 14,
"citationCount": 51,
"influentialCitationCount": 5,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Scientific discovery in the age of artificial intelligence",
"abstract": null,
"year": 2023,
"venue": "Nature",
"authors": [
"Hanchen Wang",
"Tianfan Fu",
"Yuanqi Du",
"Wenhao Gao",
"Kexin Huang",
"Ziming Liu",
"P. Chandak",
"Shengchao Liu",
"Peter Van Katwyk",
"Andreea Deac",
"Anima Anandkumar",
"K. Bergen",
"Carla P. Gomes",
"Shirley Ho",
"Pushmeet Kohli",
"Joan Lasenby",
"J. Leskovec",
"Tie-Yan Liu",
"A. Manrai",
"Debora S. Marks",
"Bharath Ramsundar",
"Le Song",
"Jimeng Sun",
"Jian Tang",
"Petar Velickovic",
"Max Welling",
"Linfeng Zhang",
"Connor W. Coley",
"Y. Bengio",
"M. Zitnik"
],
"externalIds": {
"DBLP": "journals/nature/WangFD0HLCLKDAB23",
"DOI": "10.1038/s41586-023-06221-2",
"CorpusId": 260384616,
"PubMed": "37532811"
},
"url": "https://www.semanticscholar.org/paper/f08060425aa8a212d74185ee23a08329b89abcd2",
"referenceCount": 269,
"citationCount": 426,
"influentialCitationCount": 4,
"isOpenAccess": false,
"fieldsOfStudy": [
"Medicine",
"Computer Science"
]
},
{
"title": "ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs",
"abstract": "Despite the advancements of open-source large language models (LLMs), e.g., LLaMA, they remain significantly limited in tool-use capabilities, i.e., using external tools (APIs) to fulfill human instructions. The reason is that current instruction tuning largely focuses on basic language tasks but ignores the tool-use domain. This is in contrast to the excellent tool-use capabilities of state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap, we introduce ToolLLM, a general tool-use framework encompassing data construction, model training, and evaluation. We first present ToolBench, an instruction-tuning dataset for tool use, which is constructed automatically using ChatGPT. Specifically, the construction can be divided into three stages: (i) API collection: we collect 16,464 real-world RESTful APIs spanning 49 categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to generate diverse instructions involving these APIs, covering both single-tool and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to search for a valid solution path (chain of API calls) for each instruction. To enhance the reasoning capabilities of LLMs, we develop a novel depth-first search-based decision tree algorithm. It enables LLMs to evaluate multiple reasoning traces and expand the search space. Moreover, to evaluate the tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval. Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it with a neural API retriever to recommend appropriate APIs for each instruction. Experiments show that ToolLLaMA demonstrates a remarkable ability to execute complex instructions and generalize to unseen APIs, and exhibits comparable performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot generalization ability in an out-of-distribution tool-use dataset: APIBench.",
"year": 2023,
"venue": "International Conference on Learning Representations",
"authors": [
"Yujia Qin",
"Shi Liang",
"Yining Ye",
"Kunlun Zhu",
"Lan Yan",
"Ya-Ting Lu",
"Yankai Lin",
"Xin Cong",
"Xiangru Tang",
"Bill Qian",
"Sihan Zhao",
"Runchu Tian",
"Ruobing Xie",
"Jie Zhou",
"M. Gerstein",
"Dahai Li",
"Zhiyuan Liu",
"Maosong Sun"
],
"externalIds": {
"DBLP": "journals/corr/abs-2307-16789",
"ArXiv": "2307.16789",
"DOI": "10.48550/arXiv.2307.16789",
"CorpusId": 260334759
},
"url": "https://www.semanticscholar.org/paper/0bfc804e31eecfd77f45e4ee7f4d629fffdcd628",
"referenceCount": 65,
"citationCount": 359,
"influentialCitationCount": 68,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Llama 2: Open Foundation and Fine-Tuned Chat Models",
"abstract": "In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for dialogue use cases. Our models outperform open-source chat models on most benchmarks we tested, and based on our human evaluations for helpfulness and safety, may be a suitable substitute for closed-source models. We provide a detailed description of our approach to fine-tuning and safety improvements of Llama 2-Chat in order to enable the community to build on our work and contribute to the responsible development of LLMs.",
"year": 2023,
"venue": "arXiv.org",
"authors": [
"Hugo Touvron",
"Louis Martin",
"Kevin R. Stone",
"Peter Albert",
"Amjad Almahairi",
"Yasmine Babaei",
"Nikolay Bashlykov",
"Soumya Batra",
"Prajjwal Bhargava",
"Shruti Bhosale",
"D. Bikel",
"Lukas Blecher",
"Cristian Cantón Ferrer",
"Moya Chen",
"Guillem Cucurull",
"David Esiobu",
"Jude Fernandes",
"Jeremy Fu",
"Wenyin Fu",
"Brian Fuller",
"Cynthia Gao",
"Vedanuj Goswami",
"Naman Goyal",
"A. Hartshorn",
"Saghar Hosseini",
"Rui Hou",
"Hakan Inan",
"Marcin Kardas",
"Viktor Kerkez",
"Madian Khabsa",
"Isabel M. Kloumann",
"A. Korenev",
"Punit Singh Koura",
"Marie-Anne Lachaux",
"Thibaut Lavril",
"Jenya Lee",
"Diana Liskovich",
"Yinghai Lu",
"Yuning Mao",
"Xavier Martinet",
"Todor Mihaylov",
"Pushkar Mishra",
"Igor Molybog",
"Yixin Nie",
"Andrew Poulton",
"Jeremy Reizenstein",
"Rashi Rungta",
"Kalyan Saladi",
"Alan Schelten",
"Ruan Silva",
"Eric Michael Smith",
"R. Subramanian",
"Xia Tan",
"Binh Tang",
"Ross Taylor",
"Adina Williams",
"Jian Xiang Kuan",
"Puxin Xu",
"Zhengxu Yan",
"Iliyan Zarov",
"Yuchen Zhang",
"Angela Fan",
"Melanie Kambadur",
"Sharan Narang",
"Aurelien Rodriguez",
"Robert Stojnic",
"Sergey Edunov",
"Thomas Scialom"
],
"externalIds": {
"ArXiv": "2307.09288",
"DBLP": "journals/corr/abs-2307-09288",
"CorpusId": 259950998
},
"url": "https://www.semanticscholar.org/paper/104b0bb1da562d53cbda87aec79ef6a2827d191a",
"referenceCount": 131,
"citationCount": 7147,
"influentialCitationCount": 1094,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Asymmetric Intramolecular Hydroalkylation of Internal Olefin with Cycloalkanone to Directly Access Polycyclic Systems.",
"abstract": "An asymmetric intramolecular hydroalkylation of unactivated internal olefins with tethered cyclic ketones was realized by the cooperative catalysis of a newly designed chiral amine (SPD-NH2) and Pd(II) complex, providing straightforward access to either bridged or fused bicyclic systems containing three stereogenic centers with excellent enantioselectivity (up to 99% ee) and diastereoselectivity (up to >20:1 dr). Notably, the bicyclic products could be conveniently transformed into a diverse range of key structures frequently found in bioactive terpenes, such as ∆6-protoilludene, cracroson D, and vulgarisins. The steric hindrance between the Ar group of the SPD-NH2 catalyst and the branched chain of the substrate, hydrogen-bonding interactions between the N-H of the enamine motif and the C=O of the directing group MQ, and the counterion of the Pd(II) complex were identified as key factors for excellent stereoinduction in this dual catalytic process by density functional theory calculations.",
"year": 2023,
"venue": "Angewandte Chemie",
"authors": [
"Ai‐Fang Wang",
"Jin‐Miao Tian",
"Xiao‐Jing Zhao",
"Zi‐Hao Li",
"Yehui Zhang",
"K. Lu",
"Hong Wang",
"Shu‐Yu Zhang",
"Y. Tu",
"Tong-Mei Ding",
"Yu-yang Xie"
],
"externalIds": {
"DOI": "10.1002/anie.202308858",
"CorpusId": 259946356,
"PubMed": "37462217"
},
"url": "https://www.semanticscholar.org/paper/9223d37facee549b133891c386fd494ca2de8dae",
"referenceCount": 0,
"citationCount": 5,
"influentialCitationCount": 0,
"isOpenAccess": false,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "Mol-Instructions: A Large-Scale Biomolecular Instruction Dataset for Large Language Models",
"abstract": "Large Language Models (LLMs), with their remarkable task-handling capabilities and innovative outputs, have catalyzed significant advancements across a spectrum of fields. However, their proficiency within specialized domains such as biomolecular studies remains limited. To address this challenge, we introduce Mol-Instructions, a comprehensive instruction dataset designed for the biomolecular domain. Mol-Instructions encompasses three key components: molecule-oriented instructions, protein-oriented instructions, and biomolecular text instructions. Each component aims to improve the understanding and prediction capabilities of LLMs concerning biomolecular features and behaviors. Through extensive instruction tuning experiments on LLMs, we demonstrate the effectiveness of Mol-Instructions in enhancing large models' performance in the intricate realm of biomolecular studies, thus fostering progress in the biomolecular research community. Mol-Instructions is publicly available for ongoing research and will undergo regular updates to enhance its applicability.",
"year": 2023,
"venue": "International Conference on Learning Representations",
"authors": [
"Yin Fang",
"Xiaozhuan Liang",
"Ningyu Zhang",
"Kangwei Liu",
"Rui Huang",
"Zhuo Chen",
"Xiaohui Fan",
"Huajun Chen"
],
"externalIds": {
"DBLP": "conf/iclr/FangL0LH0FC24",
"ArXiv": "2306.08018",
"DOI": "10.48550/arXiv.2306.08018",
"CorpusId": 259164901
},
"url": "https://www.semanticscholar.org/paper/f86aa25603d1f2e4066db9b6a9a6d311b4e8c491",
"referenceCount": 80,
"citationCount": 44,
"influentialCitationCount": 8,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Biology"
]
},
{
"title": "K2: A Foundation Language Model for Geoscience Knowledge Understanding and Utilization",
"abstract": "Large language models (LLMs) have achieved great success in general domains of natural language processing. In this paper, we bring LLMs to the realm of geoscience with the objective of advancing research and applications in this field. To this end, we present the first-ever LLM in geoscience, K2, alongside a suite of resources developed to further promote LLM research within geoscience. For instance, we have curated the first geoscience instruction tuning dataset, GeoSignal, which aims to align LLM responses to geoscience-related user queries. Additionally, we have established the first geoscience benchmark, GeoBench, to evaluate LLMs in the context of geoscience. In this work, we experiment with a complete recipe to adapt a pre-trained general-domain LLM to the geoscience domain. Specifically, we further train the LLaMA-7B model on 5.5B tokens of geoscience text corpus, including over 1 million pieces of geoscience literature, and utilize GeoSignal's supervised data to fine-tune the model. Moreover, we share a protocol that can efficiently gather domain-specific data and construct domain-supervised data, even in situations where manpower is scarce. Meanwhile, we equip K2 with the abilities of using tools to be a naive geoscience aide. Experiments conducted on the GeoBench demonstrate the effectiveness of our approach and datasets on geoscience knowledge understanding and utilization.We open-source all the training data and K2 model checkpoints at https://github.com/davendw49/k2",
"year": 2023,
"venue": "Web Search and Data Mining",
"authors": [
"Cheng Deng",
"Tianhang Zhang",
"Zhongmou He",
"Yi Xu",
"Qiyuan Chen",
"Yuanyuan Shi",
"Le Zhou",
"Luoyi Fu",
"Weinan Zhang",
"Xinbing Wang",
"Cheng Zhou",
"Zhouhan Lin",
"Junxian He"
],
"externalIds": {
"DBLP": "conf/wsdm/DengZHCSXF0WZLH24",
"ArXiv": "2306.05064",
"DOI": "10.1145/3616855.3635772",
"CorpusId": 259108887
},
"url": "https://www.semanticscholar.org/paper/32f541216112de78037d8e0f95ddc152eb6f05fa",
"referenceCount": 58,
"citationCount": 31,
"influentialCitationCount": 2,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "What can Large Language Models do in chemistry? A comprehensive benchmark on eight tasks",
"abstract": "Large Language Models (LLMs) with strong abilities in natural language processing tasks have emerged and have been applied in various kinds of areas such as science, finance and software engineering. However, the capability of LLMs to advance the field of chemistry remains unclear. In this paper, rather than pursuing state-of-the-art performance, we aim to evaluate capabilities of LLMs in a wide range of tasks across the chemistry domain. We identify three key chemistry-related capabilities including understanding, reasoning and explaining to explore in LLMs and establish a benchmark containing eight chemistry tasks. Our analysis draws on widely recognized datasets facilitating a broad exploration of the capacities of LLMs within the context of practical chemistry. Five LLMs (GPT-4, GPT-3.5, Davinci-003, Llama and Galactica) are evaluated for each chemistry task in zero-shot and few-shot in-context learning settings with carefully selected demonstration examples and specially crafted prompts. Our investigation found that GPT-4 outperformed other models and LLMs exhibit different competitive levels in eight chemistry tasks. In addition to the key findings from the comprehensive benchmark analysis, our work provides insights into the limitation of current LLMs and the impact of in-context learning settings on LLMs' performance across various chemistry tasks. The code and datasets used in this study are available at https://github.com/ChemFoundationModels/ChemLLMBench.",
"year": 2023,
"venue": "Neural Information Processing Systems",
"authors": [
"Taicheng Guo",
"Kehan Guo",
"B. Nan",
"Zhengwen Liang",
"Zhichun Guo",
"N. Chawla",
"O. Wiest",
"Xiangliang Zhang"
],
"externalIds": {
"ArXiv": "2305.18365",
"DBLP": "conf/nips/GuoGNLGCW023",
"CorpusId": 258967365
},
"url": "https://www.semanticscholar.org/paper/20d7965c0b282a0cd7f990e435d0f6bc9535bbc6",
"referenceCount": 77,
"citationCount": 59,
"influentialCitationCount": 5,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "MolXPT: Wrapping Molecules with Text for Generative Pre-training",
"abstract": "Generative pre-trained Transformer (GPT) has demonstrates its great success in natural language processing and related techniques have been adapted into molecular modeling. Considering that text is the most important record for scientific discovery, in this paper, we propose MolXPT, a unified language model of text and molecules pre-trained on SMILES (a sequence representation of molecules) wrapped by text. Briefly, we detect the molecule names in each sequence and replace them to the corresponding SMILES. In this way, the SMILES could leverage the information from surrounding text, and vice versa. The above wrapped sequences, text sequences from PubMed and SMILES sequences from PubChem are all fed into a language model for pre-training. Experimental results demonstrate that MolXPT outperforms strong baselines of molecular property prediction on MoleculeNet, performs comparably to the best model in text-molecule translation while using less than half of its parameters, and enables zero-shot molecular generation without finetuning.",
"year": 2023,
"venue": "Annual Meeting of the Association for Computational Linguistics",
"authors": [
"Zequn Liu",
"W. Zhang",
"Yingce Xia",
"Lijun Wu",
"Shufang Xie",
"Tao Qin",
"M. Zhang",
"Tie-Yan Liu"
],
"externalIds": {
"DBLP": "conf/acl/LiuZXW0QZL23",
"ArXiv": "2305.10688",
"ACL": "2023.acl-short.138",
"DOI": "10.48550/arXiv.2305.10688",
"CorpusId": 258762343
},
"url": "https://www.semanticscholar.org/paper/ed1353d705eeabc0e916caba5fbae890eefe4f84",
"referenceCount": 48,
"citationCount": 47,
"influentialCitationCount": 7,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "DrugChat: Towards Enabling ChatGPT-Like Capabilities on Drug Molecule Graphs",
"abstract": "A ChatGPT-like system for drug compounds could be a game-changer in pharmaceutical research, accelerating drug discovery, enhancing our understanding of structure-activity relationships, guiding lead optimization, aiding drug repurposing, reducing the failure rate, and streamlining clinical trials. In this work, we make an initial attempt towards enabling ChatGPT-like capabilities on drug molecule graphs, by developing a prototype system DrugChat. DrugChat works in a similar way as ChatGPT. Users upload a compound molecule graph and ask various questions about this compound. DrugChat will answer these questions in a multi-turn, interactive manner. The DrugChat system consists of a graph neural network (GNN), a large language model (LLM), and an adaptor. The GNN takes a compound molecule graph as input and learns a representation for this graph. The adaptor transforms the graph representation produced by the GNN into another representation that is acceptable to the LLM. The LLM takes the compound representation transformed by the adaptor and users' questions about this compound as inputs and generates answers. All these components are trained end-to-end. To train DrugChat, we collected instruction tuning datasets which contain 10,834 drug compounds and 143,517 question-answer pairs. The code and data is available at \\url{https://github.com/UCSD-AI4H/drugchat}",
"year": 2023,
"venue": "arXiv.org",
"authors": [
"Youwei Liang",
"Ruiyi Zhang",
"Li Zhang",
"Pengtao Xie"
],
"externalIds": {
"DBLP": "journals/corr/abs-2309-03907",
"ArXiv": "2309.03907",
"DOI": "10.48550/arXiv.2309.03907",
"CorpusId": 261660530
},
"url": "https://www.semanticscholar.org/paper/c77c48fe9060aa83627fc2c7f331325de0c4fdac",
"referenceCount": 16,
"citationCount": 36,
"influentialCitationCount": 2,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Biology"
]
},
{
"title": "PMC-LLaMA: Towards Building Open-source Language Models for Medicine",
"abstract": "Recently, Large Language Models (LLMs) have showcased remarkable capabilities in natural language understanding. While demonstrating proficiency in everyday conversations and question-answering situations, these models frequently struggle in domains that require precision, such as medical applications, due to their lack of domain-specific knowledge. In this paper, we describe the procedure for building a powerful, open-source language model specifically designed for medicine applications, termed as PMC-LLaMA. Our contributions are threefold: (i) we systematically investigate the process of adapting a general-purpose foundation language model towards medical domain, this involves data-centric knowledge injection through the integration of 4.8M biomedical academic papers and 30K medical textbooks, as well as comprehensive fine-tuning for alignment with domain-specific instructions; (ii) we contribute a large-scale, comprehensive dataset for instruction tuning. This dataset encompasses medical question-answering (QA), rationale for reasoning, and conversational dialogues, comprising a total of 202M tokens; (iii) we conduct thorough ablation studies to demonstrate the effectiveness of each proposed component. While evaluating on various public medical question-answering benchmarks, our lightweight PMCLLaMA, which consists of only 13 billion parameters, exhibits superior performance, even surpassing ChatGPT. All models, codes, datasets can be found in https://github.com/chaoyi-wu/PMC-LLaMA.",
"year": 2023,
"venue": "",
"authors": [
"Chaoyi Wu",
"Xiaoman Zhang",
"Ya Zhang",
"Yanfeng Wang",
"Weidi Xie"
],
"externalIds": {
"ArXiv": "2304.14454",
"CorpusId": 258417843
},
"url": "https://www.semanticscholar.org/paper/04ee9597be4d6d2457214334e495e591000b5542",
"referenceCount": 29,
"citationCount": 60,
"influentialCitationCount": 11,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Augmenting large language models with chemistry tools",
"abstract": null,
"year": 2023,
"venue": "Nat. Mac. Intell.",
"authors": [
"Andrés M Bran",
"Sam Cox",
"Oliver Schilter",
"Carlo Baldassari",
"Andrew D. White",
"P. Schwaller"
],
"externalIds": {
"DBLP": "journals/natmi/BranCSBWS24",
"ArXiv": "2304.05376",
"PubMedCentral": "11116106",
"DOI": "10.1038/s42256-024-00832-8",
"CorpusId": 258059792,
"PubMed": "38799228"
},
"url": "https://www.semanticscholar.org/paper/354dcdebf3f8b5feeed5c62090e0bc1f0c28db06",
"referenceCount": 114,
"citationCount": 201,
"influentialCitationCount": 11,
"isOpenAccess": true,
"fieldsOfStudy": [
"Physics",
"Mathematics",
"Medicine",
"Computer Science"
]
},
{
"title": "ChatDoctor: A Medical Chat Model Fine-Tuned on a Large Language Model Meta-AI (LLaMA) Using Medical Domain Knowledge",
"abstract": "Objective The primary aim of this research was to address the limitations observed in the medical knowledge of prevalent large language models (LLMs) such as ChatGPT, by creating a specialized language model with enhanced accuracy in medical advice. Methods We achieved this by adapting and refining the large language model meta-AI (LLaMA) using a large dataset of 100,000 patient-doctor dialogues sourced from a widely used online medical consultation platform. These conversations were cleaned and anonymized to respect privacy concerns. In addition to the model refinement, we incorporated a self-directed information retrieval mechanism, allowing the model to access and utilize real-time information from online sources like Wikipedia and data from curated offline medical databases. Results The fine-tuning of the model with real-world patient-doctor interactions significantly improved the model's ability to understand patient needs and provide informed advice. By equipping the model with self-directed information retrieval from reliable online and offline sources, we observed substantial improvements in the accuracy of its responses. Conclusion Our proposed ChatDoctor, represents a significant advancement in medical LLMs, demonstrating a significant improvement in understanding patient inquiries and providing accurate advice. Given the high stakes and low error tolerance in the medical field, such enhancements in providing accurate and reliable information are not only beneficial but essential.",
"year": 2023,
"venue": "Cureus",
"authors": [
"Yunxiang Li",
"Zihan Li",
"Kai Zhang",
"Ruilong Dan",
"Steven Jiang",
"You Zhang"
],
"externalIds": {
"PubMedCentral": "10364849",
"ArXiv": "2303.14070",
"DOI": "10.7759/cureus.40895",
"CorpusId": 259252045,
"PubMed": "37492832"
},
"url": "https://www.semanticscholar.org/paper/4a7f6c4e71e20311ade4e76e8d0945d499c31fcd",
"referenceCount": 19,
"citationCount": 200,
"influentialCitationCount": 24,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine",
"Computer Science"
]
},
{
"title": "GPT-4 Technical Report",
"abstract": "We report the development of GPT-4, a large-scale, multimodal model which can accept image and text inputs and produce text outputs. While less capable than humans in many real-world scenarios, GPT-4 exhibits human-level performance on various professional and academic benchmarks, including passing a simulated bar exam with a score around the top 10% of test takers. GPT-4 is a Transformer-based model pre-trained to predict the next token in a document. The post-training alignment process results in improved performance on measures of factuality and adherence to desired behavior. A core component of this project was developing infrastructure and optimization methods that behave predictably across a wide range of scales. This allowed us to accurately predict some aspects of GPT-4's performance based on models trained with no more than 1/1,000th the compute of GPT-4.",
"year": 2023,
"venue": "",
"authors": [
"OpenAI Josh Achiam",
"Steven Adler",
"Sandhini Agarwal",
"Lama Ahmad",
"Ilge Akkaya",
"Florencia Leoni Aleman",
"Diogo Almeida",
"Janko Altenschmidt",
"Sam Altman",
"Shyamal Anadkat",
"Red Avila",
"Igor Babuschkin",
"S. Balaji",
"Valerie Balcom",
"Paul Baltescu",
"Haim-ing Bao",
"Mo Bavarian",
"Jeff Belgum",
"Irwan Bello",
"Jake Berdine",
"Gabriel Bernadett-Shapiro",
"Christopher Berner",
"Lenny Bogdonoff",
"Oleg Boiko",
"Madelaine Boyd",
"Anna-Luisa Brakman",
"Greg Brockman",
"Tim Brooks",
"Miles Brundage",
"Kevin Button",
"Trevor Cai",
"Rosie Campbell",
"Andrew Cann",
"Brittany Carey",
"Chelsea Carlson",
"Rory Carmichael",
"Brooke Chan",
"Che Chang",
"Fotis Chantzis",
"Derek Chen",
"Sully Chen",
"Ruby Chen",
"Jason Chen",
"Mark Chen",
"B. Chess",
"Chester Cho",
"Casey Chu",
"Hyung Won Chung",
"Dave Cummings",
"Jeremiah Currier",
"Yunxing Dai",
"Cory Decareaux",
"Thomas Degry",
"Noah Deutsch",
"Damien Deville",
"Arka Dhar",
"David Dohan",
"Steve Dowling",
"Sheila Dunning",
"Adrien Ecoffet",
"Atty Eleti",
"Tyna Eloundou",
"David Farhi",
"Liam Fedus",
"Niko Felix",
"Sim'on Posada Fishman",
"Juston Forte",
"Is-abella Fulford",
"Leo Gao",
"Elie Georges",
"C. Gibson",
"Vik Goel",
"Tarun Gogineni",
"Gabriel Goh",
"Raphael Gontijo-Lopes",
"Jonathan Gordon",
"Morgan Grafstein",
"Scott Gray",
"Ryan Greene",
"Joshua Gross",
"S. Gu",
"Yufei Guo",
"Chris Hallacy",
"Jesse Han",
"Jeff Harris",
"Yuchen He",
"Mike Heaton",
"Johannes Heidecke",
"Chris Hesse",
"Alan Hickey",
"Wade Hickey",
"Peter Hoeschele",
"Brandon Houghton",
"Kenny Hsu",
"Shengli Hu",
"Xin Hu",
"Joost Huizinga",
"Shantanu Jain",
"Shawn Jain",
"Joanne Jang",
"Angela Jiang",
"Roger Jiang",
"Haozhun Jin",
"Denny Jin",
"Shino Jomoto",
"B. Jonn",
"Heewoo Jun",
"Tomer Kaftan",
"Lukasz Kaiser",
"Ali Kamali",
"I. Kanitscheider",
"N. Keskar",
"Tabarak Khan",
"Logan Kilpatrick",
"Jong Wook Kim",
"Christina Kim",
"Yongjik Kim",
"Hendrik Kirchner",
"J. Kiros",
"Matthew Knight",
"Daniel Kokotajlo",
"Lukasz Kondraciuk",
"A. Kondrich",
"Aris Konstantinidis",
"Kyle Kosic",
"Gretchen Krueger",
"Vishal Kuo",
"Michael Lampe",
"Ikai Lan",
"Teddy Lee",
"J. Leike",
"Jade Leung",
"Daniel Levy",
"Chak Ming Li",
"Rachel Lim",
"Molly Lin",
"Stephanie Lin",
"Ma-teusz Litwin",
"Theresa Lopez",
"Ryan Lowe",
"Patricia Lue",
"A. Makanju",
"Kim Malfacini",
"Sam Manning",
"Todor Markov",
"Yaniv Markovski",
"Bianca Martin",
"Katie Mayer",
"Andrew Mayne",
"Bob McGrew",
"S. McKinney",
"C. McLeavey",
"Paul McMillan",
"Jake McNeil",
"David Medina",
"Aalok Mehta",
"Jacob Menick",
"Luke Metz",
"Andrey Mishchenko",
"Pamela Mishkin",
"Vinnie Monaco",
"Evan Morikawa",
"Daniel P. Mossing",
"Tong Mu",
"Mira Murati",
"O. Murk",
"David M'ely",
"Ashvin Nair",
"Reiichiro Nakano",
"Rajeev Nayak",
"Arvind Neelakantan",
"Richard Ngo",
"Hyeonwoo Noh",
"Ouyang Long",
"Cullen O'Keefe",
"J. Pachocki",
"Alex Paino",
"Joe Palermo",
"Ashley Pantuliano",
"Giambattista Parascandolo",
"Joel Parish",
"Emy Parparita",
"Alexandre Passos",
"Mikhail Pavlov",
"Andrew Peng",
"Adam Perelman",
"Filipe de Avila Belbute Peres",
"Michael Petrov",
"Henrique Pondé de Oliveira Pinto",
"Michael Pokorny",
"Michelle Pokrass",
"Vitchyr H. Pong",
"Tolly Powell",
"Alethea Power",
"Boris Power",
"Elizabeth Proehl",
"Raul Puri",
"Alec Radford",
"Jack W. Rae",
"Aditya Ramesh",
"Cameron Raymond",
"Francis Real",
"Kendra Rimbach",
"Carl Ross",
"Bob Rotsted",
"Henri Roussez",
"Nick Ryder",
"M. Saltarelli",
"Ted Sanders",
"Shibani Santurkar",
"Girish Sastry",
"Heather Schmidt",
"David Schnurr",
"John Schulman",
"Daniel Selsam",
"Kyla Sheppard",
"Toki Sherbakov",
"Jessica Shieh",
"Sarah Shoker",
"Pranav Shyam",
"Szymon Sidor",
"Eric Sigler",
"Maddie Simens",
"Jordan Sitkin",
"Katarina Slama",
"Ian Sohl",
"Benjamin D. Sokolowsky",
"Yang Song",
"Natalie Staudacher",
"F. Such",
"Natalie Summers",
"I. Sutskever",
"Jie Tang",
"N. Tezak",
"Madeleine Thompson",
"Phil Tillet",
"Amin Tootoonchian",
"Elizabeth Tseng",
"Preston Tuggle",
"Nick Turley",
"Jerry Tworek",
"Juan Felipe Cer'on Uribe",
"Andrea Vallone",
"Arun Vijayvergiya",
"Chelsea Voss",
"Carroll L. Wainwright",
"Justin Jay Wang",
"Alvin Wang",
"Ben Wang",
"Jonathan Ward",
"Jason Wei",
"CJ Weinmann",
"Akila Welihinda",
"P. Welinder",
"Jiayi Weng",
"Lilian Weng",
"Matt Wiethoff",
"Dave Willner",
"Clemens Winter",
"Samuel Wolrich",
"Hannah Wong",
"Lauren Workman",
"Sherwin Wu",
"Jeff Wu",
"Michael Wu",
"Kai Xiao",
"Tao Xu",
"Sarah Yoo",
"Kevin Yu",
"Qim-ing Yuan",
"Wojciech Zaremba",
"Rowan Zellers",
"Chong Zhang",
"Marvin Zhang",
"Shengjia Zhao",
"Tianhao Zheng",
"Juntang Zhuang",
"William Zhuk",
"Barret Zoph"
],
"externalIds": {
"ArXiv": "2303.08774",
"CorpusId": 257532815
},
"url": "https://www.semanticscholar.org/paper/163b4d6a79a5b19af88b8585456363340d9efd04",
"referenceCount": 0,
"citationCount": 7060,
"influentialCitationCount": 1037,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "LLaMA: Open and Efficient Foundation Language Models",
"abstract": "We introduce LLaMA, a collection of foundation language models ranging from 7B to 65B parameters. We train our models on trillions of tokens, and show that it is possible to train state-of-the-art models using publicly available datasets exclusively, without resorting to proprietary and inaccessible datasets. In particular, LLaMA-13B outperforms GPT-3 (175B) on most benchmarks, and LLaMA-65B is competitive with the best models, Chinchilla-70B and PaLM-540B. We release all our models to the research community.",
"year": 2023,
"venue": "arXiv.org",
"authors": [
"Hugo Touvron",
"Thibaut Lavril",
"Gautier Izacard",
"Xavier Martinet",
"Marie-Anne Lachaux",
"Timothée Lacroix",
"Baptiste Rozière",
"Naman Goyal",
"Eric Hambro",
"Faisal Azhar",
"Aurelien Rodriguez",
"Armand Joulin",
"Edouard Grave",
"Guillaume Lample"
],
"externalIds": {
"DBLP": "journals/corr/abs-2302-13971",
"ArXiv": "2302.13971",
"CorpusId": 257219404
},
"url": "https://www.semanticscholar.org/paper/57e849d0de13ed5f91d086936296721d4ff75a75",
"referenceCount": 80,
"citationCount": 8037,
"influentialCitationCount": 1074,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "3D Molecular Generation via Virtual Dynamics",
"abstract": "Structure-based drug design, i.e., finding molecules with high affinities to the target protein pocket, is one of the most critical tasks in drug discovery. Traditional solutions, like virtual screening, require exhaustively searching on a large molecular database, which are inefficient and cannot return novel molecules beyond the database. The pocket-based 3D molecular generation model, i.e., directly generating a molecule with a 3D structure and binding position in the pocket, is a new promising way to address this issue. Herein, we propose VD-Gen, a novel pocket-based 3D molecular generation pipeline. VD-Gen consists of several carefully designed stages to generate fine-grained 3D molecules with binding positions in the pocket cavity end-to-end. Rather than directly generating or sampling atoms with 3D positions in the pocket like in early attempts, in VD-Gen, we first randomly initialize many virtual particles in the pocket; then iteratively move these virtual particles, making the distribution of virtual particles approximate the distribution of molecular atoms. After virtual particles are stabilized in 3D space, we extract a 3D molecule from them. Finally, we further refine atoms in the extracted molecule by iterative movement again, to get a high-quality 3D molecule, and predict a confidence score for it. Extensive experiment results on pocket-based molecular generation demonstrate that VD-Gen can generate novel 3D molecules to fill the target pocket cavity with high binding affinities, significantly outperforming previous baselines.",
"year": 2023,
"venue": "Trans. Mach. Learn. Res.",
"authors": [
"Shuqi Lu",
"Lin Yao",
"X. Chen",
"Hang Zheng",
"Di He",
"Guolin Ke"
],
"externalIds": {
"DBLP": "journals/corr/abs-2302-05847",
"ArXiv": "2302.05847",
"DOI": "10.48550/arXiv.2302.05847",
"CorpusId": 256826771
},
"url": "https://www.semanticscholar.org/paper/2bc52556e167ab5a84f032963a4dcbee5246df09",
"referenceCount": 67,
"citationCount": 5,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Biology"
]
},
{
"title": "Toolformer: Language Models Can Teach Themselves to Use Tools",
"abstract": "Language models (LMs) exhibit remarkable abilities to solve new tasks from just a few examples or textual instructions, especially at scale. They also, paradoxically, struggle with basic functionality, such as arithmetic or factual lookup, where much simpler and smaller models excel. In this paper, we show that LMs can teach themselves to use external tools via simple APIs and achieve the best of both worlds. We introduce Toolformer, a model trained to decide which APIs to call, when to call them, what arguments to pass, and how to best incorporate the results into future token prediction. This is done in a self-supervised way, requiring nothing more than a handful of demonstrations for each API. We incorporate a range of tools, including a calculator, a Q\\&A system, two different search engines, a translation system, and a calendar. Toolformer achieves substantially improved zero-shot performance across a variety of downstream tasks, often competitive with much larger models, without sacrificing its core language modeling abilities.",
"year": 2023,
"venue": "Neural Information Processing Systems",
"authors": [
"Timo Schick",
"Jane Dwivedi-Yu",
"Roberto Dessì",
"Roberta Raileanu",
"M. Lomeli",
"Luke Zettlemoyer",
"Nicola Cancedda",
"Thomas Scialom"
],
"externalIds": {
"DBLP": "journals/corr/abs-2302-04761",
"ArXiv": "2302.04761",
"DOI": "10.48550/arXiv.2302.04761",
"CorpusId": 256697342
},
"url": "https://www.semanticscholar.org/paper/53d128ea815bcc0526856eb5a9c42cc977cb36a7",
"referenceCount": 63,
"citationCount": 1120,
"influentialCitationCount": 77,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Unifying Molecular and Textual Representations via Multi-task Language Modelling",
"abstract": "The recent advances in neural language models have also been successfully applied to the field of chemistry, offering generative solutions for classical problems in molecular design and synthesis planning. These new methods have the potential to fuel a new era of data-driven automation in scientific discovery. However, specialized models are still typically required for each task, leading to the need for problem-specific fine-tuning and neglecting task interrelations. The main obstacle in this field is the lack of a unified representation between natural language and chemical representations, complicating and limiting human-machine interaction. Here, we propose the first multi-domain, multi-task language model that can solve a wide range of tasks in both the chemical and natural language domains. Our model can handle chemical and natural language concurrently, without requiring expensive pre-training on single domains or task-specific models. Interestingly, sharing weights across domains remarkably improves our model when benchmarked against state-of-the-art baselines on single-domain and cross-domain tasks. In particular, sharing information across domains and tasks gives rise to large improvements in cross-domain tasks, the magnitude of which increase with scale, as measured by more than a dozen of relevant metrics. Our work suggests that such models can robustly and efficiently accelerate discovery in physical sciences by superseding problem-specific fine-tuning and enhancing human-model interactions.",
"year": 2023,
"venue": "International Conference on Machine Learning",
"authors": [
"Dimitrios Christofidellis",
"Giorgio Giannone",
"Jannis Born",
"O. Winther",
"T. Laino",
"M. Manica"
],
"externalIds": {
"DBLP": "conf/icml/Christofidellis23",
"ArXiv": "2301.12586",
"DOI": "10.48550/arXiv.2301.12586",
"CorpusId": 256389950
},
"url": "https://www.semanticscholar.org/paper/b822f2abca1da6f990b2bd47ed43da0671bfc6f8",
"referenceCount": 62,
"citationCount": 54,
"influentialCitationCount": 10,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Large language models encode clinical knowledge",
"abstract": null,
"year": 2022,
"venue": "Nature",
"authors": [
"K. Singhal",
"Shekoofeh Azizi",
"T. Tu",
"S. Mahdavi",
"Jason Wei",
"Hyung Won Chung",
"Nathan Scales",
"A. Tanwani",
"H. Cole-Lewis",
"S. Pfohl",
"P. Payne",
"Martin G. Seneviratne",
"P. Gamble",
"C. Kelly",
"Nathaneal Scharli",
"Aakanksha Chowdhery",
"P. A. Mansfield",
"B. A. Y. Arcas",
"D. Webster",
"Greg S. Corrado",
"Yossi Matias",
"K. Chou",
"Juraj Gottweis",
"Nenad Tomašev",
"Yun Liu",
"A. Rajkomar",
"J. Barral",
"Christopher Semturs",
"A. Karthikesalingam",
"Vivek Natarajan"
],
"externalIds": {
"ArXiv": "2212.13138",
"PubMedCentral": "10396962",
"DBLP": "journals/corr/abs-2212-13138",
"DOI": "10.1038/s41586-023-06291-2",
"CorpusId": 255124952,
"PubMed": "37438534"
},
"url": "https://www.semanticscholar.org/paper/6052486bc9144dc1730c12bf35323af3792a1fd0",
"referenceCount": 111,
"citationCount": 1349,
"influentialCitationCount": 78,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Medicine"
]
},
{
"title": "Galactica: A Large Language Model for Science",
"abstract": "Information overload is a major obstacle to scientific progress. The explosive growth in scientific literature and data has made it ever harder to discover useful insights in a large mass of information. Today scientific knowledge is accessed through search engines, but they are unable to organize scientific knowledge alone. In this paper we introduce Galactica: a large language model that can store, combine and reason about scientific knowledge. We train on a large scientific corpus of papers, reference material, knowledge bases and many other sources. We outperform existing models on a range of scientific tasks. On technical knowledge probes such as LaTeX equations, Galactica outperforms the latest GPT-3 by 68.2% versus 49.0%. Galactica also performs well on reasoning, outperforming Chinchilla on mathematical MMLU by 41.3% to 35.7%, and PaLM 540B on MATH with a score of 20.4% versus 8.8%. It also sets a new state-of-the-art on downstream tasks such as PubMedQA and MedMCQA dev of 77.6% and 52.9%. And despite not being trained on a general corpus, Galactica outperforms BLOOM and OPT-175B on BIG-bench. We believe these results demonstrate the potential for language models as a new interface for science. We open source the model for the benefit of the scientific community.",
"year": 2022,
"venue": "arXiv.org",
"authors": [
"Ross Taylor",
"Marcin Kardas",
"Guillem Cucurull",
"Thomas Scialom",
"A. Hartshorn",
"Elvis Saravia",
"Andrew Poulton",
"Viktor Kerkez",
"Robert Stojnic"
],
"externalIds": {
"ArXiv": "2211.09085",
"DBLP": "journals/corr/abs-2211-09085",
"DOI": "10.48550/arXiv.2211.09085",
"CorpusId": 253553203
},
"url": "https://www.semanticscholar.org/paper/7d645a3fd276918374fd9483fd675c28e46506d1",
"referenceCount": 107,
"citationCount": 570,
"influentialCitationCount": 66,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Mathematics"
]
},
{
"title": "MOFormer: Self-Supervised Transformer Model for Metal–Organic Framework Property Prediction",
"abstract": "Metal–organic frameworks (MOFs) are materials with a high degree of porosity that can be used for many applications. However, the chemical space of MOFs is enormous due to the large variety of possible combinations of building blocks and topology. Discovering the optimal MOFs for specific applications requires an efficient and accurate search over countless potential candidates. Previous high-throughput screening methods using computational simulations like DFT can be time-consuming. Such methods also require the 3D atomic structures of MOFs, which adds one extra step when evaluating hypothetical MOFs. In this work, we propose a structure-agnostic deep learning method based on the Transformer model, named as MOFormer, for property predictions of MOFs. MOFormer takes a text string representation of MOF (MOFid) as input, thus circumventing the need of obtaining the 3D structure of a hypothetical MOF and accelerating the screening process. By comparing to other descriptors such as Stoichiometric-120 and revised autocorrelations, we demonstrate that MOFormer can achieve state-of-the-art structure-agnostic prediction accuracy on all benchmarks. Furthermore, we introduce a self-supervised learning framework that pretrains the MOFormer via maximizing the cross-correlation between its structure-agnostic representations and structure-based representations of the crystal graph convolutional neural network (CGCNN) on >400k publicly available MOF data. Benchmarks show that pretraining improves the prediction accuracy of both models on various downstream prediction tasks. Furthermore, we revealed that MOFormer can be more data-efficient on quantum-chemical property prediction than structure-based CGCNN when training data is limited. Overall, MOFormer provides a novel perspective on efficient MOF property prediction using deep learning.",
"year": 2022,
"venue": "Journal of the American Chemical Society",
"authors": [
"Zhonglin Cao",
"Rishikesh Magar",
"Yuyang Wang",
"A. Farimani"
],
"externalIds": {
"PubMedCentral": "10041520",
"DBLP": "journals/corr/abs-2210-14188",
"ArXiv": "2210.14188",
"DOI": "10.1021/jacs.2c11420",
"CorpusId": 253107370,
"PubMed": "36706365"
},
"url": "https://www.semanticscholar.org/paper/3dc24fa1a2851b792139a239342381de1ec5a440",
"referenceCount": 70,
"citationCount": 58,
"influentialCitationCount": 1,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Physics",
"Medicine"
]
},
{
"title": "One Transformer Can Understand Both 2D & 3D Molecular Data",
"abstract": "Unlike vision and language data which usually has a unique format, molecules can naturally be characterized using different chemical formulations. One can view a molecule as a 2D graph or define it as a collection of atoms located in a 3D space. For molecular representation learning, most previous works designed neural networks only for a particular data format, making the learned models likely to fail for other data formats. We believe a general-purpose neural network model for chemistry should be able to handle molecular tasks across data modalities. To achieve this goal, in this work, we develop a novel Transformer-based Molecular model called Transformer-M, which can take molecular data of 2D or 3D formats as input and generate meaningful semantic representations. Using the standard Transformer as the backbone architecture, Transformer-M develops two separated channels to encode 2D and 3D structural information and incorporate them with the atom features in the network modules. When the input data is in a particular format, the corresponding channel will be activated, and the other will be disabled. By training on 2D and 3D molecular data with properly designed supervised signals, Transformer-M automatically learns to leverage knowledge from different data modalities and correctly capture the representations. We conducted extensive experiments for Transformer-M. All empirical results show that Transformer-M can simultaneously achieve strong performance on 2D and 3D tasks, suggesting its broad applicability. The code and models will be made publicly available at https://github.com/lsj2408/Transformer-M.",
"year": 2022,
"venue": "International Conference on Learning Representations",
"authors": [
"Shengjie Luo",
"Tianlang Chen",
"Yixian Xu",
"Shuxin Zheng",
"Tie-Yan Liu",
"Di He",
"Liwei Wang"
],
"externalIds": {
"DBLP": "conf/iclr/LuoCXZL0H23",
"ArXiv": "2210.01765",
"DOI": "10.48550/arXiv.2210.01765",
"CorpusId": 252692952
},
"url": "https://www.semanticscholar.org/paper/bf8d98979f1dfd1df492be21d760d5c0e7f22359",
"referenceCount": 102,
"citationCount": 71,
"influentialCitationCount": 11,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Biology",
"Mathematics"
]
},
{
"title": "Large Language Models are Zero-Shot Reasoners",
"abstract": "Pretrained large language models (LLMs) are widely used in many sub-fields of natural language processing (NLP) and generally known as excellent few-shot learners with task-specific exemplars. Notably, chain of thought (CoT) prompting, a recent technique for eliciting complex multi-step reasoning through step-by-step answer examples, achieved the state-of-the-art performances in arithmetics and symbolic reasoning, difficult system-2 tasks that do not follow the standard scaling laws for LLMs. While these successes are often attributed to LLMs' ability for few-shot learning, we show that LLMs are decent zero-shot reasoners by simply adding\"Let's think step by step\"before each answer. Experimental results demonstrate that our Zero-shot-CoT, using the same single prompt template, significantly outperforms zero-shot LLM performances on diverse benchmark reasoning tasks including arithmetics (MultiArith, GSM8K, AQUA-RAT, SVAMP), symbolic reasoning (Last Letter, Coin Flip), and other logical reasoning tasks (Date Understanding, Tracking Shuffled Objects), without any hand-crafted few-shot examples, e.g. increasing the accuracy on MultiArith from 17.7% to 78.7% and GSM8K from 10.4% to 40.7% with large InstructGPT model (text-davinci-002), as well as similar magnitudes of improvements with another off-the-shelf large model, 540B parameter PaLM. The versatility of this single prompt across very diverse reasoning tasks hints at untapped and understudied fundamental zero-shot capabilities of LLMs, suggesting high-level, multi-task broad cognitive capabilities may be extracted by simple prompting. We hope our work not only serves as the minimal strongest zero-shot baseline for the challenging reasoning benchmarks, but also highlights the importance of carefully exploring and analyzing the enormous zero-shot knowledge hidden inside LLMs before crafting finetuning datasets or few-shot exemplars.",
"year": 2022,
"venue": "Neural Information Processing Systems",
"authors": [
"Takeshi Kojima",
"S. Gu",
"Machel Reid",
"Yutaka Matsuo",
"Yusuke Iwasawa"
],
"externalIds": {
"DBLP": "journals/corr/abs-2205-11916",
"ArXiv": "2205.11916",
"CorpusId": 249017743
},
"url": "https://www.semanticscholar.org/paper/e7ad08848d5d7c5c47673ffe0da06af443643bda",
"referenceCount": 61,
"citationCount": 2724,
"influentialCitationCount": 259,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Translation between Molecules and Natural Language",
"abstract": "We present MolT5 - a self-supervised learning framework for pretraining models on a vast amount of unlabeled natural language text and molecule strings. MolT5 allows for new, useful, and challenging analogs of traditional vision-language tasks, such as molecule captioning and text-based de novo molecule generation (altogether: translation between molecules and language), which we explore for the first time. Since MolT5 pretrains models on single-modal data, it helps overcome the chemistry domain shortcoming of data scarcity. Furthermore, we consider several metrics, including a new cross-modal embedding-based metric, to evaluate the tasks of molecule captioning and text-based molecule generation. Our results show that MolT5-based models are able to generate outputs, both molecules and captions, which in many cases are high quality.",
"year": 2022,
"venue": "Conference on Empirical Methods in Natural Language Processing",
"authors": [
"Carl N. Edwards",
"T. Lai",
"Kevin Ros",
"Garrett Honke",
"Heng Ji"
],
"externalIds": {
"ACL": "2022.emnlp-main.26",
"DBLP": "journals/corr/abs-2204-11817",
"ArXiv": "2204.11817",
"DOI": "10.48550/arXiv.2204.11817",
"CorpusId": 248376906
},
"url": "https://www.semanticscholar.org/paper/3b9b1aba877ecd3f7e508cbc78a41b623349902b",
"referenceCount": 90,
"citationCount": 113,
"influentialCitationCount": 33,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "GeoDiff: a Geometric Diffusion Model for Molecular Conformation Generation",
"abstract": "Predicting molecular conformations from molecular graphs is a fundamental problem in cheminformatics and drug discovery. Recently, significant progress has been achieved with machine learning approaches, especially with deep generative models. Inspired by the diffusion process in classical non-equilibrium thermodynamics where heated particles will diffuse from original states to a noise distribution, in this paper, we propose a novel generative model named GeoDiff for molecular conformation prediction. GeoDiff treats each atom as a particle and learns to directly reverse the diffusion process (i.e., transforming from a noise distribution to stable conformations) as a Markov chain. Modeling such a generation process is however very challenging as the likelihood of conformations should be roto-translational invariant. We theoretically show that Markov chains evolving with equivariant Markov kernels can induce an invariant distribution by design, and further propose building blocks for the Markov kernels to preserve the desirable equivariance property. The whole framework can be efficiently trained in an end-to-end fashion by optimizing a weighted variational lower bound to the (conditional) likelihood. Experiments on multiple benchmarks show that GeoDiff is superior or comparable to existing state-of-the-art approaches, especially on large molecules.",
"year": 2022,
"venue": "International Conference on Learning Representations",
"authors": [
"Minkai Xu",
"Lantao Yu",
"Yang Song",
"Chence Shi",
"Stefano Ermon",
"Jian Tang"
],
"externalIds": {
"DBLP": "conf/iclr/XuY0SE022",
"ArXiv": "2203.02923",
"DOI": "10.48550/arXiv.2203.02923",
"CorpusId": 247292764
},
"url": "https://www.semanticscholar.org/paper/c871d2dc802d276608a6734637f8bc9e6da0d837",
"referenceCount": 58,
"citationCount": 381,
"influentialCitationCount": 38,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Biology"
]
},
{
"title": "Chain of Thought Prompting Elicits Reasoning in Large Language Models",
"abstract": "We explore how generating a chain of thought -- a series of intermediate reasoning steps -- significantly improves the ability of large language models to perform complex reasoning. In particular, we show how such reasoning abilities emerge naturally in sufficiently large language models via a simple method called chain of thought prompting, where a few chain of thought demonstrations are provided as exemplars in prompting. Experiments on three large language models show that chain of thought prompting improves performance on a range of arithmetic, commonsense, and symbolic reasoning tasks. The empirical gains can be striking. For instance, prompting a 540B-parameter language model with just eight chain of thought exemplars achieves state of the art accuracy on the GSM8K benchmark of math word problems, surpassing even finetuned GPT-3 with a verifier.",
"year": 2022,
"venue": "Neural Information Processing Systems",
"authors": [
"Jason Wei",
"Xuezhi Wang",
"Dale Schuurmans",
"Maarten Bosma",
"E. Chi",
"F. Xia",
"Quoc Le",
"Denny Zhou"
],
"externalIds": {
"DBLP": "journals/corr/abs-2201-11903",
"ArXiv": "2201.11903",
"CorpusId": 246411621
},
"url": "https://www.semanticscholar.org/paper/1b6e810ce0afd0dd093f789d2b2742d047e316d5",
"referenceCount": 118,
"citationCount": 5216,
"influentialCitationCount": 589,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Uncertainty-aware prediction of chemical reaction yields with graph neural networks",
"abstract": null,
"year": 2022,
"venue": "Journal of Cheminformatics",
"authors": [
"Y. Kwon",
"Dongseon Lee",
"Youn-Suk Choi",
"Seokho Kang"
],
"externalIds": {
"DBLP": "journals/jcheminf/KwonLCK22",
"PubMedCentral": "8750748",
"DOI": "10.1186/s13321-021-00579-z",
"CorpusId": 245829256,
"PubMed": "35012654"
},
"url": "https://www.semanticscholar.org/paper/919ae741e8e8796e05839b2b556f4ac636bce463",
"referenceCount": 19,
"citationCount": 26,
"influentialCitationCount": 2,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Medicine"
]
},
{
"title": "Molformer: Motif-Based Transformer on 3D Heterogeneous Molecular Graphs",
"abstract": "Procuring expressive molecular representations underpins AI-driven molecule design and scientific discovery. The research mainly focuses on atom-level homogeneous molecular graphs, ignoring the rich information in subgraphs or motifs. However, it has been widely accepted that substructures play a dominant role in identifying and determining molecular properties. To address such issues, we formulate heterogeneous molecular graphs (HMGs) and introduce a novel architecture to exploit both molecular motifs and 3D geometry. Precisely, we extract functional groups as motifs for small molecules and employ reinforcement learning to adaptively select quaternary amino acids as motif candidates for proteins. Then HMGs are constructed with both atom-level and motif-level nodes. To better accommodate those HMGs, we introduce a variant of the Transformer named Molformer, which adopts a heterogeneous self-attention layer to distinguish the interactions between multi-level nodes. Besides, it is also coupled with a multi-scale mechanism to capture fine-grained local patterns with increasing contextual scales. An attentive farthest point sampling algorithm is also proposed to obtain the molecular representations. We validate Molformer across a broad range of domains, including quantum chemistry, physiology, and biophysics. Extensive experiments show that Molformer outperforms or achieves the comparable performance of several state-of-the-art baselines. Our work provides a promising way to utilize informative motifs from the perspective of multi-level graph construction. The code is available at https://github.com/smiles724/Molformer.",
"year": 2021,
"venue": "AAAI Conference on Artificial Intelligence",
"authors": [
"Fang Wu",
"Qian Zhang",
"Dragomir R. Radev",
"Jiyu Cui",
"Wen Zhang",
"Huabin Xing",
"Ningyu Zhang",
"Hua-zeng Chen"
],
"externalIds": {
"DBLP": "conf/aaai/WuRL23",
"ArXiv": "2110.01191",
"DOI": "10.1609/aaai.v37i4.25662",
"CorpusId": 248863206
},
"url": "https://www.semanticscholar.org/paper/540ed994eb00b5279748d1f26d04371e3a67ec0d",
"referenceCount": 123,
"citationCount": 38,
"influentialCitationCount": 1,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Biology"
]
},
{
"title": "Finetuned Language Models Are Zero-Shot Learners",
"abstract": "This paper explores a simple method for improving the zero-shot learning abilities of language models. We show that instruction tuning -- finetuning language models on a collection of tasks described via instructions -- substantially improves zero-shot performance on unseen tasks. We take a 137B parameter pretrained language model and instruction-tune it on over 60 NLP tasks verbalized via natural language instruction templates. We evaluate this instruction-tuned model, which we call FLAN, on unseen task types. FLAN substantially improves the performance of its unmodified counterpart and surpasses zero-shot 175B GPT-3 on 20 of 25 tasks that we evaluate. FLAN even outperforms few-shot GPT-3 by a large margin on ANLI, RTE, BoolQ, AI2-ARC, OpenbookQA, and StoryCloze. Ablation studies reveal that number of finetuning datasets, model scale, and natural language instructions are key to the success of instruction tuning.",
"year": 2021,
"venue": "International Conference on Learning Representations",
"authors": [
"Jason Wei",
"Maarten Bosma",
"Vincent Zhao",
"Kelvin Guu",
"Adams Wei Yu",
"Brian Lester",
"Nan Du",
"Andrew M. Dai",
"Quoc V. Le"
],
"externalIds": {
"DBLP": "journals/corr/abs-2109-01652",
"ArXiv": "2109.01652",
"CorpusId": 237416585
},
"url": "https://www.semanticscholar.org/paper/ff0b2681d7b05e16c46dfb71d980cc2f605907cd",
"referenceCount": 169,
"citationCount": 2810,
"influentialCitationCount": 323,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Chemformer: a pre-trained transformer for computational chemistry",
"abstract": "Transformer models coupled with a simplified molecular line entry system (SMILES) have recently proven to be a powerful combination for solving challenges in cheminformatics. These models, however, are often developed specifically for a single application and can be very resource-intensive to train. In this work we present the Chemformer model—a Transformer-based model which can be quickly applied to both sequence-to-sequence and discriminative cheminformatics tasks. Additionally, we show that self-supervised pre-training can improve performance and significantly speed up convergence on downstream tasks. On direct synthesis and retrosynthesis prediction benchmark datasets we publish state-of-the-art results for top-1 accuracy. We also improve on existing approaches for a molecular optimisation task and show that Chemformer can optimise on multiple discriminative tasks simultaneously. Models, datasets and code will be made available after publication.",
"year": 2021,
"venue": "Machine Learning: Science and Technology",
"authors": [],
"externalIds": {
"DBLP": "journals/mlst/IrwinDHB22",
"MAG": "3178976598",
"DOI": "10.1088/2632-2153/ac3ffb",
"CorpusId": 237747003
},
"url": "https://www.semanticscholar.org/paper/3f9f7f690e003176316d0ee56fbcbfed4b6b0948",
"referenceCount": 0,
"citationCount": 190,
"influentialCitationCount": 20,
"isOpenAccess": true,
"fieldsOfStudy": [
"Physics",
"Computer Science"
]
},
{
"title": "RetroPrime: A Diverse, Plausible and Transformer-based Method for Single-Step Retrosynthesis Predictions",
"abstract": null,
"year": 2021,
"venue": "",
"authors": [
"Xiaorui Wang",
"Yuquan Li",
"J. Qiu",
"Guangyong Chen",
"Huanxiang Liu",
"B. Liao",
"Chang-Yu Hsieh",
"X. Yao"
],
"externalIds": {
"MAG": "3152975457",
"DOI": "10.1016/J.CEJ.2021.129845",
"CorpusId": 234830645
},
"url": "https://www.semanticscholar.org/paper/d03617dc7a574009ce31c8e3dc9d80678cba67c0",
"referenceCount": 31,
"citationCount": 78,
"influentialCitationCount": 7,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "GLM: General Language Model Pretraining with Autoregressive Blank Infilling",
"abstract": "There have been various types of pretraining architectures including autoencoding models (e.g., BERT), autoregressive models (e.g., GPT), and encoder-decoder models (e.g., T5). However, none of the pretraining frameworks performs the best for all tasks of three main categories including natural language understanding (NLU), unconditional generation, and conditional generation. We propose a General Language Model (GLM) based on autoregressive blank infilling to address this challenge. GLM improves blank filling pretraining by adding 2D positional encodings and allowing an arbitrary order to predict spans, which results in performance gains over BERT and T5 on NLU tasks. Meanwhile, GLM can be pretrained for different types of tasks by varying the number and lengths of blanks. On a wide range of tasks across NLU, conditional and unconditional generation, GLM outperforms BERT, T5, and GPT given the same model sizes and data, and achieves the best performance from a single pretrained model with 1.25× parameters of BERT Large , demonstrating its generalizability to different downstream tasks.",
"year": 2021,
"venue": "Annual Meeting of the Association for Computational Linguistics",
"authors": [
"Zhengxiao Du",
"Yujie Qian",
"Xiao Liu",
"Ming Ding",
"J. Qiu",
"Zhilin Yang",
"Jie Tang"
],
"externalIds": {
"DBLP": "conf/acl/DuQLDQY022",
"ArXiv": "2103.10360",
"ACL": "2022.acl-long.26",
"DOI": "10.18653/v1/2022.acl-long.26",
"CorpusId": 247519241
},
"url": "https://www.semanticscholar.org/paper/50796b0f3edf9cb5ff1e447c298b33755378aa4f",
"referenceCount": 64,
"citationCount": 1078,
"influentialCitationCount": 134,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Deep generative models for ligand‐based de novo design applied to multi‐parametric optimization",
"abstract": "Multi‐parameter optimization (MPO) is a major challenge in new chemical entity (NCE) drug discovery. Recently, promising results were reported for deep learning generative models applied to de novo molecular design, but, to our knowledge, until now no report was made of the value of this new technology for addressing MPO in an actual drug discovery project. In this study, we demonstrate the benefit of applying AI technology in a real drug discovery project. We evaluate the potential of a ligand‐based de novo design technology using deep learning generative models to accelerate the obtention of lead compounds meeting 11 different biological activity objectives simultaneously. Using the initial dataset of the project, we built QSAR models for all the 11 objectives, with moderate to high performance (precision between 0.67 and 1.0 on an independent test set). Our DL‐based AI de novo design algorithm, combined with the QSAR models, generated 150 virtual compounds predicted as active on all objectives. Eleven were synthetized and tested. The AI‐designed compounds met 9.5 objectives on average (i.e., 86% success rate) versus 6.4 (i.e., 58% success rate) for the initial molecules measured on all objectives. One of the AI‐designed molecules was active on all 11 measured objectives, and two were active on 10 objectives while being in the error margin of the assay for the last one. The AI algorithm designed compounds with functional groups, which, although being rare or absent in the initial dataset, turned out to be highly beneficial for the MPO.",
"year": 2021,
"venue": "Journal of Computational Chemistry",
"authors": [
"Q. Perron",
"O. Mirguet",
"Hamza Tajmouati",
"A. Skiredj",
"Anne Rojas",
"Arnaud Gohier",
"P. Ducrot",
"M. Bourguignon",
"P. Sansilvestri-Morel",
"Nicolas Do Huu",
"F. Gellibert",
"Y. Gaston-Mathé"
],
"externalIds": {
"DBLP": "journals/jcc/PerronMTSRGDBSH22",
"MAG": "3121207609",
"DOI": "10.1002/jcc.26826",
"CorpusId": 234163184,
"PubMed": "35218219"
},
"url": "https://www.semanticscholar.org/paper/262af3e7cc2368818d44bd142ae0aa0381cc6ae3",
"referenceCount": 41,
"citationCount": 28,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Medicine"
]
},
{
"title": "Measuring Massive Multitask Language Understanding",
"abstract": "We propose a new test to measure a text model's multitask accuracy. The test covers 57 tasks including elementary mathematics, US history, computer science, law, and more. To attain high accuracy on this test, models must possess extensive world knowledge and problem solving ability. We find that while most recent models have near random-chance accuracy, the very largest GPT-3 model improves over random chance by almost 20 percentage points on average. However, on every one of the 57 tasks, the best models still need substantial improvements before they can reach expert-level accuracy. Models also have lopsided performance and frequently do not know when they are wrong. Worse, they still have near-random accuracy on some socially important subjects such as morality and law. By comprehensively evaluating the breadth and depth of a model's academic and professional understanding, our test can be used to analyze models across many tasks and to identify important shortcomings.",
"year": 2020,
"venue": "International Conference on Learning Representations",
"authors": [
"Dan Hendrycks",
"Collin Burns",
"Steven Basart",
"Andy Zou",
"Mantas Mazeika",
"D. Song",
"J. Steinhardt"
],
"externalIds": {
"DBLP": "conf/iclr/HendrycksBBZMSS21",
"ArXiv": "2009.03300",
"MAG": "3083410900",
"CorpusId": 221516475
},
"url": "https://www.semanticscholar.org/paper/814a4f680b9ba6baba23b93499f4b48af1a27678",
"referenceCount": 35,
"citationCount": 2252,
"influentialCitationCount": 471,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Predicting retrosynthetic pathways using transformer-based models and a hyper-graph exploration strategy†",
"abstract": "We present an extension of our Molecular Transformer model combined with a hyper-graph exploration strategy for automatic retrosynthesis route planning without human intervention. The single-step retrosynthetic model sets a new state of the art for predicting reactants as well as reagents, solvents and catalysts for each retrosynthetic step. We introduce four metrics (coverage, class diversity, round-trip accuracy and Jensen–Shannon divergence) to evaluate the single-step retrosynthetic models, using the forward prediction and a reaction classification model always based on the transformer architecture. The hypergraph is constructed on the fly, and the nodes are filtered and further expanded based on a Bayesian-like probability. We critically assessed the end-to-end framework with several retrosynthesis examples from literature and academic exams. Overall, the frameworks have an excellent performance with few weaknesses related to the training data. The use of the introduced metrics opens up the possibility to optimize entire retrosynthetic frameworks by focusing on the performance of the single-step model only.",
"year": 2020,
"venue": "Chemical Science",
"authors": [
"P. Schwaller",
"Riccardo Petraglia",
"Valerio Zullo",
"Vishnu H. Nair",
"Rico Häuselmann",
"Riccardo Pisoni",
"C. Bekas",
"A. Iuliano",
"T. Laino"
],
"externalIds": {
"MAG": "3010145447",
"PubMedCentral": "8152799",
"DOI": "10.1039/c9sc05704h",
"CorpusId": 216332642,
"PubMed": "34122839"
},
"url": "https://www.semanticscholar.org/paper/1d217bf5a66a8d13f3adfbb775989aa0fe407c06",
"referenceCount": 54,
"citationCount": 190,
"influentialCitationCount": 10,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine",
"Computer Science"
]
},
{
"title": "PIQA: Reasoning about Physical Commonsense in Natural Language",
"abstract": "To apply eyeshadow without a brush, should I use a cotton swab or a toothpick? Questions requiring this kind of physical commonsense pose a challenge to today's natural language understanding systems. While recent pretrained models (such as BERT) have made progress on question answering over more abstract domains – such as news articles and encyclopedia entries, where text is plentiful – in more physical domains, text is inherently limited due to reporting bias. Can AI systems learn to reliably answer physical commonsense questions without experiencing the physical world?In this paper, we introduce the task of physical commonsense reasoning and a corresponding benchmark dataset Physical Interaction: Question Answering or PIQA. Though humans find the dataset easy (95% accuracy), large pretrained models struggle (∼75%). We provide analysis about the dimensions of knowledge that existing models lack, which offers significant opportunities for future research.",
"year": 2019,
"venue": "AAAI Conference on Artificial Intelligence",
"authors": [
"Yonatan Bisk",
"Rowan Zellers",
"Ronan Le Bras",
"Jianfeng Gao",
"Yejin Choi"
],
"externalIds": {
"DBLP": "conf/aaai/BiskZLGC20",
"MAG": "2998617917",
"ArXiv": "1911.11641",
"DOI": "10.1609/AAAI.V34I05.6239",
"CorpusId": 208290939
},
"url": "https://www.semanticscholar.org/paper/04f4e55e14150b7c48b0287ba77c7443df76ed45",
"referenceCount": 41,
"citationCount": 1086,
"influentialCitationCount": 143,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "ZeRO: Memory optimizations Toward Training Trillion Parameter Models",
"abstract": "Large deep learning models offer significant accuracy gains, but training billions to trillions of parameters is challenging. Existing solutions such as data and model parallelisms exhibit fundamental limitations to fit these models into limited device memory, while obtaining computation, communication and development efficiency. We develop a novel solution, Zero Redundancy Optimizer (ZeRO), to optimize memory, vastly improving training speed while increasing the model size that can be efficiently trained. ZeRO eliminates memory redundancies in data- and model-parallel training while retaining low communication volume and high computational granularity, allowing us to scale the model size proportional to the number of devices with sustained high efficiency. Our analysis on memory requirements and communication volume demonstrates: ZeRO has the potential to scale beyond 1 Trillion parameters using today’s hardware. We implement and evaluate ZeRO: it trains large models of over 100B parameter with super-linear speedup on 400 GPUs, achieving throughput of 15 Petaflops. This represents an 8x increase in model size and 10x increase in achievable performance over state-of-the-art. In terms of usability, ZeRO can train large models of up to 13B parameters (e.g., larger than Megatron GPT 8. 3B and T5 11B) without requiring model parallelism which is harder for scientists to apply. Last but not the least, researchers have used the system breakthroughs of ZeRO to create Turing-NLG, the world’s largest language model at the time (17B parameters) with record breaking accuracy.",
"year": 2019,
"venue": "International Conference for High Performance Computing, Networking, Storage and Analysis",
"authors": [
"Samyam Rajbhandari",
"Jeff Rasley",
"Olatunji Ruwase",
"Yuxiong He"
],
"externalIds": {
"MAG": "3025935268",
"DBLP": "conf/sc/RajbhandariRRH20",
"ArXiv": "1910.02054",
"DOI": "10.1109/SC41405.2020.00024",
"CorpusId": 269617042
},
"url": "https://www.semanticscholar.org/paper/00c957711b12468cb38424caccdf5291bb354033",
"referenceCount": 30,
"citationCount": 354,
"influentialCitationCount": 30,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Mathematics"
]
},
{
"title": "Remarkable Reactivity of Boron-Substituted Furans in the Diels-Alder Reactions with Maleic Anhydride.",
"abstract": "The reactivity of boron-substituted furans as dienes in the Diels-Alder reaction with maleic anhydride has been investigated. Gratifyingly, the furans with boryl substituents at C-3 gave the exo cycloadduct exclusively with excellent yields. In particular, the potassium trifluoroborate exhibited outstanding reactivity at room temperature. Theoretical calculations suggested that the trifluoroborate group is highly activating and also that the thermodynamics is the main factor that determines whether the products can be obtained efficiently or not.",
"year": 2019,
"venue": "Organic Letters",
"authors": [
"Noelia S. Medran",
"Federico Dezotti",
"S. Pellegrinet"
],
"externalIds": {
"MAG": "2952684408",
"DOI": "10.1021/acs.orglett.9b01662",
"CorpusId": 195763608,
"PubMed": "31247787"
},
"url": "https://www.semanticscholar.org/paper/0505cd293a25dda89fda748a648f295e475f7e7b",
"referenceCount": 34,
"citationCount": 10,
"influentialCitationCount": 1,
"isOpenAccess": false,
"fieldsOfStudy": [
"Medicine",
"Chemistry"
]
},
{
"title": "Predicting reaction performance in C–N cross-coupling using machine learning",
"abstract": "A guide for catalyst choice in the forest Chemists often discover reactions by applying catalysts to a series of simple compounds. Tweaking those reactions to tolerate more structural complexity in pharmaceutical research is time-consuming. Ahneman et al. report that machine learning can help. Using a high-throughput data set, they trained a random forest algorithm to predict which specific palladium catalysts would best tolerate isoxazoles (cyclic structures with an N–O bond) during C–N bond formation. The predictions also helped to guide analysis of the catalyst inhibition mechanism. Science, this issue p. 186 A random forest algorithm trained on high-throughput data predicts which catalysts best tolerate certain heterocycles. Machine learning methods are becoming integral to scientific inquiry in numerous disciplines. We demonstrated that machine learning can be used to predict the performance of a synthetic reaction in multidimensional chemical space using data obtained via high-throughput experimentation. We created scripts to compute and extract atomic, molecular, and vibrational descriptors for the components of a palladium-catalyzed Buchwald-Hartwig cross-coupling of aryl halides with 4-methylaniline in the presence of various potentially inhibitory additives. Using these descriptors as inputs and reaction yield as output, we showed that a random forest algorithm provides significantly improved predictive performance over linear regression analysis. The random forest model was also successfully applied to sparse training sets and out-of-sample prediction, suggesting its value in facilitating adoption of synthetic methodology.",
"year": 2018,
"venue": "Science",
"authors": [
"Derek T. Ahneman",
"Jesús G Estrada",
"Shishi Lin",
"S. Dreher",
"A. Doyle"
],
"externalIds": {
"MAG": "2785942661",
"DOI": "10.1126/science.aar5169",
"CorpusId": 206666015,
"PubMed": "29449509"
},
"url": "https://www.semanticscholar.org/paper/a44f2ad10815051142c97fb466c51c8c9eda1d7f",
"referenceCount": 48,
"citationCount": 584,
"influentialCitationCount": 15,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Medicine"
]
},
{
"title": "Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge",
"abstract": "We present a new question set, text corpus, and baselines assembled to encourage AI research in advanced question answering. Together, these constitute the AI2 Reasoning Challenge (ARC), which requires far more powerful knowledge and reasoning than previous challenges such as SQuAD or SNLI. The ARC question set is partitioned into a Challenge Set and an Easy Set, where the Challenge Set contains only questions answered incorrectly by both a retrieval-based algorithm and a word co-occurence algorithm. The dataset contains only natural, grade-school science questions (authored for human tests), and is the largest public-domain set of this kind (7,787 questions). We test several baselines on the Challenge Set, including leading neural models from the SQuAD and SNLI tasks, and find that none are able to significantly outperform a random baseline, reflecting the difficult nature of this task. We are also releasing the ARC Corpus, a corpus of 14M science sentences relevant to the task, and implementations of the three neural baseline models tested. Can your model perform better? We pose ARC as a challenge to the community.",
"year": 2018,
"venue": "arXiv.org",
"authors": [
"Peter Clark",
"Isaac Cowhey",
"Oren Etzioni",
"Tushar Khot",
"Ashish Sabharwal",
"Carissa Schoenick",
"Oyvind Tafjord"
],
"externalIds": {
"ArXiv": "1803.05457",
"DBLP": "journals/corr/abs-1803-05457",
"MAG": "2794325560",
"CorpusId": 3922816
},
"url": "https://www.semanticscholar.org/paper/88bb0a28bb58d847183ec505dda89b63771bb495",
"referenceCount": 36,
"citationCount": 1435,
"influentialCitationCount": 213,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "A platform for automated nanomole-scale reaction screening and micromole-scale synthesis in flow",
"abstract": "A reaction screen in flowing solvent Chemists charged with manufacturing pharmaceuticals have recently been exploring the efficiency advantages of continuous flow techniques. Perera et al. now show that a flow apparatus can also accelerate reaction optimization earlier in the drug discovery process. They modified a high-performance liquid chromatography system to screen a wide variety of solvent, ligand, and base combinations to optimize carbon-carbon bond formation. Injecting stock solution aliquots of the catalyst and reactants into a carrier solvent stream let the authors vary the main solvent efficiently and scale up the optimal conditions for product isolation. Science, this issue p. 429 Chromatographic, flow-based screening of reaction conditions is demonstrated for Suzuki coupling in pharmaceutical research. The scarcity of complex intermediates in pharmaceutical research motivates the pursuit of reaction optimization protocols on submilligram scales. We report here the development of an automated flow-based synthesis platform, designed from commercially available components, that integrates both rapid nanomole-scale reaction screening and micromole-scale synthesis into a single modular unit. This system was validated by exploring a diverse range of reaction variables in a Suzuki-Miyaura coupling on nanomole scale at elevated temperatures, generating liquid chromatography–mass spectrometry data points for 5760 reactions at a rate of >1500 reactions per 24 hours. Through multiple injections of the same segment, the system directly produced micromole quantities of desired material. The optimal conditions were also replicated in traditional flow and batch mode at 50- to 200-milligram scale to provide good to excellent yields.",
"year": 2018,
"venue": "Science",
"authors": [
"D. Perera",
"Joseph W. Tucker",
"Shalini Brahmbhatt",
"Christopher J. Helal",
"Ashley Chong",
"W. Farrell",
"P. Richardson",
"N. Sach"
],
"externalIds": {
"MAG": "2784918212",
"DOI": "10.1126/science.aap9112",
"CorpusId": 5003014,
"PubMed": "29371464"
},
"url": "https://www.semanticscholar.org/paper/44f60e595cf4583a7ae0494999c739fab9cfd03a",
"referenceCount": 33,
"citationCount": 259,
"influentialCitationCount": 4,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "Decoupled Weight Decay Regularization",
"abstract": "L$_2$ regularization and weight decay regularization are equivalent for standard stochastic gradient descent (when rescaled by the learning rate), but as we demonstrate this is \\emph{not} the case for adaptive gradient algorithms, such as Adam. While common implementations of these algorithms employ L$_2$ regularization (often calling it \"weight decay\" in what may be misleading due to the inequivalence we expose), we propose a simple modification to recover the original formulation of weight decay regularization by \\emph{decoupling} the weight decay from the optimization steps taken w.r.t. the loss function. We provide empirical evidence that our proposed modification (i) decouples the optimal choice of weight decay factor from the setting of the learning rate for both standard SGD and Adam and (ii) substantially improves Adam's generalization performance, allowing it to compete with SGD with momentum on image classification datasets (on which it was previously typically outperformed by the latter). Our proposed decoupled weight decay has already been adopted by many researchers, and the community has implemented it in TensorFlow and PyTorch; the complete source code for our experiments is available at this https URL",
"year": 2017,
"venue": "International Conference on Learning Representations",
"authors": [
"I. Loshchilov",
"F. Hutter"
],
"externalIds": {
"MAG": "2950541952",
"DBLP": "conf/iclr/LoshchilovH19",
"CorpusId": 53592270
},
"url": "https://www.semanticscholar.org/paper/d07284a6811f1b2745d91bdb06b040b57f226882",
"referenceCount": 35,
"citationCount": 17313,
"influentialCitationCount": 3078,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Predicting Organic Reaction Outcomes with Weisfeiler-Lehman Network",
"abstract": "The prediction of organic reaction outcomes is a fundamental problem in computational chemistry. Since a reaction may involve hundreds of atoms, fully exploring the space of possible transformations is intractable. The current solution utilizes reaction templates to limit the space, but it suffers from coverage and efficiency issues. In this paper, we propose a template-free approach to efficiently explore the space of product molecules by first pinpointing the reaction center -- the set of nodes and edges where graph edits occur. Since only a small number of atoms contribute to reaction center, we can directly enumerate candidate products. The generated candidates are scored by a Weisfeiler-Lehman Difference Network that models high-order interactions between changes occurring at nodes across the molecule. Our framework outperforms the top-performing template-based approach with a 10\\% margin, while running orders of magnitude faster. Finally, we demonstrate that the model accuracy rivals the performance of domain experts.",
"year": 2017,
"venue": "Neural Information Processing Systems",
"authors": [
"Wengong Jin",
"Connor W. Coley",
"R. Barzilay",
"T. Jaakkola"
],
"externalIds": {
"MAG": "2963477006",
"ArXiv": "1709.04555",
"DBLP": "journals/corr/abs-1709-04555",
"CorpusId": 21964381
},
"url": "https://www.semanticscholar.org/paper/aaf046c4da99ee6184f3fd31961a9967272152f9",
"referenceCount": 19,
"citationCount": 255,
"influentialCitationCount": 32,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science",
"Mathematics"
]
},
{
"title": "Crowdsourcing Multiple Choice Science Questions",
"abstract": "We present a novel method for obtaining high-quality, domain-targeted multiple choice questions from crowd workers. Generating these questions can be difficult without trading away originality, relevance or diversity in the answer options. Our method addresses these problems by leveraging a large corpus of domain-specific text and a small set of existing questions. It produces model suggestions for document selection and answer distractor choice which aid the human question generation process. With this method we have assembled SciQ, a dataset of 13.7K multiple choice science exam questions. We demonstrate that the method produces in-domain questions by providing an analysis of this new dataset and by showing that humans cannot distinguish the crowdsourced questions from original questions. When using SciQ as additional training data to existing questions, we observe accuracy improvements on real science exams.",
"year": 2017,
"venue": "NUT@EMNLP",
"authors": [
"Johannes Welbl",
"Nelson F. Liu",
"Matt Gardner"
],
"externalIds": {
"DBLP": "journals/corr/WelblLG17",
"MAG": "2963123047",
"ACL": "W17-4413",
"ArXiv": "1707.06209",
"DOI": "10.18653/v1/W17-4413",
"CorpusId": 1553193
},
"url": "https://www.semanticscholar.org/paper/932a5de79d8a8ebb75ea0c43493450fd9922e738",
"referenceCount": 47,
"citationCount": 299,
"influentialCitationCount": 42,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Mathematics"
]
},
{
"title": "MoleculeNet: a benchmark for molecular machine learning",
"abstract": "A large scale benchmark for molecular machine learning consisting of multiple public datasets, metrics, featurizations and learning algorithms.",
"year": 2017,
"venue": "Chemical Science",
"authors": [
"Zhenqin Wu",
"Bharath Ramsundar",
"Evan N. Feinberg",
"Joseph Gomes",
"C. Geniesse",
"Aneesh S. Pappu",
"K. Leswing",
"V. Pande"
],
"externalIds": {
"MAG": "2949858440",
"PubMedCentral": "5868307",
"ArXiv": "1703.00564",
"DBLP": "journals/corr/WuRFGGPLP17",
"DOI": "10.1039/c7sc02664a",
"CorpusId": 217680306,
"PubMed": "29629118"
},
"url": "https://www.semanticscholar.org/paper/d0ab11de3077490c80a08abd0fb8827bac84c454",
"referenceCount": 124,
"citationCount": 1488,
"influentialCitationCount": 265,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Medicine",
"Physics",
"Mathematics"
]
},
{
"title": "What's What: The (Nearly) Definitive Guide to Reaction Role Assignment",
"abstract": "When analyzing chemical reactions it is essential to know which molecules are actively involved in the reaction and which educts will form the product molecules. Assigning reaction roles, like reactant, reagent, or product, to the molecules of a chemical reaction might be a trivial problem for hand-curated reaction schemes but it is more difficult to automate, an essential step when handling large amounts of reaction data. Here, we describe a new fingerprint-based and data-driven approach to assign reaction roles which is also applicable to rather unbalanced and noisy reaction schemes. Given a set of molecules involved and knowing the product(s) of a reaction we assign the most probable reactants and sort out the remaining reagents. Our approach was validated using two different data sets: one hand-curated data set comprising about 680 diverse reactions extracted from patents which span more than 200 different reaction types and include up to 18 different reactants. A second set consists of 50 000 randomly picked reactions from US patents. The results of the second data set were compared to results obtained using two different atom-to-atom mapping algorithms. For both data sets our method assigns the reaction roles correctly for the vast majority of the reactions, achieving an accuracy of 88% and 97% respectively. The median time needed, about 8 ms, indicates that the algorithm is fast enough to be applied to large collections. The new method is available as part of the RDKit toolkit and the data sets and Jupyter notebooks used for evaluation of the new method are available in the Supporting Information of this publication.",
"year": 2016,
"venue": "Journal of Chemical Information and Modeling",
"authors": [
"Nadine Schneider",
"N. Stiefl",
"G. Landrum"
],
"externalIds": {
"DBLP": "journals/jcisd/SchneiderSL16",
"MAG": "2551217916",
"DOI": "10.1021/acs.jcim.6b00564",
"CorpusId": 206609664,
"PubMed": "28024398"
},
"url": "https://www.semanticscholar.org/paper/eba56a16763a489367dd5ca1ead995d75dbe8b1a",
"referenceCount": 13,
"citationCount": 135,
"influentialCitationCount": 15,
"isOpenAccess": false,
"fieldsOfStudy": [
"Chemistry",
"Medicine",
"Computer Science"
]
},
{
"title": "Suzuki–Miyaura cross-coupling optimization enabled by automated feedback",
"abstract": "An automated, droplet-flow microfluidic system explores and optimizes Pd-catalyzed Suzuki–Miyaura cross-coupling reactions.",
"year": 2016,
"venue": "Reaction Chemistry & Engineering",
"authors": [
"Brandon J. Reizman",
"Yi‐Ming Wang",
"S. Buchwald",
"K. Jensen"
],
"externalIds": {
"MAG": "2537845408",
"PubMedCentral": "5123644",
"DOI": "10.1039/C6RE00153J",
"CorpusId": 7317327,
"PubMed": "27928513"
},
"url": "https://www.semanticscholar.org/paper/294bb45137dd7e60b4c74ef66c7492755468aeb4",
"referenceCount": 82,
"citationCount": 99,
"influentialCitationCount": 1,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Medicine",
"Chemistry"
]
},
{
"title": "Extraction of chemical structures and reactions from the literature",
"abstract": "........................................................................................................................................ II Table of",
"year": 2012,
"venue": "",
"authors": [
"Daniel M. Lowe"
],
"externalIds": {
"MAG": "29374554",
"DBLP": "phd/ethos/Lowe12",
"DOI": "10.17863/CAM.16293",
"CorpusId": 9573205
},
"url": "https://www.semanticscholar.org/paper/316189e419212794e74bf83442f65de96b5320ba",
"referenceCount": 100,
"citationCount": 223,
"influentialCitationCount": 24,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science",
"Engineering"
]
},
{
"title": "The use of the area under the ROC curve in the evaluation of machine learning algorithms",
"abstract": null,
"year": 1997,
"venue": "Pattern Recognition",
"authors": [
"A. Bradley"
],
"externalIds": {
"MAG": "2771514092",
"DBLP": "journals/pr/Bradley97",
"DOI": "10.1016/S0031-3203(96)00142-2",
"CorpusId": 13806304
},
"url": "https://www.semanticscholar.org/paper/48ddd9101a90fe65e3061de69626741b843ff5e4",
"referenceCount": 52,
"citationCount": 6165,
"influentialCitationCount": 661,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "A Complex Catalytic Cycle Leading to a Regioselective Synthesis of o,o′‐Disubstituted Vinylarenes",
"abstract": null,
"year": 1997,
"venue": "",
"authors": [
"M. Catellani",
"Franco Frignani",
"Armando Rangoni"
],
"externalIds": {
"MAG": "2161243905",
"DOI": "10.1002/ANIE.199701191",
"CorpusId": 97787971
},
"url": "https://www.semanticscholar.org/paper/3652e86aa51bc52a7f0a9b45f78801cd1a97c5a3",
"referenceCount": 0,
"citationCount": 433,
"influentialCitationCount": 2,
"isOpenAccess": false,
"fieldsOfStudy": [
"Chemistry"
]
},
{
"title": "The properties of known drugs. 1. Molecular frameworks.",
"abstract": "In order to better understand the common features present in drug molecules, we use shape description methods to analyze a database of commercially available drugs and prepare a list of common drug shapes. A useful way of organizing this structural data is to group the atoms of each drug molecule into ring, linker, framework, and side chain atoms. On the basis of the two-dimensional molecular structures (without regard to atom type, hybridization, and bond order), there are 1179 different frameworks among the 5120 compounds analyzed. However, the shapes of half of the drugs in the database are described by the 32 most frequently occurring frameworks. This suggests that the diversity of shapes in the set of known drugs is extremely low. In our second method of analysis, in which atom type, hybridization, and bond order are considered, more diversity is seen; there are 2506 different frameworks among the 5120 compounds in the database, and the most frequently occurring 42 frameworks account for only one-fourth of the drugs. We discuss the possible interpretations of these findings and the way they may be used to guide future drug discovery research.",
"year": 1996,
"venue": "Journal of Medicinal Chemistry",
"authors": [
"G. Bemis",
"M. Murcko"
],
"externalIds": {
"MAG": "2060531713",
"DOI": "10.1021/JM9602928",
"CorpusId": 19424664,
"PubMed": "8709122"
},
"url": "https://www.semanticscholar.org/paper/c85d9b8b684367b44b924529fb6dd2913350d3c4",
"referenceCount": 0,
"citationCount": 1729,
"influentialCitationCount": 32,
"isOpenAccess": false,
"fieldsOfStudy": [
"Chemistry",
"Medicine"
]
},
{
"title": "SMILES, a chemical language and information system. 1. Introduction to methodology and encoding rules",
"abstract": "18-24.",
"year": 1988,
"venue": "Journal of chemical information and computer sciences",
"authors": [
"D. Weininger"
],
"externalIds": {
"MAG": "1975147762",
"DBLP": "journals/jcisd/Weininger88",
"DOI": "10.1021/ci00057a005",
"CorpusId": 5445756
},
"url": "https://www.semanticscholar.org/paper/3f7983818b76a5f1b5daf9b605877ed401c8e73c",
"referenceCount": 18,
"citationCount": 5268,
"influentialCitationCount": 339,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Readily accessible 12-I-5 oxidant for the conversion of primary and secondary alcohols to aldehydes and ketones",
"abstract": "Conversion de cyclohexanol, octanol, alcool benzylique et des alcools dimethoxy- et trimethoxy benzyliques par oxydation avec le triacetoxy-1,1,1 benzoiodoxole-1,2 one-3",
"year": 1983,
"venue": "",
"authors": [
"D. Dess",
"J. C. Martin"
],
"externalIds": {
"MAG": "3024385986",
"DOI": "10.1021/JO00170A070",
"CorpusId": 97259749
},
"url": "https://www.semanticscholar.org/paper/572c5fe53175005394a1d98761c1eeb6b825ee9b",
"referenceCount": 1,
"citationCount": 2359,
"influentialCitationCount": 11,
"isOpenAccess": false,
"fieldsOfStudy": [
"Chemistry"
]
},
{
"title": "The Generation of a Unique Machine Description for Chemical Structures-A Technique Developed at Chemical Abstracts Service.",
"abstract": null,
"year": 1965,
"venue": "",
"authors": [
"H. L. Morgan"
],
"externalIds": {
"MAG": "2044834685",
"DOI": "10.1021/C160017A018",
"CorpusId": 62164893
},
"url": "https://www.semanticscholar.org/paper/69b316e545a7d7eefa9d9ce510cdd17601daaff0",
"referenceCount": 0,
"citationCount": 1195,
"influentialCitationCount": 35,
"isOpenAccess": false,
"fieldsOfStudy": [
"Chemistry"
]
},
{
"title": "Uni-Mol: A Universal 3D Molecular Representation Learning Framework",
"abstract": "Molecular representation learning (MRL) has gained tremendous attention due to its critical role in learning from limited supervised data for applications like drug design. In most MRL methods, molecules are treated as 1D sequential tokens or 2D topology graphs, limiting their ability to incorporate 3D information for downstream tasks and, in particular, making it almost impossible for 3D geometry prediction or generation. Herein, we propose Uni-Mol, a universal MRL framework that significantly enlarges the representation ability and application scope of MRL schemes. Uni-Mol is composed of two models with the same SE(3)-equivariant transformer architecture: a molecular pretraining model trained by 209M molecular conformations; a pocket pretraining model trained by 3M candidate protein pocket data. The two models are used independently for separate tasks, and are combined when used in protein-ligand binding tasks. By properly incorporating 3D information, Uni-Mol outperforms SOTA in 14/15 molecular property prediction tasks. Moreover, Uni-Mol achieves superior performance in 3D spatial tasks, including protein-ligand binding pose prediction, molecular conformation generation, etc. Finally, we show that Uni-Mol can be successfully applied to the tasks with few-shot data like pocket druggability prediction. The code, model, and data are made publicly available at https://github.com/dptech-corp/Uni-Mol .",
"year": 2023,
"venue": "International Conference on Learning Representations",
"authors": [
"G. Zhou",
"Zhifeng Gao",
"Qiankun Ding",
"Hang Zheng",
"Hongteng Xu",
"Zhewei Wei",
"Linfeng Zhang",
"Guolin Ke"
],
"externalIds": {
"DBLP": "conf/iclr/ZhouGDZXWZK23",
"CorpusId": 259298651
},
"url": "https://www.semanticscholar.org/paper/11f42721f56f36a64638677ccde7784040829656",
"referenceCount": 111,
"citationCount": 193,
"influentialCitationCount": 31,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Text2Mol: Cross-Modal Molecule Retrieval with Natural Language Queries",
"abstract": "We propose a new task, Text2Mol, to retrieve molecules using natural language descriptions as queries. Natural language and molecules encode information in very different ways, which leads to the exciting but challenging problem of integrating these two very different modalities. Although some work has been done on text-based retrieval and structure-based retrieval, this new task requires integrating molecules and natural language more directly. Moreover, this can be viewed as an especially challenging cross-lingual retrieval problem by considering the molecules as a language with a very unique grammar. We construct a paired dataset of molecules and their corresponding text descriptions, which we use to learn an aligned common semantic embedding space for retrieval. We extend this to create a cross-modal attention-based model for explainability and reranking by interpreting the attentions as association rules. We also employ an ensemble approach to integrate our different architectures, which significantly improves results from 0.372 to 0.499 MRR. This new multimodal approach opens a new perspective on solving problems in chemistry literature understanding and molecular machine learning.",
"year": 2021,
"venue": "Conference on Empirical Methods in Natural Language Processing",
"authors": [
"Carl N. Edwards",
"Chengxiang Zhai",
"Heng Ji"
],
"externalIds": {
"DBLP": "conf/emnlp/EdwardsZJ21",
"ACL": "2021.emnlp-main.47",
"DOI": "10.18653/v1/2021.emnlp-main.47",
"CorpusId": 243865204
},
"url": "https://www.semanticscholar.org/paper/57651d65078818821234d13544ac1f29858dcd67",
"referenceCount": 70,
"citationCount": 93,
"influentialCitationCount": 24,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "WuDaoCorpora: A super large-scale Chinese corpora for pre-training language models",
"abstract": null,
"year": 2021,
"venue": "AI Open",
"authors": [
"Sha Yuan",
"Hanyu Zhao",
"Zhengxiao Du",
"Ming Ding",
"Xiao Liu",
"Yukuo Cen",
"Xu Zou",
"Zhilin Yang",
"Jie Tang"
],
"externalIds": {
"DBLP": "journals/aiopen/YuanZDDLCZYT21",
"MAG": "3169113923",
"DOI": "10.1016/J.AIOPEN.2021.06.001",
"CorpusId": 236712622
},
"url": "https://www.semanticscholar.org/paper/b9478e237b58160c65acd2c41894493d27e2c277",
"referenceCount": 24,
"citationCount": 84,
"influentialCitationCount": 9,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding",
"abstract": "We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models (Peters et al., 2018a; Radford et al., 2018), BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5 (7.7 point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).",
"year": 2019,
"venue": "North American Chapter of the Association for Computational Linguistics",
"authors": [
"Jacob Devlin",
"Ming-Wei Chang",
"Kenton Lee",
"Kristina Toutanova"
],
"externalIds": {
"MAG": "2951055169",
"ACL": "N19-1423",
"DBLP": "journals/corr/abs-1810-04805",
"ArXiv": "1810.04805",
"DOI": "10.18653/v1/N19-1423",
"CorpusId": 52967399
},
"url": "https://www.semanticscholar.org/paper/df2b0e26d0599ce3e70df8a9da02e51594e0e992",
"referenceCount": 63,
"citationCount": 81690,
"influentialCitationCount": 19054,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Deep Learning for the Life Sciences",
"abstract": null,
"year": 2019,
"venue": "",
"authors": [],
"externalIds": null,
"url": null,
"referenceCount": null,
"citationCount": null,
"influentialCitationCount": null,
"isOpenAccess": null,
"fieldsOfStudy": null
},
{
"title": "Improving Language Understanding by Generative Pre-Training",
"abstract": "Natural language understanding comprises a wide range of diverse tasks such as textual entailment, question answering, semantic similarity assessment, and document classification. Although large unlabeled text corpora are abundant, labeled data for learning these specific tasks is scarce, making it challenging for discriminatively trained models to perform adequately. We demonstrate that large gains on these tasks can be realized by generative pre-training of a language model on a diverse corpus of unlabeled text, followed by discriminative fine-tuning on each specific task. In contrast to previous approaches, we make use of task-aware input transformations during fine-tuning to achieve effective transfer while requiring minimal changes to the model architecture. We demonstrate the effectiveness of our approach on a wide range of benchmarks for natural language understanding. Our general task-agnostic model outperforms discriminatively trained models that use architectures specifically crafted for each task, significantly improving upon the state of the art in 9 out of the 12 tasks studied. For instance, we achieve absolute improvements of 8.9% on commonsense reasoning (Stories Cloze Test), 5.7% on question answering (RACE), and 1.5% on textual entailment (MultiNLI).",
"year": 2018,
"venue": "",
"authors": [
"Alec Radford",
"Karthik Narasimhan"
],
"externalIds": {
"MAG": "2965425874",
"CorpusId": 49313245
},
"url": "https://www.semanticscholar.org/paper/cd18800a0fe0b668a1cc19f2ec95b5003d0a5035",
"referenceCount": 73,
"citationCount": 9709,
"influentialCitationCount": 1083,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Pattern matching: The gestalt approach",
"abstract": null,
"year": 1988,
"venue": "Dr. Dobb’s Journal",
"authors": [],
"externalIds": null,
"url": null,
"referenceCount": null,
"citationCount": null,
"influentialCitationCount": null,
"isOpenAccess": null,
"fieldsOfStudy": null
},
{
"title": "Elementary mathematical theory of classification and prediction",
"abstract": null,
"year": 1958,
"venue": "Journal of Biomedical Science and Engineering",
"authors": [],
"externalIds": null,
"url": null,
"referenceCount": null,
"citationCount": null,
"influentialCitationCount": null,
"isOpenAccess": null,
"fieldsOfStudy": null
},
{
"title": "Baize: An open-source chat model with parameter-efficient tuning on self-chat data",
"abstract": null,
"year": null,
"venue": "Proceedings of the EMNLP",
"authors": [],
"externalIds": null,
"url": null,
"referenceCount": null,
"citationCount": null,
"influentialCitationCount": null,
"isOpenAccess": null,
"fieldsOfStudy": null
}
]
},
"Scientific Large Language Models: A Survey on Biological & Chemical Domains": {
"paper_title": "Scientific Large Language Models: A Survey on Biological & Chemical Domains",
"arxiv_id": "2401.14656",
"authors": [
"Qiang Zhang",
"Keyan Ding",
"Tianwen Lyv",
"Xinda Wang",
"Qingyu Yin",
"Yiwen Zhang",
"Jing Yu",
"Yuhao Wang",
"Xiaotong Li",
"Zhuoyi Xiang",
"Zhuang Xiang",
"Zeyuan Wang",
"Ming Qin",
"Mengyao Zhang",
"Jinlu Zhang",
"Jiyu Cui",
"Renjun Xu",
"Hongyan Chen",
"Xiaohui Fan",
"Huabin Xing",
"Huajun Chen"
],
"year": 2024,
"venue": "arXiv.org",
"abstract": "Large Language Models (LLMs) have emerged as a transformative power in enhancing natural language comprehension, representing a significant stride toward artificial general intelligence. The application of LLMs extends beyond conventional linguistic boundaries, encompassing specialized linguistic systems developed within various scientific disciplines. This growing interest has led to the advent of scientific LLMs, a novel subclass specifically engineered for facilitating scientific discovery. As a burgeoning area in the community of AI for Science, scientific LLMs warrant comprehensive exploration. However, a systematic and up-to-date survey introducing them is currently lacking. In this paper, we endeavor to methodically delineate the concept of\"scientific language\", whilst providing a thorough review of the latest advancements in scientific LLMs. Given the expansive realm of scientific disciplines, our analysis adopts a focused lens, concentrating on the biological and chemical domains. This includes an in-depth examination of LLMs for textual knowledge, small molecules, macromolecular proteins, genomic sequences, and their combinations, analyzing them in terms of model architectures, capabilities, datasets, and evaluation. Finally, we critically examine the prevailing challenges and point out promising research directions along with the advances of LLMs. By offering a comprehensive overview of technical developments in this field, this survey aspires to be an invaluable resource for researchers navigating the intricate landscape of scientific LLMs.",
"references": []
},
"Mining experimental data from Materials Science literature with Large Language Models": {
"paper_title": "Mining experimental data from Materials Science literature with Large Language Models",
"arxiv_id": "2401.11052",
"authors": [
"Luca Foppiano",
"Guillaume Lambard",
"Toshiyuki Amagasa",
"Masashi Ishii"
],
"year": 2024,
"venue": "Science and Technology of Advanced Materials: Methods",
"abstract": "This study is dedicated to assessing the capabilities of large language models (LLMs) such as GPT-3.5-Turbo, GPT-4, and GPT-4-Turbo in extracting structured information from scientific documents in materials science. To this end, we primarily focus on two critical tasks of information extraction: (i) a named entity recognition (NER) of studied materials and physical properties and (ii) a relation extraction (RE) between these entities. Due to the evident lack of datasets within Materials Informatics (MI), we evaluated using SuperMat, based on superconductor research, and MeasEval, a generic measurement evaluation corpus. The performance of LLMs in executing these tasks is benchmarked against traditional models based on the BERT architecture and rule-based approaches (baseline). We introduce a novel methodology for the comparative analysis of intricate material expressions, emphasising the standardisation of chemical formulas to tackle the complexities inherent in materials science information assessment. For NER, LLMs fail to outperform the baseline with zero-shot prompting and exhibit only limited improvement with few-shot prompting. However, a GPT-3.5-Turbo fine-tuned with the appropriate strategy for RE outperforms all models, including the baseline. Without any fine-tuning, GPT-4 and GPT-4-Turbo display remarkable reasoning and relationship extraction capabilities after being provided with merely a couple of examples, surpassing the baseline. Overall, the results suggest that although LLMs demonstrate relevant reasoning skills in connecting concepts, specialised models are currently a better choice for tasks requiring extracting complex domain-specific entities like materials. These insights provide initial guidance applicable to other materials science sub-domains in future work.",
"references": [
{
"title": "MatSciRE: Leveraging Pointer Networks to Automate Entity and Relation Extraction for Material Science Knowledge-base Construction",
"abstract": null,
"year": 2024,
"venue": "Computational materials science",
"authors": [
"Ankan Mullick",
"Akash Ghosh",
"G. Chaitanya",
"Samir Ghui",
"Tapas Nayak",
"Seung-Cheol Lee",
"S. Bhattacharjee",
"Pawan Goyal"
],
"externalIds": {
"DBLP": "journals/corr/abs-2401-09839",
"ArXiv": "2401.09839",
"DOI": "10.1016/j.commatsci.2023.112659",
"CorpusId": 266736372
},
"url": "https://www.semanticscholar.org/paper/fd822d4c82c2694c04da02d6332b1502e1bfc2af",
"referenceCount": 65,
"citationCount": 5,
"influentialCitationCount": 0,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Prompt engineering of GPT-4 for chemical research: what can/cannot be done?",
"abstract": "ABSTRACT This paper evaluates the capabilities and limitations of the Generative Pre-trained Transformer 4 (GPT-4) in chemical research. Although GPT-4 exhibits remarkable proficiencies, it is evident that the quality of input data significantly affects its performance. We explore GPT-4’s potential in chemical tasks, such as foundational chemistry knowledge, cheminformatics, data analysis, problem prediction, and proposal abilities. While the language model partially outperformed traditional methods, such as black-box optimization, it fell short against specialized algorithms, highlighting the need for their combined use. The paper shares the prompts given to GPT-4 and its responses, providing a resource for prompt engineering within the community, and concludes with a discussion on the future of chemical research using large language models. GRAPHICAL ABSTRACT IMPACT STATEMENT This paper comprehensively reveals the advantages and limitations of GPT-4 in chemical research, such as expert knowledge, data analysis, prediction, suggestion, and autonomous experimentation.",
"year": 2023,
"venue": "Science and Technology of Advanced Materials: Methods",
"authors": [
"Kan Hatakeyama‐Sato",
"Naoki Yamane",
"Yasuhiko Igarashi",
"Y. Nabae",
"T. Hayakawa"
],
"externalIds": {
"DOI": "10.1080/27660400.2023.2260300",
"CorpusId": 262138540
},
"url": "https://www.semanticscholar.org/paper/c17c3c31e104e04a2f33f87d1a38909082df81c9",
"referenceCount": 74,
"citationCount": 20,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": null
},
{
"title": "On the Planning Abilities of Large Language Models - A Critical Investigation",
"abstract": "Intrigued by the claims of emergent reasoning capabilities in LLMs trained on general web corpora, in this paper, we set out to investigate their planning capabilities. We aim to evaluate (1) the effectiveness of LLMs in generating plans autonomously in commonsense planning tasks and (2) the potential of LLMs as a source of heuristic guidance for other agents (AI planners) in their planning tasks. We conduct a systematic study by generating a suite of instances on domains similar to the ones employed in the International Planning Competition and evaluate LLMs in two distinct modes: autonomous and heuristic. Our findings reveal that LLMs' ability to generate executable plans autonomously is rather limited, with the best model (GPT-4) having an average success rate of ~12% across the domains. However, the results in the heuristic mode show more promise. In the heuristic mode, we demonstrate that LLM-generated plans can improve the search process for underlying sound planners and additionally show that external verifiers can help provide feedback on the generated plans and back-prompt the LLM for better plan generation.",
"year": 2023,
"venue": "Neural Information Processing Systems",
"authors": [
"Karthik Valmeekam",
"Matthew Marquez",
"S. Sreedharan",
"Subbarao Kambhampati"
],
"externalIds": {
"DBLP": "conf/nips/ValmeekamMSK23",
"ArXiv": "2305.15771",
"DOI": "10.48550/arXiv.2305.15771",
"CorpusId": 260440590
},
"url": "https://www.semanticscholar.org/paper/dedfe929d182cc3537a9ed765d589b4735ce062a",
"referenceCount": 39,
"citationCount": 131,
"influentialCitationCount": 7,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "PEARL: Prompting Large Language Models to Plan and Execute Actions Over Long Documents",
"abstract": "Strategies such as chain-of-thought prompting improve the performance of large language models (LLMs) on complex reasoning tasks by decomposing input examples into intermediate steps. However, it remains unclear how to apply such methods to reason over long input documents, in which both the decomposition and the output of each intermediate step are non-trivial to obtain. In this work, we propose PEARL, a prompting framework to improve reasoning over long documents, which consists of three stages: action mining, plan formulation, and plan execution. More specifically, given a question about a long document, PEARL decomposes the question into a sequence of actions (e.g., SUMMARIZE, FIND_EVENT, FIND_RELATION) and then executes them over the document to obtain the answer. Each stage of PEARL is implemented via zero-shot or few-shot prompting of LLMs (in our work, GPT-4) with minimal human input. We evaluate PEARL on a challenging subset of the QuALITY dataset, which contains questions that require complex reasoning over long narrative texts. PEARL outperforms zero-shot and chain-of-thought prompting on this dataset, and ablation experiments show that each stage of PEARL is critical to its performance. Overall, PEARL is a first step towards leveraging LLMs to reason over long documents.",
"year": 2023,
"venue": "Conference of the European Chapter of the Association for Computational Linguistics",
"authors": [
"Simeng Sun",
"Y. Liu",
"Shuo Wang",
"Chenguang Zhu",
"Mohit Iyyer"
],
"externalIds": {
"ACL": "2024.eacl-long.29",
"DBLP": "conf/eacl/SunLWIZI24",
"ArXiv": "2305.14564",
"DOI": "10.48550/arXiv.2305.14564",
"CorpusId": 258866190
},
"url": "https://www.semanticscholar.org/paper/4ee96f0757e517928590a2300af5d40ba768a5a7",
"referenceCount": 70,
"citationCount": 36,
"influentialCitationCount": 2,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Tree of Thoughts: Deliberate Problem Solving with Large Language Models",
"abstract": "Language models are increasingly being deployed for general problem solving across a wide range of tasks, but are still confined to token-level, left-to-right decision-making processes during inference. This means they can fall short in tasks that require exploration, strategic lookahead, or where initial decisions play a pivotal role. To surmount these challenges, we introduce a new framework for language model inference, Tree of Thoughts (ToT), which generalizes over the popular Chain of Thought approach to prompting language models, and enables exploration over coherent units of text (thoughts) that serve as intermediate steps toward problem solving. ToT allows LMs to perform deliberate decision making by considering multiple different reasoning paths and self-evaluating choices to decide the next course of action, as well as looking ahead or backtracking when necessary to make global choices. Our experiments show that ToT significantly enhances language models' problem-solving abilities on three novel tasks requiring non-trivial planning or search: Game of 24, Creative Writing, and Mini Crosswords. For instance, in Game of 24, while GPT-4 with chain-of-thought prompting only solved 4% of tasks, our method achieved a success rate of 74%. Code repo with all prompts: https://github.com/princeton-nlp/tree-of-thought-llm.",
"year": 2023,
"venue": "Neural Information Processing Systems",
"authors": [
"Shunyu Yao",
"Dian Yu",
"Jeffrey Zhao",
"Izhak Shafran",
"T. Griffiths",
"Yuan Cao",
"Karthik Narasimhan"
],
"externalIds": {
"ArXiv": "2305.10601",
"DBLP": "conf/nips/YaoYZS00N23",
"DOI": "10.48550/arXiv.2305.10601",
"CorpusId": 258762525
},
"url": "https://www.semanticscholar.org/paper/2f3822eb380b5e753a6d579f31dfc3ec4c4a0820",
"referenceCount": 52,
"citationCount": 946,
"influentialCitationCount": 90,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "One Small Step for Generative AI, One Giant Leap for AGI: A Complete Survey on ChatGPT in AIGC Era",
"abstract": "OpenAI has recently released GPT-4 (a.k.a. ChatGPT plus), which is demonstrated to be one small step for generative AI (GAI), but one giant leap for artificial general intelligence (AGI). Since its official release in November 2022, ChatGPT has quickly attracted numerous users with extensive media coverage. Such unprecedented attention has also motivated numerous researchers to investigate ChatGPT from various aspects. According to Google scholar, there are more than 500 articles with ChatGPT in their titles or mentioning it in their abstracts. Considering this, a review is urgently needed, and our work fills this gap. Overall, this work is the first to survey ChatGPT with a comprehensive review of its underlying technology, applications, and challenges. Moreover, we present an outlook on how ChatGPT might evolve to realize general-purpose AIGC (a.k.a. AI-generated content), which will be a significant milestone for the development of AGI.",
"year": 2023,
"venue": "arXiv.org",
"authors": [
"Chaoning Zhang",
"Chenshuang Zhang",
"Chenghao Li",
"Yu Qiao",
"Sheng Zheng",
"Sumit Kumar Dam",
"Mengchun Zhang",
"Jung Uk Kim",
"Seonghyeon Kim",
"J. Choi",
"Gyeong-Moon Park",
"S. Bae",
"Lik-Hang Lee",
"Pan Hui",
"In-So Kweon",
"Choong-Seon Hong"
],
"externalIds": {
"ArXiv": "2304.06488",
"DBLP": "journals/corr/abs-2304-06488",
"CorpusId": 258108139
},
"url": "https://www.semanticscholar.org/paper/4de290467d903b9977e31b3d4084006789bd6ebd",
"referenceCount": 231,
"citationCount": 104,
"influentialCitationCount": 5,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Small data machine learning in materials science",
"abstract": null,
"year": 2023,
"venue": "npj Computational Materials",
"authors": [
"Pengcheng Xu",
"Xiaobo Ji",
"Minjie Li",
"Wencong Lu"
],
"externalIds": {
"DOI": "10.1038/s41524-023-01000-z",
"CorpusId": 257719239
},
"url": "https://www.semanticscholar.org/paper/35b1d79993f0e4fbfcb3b86c5013c5e2a7e3117c",
"referenceCount": 119,
"citationCount": 133,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": null
},
{
"title": "Large Language Model Is Not a Good Few-shot Information Extractor, but a Good Reranker for Hard Samples!",
"abstract": "Large Language Models (LLMs) have made remarkable strides in various tasks. Whether LLMs are competitive few-shot solvers for information extraction (IE) tasks, however, remains an open problem. In this work, we aim to provide a thorough answer to this question. Through extensive experiments on nine datasets across four IE tasks, we demonstrate that current advanced LLMs consistently exhibit inferior performance, higher latency, and increased budget requirements compared to fine-tuned SLMs under most settings. Therefore, we conclude that LLMs are not effective few-shot information extractors in general. Nonetheless, we illustrate that with appropriate prompting strategies, LLMs can effectively complement SLMs and tackle challenging samples that SLMs struggle with. And moreover, we propose an adaptive filter-then-rerank paradigm to combine the strengths of LLMs and SLMs. In this paradigm, SLMs serve as filters and LLMs serve as rerankers. By prompting LLMs to rerank a small portion of difficult samples identified by SLMs, our preliminary system consistently achieves promising improvements (2.4% F1-gain on average) on various IE tasks, with an acceptable time and cost investment.",
"year": 2023,
"venue": "Conference on Empirical Methods in Natural Language Processing",
"authors": [
"Yubo Ma",
"Yixin Cao",
"YongChing Hong",
"Aixin Sun"
],
"externalIds": {
"DBLP": "journals/corr/abs-2303-08559",
"ArXiv": "2303.08559",
"DOI": "10.18653/v1/2023.findings-emnlp.710",
"CorpusId": 257532405
},
"url": "https://www.semanticscholar.org/paper/0100785773b8217c44606ab260e3212f93b0a4fd",
"referenceCount": 75,
"citationCount": 85,
"influentialCitationCount": 5,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "ChatGPT: Jack of all trades, master of none",
"abstract": null,
"year": 2023,
"venue": "Information Fusion",
"authors": [
"Jan Koco'n",
"Igor Cichecki",
"Oliwier Kaszyca",
"Mateusz Kochanek",
"Dominika Szydło",
"Joanna Baran",
"Julita Bielaniewicz",
"Marcin Gruza",
"Arkadiusz Janz",
"Kamil Kanclerz",
"Anna Koco'n",
"Bartlomiej Koptyra",
"W. Mieleszczenko-Kowszewicz",
"P. Milkowski",
"Marcin Oleksy",
"Maciej Piasecki",
"Lukasz Radli'nski",
"Konrad Wojtasik",
"Stanislaw Wo'zniak",
"Przemyslaw Kazienko"
],
"externalIds": {
"ArXiv": "2302.10724",
"DBLP": "journals/inffus/KoconCKKSBBGJKKKMMOPRWWK23",
"DOI": "10.1016/j.inffus.2023.101861",
"CorpusId": 257050407
},
"url": "https://www.semanticscholar.org/paper/5848737f78397f72ceae2ba6f3419a6a8502b8ba",
"referenceCount": 124,
"citationCount": 352,
"influentialCitationCount": 15,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Automatic extraction of materials and properties from superconductors scientific literature",
"abstract": "ABSTRACT The automatic extraction of materials and related properties from the scientific literature is gaining attention in data-driven materials science (Materials Informatics). In this paper, we discuss Grobid-superconductors, our solution for automatically extracting superconductor material names and respective properties from text. Built as a Grobid module, it combines machine learning and heuristic approaches in a multi-step architecture that supports input data as raw text or PDF documents. Using Grobid-superconductors, we built SuperCon2, a database of 40,324 materials and properties records from 37,700 papers. The material (or sample) information is represented by name, chemical formula, and material class, and is characterized by shape, doping, substitution variables for components, and substrate as adjoined information. The properties include the Tc superconducting critical temperature and, when available, applied pressure with the Tc measurement method. Graphical Abstract",
"year": 2022,
"venue": "Science and Technology of Advanced Materials: Methods",
"authors": [
"Luca Foppiano",
"P. B. Castro",
"Pedro Ortiz Suarez",
"K. Terashima",
"Y. Takano",
"M. Ishii"
],
"externalIds": {
"DBLP": "journals/corr/abs-2210-15600",
"ArXiv": "2210.15600",
"DOI": "10.1080/27660400.2022.2153633",
"CorpusId": 253157445
},
"url": "https://www.semanticscholar.org/paper/1a188751d2dcb8e5710add51751bf252e0b653b7",
"referenceCount": 43,
"citationCount": 6,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Physics"
]
},
{
"title": "Accelerating materials discovery using artificial intelligence, high performance computing and robotics",
"abstract": null,
"year": 2022,
"venue": "npj Computational Materials",
"authors": [
"Edward O. Pyzer-Knapp",
"J. Pitera",
"P. Staar",
"Seiji Takeda",
"T. Laino",
"Daniel P. Sanders",
"J. Sexton",
"John Smith",
"A. Curioni"
],
"externalIds": {
"DOI": "10.1038/s41524-022-00765-z",
"CorpusId": 248386306
},
"url": "https://www.semanticscholar.org/paper/51bb51b06f57fada5d8f338aa484a87f93226468",
"referenceCount": 91,
"citationCount": 134,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": null
},
{
"title": "Machine learning–enabled high-entropy alloy discovery",
"abstract": "High-entropy alloys are solid solutions of multiple principal elements that are capable of reaching composition and property regimes inaccessible for dilute materials. Discovering those with valuable properties, however, too often relies on serendipity, because thermodynamic alloy design rules alone often fail in high-dimensional composition spaces. We propose an active learning strategy to accelerate the design of high-entropy Invar alloys in a practically infinite compositional space based on very sparse data. Our approach works as a closed-loop, integrating machine learning with density-functional theory, thermodynamic calculations, and experiments. After processing and characterizing 17 new alloys out of millions of possible compositions, we identified two high-entropy Invar alloys with extremely low thermal expansion coefficients around 2 × 10−6 per degree kelvin at 300 kelvin. We believe this to be a suitable pathway for the fast and automated discovery of high-entropy alloys with optimal thermal, magnetic, and electrical properties. Description A little expansive Invar alloys have extremely low thermal expansion, making them attractive for several types of applications. Finding these types of alloys in a complex compositional space, however, is challenging. Rao et al. used an iterative scheme that combines machine learning, density functional theory, experiments, and thermodynamic calculation to find two new invar alloys out of millions of candidates (see the Perspective by Hu and Yang). The alloys are both compositionally complex, high entropy materials, thus demonstrating the power of this approach for materials discovery. —BG Two high-entropy alloys with extremely low thermal expansion were found with the help of machine learning.",
"year": 2022,
"venue": "Science",
"authors": [
"Z. Rao",
"Po-Yen Tung",
"Ruiwen Xie",
"Ye Wei",
"Hongbin Zhang",
"Alberto Ferrari",
"T. Klaver",
"F. Körmann",
"P. T. Sukumar",
"A. Kwiatkowski da Silva",
"Yao Chen",
"Zhiming Li",
"D. Ponge",
"J. Neugebauer",
"O. Gutfleisch",
"Stefan Bauer",
"D. Raabe"
],
"externalIds": {
"ArXiv": "2202.13753",
"DOI": "10.1126/science.abo4940",
"CorpusId": 247159033,
"PubMed": "36201584"
},
"url": "https://www.semanticscholar.org/paper/e4b52d7eec6b0fa7ca4e3c51de398f2a40b105dd",
"referenceCount": 64,
"citationCount": 217,
"influentialCitationCount": 3,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine",
"Physics"
]
},
{
"title": "Big Data Mining and Classification of Intelligent Material Science Data Using Machine Learning",
"abstract": "There is a high need for a big data repository for material compositions and their derived analytics of metal strength, in the material science community. Currently, many researchers maintain their own excel sheets, prepared manually by their team by tabulating the experimental data collected from scientific journals, and analyzing the data by performing manual calculations using formulas to determine the strength of the material. In this study, we propose a big data storage for material science data and its processing parameters information to address the laborious process of data tabulation from scientific articles, data mining techniques to retrieve the information from databases to perform big data analytics, and a machine learning prediction model to determine material strength insights. Three models are proposed based on Logistic regression, Support vector Machine SVM and Random Forest Algorithms. These models are trained and tested using a 10-fold cross validation approach. The Random Forest classification model performed better on the independent dataset, with 87% accuracy in comparison to Logistic regression and SVM with 72% and 78%, respectively.",
"year": 2021,
"venue": "Applied Sciences",
"authors": [
"Swetha Chittam",
"B. Gokaraju",
"Zhigang Xu",
"J. Sankar",
"K. Roy"
],
"externalIds": {
"MAG": "3200751658",
"DOI": "10.3390/app11188596",
"CorpusId": 240535760
},
"url": "https://www.semanticscholar.org/paper/6d9888460251e59fed2a3d826bc9785d437716b7",
"referenceCount": 51,
"citationCount": 7,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "GPT-3 Models are Poor Few-Shot Learners in the Biomedical Domain",
"abstract": "Deep neural language models have set new breakthroughs in many tasks of Natural Language Processing (NLP). Recent work has shown that deep transformer language models (pretrained on large amounts of texts) can achieve high levels of task-specific few-shot performance comparable to state-of-the-art models. However, the ability of these large language models in few-shot transfer learning has not yet been explored in the biomedical domain. We investigated the performance of two powerful transformer language models, i.e. GPT-3 and BioBERT, in few-shot settings on various biomedical NLP tasks. The experimental results showed that, to a great extent, both the models underperform a language model fine-tuned on the full training data. Although GPT-3 had already achieved near state-of-the-art results in few-shot knowledge transfer on open-domain NLP tasks, it could not perform as effectively as BioBERT, which is orders of magnitude smaller than GPT-3. Regarding that BioBERT was already pretrained on large biomedical text corpora, our study suggests that language models may largely benefit from in-domain pretraining in task-specific few-shot learning. However, in-domain pretraining seems not to be sufficient; novel pretraining and few-shot learning strategies are required in the biomedical NLP domain.",
"year": 2021,
"venue": "arXiv.org",
"authors": [
"M. Moradi",
"Kathrin Blagec",
"F. Haberl",
"M. Samwald"
],
"externalIds": {
"ArXiv": "2109.02555",
"DBLP": "journals/corr/abs-2109-02555",
"CorpusId": 237420775
},
"url": "https://www.semanticscholar.org/paper/2122ed4a82bc7d8affc5f7ae5026d174ea34ea52",
"referenceCount": 16,
"citationCount": 52,
"influentialCitationCount": 3,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Advances in scientific literature mining for interpreting materials characterization",
"abstract": null,
"year": 2021,
"venue": "Machine Learning: Science and Technology",
"authors": [
"Gilchan Park",
"Line C. Pouchard"
],
"externalIds": {
"DBLP": "journals/mlst/ParkP21",
"MAG": "3152735608",
"DOI": "10.1088/2632-2153/ABF751",
"CorpusId": 234817344
},
"url": "https://www.semanticscholar.org/paper/e2b0cea241cf2f4405d8b78626209f1eaf91a2c9",
"referenceCount": 44,
"citationCount": 4,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Engineering"
]
},
{
"title": "SuperMat: construction of a linked annotated dataset from superconductors-related publications",
"abstract": "ABSTRACT A growing number of papers are published in the area of superconducting materials science. However, novel text and data mining (TDM) processes are still needed to efficiently access and exploit this accumulated knowledge, paving the way towards data-driven materials design. Herein, we present SuperMat (Superconductor Materials), an annotated corpus of linked data derived from scientific publications on superconductors, which comprises 142 articles, 16052 entities, and 1398 links that are characterised into six categories: the names, classes, and properties of materials; links to their respective superconducting critical temperature (Tc); and parametric conditions such as applied pressure or measurement methods. The construction of SuperMat resulted from a fruitful collaboration between computer scientists and material scientists, and its high quality is ensured through validation by domain experts. The quality of the annotation guidelines was ensured by satisfactory Inter Annotator Agreement (IAA) between the annotators and the domain experts. SuperMat includes the dataset, annotation guidelines, and annotation support tools that use automatic suggestions to help minimise human errors.",
"year": 2021,
"venue": "Science and Technology of Advanced Materials: Methods",
"authors": [
"Luca Foppiano",
"Sae Dieb",
"Akira Suzuki",
"P. B. Castro",
"Suguru Iwasaki",
"A. Uzuki",
"Miren Garbine Esparza Echevarria",
"Y. Meng",
"K. Terashima",
"Laurent Romary",
"Y. Takano",
"M. Ishii"
],
"externalIds": {
"ArXiv": "2101.02455",
"MAG": "3119242074",
"DOI": "10.1080/27660400.2021.1918396",
"CorpusId": 230799408
},
"url": "https://www.semanticscholar.org/paper/74b72d715351ad45ef1a20e76033e0740e123850",
"referenceCount": 47,
"citationCount": 11,
"influentialCitationCount": 1,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Physics"
]
},
{
"title": "Electron Doping of the Iron-Arsenide Superconductor CeFeAsO Controlled by Hydrostatic Pressure.",
"abstract": "In the iron-pnictide material CeFeAsO not only the Fe moments, but also the local 4f moments of the Ce order antiferromagnetically at low temperatures. We elucidate on the peculiar role of the Ce on the emergence of superconductivity. While application of pressure suppresses the iron SDW ordering temperature monotonously up to 4 GPa, the Ce-4f magnetism is stabilized until both types of magnetic orders disappear abruptly and a narrow SC dome develops. With further increasing pressure characteristics of a Kondo-lattice system become more and more apparent in the electrical resistivity. This suggests a connection of the emergence of superconductivity with the extinction of the magnetic order and the onset of Kondo screening of the Ce-4f moments.",
"year": 2020,
"venue": "Physical Review Letters",
"authors": [
"K. Mydeen",
"A. Jesche",
"A. Jesche",
"K. Meier-Kirchner",
"Ulrich S. Schwarz",
"C. Geibel",
"H. Rosner",
"M. Nicklas"
],
"externalIds": {
"ArXiv": "2011.05665",
"MAG": "3099133315",
"DOI": "10.1103/PhysRevLett.125.207001",
"CorpusId": 226299729,
"PubMed": "33258641"
},
"url": "https://www.semanticscholar.org/paper/ee6daf315e5d34c628ab46a52e20d8abf497aed4",
"referenceCount": 43,
"citationCount": 3,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Materials Science",
"Physics",
"Medicine"
]
},
{
"title": "Data augmentation in microscopic images for material data mining",
"abstract": null,
"year": 2020,
"venue": "npj Computational Materials",
"authors": [
"Boyuan Ma",
"Xiaoyan Wei",
"Chuni Liu",
"X. Ban",
"Haiyou Huang",
"Hao Wang",
"Weihua Xue",
"Stephen Wu",
"M. Gao",
"Qing Shen",
"Adnan O. M. Abuassba",
"Haokai Shen",
"Yanjing Su"
],
"externalIds": {
"MAG": "2981914352",
"DOI": "10.1038/s41524-020-00392-6",
"CorpusId": 204925402
},
"url": "https://www.semanticscholar.org/paper/4a04c70a4c29c96c4ce80c556eb8cbf880e23dca",
"referenceCount": 55,
"citationCount": 67,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Materials Science",
"Computer Science"
]
},
{
"title": "Data-driven design of metal–organic frameworks for wet flue gas CO2 capture",
"abstract": null,
"year": 2019,
"venue": "Nature",
"authors": [
"Peter G. Boyd",
"Arunraj Chidambaram",
"E. García-Díez",
"Christopher P Ireland",
"T. Daff",
"T. Daff",
"R. Bounds",
"Andrzej Gładysiak",
"P. Schouwink",
"S. M. Moosavi",
"M. Maroto-Valer",
"Jeffrey A. Reimer",
"Jeffrey A. Reimer",
"J. Navarro",
"T. Woo",
"S. Garcia",
"Kyriakos C. Stylianou",
"Kyriakos C. Stylianou",
"B. Smit"
],
"externalIds": {
"MAG": "2992843173",
"DOI": "10.1038/s41586-019-1798-7",
"CorpusId": 209312604,
"PubMed": "31827290"
},
"url": "https://www.semanticscholar.org/paper/989fc075fb45ffc92aed7a6b82a494e7fae382e3",
"referenceCount": 29,
"citationCount": 423,
"influentialCitationCount": 2,
"isOpenAccess": false,
"fieldsOfStudy": [
"Medicine",
"Environmental Science"
]
},
{
"title": "Automatic Identification and Normalisation of Physical Measurements in Scientific Literature",
"abstract": "We present Grobid-quantities, an open-source application for extracting and normalising measurements from scientific and patent literature. Tools of this kind, aiming to understand and make unstructured information accessible, represent the building blocks for large-scale Text and Data Mining (TDM) systems. Grobid-quantities is a module built on top of Grobid [6] [13], a machine learning framework for parsing and structuring PDF documents. Designed to process large quantities of data, it provides a robust implementation accessible in batch mode or via a REST API. The machine learning engine architecture follows the cascade approach, where each model is specialised in the resolution of a specific task. The models are trained using CRF (Conditional Random Field) algorithm [12] for extracting quantities (atomic values, intervals and lists), units (such as length, weight) and different value representations (numeric, alphabetic or scientific notation). Identified measurements are normalised according to the International System of Units (SI). Thanks to its stable recall and reliable precision, Grobid-quantities has been integrated as the measurement-extraction engine in various TDM projects, such as Marve (Measurement Context Extraction from Text), for extracting semantic measurements and meaning in Earth Science [10]. At the National Institute for Materials Science in Japan (NIMS), it is used in an ongoing project to discover new superconducting materials. Normalised materials characteristics (such as critical temperature, pressure) extracted from scientific literature are a key resource for materials informatics (MI) [9].",
"year": 2019,
"venue": "ACM Symposium on Document Engineering",
"authors": [
"Luca Foppiano",
"L. Romary",
"M. Ishii",
"M. Tanifuji"
],
"externalIds": {
"DBLP": "conf/doceng/FoppianoRIT19",
"MAG": "2974767492",
"DOI": "10.1145/3342558.3345411",
"CorpusId": 202728856
},
"url": "https://www.semanticscholar.org/paper/e34ac69ea76bf45d8c1da6f639dced7c0d965222",
"referenceCount": 14,
"citationCount": 19,
"influentialCitationCount": 5,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
"abstract": "BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019) has set a new state-of-the-art performance on sentence-pair regression tasks like semantic textual similarity (STS). However, it requires that both sentences are fed into the network, which causes a massive computational overhead: Finding the most similar pair in a collection of 10,000 sentences requires about 50 million inference computations (~65 hours) with BERT. The construction of BERT makes it unsuitable for semantic similarity search as well as for unsupervised tasks like clustering. In this publication, we present Sentence-BERT (SBERT), a modification of the pretrained BERT network that use siamese and triplet network structures to derive semantically meaningful sentence embeddings that can be compared using cosine-similarity. This reduces the effort for finding the most similar pair from 65 hours with BERT / RoBERTa to about 5 seconds with SBERT, while maintaining the accuracy from BERT. We evaluate SBERT and SRoBERTa on common STS tasks and transfer learning tasks, where it outperforms other state-of-the-art sentence embeddings methods.",
"year": 2019,
"venue": "Conference on Empirical Methods in Natural Language Processing",
"authors": [
"Nils Reimers",
"Iryna Gurevych"
],
"externalIds": {
"DBLP": "journals/corr/abs-1908-10084",
"ACL": "D19-1410",
"ArXiv": "1908.10084",
"MAG": "2971193649",
"DOI": "10.18653/v1/D19-1410",
"CorpusId": 201646309
},
"url": "https://www.semanticscholar.org/paper/93d63ec754f29fa22572615320afe0521f7ec66d",
"referenceCount": 38,
"citationCount": 9376,
"influentialCitationCount": 1441,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "SciBERT: A Pretrained Language Model for Scientific Text",
"abstract": "Obtaining large-scale annotated data for NLP tasks in the scientific domain is challenging and expensive. We release SciBERT, a pretrained language model based on BERT (Devlin et. al., 2018) to address the lack of high-quality, large-scale labeled scientific data. SciBERT leverages unsupervised pretraining on a large multi-domain corpus of scientific publications to improve performance on downstream scientific NLP tasks. We evaluate on a suite of tasks including sequence tagging, sentence classification and dependency parsing, with datasets from a variety of scientific domains. We demonstrate statistically significant improvements over BERT and achieve new state-of-the-art results on several of these tasks. The code and pretrained models are available at https://github.com/allenai/scibert/.",
"year": 2019,
"venue": "Conference on Empirical Methods in Natural Language Processing",
"authors": [
"Iz Beltagy",
"Kyle Lo",
"Arman Cohan"
],
"externalIds": {
"ACL": "D19-1371",
"DBLP": "conf/emnlp/BeltagyLC19",
"MAG": "2973154071",
"ArXiv": "1903.10676",
"DOI": "10.18653/v1/D19-1371",
"CorpusId": 202558505
},
"url": "https://www.semanticscholar.org/paper/156d217b0a911af97fa1b5a71dc909ccef7a8028",
"referenceCount": 32,
"citationCount": 2542,
"influentialCitationCount": 462,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "An open experimental database for exploring inorganic materials",
"abstract": null,
"year": 2018,
"venue": "Scientific Data",
"authors": [
"A. Zakutayev",
"Nick Wunder",
"Marcus Schwarting",
"J. Perkins",
"Robert R. White",
"Kristin Munch",
"W. Tumas",
"Caleb Phillips"
],
"externalIds": {
"PubMedCentral": "5881410",
"MAG": "2796365074",
"DOI": "10.1038/sdata.2018.53",
"CorpusId": 4837259,
"PubMed": "29611842"
},
"url": "https://www.semanticscholar.org/paper/643d38e247d6280c9a0f5a0d70dae98dc6e3af4d",
"referenceCount": 44,
"citationCount": 138,
"influentialCitationCount": 1,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "A polymer dataset for accelerated property prediction and design",
"abstract": null,
"year": 2016,
"venue": "Scientific Data",
"authors": [
"T. D. Huan",
"A. Mannodi-Kanakkithodi",
"Chiho Kim",
"Vinit Sharma",
"G. Pilania",
"R. Ramprasad"
],
"externalIds": {
"PubMedCentral": "4772654",
"MAG": "2291908932",
"DOI": "10.1038/sdata.2016.12",
"CorpusId": 1654029,
"PubMed": "26927478"
},
"url": "https://www.semanticscholar.org/paper/23e4ecc6b0ed232bae790d71398b1fc0cf667943",
"referenceCount": 94,
"citationCount": 150,
"influentialCitationCount": 3,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine",
"Computer Science"
]
},
{
"title": "Exploration of new superconductors and functional materials, and fabrication of superconducting tapes and wires of iron pnictides",
"abstract": "Abstract This review shows the highlights of a 4-year-long research project supported by the Japanese Government to explore new superconducting materials and relevant functional materials. The project found several tens of new superconductors by examining ∼1000 materials, each of which was chosen by Japanese experts with a background in solid state chemistry. This review summarizes the major achievements of the project in newly found superconducting materials, and the fabrication wires and tapes of iron-based superconductors; it incorporates a list of ∼700 unsuccessful materials examined for superconductivity in the project. In addition, described are new functional materials and functionalities discovered during the project.",
"year": 2015,
"venue": "Science and Technology of Advanced Materials",
"authors": [
"H. Hosono",
"K. Tanabe",
"E. Takayama-Muromachi",
"H. Kageyama",
"S. Yamanaka",
"H. Kumakura",
"M. Nohara",
"H. Hiramatsu",
"S. Fujitsu"
],
"externalIds": {
"MAG": "2151807416",
"ArXiv": "1505.02240",
"PubMedCentral": "5099821",
"DOI": "10.1088/1468-6996/16/3/033503",
"CorpusId": 162589,
"PubMed": "27877784"
},
"url": "https://www.semanticscholar.org/paper/3efc263042a831bd76d78dfa28e1d712746d353b",
"referenceCount": 641,
"citationCount": 246,
"influentialCitationCount": 5,
"isOpenAccess": true,
"fieldsOfStudy": [
"Chemistry",
"Materials Science",
"Physics",
"Medicine"
]
},
{
"title": "Microstructure and Properties of High-Temperature Superconductors",
"abstract": null,
"year": 2007,
"venue": "",
"authors": [
"I. Parinov"
],
"externalIds": {
"MAG": "1506877895",
"DOI": "10.1007/978-3-642-34441-1",
"CorpusId": 136595704
},
"url": "https://www.semanticscholar.org/paper/4d730e0e9a3270510a5b590c7b903bfe7f6295ee",
"referenceCount": 0,
"citationCount": 45,
"influentialCitationCount": 2,
"isOpenAccess": true,
"fieldsOfStudy": [
"Materials Science"
]
},
{
"title": "Theory of Superconductivity",
"abstract": null,
"year": 1948,
"venue": "Nature",
"authors": [
"K. Cheng"
],
"externalIds": {
"MAG": "1751887391",
"DOI": "10.1038/163247a0",
"CorpusId": 4122775
},
"url": "https://www.semanticscholar.org/paper/9f1e7910a3fc0dbc7536a61b38d089ae34022fbd",
"referenceCount": 0,
"citationCount": 4152,
"influentialCitationCount": 337,
"isOpenAccess": true,
"fieldsOfStudy": [
"Physics"
]
},
{
"title": "Using GPT-4 in Parameter Selection of Polymer Informatics: Improving Predictive Accuracy Amidst Data Scarcity and 'Ugly Duckling' Dilemma",
"abstract": "Materials informatics and cheminformatics struggle with data scarcity, hindering the extraction of significant relationships between structures and properties. The \"Ugly Duckling\" theorem, suggesting the difficulty of data processing without assumptions...",
"year": 2023,
"venue": "Digital Discovery",
"authors": [
"Kan Hatakeyama‐Sato",
"Seigo Watanabe",
"Naoki Yamane",
"Yasuhiko Igarashi",
"Kenichi Oyaizu"
],
"externalIds": {
"DOI": "10.1039/d3dd00138e",
"CorpusId": 261796711
},
"url": "https://www.semanticscholar.org/paper/7cf03c72c2b0356a24ce4ba44822a15abc59d909",
"referenceCount": 0,
"citationCount": 1,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": null
},
{
"title": "SemEval-2021 Task 8: MeasEval – Extracting Counts and Measurements and their Related Contexts",
"abstract": "We describe MeasEval, a SemEval task of extracting counts, measurements, and related context from scientific documents, which is of significant importance to the creation of Knowledge Graphs that distill information from the scientific literature. This is a new task in 2021, for which over 75 submissions from 25 participants were received. We expect the data developed for this task and the findings reported to be valuable to the scientific knowledge extraction, metrology, and automated knowledge base construction communities.",
"year": 2021,
"venue": "International Workshop on Semantic Evaluation",
"authors": [
"Corey A. Harper",
"Jessica Cox",
"C. Kohler",
"A. Scerri",
"Ron Daniel",
"Paul Groth"
],
"externalIds": {
"DBLP": "conf/semeval/HarperCKSDG21",
"ACL": "2021.semeval-1.38",
"DOI": "10.18653/v1/2021.semeval-1.38",
"CorpusId": 236459836
},
"url": "https://www.semanticscholar.org/paper/51995d854ef966de332b93a8a765ce8c74f06255",
"referenceCount": 41,
"citationCount": 27,
"influentialCitationCount": 4,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Machine Learning and Data Mining in Materials Science",
"abstract": null,
"year": 2020,
"venue": "Frontiers Research Topics",
"authors": [],
"externalIds": {
"DOI": "10.3389/978-2-88963-651-8",
"CorpusId": 241534961
},
"url": "https://www.semanticscholar.org/paper/effc72d6fc5db0bccc23f2543723d2382d867b99",
"referenceCount": 0,
"citationCount": 4,
"influentialCitationCount": 0,
"isOpenAccess": false,
"fieldsOfStudy": null
},
{
"title": "A survey of named entity recognition and classification",
"abstract": "This survey covers fifteen years of research in the Named Entity Recognition and Classification (NERC) field, from 1991 to 2006. We report observations about languages, named entity types, domains and textual genres studied in the literature. From the start, NERC systems have been developed using hand-made rules, but now machine learning techniques are widely used. These techniques are surveyed along with other critical aspects of NERC such as features and evaluation methods. Features are word-level, dictionary-level and corpus-level representations of words in a document. Evaluation techniques, ranging from intuitive exact match to very complex matching techniques with adjustable cost of errors, are an indisputable key to progress.",
"year": 2007,
"venue": "",
"authors": [
"David Nadeau",
"S. Sekine"
],
"externalIds": {
"MAG": "2020278455",
"DOI": "10.1075/LI.30.1.03NAD",
"CorpusId": 8310135
},
"url": "https://www.semanticscholar.org/paper/4a554da55fd9ff76c99e25d2ce937b225dc1100c",
"referenceCount": 84,
"citationCount": 2750,
"influentialCitationCount": 180,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "[Yes... but].",
"abstract": null,
"year": 1994,
"venue": "Cirugía pediátrica",
"authors": [
"J. E. Pollina"
],
"externalIds": {
"CorpusId": 9083950,
"PubMed": "8204420"
},
"url": "https://www.semanticscholar.org/paper/52d84b4cb47b56106e1a0a3fcd895f2b5379d591",
"referenceCount": 0,
"citationCount": 65,
"influentialCitationCount": 2,
"isOpenAccess": false,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "Pattern matching: the gestalt approach",
"abstract": null,
"year": 1988,
"venue": "www.drdobbs.com/ database/pattern-matching-the-gestalt-approach/",
"authors": [],
"externalIds": null,
"url": null,
"referenceCount": null,
"citationCount": null,
"influentialCitationCount": null,
"isOpenAccess": null,
"fieldsOfStudy": null
}
]
},
"nach0: multimodal natural and chemical languages foundation model": {
"paper_title": "nach0: multimodal natural and chemical languages foundation model",
"arxiv_id": "2311.12410",
"authors": [
"M. Livne",
"Z. Miftahutdinov",
"E. Tutubalina",
"Maksim Kuznetsov",
"Daniil Polykovskiy",
"Annika Brundyn",
"Aastha Jhunjhunwala",
"Anthony Costa",
"Alex Aliper",
"A. Zhavoronkov"
],
"year": 2023,
"venue": "Chemical Science",
"abstract": "Large Language Models (LLMs) have substantially driven scientific progress in various domains, and many papers have demonstrated their ability to tackle complex problems with creative solutions. Our paper introduces a new foundation model, nach0, capable of solving various chemical and biological tasks: biomedical question answering, named entity recognition, molecular generation, molecular synthesis, attributes prediction, and others. nach0 is a multi-domain and multi-task encoder–decoder LLM pre-trained on unlabeled text from scientific literature, patents, and molecule strings to incorporate a range of chemical and linguistic knowledge. We employed instruction tuning, where specific task-related instructions are utilized to fine-tune nach0 for the final set of tasks. To train nach0 effectively, we leverage the NeMo framework, enabling efficient parallel optimization of both base and large model versions. Extensive experiments demonstrate that our model outperforms state-of-the-art baselines on single-domain and cross-domain tasks. Furthermore, it can generate high-quality outputs in molecular and textual formats, showcasing its effectiveness in multi-domain setups.",
"references": [
{
"title": "Neural scaling of deep chemical models",
"abstract": null,
"year": 2023,
"venue": "Nature Machine Intelligence",
"authors": [
"Nathan C Frey",
"Ryan Soklaski",
"Simon Axelrod",
"S. Samsi",
"Rafael G´omez-Bombarelli",
"Connor W. Coley",
"V. Gadepally"
],
"externalIds": {
"DBLP": "journals/natmi/FreySASGCG23",
"DOI": "10.1038/s42256-023-00740-3",
"CorpusId": 262152780
},
"url": "https://www.semanticscholar.org/paper/bf93fe733932fd25780ef84911b8a507bec1c372",
"referenceCount": 86,
"citationCount": 63,
"influentialCitationCount": 5,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "An extensive benchmark study on biomedical text generation and mining with ChatGPT",
"abstract": "Abstract Motivation In recent years, the development of natural language process (NLP) technologies and deep learning hardware has led to significant improvement in large language models (LLMs). The ChatGPT, the state-of-the-art LLM built on GPT-3.5 and GPT-4, shows excellent capabilities in general language understanding and reasoning. Researchers also tested the GPTs on a variety of NLP-related tasks and benchmarks and got excellent results. With exciting performance on daily chat, researchers began to explore the capacity of ChatGPT on expertise that requires professional education for human and we are interested in the biomedical domain. Results To evaluate the performance of ChatGPT on biomedical-related tasks, this article presents a comprehensive benchmark study on the use of ChatGPT for biomedical corpus, including article abstracts, clinical trials description, biomedical questions, and so on. Typical NLP tasks like named entity recognization, relation extraction, sentence similarity, question and answering, and document classification are included. Overall, ChatGPT got a BLURB score of 58.50 while the state-of-the-art model had a score of 84.30. Through a series of experiments, we demonstrated the effectiveness and versatility of ChatGPT in biomedical text understanding, reasoning and generation, and the limitation of ChatGPT build on GPT-3.5. Availability and implementation All the datasets are available from BLURB benchmark https://microsoft.github.io/BLURB/index.html. The prompts are described in the article.",
"year": 2023,
"venue": "Bioinform.",
"authors": [
"Qijie Chen",
"Haotong Sun",
"Haoyang Liu",
"Yinghui Jiang",
"Ting Ran",
"Xurui Jin",
"Xianglu Xiao",
"Zhimin Lin",
"Hongming Chen",
"Z. Niu"
],
"externalIds": {
"DBLP": "journals/bioinformatics/ChenSLJRJXLCN23",
"PubMedCentral": "10562950",
"DOI": "10.1093/bioinformatics/btad557",
"CorpusId": 261609442,
"PubMed": "37682111"
},
"url": "https://www.semanticscholar.org/paper/47e2978a10aa3895cfbea0ff27c28d0e93102756",
"referenceCount": 29,
"citationCount": 41,
"influentialCitationCount": 4,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine",
"Computer Science"
]
},
{
"title": "Prediction of Clinical Trials Outcomes Based on Target Choice and Clinical Trial Design with Multi‐Modal Artificial Intelligence",
"abstract": "Drug discovery and development is a notoriously risky process with high failure rates at every stage, including disease modeling, target discovery, hit discovery, lead optimization, preclinical development, human safety, and efficacy studies. Accurate prediction of clinical trial outcomes may help significantly improve the efficiency of this process by prioritizing therapeutic programs that are more likely to succeed in clinical trials and ultimately benefit patients. Here, we describe inClinico, a transformer‐based artificial intelligence software platform designed to predict the outcome of phase II clinical trials. The platform combines an ensemble of clinical trial outcome prediction engines that leverage generative artificial intelligence and multimodal data, including omics, text, clinical trial design, and small molecule properties. inClinico was validated in retrospective, quasi‐prospective, and prospective validation studies internally and with pharmaceutical companies and financial institutions. The platform achieved 0.88 receiver operating characteristic area under the curve in predicting the phase II to phase III transition on a quasi‐prospective validation dataset. The first prospective predictions were made and placed on date‐stamped preprint servers in 2016. To validate our model in a real‐world setting, we published forecasted outcomes for several phase II clinical trials achieving 79% accuracy for the trials that have read out. We also present an investment application of inClinico using date stamped virtual trading portfolio demonstrating 35% 9‐month return on investment.",
"year": 2023,
"venue": "Clinical pharmacology and therapy",
"authors": [
"A. Aliper",
"R. Kudrin",
"Daniil Polykovskiy",
"Petrina Kamya",
"E. Tutubalina",
"Shan Chen",
"Fengzhi Ren",
"A. Zhavoronkov"
],
"externalIds": {
"DOI": "10.1002/cpt.3008",
"CorpusId": 260106182,
"PubMed": "37483175"
},
"url": "https://www.semanticscholar.org/paper/6fc654c53a453eae00bc28421caa23bd05ae7149",
"referenceCount": 38,
"citationCount": 21,
"influentialCitationCount": 2,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "Mol-Instructions: A Large-Scale Biomolecular Instruction Dataset for Large Language Models",
"abstract": "Large Language Models (LLMs), with their remarkable task-handling capabilities and innovative outputs, have catalyzed significant advancements across a spectrum of fields. However, their proficiency within specialized domains such as biomolecular studies remains limited. To address this challenge, we introduce Mol-Instructions, a comprehensive instruction dataset designed for the biomolecular domain. Mol-Instructions encompasses three key components: molecule-oriented instructions, protein-oriented instructions, and biomolecular text instructions. Each component aims to improve the understanding and prediction capabilities of LLMs concerning biomolecular features and behaviors. Through extensive instruction tuning experiments on LLMs, we demonstrate the effectiveness of Mol-Instructions in enhancing large models' performance in the intricate realm of biomolecular studies, thus fostering progress in the biomolecular research community. Mol-Instructions is publicly available for ongoing research and will undergo regular updates to enhance its applicability.",
"year": 2023,
"venue": "International Conference on Learning Representations",
"authors": [
"Yin Fang",
"Xiaozhuan Liang",
"Ningyu Zhang",
"Kangwei Liu",
"Rui Huang",
"Zhuo Chen",
"Xiaohui Fan",
"Huajun Chen"
],
"externalIds": {
"DBLP": "conf/iclr/FangL0LH0FC24",
"ArXiv": "2306.08018",
"DOI": "10.48550/arXiv.2306.08018",
"CorpusId": 259164901
},
"url": "https://www.semanticscholar.org/paper/f86aa25603d1f2e4066db9b6a9a6d311b4e8c491",
"referenceCount": 80,
"citationCount": 44,
"influentialCitationCount": 8,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Biology"
]
},
{
"title": "A Comprehensive Benchmark Study on Biomedical Text Generation and Mining with ChatGPT",
"abstract": "In recent years, the development of natural language process (NLP) technologies and deep learning hardware has led to significant improvement in large language models(LLMs). The ChatGPT, the state-of-the-art LLM built on GPT-3.5, shows excellent capabilities in general language understanding and reasoning. Researchers also tested the GPTs on a variety of NLP related tasks and benchmarks and got excellent results. To evaluate the performance of ChatGPT on biomedical related tasks, this paper presents a comprehensive benchmark study on the use of ChatGPT for biomedical corpus, including article abstracts, clinical trials description, biomedical questions and so on. Through a series of experiments, we demonstrated the effectiveness and versatility of Chat-GPT in biomedical text understanding, reasoning and generation.",
"year": 2023,
"venue": "bioRxiv",
"authors": [
"Qijie Chen",
"Haotong Sun",
"Haoyang Liu",
"Yinghui Jiang",
"Ting Ran",
"Xurui Jin",
"Xianglu Xiao",
"Zhimin Lin",
"Z. Niu",
"Hongming Chen"
],
"externalIds": {
"DOI": "10.1101/2023.04.19.537463",
"CorpusId": 258293153
},
"url": "https://www.semanticscholar.org/paper/1aea23ef6545523e3f0f38d7f62feed11dfe0e44",
"referenceCount": 33,
"citationCount": 19,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Biology"
]
},
{
"title": "Does Synthetic Data Generation of LLMs Help Clinical Text Mining?",
"abstract": "Recent advancements in large language models (LLMs) have led to the development of highly potent models like OpenAI's ChatGPT. These models have exhibited exceptional performance in a variety of tasks, such as question answering, essay composition, and code generation. However, their effectiveness in the healthcare sector remains uncertain. In this study, we seek to investigate the potential of ChatGPT to aid in clinical text mining by examining its ability to extract structured information from unstructured healthcare texts, with a focus on biological named entity recognition and relation extraction. However, our preliminary results indicate that employing ChatGPT directly for these tasks resulted in poor performance and raised privacy concerns associated with uploading patients' information to the ChatGPT API. To overcome these limitations, we propose a new training paradigm that involves generating a vast quantity of high-quality synthetic data with labels utilizing ChatGPT and fine-tuning a local model for the downstream task. Our method has resulted in significant improvements in the performance of downstream tasks, improving the F1-score from 23.37% to 63.99% for the named entity recognition task and from 75.86% to 83.59% for the relation extraction task. Furthermore, generating data using ChatGPT can significantly reduce the time and effort required for data collection and labeling, as well as mitigate data privacy concerns. In summary, the proposed framework presents a promising solution to enhance the applicability of LLM models to clinical text mining.",
"year": 2023,
"venue": "arXiv.org",
"authors": [
"Ruixiang Tang",
"Xiaotian Han",
"Xiaoqian Jiang",
"Xia Hu"
],
"externalIds": {
"DBLP": "journals/corr/abs-2303-04360",
"ArXiv": "2303.04360",
"DOI": "10.48550/arXiv.2303.04360",
"CorpusId": 257405132
},
"url": "https://www.semanticscholar.org/paper/bdf7bf9e81a6c12e22323d0402885b2ba62f623e",
"referenceCount": 48,
"citationCount": 125,
"influentialCitationCount": 6,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Chemistry42: An AI-Driven Platform for Molecular Design and Optimization",
"abstract": "Chemistry42 is a software platform for de novo small molecule design and optimization that integrates Artificial Intelligence (AI) techniques with computational and medicinal chemistry methodologies. Chemistry42 efficiently generates novel molecular structures with optimized properties validated in both in vitro and in vivo studies and is available through licensing or collaboration. Chemistry42 is the core component of Insilico Medicine’s Pharma.ai drug discovery suite. Pharma.ai also includes PandaOmics for target discovery and multiomics data analysis, and inClinico—a data-driven multimodal forecast of a clinical trial’s probability of success (PoS). In this paper, we demonstrate how the platform can be used to efficiently find novel molecular structures against DDR1 and CDK20.",
"year": 2023,
"venue": "Journal of Chemical Information and Modeling",
"authors": [
"Y. Ivanenkov",
"Daniil Polykovskiy",
"Dmitry Bezrukov",
"B. Zagribelnyy",
"V. Aladinskiy",
"Petrina Kamya",
"A. Aliper",
"Fengzhi Ren",
"A. Zhavoronkov"
],
"externalIds": {
"DBLP": "journals/jcisd/IvanenkovPBZAKARZ23",
"PubMedCentral": "9930109",
"DOI": "10.1021/acs.jcim.2c01191",
"CorpusId": 256500986,
"PubMed": "36728505"
},
"url": "https://www.semanticscholar.org/paper/8cff9612111ed99b909c151d312e68639345eefa",
"referenceCount": 37,
"citationCount": 53,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Medicine"
]
},
{
"title": "BARTSmiles: Generative Masked Language Models for Molecular Representations",
"abstract": "We discover a robust self-supervised strategy tailored toward molecular representations for generative masked language models through a series of tailored, in-depth ablations. Using this pretraining strategy, we train BARTSmiles, a BART-like model with an order of magnitude more compute than previous self-supervised molecular representations. In-depth evaluations show that BARTSmiles consistently outperforms other self-supervised representations across classification, regression, and generation tasks, setting a new state-of-the-art on eight tasks. We then show that when applied to the molecular domain, the BART objective learns representations that implicitly encode our downstream tasks of interest. For example, by selecting seven neurons from a frozen BARTSmiles, we can obtain a model having performance within two percentage points of the full fine-tuned model on task Clintox. Lastly, we show that standard attribution interpretability methods, when applied to BARTSmiles, highlight certain substructures that chemists use to explain specific properties of molecules. The code and pretrained model are publicly available.",
"year": 2022,
"venue": "Journal of Chemical Information and Modeling",
"authors": [
"Gayane Chilingaryan",
"Hovhannes Tamoyan",
"A. Tevosyan",
"N. Babayan",
"L. Khondkaryan",
"Karen Hambardzumyan",
"Z. Navoyan",
"Hrant Khachatrian",
"Armen Aghajanyan"
],
"externalIds": {
"ArXiv": "2211.16349",
"DBLP": "journals/corr/abs-2211-16349",
"DOI": "10.48550/arXiv.2211.16349",
"CorpusId": 254069435,
"PubMed": "39054761"
},
"url": "https://www.semanticscholar.org/paper/577e96243eaffa374a977fa46cf1680877b2b509",
"referenceCount": 74,
"citationCount": 17,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine",
"Computer Science",
"Biology"
]
},
{
"title": "Galactica: A Large Language Model for Science",
"abstract": "Information overload is a major obstacle to scientific progress. The explosive growth in scientific literature and data has made it ever harder to discover useful insights in a large mass of information. Today scientific knowledge is accessed through search engines, but they are unable to organize scientific knowledge alone. In this paper we introduce Galactica: a large language model that can store, combine and reason about scientific knowledge. We train on a large scientific corpus of papers, reference material, knowledge bases and many other sources. We outperform existing models on a range of scientific tasks. On technical knowledge probes such as LaTeX equations, Galactica outperforms the latest GPT-3 by 68.2% versus 49.0%. Galactica also performs well on reasoning, outperforming Chinchilla on mathematical MMLU by 41.3% to 35.7%, and PaLM 540B on MATH with a score of 20.4% versus 8.8%. It also sets a new state-of-the-art on downstream tasks such as PubMedQA and MedMCQA dev of 77.6% and 52.9%. And despite not being trained on a general corpus, Galactica outperforms BLOOM and OPT-175B on BIG-bench. We believe these results demonstrate the potential for language models as a new interface for science. We open source the model for the benefit of the scientific community.",
"year": 2022,
"venue": "arXiv.org",
"authors": [
"Ross Taylor",
"Marcin Kardas",
"Guillem Cucurull",
"Thomas Scialom",
"A. Hartshorn",
"Elvis Saravia",
"Andrew Poulton",
"Viktor Kerkez",
"Robert Stojnic"
],
"externalIds": {
"ArXiv": "2211.09085",
"DBLP": "journals/corr/abs-2211-09085",
"DOI": "10.48550/arXiv.2211.09085",
"CorpusId": 253553203
},
"url": "https://www.semanticscholar.org/paper/7d645a3fd276918374fd9483fd675c28e46506d1",
"referenceCount": 107,
"citationCount": 570,
"influentialCitationCount": 66,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Mathematics"
]
},
{
"title": "BioGPT: Generative Pre-trained Transformer for Biomedical Text Generation and Mining",
"abstract": "Pre-trained language models have attracted increasing attention in the biomedical domain, inspired by their great success in the general natural language domain. Among the two main branches of pre-trained language models in the general language domain, i.e. BERT (and its variants) and GPT (and its variants), the first one has been extensively studied in the biomedical domain, such as BioBERT and PubMedBERT. While they have achieved great success on a variety of discriminative downstream biomedical tasks, the lack of generation ability constrains their application scope. In this paper, we propose BioGPT, a domain-specific generative Transformer language model pre-trained on large-scale biomedical literature. We evaluate BioGPT on six biomedical natural language processing tasks and demonstrate that our model outperforms previous models on most tasks. Especially, we get 44.98%, 38.42% and 40.76% F1 score on BC5CDR, KD-DTI and DDI end-to-end relation extraction tasks, respectively, and 78.2% accuracy on PubMedQA, creating a new record. Our case study on text generation further demonstrates the advantage of BioGPT on biomedical literature to generate fluent descriptions for biomedical terms.",
"year": 2022,
"venue": "Briefings Bioinform.",
"authors": [
"Renqian Luo",
"Liai Sun",
"Yingce Xia",
"Tao Qin",
"Sheng Zhang",
"Hoifung Poon",
"Tie-Yan Liu"
],
"externalIds": {
"ArXiv": "2210.10341",
"DBLP": "journals/bib/LuoSXQZPL22",
"DOI": "10.1093/bib/bbac409",
"CorpusId": 252542956,
"PubMed": "36156661"
},
"url": "https://www.semanticscholar.org/paper/44279244407a64431810f982be6d0c7da4429dd7",
"referenceCount": 59,
"citationCount": 549,
"influentialCitationCount": 52,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Medicine"
]
},
{
"title": "Improving Small Molecule Generation using Mutual Information Machine",
"abstract": "We address the task of controlled generation of small molecules, which entails finding novel molecules with desired properties under certain constraints (e.g., similarity to a reference molecule). Here we introduce MolMIM, a probabilistic auto-encoder for small molecule drug discovery that learns an informative and clustered latent space. MolMIM is trained with Mutual Information Machine (MIM) learning, and provides a fixed length representation of variable length SMILES strings. Since encoder-decoder models can learn representations with ``holes'' of invalid samples, here we propose a novel extension to the training procedure which promotes a dense latent space, and allows the model to sample valid molecules from random perturbations of latent codes. We provide a thorough comparison of MolMIM to several variable-size and fixed-size encoder-decoder models, demonstrating MolMIM's superior generation as measured in terms of validity, uniqueness, and novelty. We then utilize CMA-ES, a naive black-box and gradient free search algorithm, over MolMIM's latent space for the task of property guided molecule optimization. We achieve state-of-the-art results in several constrained single property optimization tasks as well as in the challenging task of multi-objective optimization, improving over previous success rate SOTA by more than 5\\% . We attribute the strong results to MolMIM's latent representation which clusters similar molecules in the latent space, whereas CMA-ES is often used as a baseline optimization method. We also demonstrate MolMIM to be favourable in a compute limited regime, making it an attractive model for such cases.",
"year": 2022,
"venue": "arXiv.org",
"authors": [
"Daniel A. Reidenbach",
"M. Livne",
"Rajesh Ilango",
"M. Gill",
"Johnny Israeli"
],
"externalIds": {
"ArXiv": "2208.09016",
"DBLP": "journals/corr/abs-2208-09016",
"DOI": "10.48550/arXiv.2208.09016",
"CorpusId": 251710127
},
"url": "https://www.semanticscholar.org/paper/86cf3fbbc6d2e08474268e1b337b5446c04d6757",
"referenceCount": 63,
"citationCount": 9,
"influentialCitationCount": 1,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Biology"
]
},
{
"title": "BigBIO: A Framework for Data-Centric Biomedical Natural Language Processing",
"abstract": "Training and evaluating language models increasingly requires the construction of meta-datasets --diverse collections of curated data with clear provenance. Natural language prompting has recently lead to improved zero-shot generalization by transforming existing, supervised datasets into a diversity of novel pretraining tasks, highlighting the benefits of meta-dataset curation. While successful in general-domain text, translating these data-centric approaches to biomedical language modeling remains challenging, as labeled biomedical datasets are significantly underrepresented in popular data hubs. To address this challenge, we introduce BigBIO a community library of 126+ biomedical NLP datasets, currently covering 12 task categories and 10+ languages. BigBIO facilitates reproducible meta-dataset curation via programmatic access to datasets and their metadata, and is compatible with current platforms for prompt engineering and end-to-end few/zero shot language model evaluation. We discuss our process for task schema harmonization, data auditing, contribution guidelines, and outline two illustrative use cases: zero-shot evaluation of biomedical prompts and large-scale, multi-task learning. BigBIO is an ongoing community effort and is available at https://github.com/bigscience-workshop/biomedical",
"year": 2022,
"venue": "Neural Information Processing Systems",
"authors": [
"Jason Alan Fries",
"Leon Weber",
"Natasha Seelam",
"Gabriel Altay",
"Debajyoti Datta",
"Samuele Garda",
"Myungsun Kang",
"Ruisi Su",
"Wojciech Kusa",
"Samuel Cahyawijaya",
"Fabio Barth",
"Simon Ott",
"M. Samwald",
"Stephen H. Bach",
"Stella Biderman",
"Mario Sanger",
"Bo Wang",
"A. Callahan",
"Daniel Le'on Perin'an",
"Théo Gigant",
"Patrick Haller",
"Jenny Chim",
"J. Posada",
"John Giorgi",
"Karthi Sivaraman",
"Marc Pàmies",
"Marianna Nezhurina",
"Robert Martin",
"Michael Cullan",
"M. Freidank",
"N. Dahlberg",
"Shubhanshu Mishra",
"Shamik Bose",
"N. Broad",
"Yanis Labrak",
"Shlok S Deshmukh",
"Sid Kiblawi",
"Ayush Singh",
"Minh Chien Vu",
"Trishala Neeraj",
"Jonas Golde",
"Albert Villanova del Moral",
"Benjamin Beilharz"
],
"externalIds": {
"DBLP": "conf/nips/FriesWSADGKSKCB22",
"ArXiv": "2206.15076",
"DOI": "10.48550/arXiv.2206.15076",
"CorpusId": 250144481
},
"url": "https://www.semanticscholar.org/paper/d3e4553f0a1fd465ae358701f1bdc2e8265308d6",
"referenceCount": 180,
"citationCount": 39,
"influentialCitationCount": 2,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Large Language Models are Zero-Shot Reasoners",
"abstract": "Pretrained large language models (LLMs) are widely used in many sub-fields of natural language processing (NLP) and generally known as excellent few-shot learners with task-specific exemplars. Notably, chain of thought (CoT) prompting, a recent technique for eliciting complex multi-step reasoning through step-by-step answer examples, achieved the state-of-the-art performances in arithmetics and symbolic reasoning, difficult system-2 tasks that do not follow the standard scaling laws for LLMs. While these successes are often attributed to LLMs' ability for few-shot learning, we show that LLMs are decent zero-shot reasoners by simply adding\"Let's think step by step\"before each answer. Experimental results demonstrate that our Zero-shot-CoT, using the same single prompt template, significantly outperforms zero-shot LLM performances on diverse benchmark reasoning tasks including arithmetics (MultiArith, GSM8K, AQUA-RAT, SVAMP), symbolic reasoning (Last Letter, Coin Flip), and other logical reasoning tasks (Date Understanding, Tracking Shuffled Objects), without any hand-crafted few-shot examples, e.g. increasing the accuracy on MultiArith from 17.7% to 78.7% and GSM8K from 10.4% to 40.7% with large InstructGPT model (text-davinci-002), as well as similar magnitudes of improvements with another off-the-shelf large model, 540B parameter PaLM. The versatility of this single prompt across very diverse reasoning tasks hints at untapped and understudied fundamental zero-shot capabilities of LLMs, suggesting high-level, multi-task broad cognitive capabilities may be extracted by simple prompting. We hope our work not only serves as the minimal strongest zero-shot baseline for the challenging reasoning benchmarks, but also highlights the importance of carefully exploring and analyzing the enormous zero-shot knowledge hidden inside LLMs before crafting finetuning datasets or few-shot exemplars.",
"year": 2022,
"venue": "Neural Information Processing Systems",
"authors": [
"Takeshi Kojima",
"S. Gu",
"Machel Reid",
"Yutaka Matsuo",
"Yusuke Iwasawa"
],
"externalIds": {
"DBLP": "journals/corr/abs-2205-11916",
"ArXiv": "2205.11916",
"CorpusId": 249017743
},
"url": "https://www.semanticscholar.org/paper/e7ad08848d5d7c5c47673ffe0da06af443643bda",
"referenceCount": 61,
"citationCount": 2724,
"influentialCitationCount": 259,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Translation between Molecules and Natural Language",
"abstract": "We present MolT5 - a self-supervised learning framework for pretraining models on a vast amount of unlabeled natural language text and molecule strings. MolT5 allows for new, useful, and challenging analogs of traditional vision-language tasks, such as molecule captioning and text-based de novo molecule generation (altogether: translation between molecules and language), which we explore for the first time. Since MolT5 pretrains models on single-modal data, it helps overcome the chemistry domain shortcoming of data scarcity. Furthermore, we consider several metrics, including a new cross-modal embedding-based metric, to evaluate the tasks of molecule captioning and text-based molecule generation. Our results show that MolT5-based models are able to generate outputs, both molecules and captions, which in many cases are high quality.",
"year": 2022,
"venue": "Conference on Empirical Methods in Natural Language Processing",
"authors": [
"Carl N. Edwards",
"T. Lai",
"Kevin Ros",
"Garrett Honke",
"Heng Ji"
],
"externalIds": {
"ACL": "2022.emnlp-main.26",
"DBLP": "journals/corr/abs-2204-11817",
"ArXiv": "2204.11817",
"DOI": "10.48550/arXiv.2204.11817",
"CorpusId": 248376906
},
"url": "https://www.semanticscholar.org/paper/3b9b1aba877ecd3f7e508cbc78a41b623349902b",
"referenceCount": 90,
"citationCount": 113,
"influentialCitationCount": 33,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "BioBART: Pretraining and Evaluation of A Biomedical Generative Language Model",
"abstract": "Pretrained language models have served as important backbones for natural language processing. Recently, in-domain pretraining has been shown to benefit various domain-specific downstream tasks. In the biomedical domain, natural language generation (NLG) tasks are of critical importance, while understudied. Approaching natural language understanding (NLU) tasks as NLG achieves satisfying performance in the general domain through constrained language generation or language prompting. We emphasize the lack of in-domain generative language models and the unsystematic generative downstream benchmarks in the biomedical domain, hindering the development of the research community. In this work, we introduce the generative language model BioBART that adapts BART to the biomedical domain. We collate various biomedical language generation tasks including dialogue, summarization, entity linking, and named entity recognition. BioBART pretrained on PubMed abstracts has enhanced performance compared to BART and set strong baselines on several tasks. Furthermore, we conduct ablation studies on the pretraining tasks for BioBART and find that sentence permutation has negative effects on downstream tasks.",
"year": 2022,
"venue": "Workshop on Biomedical Natural Language Processing",
"authors": [
"Hongyi Yuan",
"Zheng Yuan",
"Ruyi Gan",
"Jiaxing Zhang",
"Yutao Xie",
"Sheng Yu"
],
"externalIds": {
"DBLP": "journals/corr/abs-2204-03905",
"ArXiv": "2204.03905",
"ACL": "2022.bionlp-1.9",
"DOI": "10.48550/arXiv.2204.03905",
"CorpusId": 248069469
},
"url": "https://www.semanticscholar.org/paper/0db5207510819b9956849eb84bfe8703f8f3688d",
"referenceCount": 80,
"citationCount": 107,
"influentialCitationCount": 18,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "PaLM: Scaling Language Modeling with Pathways",
"abstract": "Large language models have been shown to achieve remarkable performance across a variety of natural language tasks using few-shot learning, which drastically reduces the number of task-specific training examples needed to adapt the model to a particular application. To further our understanding of the impact of scale on few-shot learning, we trained a 540-billion parameter, densely activated, Transformer language model, which we call Pathways Language Model PaLM. We trained PaLM on 6144 TPU v4 chips using Pathways, a new ML system which enables highly efficient training across multiple TPU Pods. We demonstrate continued benefits of scaling by achieving state-of-the-art few-shot learning results on hundreds of language understanding and generation benchmarks. On a number of these tasks, PaLM 540B achieves breakthrough performance, outperforming the finetuned state-of-the-art on a suite of multi-step reasoning tasks, and outperforming average human performance on the recently released BIG-bench benchmark. A significant number of BIG-bench tasks showed discontinuous improvements from model scale, meaning that performance steeply increased as we scaled to our largest model. PaLM also has strong capabilities in multilingual tasks and source code generation, which we demonstrate on a wide array of benchmarks. We additionally provide a comprehensive analysis on bias and toxicity, and study the extent of training data memorization with respect to model scale. Finally, we discuss the ethical considerations related to large language models and discuss potential mitigation strategies.",
"year": 2022,
"venue": "Journal of machine learning research",
"authors": [
"Aakanksha Chowdhery",
"Sharan Narang",
"Jacob Devlin",
"Maarten Bosma",
"Gaurav Mishra",
"Adam Roberts",
"P. Barham",
"Hyung Won Chung",
"Charles Sutton",
"Sebastian Gehrmann",
"Parker Schuh",
"Kensen Shi",
"Sasha Tsvyashchenko",
"Joshua Maynez",
"Abhishek Rao",
"Parker Barnes",
"Yi Tay",
"Noam M. Shazeer",
"Vinodkumar Prabhakaran",
"Emily Reif",
"Nan Du",
"Ben Hutchinson",
"Reiner Pope",
"James Bradbury",
"Jacob Austin",
"M. Isard",
"Guy Gur-Ari",
"Pengcheng Yin",
"Toju Duke",
"Anselm Levskaya",
"Sanjay Ghemawat",
"Sunipa Dev",
"H. Michalewski",
"Xavier García",
"Vedant Misra",
"Kevin Robinson",
"Liam Fedus",
"Denny Zhou",
"Daphne Ippolito",
"D. Luan",
"Hyeontaek Lim",
"Barret Zoph",
"A. Spiridonov",
"Ryan Sepassi",
"David Dohan",
"Shivani Agrawal",
"Mark Omernick",
"Andrew M. Dai",
"Thanumalayan Sankaranarayana Pillai",
"Marie Pellat",
"Aitor Lewkowycz",
"Erica Moreira",
"R. Child",
"Oleksandr Polozov",
"Katherine Lee",
"Zongwei Zhou",
"Xuezhi Wang",
"Brennan Saeta",
"Mark Díaz",
"Orhan Firat",
"Michele Catasta",
"Jason Wei",
"K. Meier-Hellstern",
"D. Eck",
"J. Dean",
"Slav Petrov",
"Noah Fiedel"
],
"externalIds": {
"ArXiv": "2204.02311",
"DBLP": "journals/corr/abs-2204-02311",
"CorpusId": 247951931
},
"url": "https://www.semanticscholar.org/paper/094ff971d6a8b8ff870946c9b3ce5aa173617bfb",
"referenceCount": 173,
"citationCount": 4792,
"influentialCitationCount": 335,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "SELFIES and the future of molecular string representations",
"abstract": null,
"year": 2022,
"venue": "Patterns",
"authors": [
"Mario Krenn",
"Qianxiang Ai",
"Senja Barthel",
"Nessa Carson",
"Angelo Frei",
"Nathan C Frey",
"Pascal Friederich",
"T. Gaudin",
"A. Gayle",
"K. Jablonka",
"R. Lameiro",
"Dominik Lemm",
"Alston Lo",
"S. M. Moosavi",
"Jos'e Manuel N'apoles-Duarte",
"AkshatKumar Nigam",
"R. Pollice",
"Kohulan Rajan",
"U. Schatzschneider",
"P. Schwaller",
"Marta Skreta",
"B. Smit",
"Felix Strieth-Kalthoff",
"Chong Sun",
"G. Tom",
"Guido Falk von Rudorff",
"Andrew Wang",
"Andrew D. White",
"A. Young",
"Rose Yu",
"A. Aspuru‐Guzik"
],
"externalIds": {
"DBLP": "journals/corr/abs-2204-00056",
"ArXiv": "2204.00056",
"PubMedCentral": "9583042",
"DOI": "10.1016/j.patter.2022.100588",
"CorpusId": 247922757,
"PubMed": "36277819"
},
"url": "https://www.semanticscholar.org/paper/d358ce242dacfd2ea738aa538779f06c79777f2b",
"referenceCount": 189,
"citationCount": 105,
"influentialCitationCount": 4,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Physics",
"Medicine"
]
},
{
"title": "LinkBERT: Pretraining Language Models with Document Links",
"abstract": "Language model (LM) pretraining captures various knowledge from text corpora, helping downstream tasks. However, existing methods such as BERT model a single document, and do not capture dependencies or knowledge that span across documents. In this work, we propose LinkBERT, an LM pretraining method that leverages links between documents, e.g., hyperlinks. Given a text corpus, we view it as a graph of documents and create LM inputs by placing linked documents in the same context. We then pretrain the LM with two joint self-supervised objectives: masked language modeling and our new proposal, document relation prediction. We show that LinkBERT outperforms BERT on various downstream tasks across two domains: the general domain (pretrained on Wikipedia with hyperlinks) and biomedical domain (pretrained on PubMed with citation links). LinkBERT is especially effective for multi-hop reasoning and few-shot QA (+5% absolute improvement on HotpotQA and TriviaQA), and our biomedical LinkBERT sets new states of the art on various BioNLP tasks (+7% on BioASQ and USMLE). We release our pretrained models, LinkBERT and BioLinkBERT, as well as code and data.",
"year": 2022,
"venue": "Annual Meeting of the Association for Computational Linguistics",
"authors": [
"Michihiro Yasunaga",
"J. Leskovec",
"Percy Liang"
],
"externalIds": {
"ArXiv": "2203.15827",
"DBLP": "conf/acl/YasunagaLL22",
"ACL": "2022.acl-long.551",
"DOI": "10.48550/arXiv.2203.15827",
"CorpusId": 247793456
},
"url": "https://www.semanticscholar.org/paper/a83cdcc0135c58fddf89fc72f1b92b7a9d1e170f",
"referenceCount": 98,
"citationCount": 284,
"influentialCitationCount": 48,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Unified Deep Learning Model for Multitask Reaction Predictions with Explanation",
"abstract": "There is significant interest and importance to develop robust machine learning models to assist organic chemistry synthesis. Typically, task-specific machine learning models for distinct reaction prediction tasks have been developed. In this work, we develop a unified deep learning model, T5Chem, for a variety of chemical reaction predictions tasks by adapting the \"Text-to-Text Transfer Transformer\" (T5) framework in natural language processing (NLP). On the basis of self-supervised pretraining with PubChem molecules, the T5Chem model can achieve state-of-the-art performances for four distinct types of task-specific reaction prediction tasks using four different open-source data sets, including reaction type classification on USPTO_TPL, forward reaction prediction on USPTO_MIT, single-step retrosynthesis on USPTO_50k, and reaction yield prediction on high-throughput C-N coupling reactions. Meanwhile, we introduced a new unified multitask reaction prediction data set USPTO_500_MT, which can be used to train and test five different types of reaction tasks, including the above four as well as a new reagent suggestion task. Our results showed that models trained with multiple tasks are more robust and can benefit from mutual learning on related tasks. Furthermore, we demonstrated the use of SHAP (SHapley Additive exPlanations) to explain T5Chem predictions at the functional group level, which provides a way to demystify sequence-based deep learning models in chemistry. T5Chem is accessible through https://yzhang.hpc.nyu.edu/T5Chem.",
"year": 2022,
"venue": "Journal of Chemical Information and Modeling",
"authors": [
"Jieyu Lu",
"Yingkai Zhang"
],
"externalIds": {
"DBLP": "journals/jcisd/LuZ22",
"DOI": "10.1021/acs.jcim.1c01467",
"CorpusId": 247362020,
"PubMed": "35266390"
},
"url": "https://www.semanticscholar.org/paper/eee7997106834442f1704e4681a9a761df6696a1",
"referenceCount": 49,
"citationCount": 51,
"influentialCitationCount": 2,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine",
"Computer Science"
]
},
{
"title": "Language models can learn complex molecular distributions",
"abstract": null,
"year": 2021,
"venue": "Nature Communications",
"authors": [
"Daniel Flam-Shepherd",
"Kevin Zhu",
"A. Aspuru‐Guzik"
],
"externalIds": {
"DBLP": "journals/corr/abs-2112-03041",
"ArXiv": "2112.03041",
"PubMedCentral": "9174447",
"DOI": "10.1038/s41467-022-30839-x",
"CorpusId": 249463675,
"PubMed": "35672310"
},
"url": "https://www.semanticscholar.org/paper/0f1c956f84a68ea4d6f5ebcf3c4d1d4e1f41d8f3",
"referenceCount": 68,
"citationCount": 110,
"influentialCitationCount": 1,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Biology",
"Medicine"
]
},
{
"title": "On the Opportunities and Risks of Foundation Models",
"abstract": "AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks. We call these models foundation models to underscore their critically central yet incomplete character. This report provides a thorough account of the opportunities and risks of foundation models, ranging from their capabilities (e.g., language, vision, robotics, reasoning, human interaction) and technical principles(e.g., model architectures, training procedures, data, systems, security, evaluation, theory) to their applications (e.g., law, healthcare, education) and societal impact (e.g., inequity, misuse, economic and environmental impact, legal and ethical considerations). Though foundation models are based on standard deep learning and transfer learning, their scale results in new emergent capabilities,and their effectiveness across so many tasks incentivizes homogenization. Homogenization provides powerful leverage but demands caution, as the defects of the foundation model are inherited by all the adapted models downstream. Despite the impending widespread deployment of foundation models, we currently lack a clear understanding of how they work, when they fail, and what they are even capable of due to their emergent properties. To tackle these questions, we believe much of the critical research on foundation models will require deep interdisciplinary collaboration commensurate with their fundamentally sociotechnical nature.",
"year": 2021,
"venue": "arXiv.org",
"authors": [
"Rishi Bommasani",
"Drew A. Hudson",
"E. Adeli",
"R. Altman",
"Simran Arora",
"Sydney von Arx",
"Michael S. Bernstein",
"J. Bohg",
"Antoine Bosselut",
"E. Brunskill",
"Erik Brynjolfsson",
"S. Buch",
"Dallas Card",
"Rodrigo Castellon",
"Niladri S. Chatterji",
"Annie S. Chen",
"Kathleen A. Creel",
"Jared Davis",
"Dora Demszky",
"Chris Donahue",
"M. Doumbouya",
"Esin Durmus",
"Stefano Ermon",
"J. Etchemendy",
"Kawin Ethayarajh",
"L. Fei-Fei",
"Chelsea Finn",
"Trevor Gale",
"Lauren Gillespie",
"Karan Goel",
"Noah D. Goodman",
"S. Grossman",
"Neel Guha",
"Tatsunori Hashimoto",
"Peter Henderson",
"John Hewitt",
"Daniel E. Ho",
"Jenny Hong",
"Kyle Hsu",
"Jing Huang",
"Thomas F. Icard",
"Saahil Jain",
"Dan Jurafsky",
"Pratyusha Kalluri",
"Siddharth Karamcheti",
"G. Keeling",
"Fereshte Khani",
"O. Khattab",
"Pang Wei Koh",
"M. Krass",
"Ranjay Krishna",
"Rohith Kuditipudi",
"Ananya Kumar",
"Faisal Ladhak",
"Mina Lee",
"Tony Lee",
"J. Leskovec",
"Isabelle Levent",
"Xiang Lisa Li",
"Xuechen Li",
"Tengyu Ma",
"Ali Malik",
"Christopher D. Manning",
"Suvir Mirchandani",
"E. Mitchell",
"Zanele Munyikwa",
"Suraj Nair",
"A. Narayan",
"D. Narayanan",
"Benjamin Newman",
"Allen Nie",
"Juan Carlos Niebles",
"H. Nilforoshan",
"J. Nyarko",
"Giray Ogut",
"Laurel J. Orr",
"Isabel Papadimitriou",
"J. Park",
"C. Piech",
"Eva Portelance",
"Christopher Potts",
"Aditi Raghunathan",
"Robert Reich",
"Hongyu Ren",
"Frieda Rong",
"Yusuf Roohani",
"Camilo Ruiz",
"Jack Ryan",
"Christopher R'e",
"Dorsa Sadigh",
"Shiori Sagawa",
"Keshav Santhanam",
"Andy Shih",
"K. Srinivasan",
"Alex Tamkin",
"Rohan Taori",
"A. Thomas",
"Florian Tramèr",
"Rose E. Wang",
"William Wang",
"Bohan Wu",
"Jiajun Wu",
"Yuhuai Wu",
"Sang Michael Xie",
"Michihiro Yasunaga",
"Jiaxuan You",
"M. Zaharia",
"Michael Zhang",
"Tianyi Zhang",
"Xikun Zhang",
"Yuhui Zhang",
"Lucia Zheng",
"Kaitlyn Zhou",
"Percy Liang"
],
"externalIds": {
"ArXiv": "2108.07258",
"DBLP": "journals/corr/abs-2108-07258",
"CorpusId": 237091588
},
"url": "https://www.semanticscholar.org/paper/76e9e2ec3de437ffb30d8b7b629f7fe3e61de5c2",
"referenceCount": 0,
"citationCount": 3228,
"influentialCitationCount": 144,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Chemformer: a pre-trained transformer for computational chemistry",
"abstract": "Transformer models coupled with a simplified molecular line entry system (SMILES) have recently proven to be a powerful combination for solving challenges in cheminformatics. These models, however, are often developed specifically for a single application and can be very resource-intensive to train. In this work we present the Chemformer model—a Transformer-based model which can be quickly applied to both sequence-to-sequence and discriminative cheminformatics tasks. Additionally, we show that self-supervised pre-training can improve performance and significantly speed up convergence on downstream tasks. On direct synthesis and retrosynthesis prediction benchmark datasets we publish state-of-the-art results for top-1 accuracy. We also improve on existing approaches for a molecular optimisation task and show that Chemformer can optimise on multiple discriminative tasks simultaneously. Models, datasets and code will be made available after publication.",
"year": 2021,
"venue": "Machine Learning: Science and Technology",
"authors": [],
"externalIds": {
"DBLP": "journals/mlst/IrwinDHB22",
"MAG": "3178976598",
"DOI": "10.1088/2632-2153/ac3ffb",
"CorpusId": 237747003
},
"url": "https://www.semanticscholar.org/paper/3f9f7f690e003176316d0ee56fbcbfed4b6b0948",
"referenceCount": 0,
"citationCount": 190,
"influentialCitationCount": 20,
"isOpenAccess": true,
"fieldsOfStudy": [
"Physics",
"Computer Science"
]
},
{
"title": "Medical concept normalization in clinical trials with drug and disease representation learning",
"abstract": "Abstract Motivation Clinical trials are the essential stage of every drug development program for the treatment to become available to patients. Despite the importance of well-structured clinical trial databases and their tremendous value for drug discovery and development such instances are very rare. Presently large-scale information on clinical trials is stored in clinical trial registers which are relatively structured, but the mappings to external databases of drugs and diseases are increasingly lacking. The precise production of such links would enable us to interrogate richer harmonized datasets for invaluable insights. Results We present a neural approach for medical concept normalization of diseases and drugs. Our two-stage approach is based on Bidirectional Encoder Representations from Transformers (BERT). In the training stage, we optimize the relative similarity of mentions and concept names from a terminology via triplet loss. In the inference stage, we obtain the closest concept name representation in a common embedding space to a given mention representation. We performed a set of experiments on a dataset of abstracts and a real-world dataset of trial records with interventions and conditions mapped to drug and disease terminologies. The latter includes mentions associated with one or more concepts (in-KB) or zero (out-of-KB, nil prediction). Experiments show that our approach significantly outperforms baseline and state-of-the-art architectures. Moreover, we demonstrate that our approach is effective in knowledge transfer from the scientific literature to clinical trial data. Availability and implementation We make code and data freely available at https://github.com/insilicomedicine/DILBERT.",
"year": 2021,
"venue": "Bioinform.",
"authors": [
"Z. Miftahutdinov",
"Artur Kadurin",
"R. Kudrin",
"E. Tutubalina"
],
"externalIds": {
"DBLP": "journals/bioinformatics/MiftahutdinovKK21",
"PubMedCentral": "8570806",
"DOI": "10.1093/bioinformatics/btab474",
"CorpusId": 235713895,
"PubMed": "34213526"
},
"url": "https://www.semanticscholar.org/paper/fdb00a0674dcf3bc1c5e3ddcdb9bf9f11fdc7d3e",
"referenceCount": 51,
"citationCount": 22,
"influentialCitationCount": 2,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine",
"Computer Science"
]
},
{
"title": "SciFive: a text-to-text transformer model for biomedical literature",
"abstract": "In this report, we introduce SciFive, a domain-specific T5 model that has been pre-trained on large biomedical corpora. Our model outperforms the current SOTA methods (i.e. BERT, BioBERT, Base T5) on tasks in named entity relation, relation extraction, natural language inference, and question-answering. We show that text-generation methods have significant potential in a broad array of biomedical NLP tasks, particularly those requiring longer, more complex outputs. Our results support the exploration of more difficult text generation tasks and the development of new methods in this area",
"year": 2021,
"venue": "arXiv.org",
"authors": [
"Long Phan",
"J. Anibal",
"H. Tran",
"Shaurya Chanana",
"Erol Bahadroglu",
"Alec Peltekian",
"G. Altan-Bonnet"
],
"externalIds": {
"DBLP": "journals/corr/abs-2106-03598",
"ArXiv": "2106.03598",
"CorpusId": 235358786
},
"url": "https://www.semanticscholar.org/paper/6003d268e9b5230dbc3e320497b50329d6186816",
"referenceCount": 23,
"citationCount": 134,
"influentialCitationCount": 25,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Documenting Large Webtext Corpora: A Case Study on the Colossal Clean Crawled Corpus",
"abstract": "Large language models have led to remarkable progress on many NLP tasks, and researchers are turning to ever-larger text corpora to train them. Some of the largest corpora available are made by scraping significant portions of the internet, and are frequently introduced with only minimal documentation. In this work we provide some of the first documentation for the Colossal Clean Crawled Corpus (C4; Raffel et al., 2020), a dataset created by applying a set of filters to a single snapshot of Common Crawl. We begin by investigating where the data came from, and find a significant amount of text from unexpected sources like patents and US military websites. Then we explore the content of the text itself, and find machine-generated text (e.g., from machine translation systems) and evaluation examples from other benchmark NLP datasets. To understand the impact of the filters applied to create this dataset, we evaluate the text that was removed, and show that blocklist filtering disproportionately removes text from and about minority individuals. Finally, we conclude with some recommendations for how to created and document web-scale datasets from a scrape of the internet.",
"year": 2021,
"venue": "Conference on Empirical Methods in Natural Language Processing",
"authors": [
"Jesse Dodge",
"Ana Marasovic",
"Gabriel Ilharco",
"Dirk Groeneveld",
"Margaret Mitchell",
"Matt Gardner"
],
"externalIds": {
"ArXiv": "2104.08758",
"DBLP": "conf/emnlp/DodgeSMAIGM021",
"ACL": "2021.emnlp-main.98",
"DOI": "10.18653/v1/2021.emnlp-main.98",
"CorpusId": 237568724
},
"url": "https://www.semanticscholar.org/paper/1adadbfa95e43a70fcd17e6ce947a0652b86bfc3",
"referenceCount": 87,
"citationCount": 324,
"influentialCitationCount": 29,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Efficient Large-Scale Language Model Training on GPU Clusters Using Megatron-LM",
"abstract": "Large language models have led to state-of-the-art accuracies across several tasks. However, training these models efficiently is challenging because: a) GPU memory capacity is limited, making it impossible to fit large models on even a multi-GPU server, and b) the number of compute operations required can result in unrealistically long training times. Consequently, new methods of model parallelism such as tensor and pipeline parallelism have been proposed. Unfortunately, naive usage of these methods leads to scaling issues at thousands of GPUs. In this paper, we show how tensor, pipeline, and data parallelism can be composed to scale to thousands of GPUs. We propose a novel interleaved pipelining schedule that can improve throughput by 10+% with memory footprint comparable to existing approaches. Our approach allows us to perform training iterations on a model with 1 trillion parameters at 502 petaFLOP/s on 3072 GPUs (per-GPU throughput of 52% of theoretical peak).",
"year": 2021,
"venue": "International Conference for High Performance Computing, Networking, Storage and Analysis",
"authors": [
"D. Narayanan",
"M. Shoeybi",
"J. Casper",
"P. LeGresley",
"M. Patwary",
"V. Korthikanti",
"Dmitri Vainbrand",
"Prethvi Kashinkunti",
"J. Bernauer",
"Bryan Catanzaro",
"Amar Phanishayee",
"M. Zaharia"
],
"externalIds": {
"DBLP": "conf/sc/NarayananSCLPKV21",
"ArXiv": "2104.04473",
"DOI": "10.1145/3458817.3476209",
"CorpusId": 236635565
},
"url": "https://www.semanticscholar.org/paper/774591fdd988eaaff3917e7c5171d044b0843e63",
"referenceCount": 39,
"citationCount": 438,
"influentialCitationCount": 79,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "MolGrow: A Graph Normalizing Flow for Hierarchical Molecular Generation",
"abstract": "We propose a hierarchical normalizing flow model for generating molecular graphs. The model produces new molecular structures from a single-node graph by recursively splitting every node into two. All operations are invertible and can be used as plug-and-play modules. The hierarchical nature of the latent codes allows for precise changes in the resulting graph: perturbations in the first layer cause global structural changes, while perturbations in the consequent layers change the resulting molecule only marginally. Proposed model outperforms existing generative graph models on the distribution learning task. We also show successful experiments on global and constrained optimization of chemical properties using latent codes of the model.",
"year": 2021,
"venue": "AAAI Conference on Artificial Intelligence",
"authors": [
"Maksim Kuznetsov",
"Daniil Polykovskiy"
],
"externalIds": {
"DBLP": "journals/corr/abs-2106-05856",
"ArXiv": "2106.05856",
"DOI": "10.1609/aaai.v35i9.17001",
"CorpusId": 235306569
},
"url": "https://www.semanticscholar.org/paper/6bcc0ca39c6d120b6a7e8ab0d7e988e577403ab8",
"referenceCount": 46,
"citationCount": 33,
"influentialCitationCount": 1,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Physics"
]
},
{
"title": "Drug and Disease Interpretation Learning with Biomedical Entity Representation Transformer",
"abstract": null,
"year": 2021,
"venue": "European Conference on Information Retrieval",
"authors": [
"Z. Miftahutdinov",
"Artur Kadurin",
"R. Kudrin",
"E. Tutubalina"
],
"externalIds": {
"ArXiv": "2101.09311",
"DBLP": "conf/ecir/MiftahutdinovKK21",
"DOI": "10.1007/978-3-030-72113-8_30",
"CorpusId": 231698642
},
"url": "https://www.semanticscholar.org/paper/ceabc6247e6b0bd04c39c5d7db6c8a467c2aa526",
"referenceCount": 53,
"citationCount": 8,
"influentialCitationCount": 0,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Fair Evaluation in Concept Normalization: a Large-scale Comparative Analysis for BERT-based Models",
"abstract": "Linking of biomedical entity mentions to various terminologies of chemicals, diseases, genes, adverse drug reactions is a challenging task, often requiring non-syntactic interpretation. A large number of biomedical corpora and state-of-the-art models have been introduced in the past five years. However, there are no general guidelines regarding the evaluation of models on these corpora in single- and cross-terminology settings. In this work, we perform a comparative evaluation of various benchmarks and study the efficiency of state-of-the-art neural architectures based on Bidirectional Encoder Representations from Transformers (BERT) for linking of three entity types across three domains: research abstracts, drug labels, and user-generated texts on drug therapy in English. We have made the source code and results available at https://github.com/insilicomedicine/Fair-Evaluation-BERT.",
"year": 2020,
"venue": "International Conference on Computational Linguistics",
"authors": [
"E. Tutubalina",
"Artur Kadurin",
"Z. Miftahutdinov"
],
"externalIds": {
"MAG": "3116070535",
"ACL": "2020.coling-main.588",
"DBLP": "conf/coling/TutubalinaKM20",
"DOI": "10.18653/V1/2020.COLING-MAIN.588",
"CorpusId": 227231160
},
"url": "https://www.semanticscholar.org/paper/97d5e1a7ed819f6f10b3a1dfacd98ea2a0ae4954",
"referenceCount": 27,
"citationCount": 25,
"influentialCitationCount": 0,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Molecular representation learning with language models and domain-relevant auxiliary tasks",
"abstract": "We apply a Transformer architecture, specifically BERT, to learn flexible and high quality molecular representations for drug discovery problems. We study the impact of using different combinations of self-supervised tasks for pre-training, and present our results for the established Virtual Screening and QSAR benchmarks. We show that: i) The selection of appropriate self-supervised task(s) for pre-training has a significant impact on performance in subsequent downstream tasks such as Virtual Screening. ii) Using auxiliary tasks with more domain relevance for Chemistry, such as learning to predict calculated molecular properties, increases the fidelity of our learnt representations. iii) Finally, we show that molecular representations learnt by our model `MolBert' improve upon the current state of the art on the benchmark datasets.",
"year": 2020,
"venue": "arXiv.org",
"authors": [
"Benedek Fabian",
"T. Edlich",
"H. Gaspar",
"Marwin H. S. Segler",
"Joshua Meyers",
"Marco Fiscato",
"Mohamed Ahmed"
],
"externalIds": {
"DBLP": "journals/corr/abs-2011-13230",
"MAG": "3109892317",
"ArXiv": "2011.13230",
"CorpusId": 227209142
},
"url": "https://www.semanticscholar.org/paper/e4c5e81e6e337bb94af3eb719df5f029b40434fa",
"referenceCount": 44,
"citationCount": 106,
"influentialCitationCount": 12,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Bio-Megatron: Larger Biomedical Domain Language Model",
"abstract": "There has been an influx of biomedical domain-specific language models, showing language models pre-trained on biomedical text perform better on biomedical domain benchmarks than those trained on general domain text corpora such as Wikipedia and Books. Yet, most works do not study the factors affecting each domain language application deeply. Additionally, the study of model size on domain-specific models has been mostly missing. We empirically study and evaluate several factors that can affect performance on domain language applications, such as the sub-word vocabulary set, model size, pre-training corpus, and domain transfer. We show consistent improvements on benchmarks with our larger BioMegatron model trained on a larger domain corpus, contributing to our understanding of domain language model applications. We demonstrate noticeable improvements over the previous state-of-the-art (SOTA) on standard biomedical NLP benchmarks of named entity recognition, relation extraction, and question answering. Model checkpoints and code are available at [this https URL] and [this https URL].",
"year": 2020,
"venue": "Conference on Empirical Methods in Natural Language Processing",
"authors": [
"Hoo-Chang Shin",
"Yang Zhang",
"Evelina Bakhturina",
"Raul Puri",
"M. Patwary",
"M. Shoeybi",
"Raghav Mani"
],
"externalIds": {
"MAG": "3093097038",
"ACL": "2020.emnlp-main.379",
"DBLP": "conf/emnlp/ShinZBPPSM20",
"ArXiv": "2010.06060",
"DOI": "10.18653/v1/2020.emnlp-main.379",
"CorpusId": 222310618
},
"url": "https://www.semanticscholar.org/paper/6d6595766a35f12a6ad671d05634b5e2159d4f3e",
"referenceCount": 27,
"citationCount": 127,
"influentialCitationCount": 16,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing",
"abstract": "Pretraining large neural language models, such as BERT, has led to impressive gains on many natural language processing (NLP) tasks. However, most pretraining efforts focus on general domain corpora, such as newswire and Web. A prevailing assumption is that even domain-specific pretraining can benefit by starting from general-domain language models. In this article, we challenge this assumption by showing that for domains with abundant unlabeled text, such as biomedicine, pretraining language models from scratch results in substantial gains over continual pretraining of general-domain language models. To facilitate this investigation, we compile a comprehensive biomedical NLP benchmark from publicly available datasets. Our experiments show that domain-specific pretraining serves as a solid foundation for a wide range of biomedical NLP tasks, leading to new state-of-the-art results across the board. Further, in conducting a thorough evaluation of modeling choices, both for pretraining and task-specific fine-tuning, we discover that some common practices are unnecessary with BERT models, such as using complex tagging schemes in named entity recognition. To help accelerate research in biomedical NLP, we have released our state-of-the-art pretrained and task-specific models for the community, and created a leaderboard featuring our BLURB benchmark (short for Biomedical Language Understanding & Reasoning Benchmark) at https://aka.ms/BLURB.",
"year": 2020,
"venue": "ACM Trans. Comput. Heal.",
"authors": [
"Yu Gu",
"Robert Tinn",
"Hao Cheng",
"Michael R. Lucas",
"Naoto Usuyama",
"Xiaodong Liu",
"Tristan Naumann",
"Jianfeng Gao",
"Hoifung Poon"
],
"externalIds": {
"MAG": "3046375318",
"DBLP": "journals/corr/abs-2007-15779",
"ArXiv": "2007.15779",
"DOI": "10.1145/3458754",
"CorpusId": 220919723
},
"url": "https://www.semanticscholar.org/paper/a2f38d03fd363e920494ad65a5f0ad8bd18cd60b",
"referenceCount": 64,
"citationCount": 1378,
"influentialCitationCount": 213,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Results of the Seventh Edition of the BioASQ Challenge",
"abstract": null,
"year": 2020,
"venue": "PKDD/ECML Workshops",
"authors": [
"A. Nentidis",
"K. Bougiatiotis",
"Anastasia Krithara",
"G. Paliouras"
],
"externalIds": {
"DBLP": "conf/pkdd/NentidisBKP19",
"ArXiv": "2006.09174",
"MAG": "3035763680",
"DOI": "10.1007/978-3-030-43887-6_51",
"CorpusId": 214730413
},
"url": "https://www.semanticscholar.org/paper/7a23a2948fa3a9d48e3c3bd071b522f417e59955",
"referenceCount": 69,
"citationCount": 59,
"influentialCitationCount": 4,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Language Models are Few-Shot Learners",
"abstract": "Recent work has demonstrated substantial gains on many NLP tasks and benchmarks by pre-training on a large corpus of text followed by fine-tuning on a specific task. While typically task-agnostic in architecture, this method still requires task-specific fine-tuning datasets of thousands or tens of thousands of examples. By contrast, humans can generally perform a new language task from only a few examples or from simple instructions - something which current NLP systems still largely struggle to do. Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting. For all tasks, GPT-3 is applied without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model. GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic. At the same time, we also identify some datasets where GPT-3's few-shot learning still struggles, as well as some datasets where GPT-3 faces methodological issues related to training on large web corpora. Finally, we find that GPT-3 can generate samples of news articles which human evaluators have difficulty distinguishing from articles written by humans. We discuss broader societal impacts of this finding and of GPT-3 in general.",
"year": 2020,
"venue": "Neural Information Processing Systems",
"authors": [
"Tom B. Brown",
"Benjamin Mann",
"Nick Ryder",
"Melanie Subbiah",
"J. Kaplan",
"Prafulla Dhariwal",
"Arvind Neelakantan",
"Pranav Shyam",
"Girish Sastry",
"Amanda Askell",
"Sandhini Agarwal",
"Ariel Herbert-Voss",
"Gretchen Krueger",
"T. Henighan",
"R. Child",
"A. Ramesh",
"Daniel M. Ziegler",
"Jeff Wu",
"Clemens Winter",
"Christopher Hesse",
"Mark Chen",
"Eric Sigler",
"Ma-teusz Litwin",
"Scott Gray",
"B. Chess",
"Jack Clark",
"Christopher Berner",
"Sam McCandlish",
"Alec Radford",
"I. Sutskever",
"Dario Amodei"
],
"externalIds": {
"ArXiv": "2005.14165",
"DBLP": "conf/nips/BrownMRSKDNSSAA20",
"MAG": "3030163527",
"CorpusId": 218971783
},
"url": "https://www.semanticscholar.org/paper/90abbc2cf38462b954ae1b772fac9532e2ccd8b0",
"referenceCount": 146,
"citationCount": 30859,
"influentialCitationCount": 3528,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Molecular Generation for Desired Transcriptome Changes With Adversarial Autoencoders",
"abstract": "Gene expression profiles are useful for assessing the efficacy and side effects of drugs. In this paper, we propose a new generative model that infers drug molecules that could induce a desired change in gene expression. Our model—the Bidirectional Adversarial Autoencoder—explicitly separates cellular processes captured in gene expression changes into two feature sets: those related and unrelated to the drug incubation. The model uses related features to produce a drug hypothesis. We have validated our model on the LINCS L1000 dataset by generating molecular structures in the SMILES format for the desired transcriptional response. In the experiments, we have shown that the proposed model can generate novel molecular structures that could induce a given gene expression change or predict a gene expression difference after incubation of a given molecular structure. The code of the model is available at https://github.com/insilicomedicine/BiAAE.",
"year": 2020,
"venue": "Frontiers in Pharmacology",
"authors": [
"Shayakhmetov Rim",
"Maksim Kuznetsov",
"Alexander Zhebrak",
"Artur Kadurin",
"S. Nikolenko",
"A. Aliper",
"Daniil Polykovskiy"
],
"externalIds": {
"MAG": "3016876724",
"PubMedCentral": "7182000",
"DOI": "10.3389/fphar.2020.00269",
"CorpusId": 215801154,
"PubMed": "32973498"
},
"url": "https://www.semanticscholar.org/paper/810821a48b5ba2908c538a1f2f55161a0a3f1202",
"referenceCount": 68,
"citationCount": 33,
"influentialCitationCount": 2,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine",
"Computer Science"
]
},
{
"title": "BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension",
"abstract": "We present BART, a denoising autoencoder for pretraining sequence-to-sequence models. BART is trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. It uses a standard Tranformer-based neural machine translation architecture which, despite its simplicity, can be seen as generalizing BERT (due to the bidirectional encoder), GPT (with the left-to-right decoder), and other recent pretraining schemes. We evaluate a number of noising approaches, finding the best performance by both randomly shuffling the order of sentences and using a novel in-filling scheme, where spans of text are replaced with a single mask token. BART is particularly effective when fine tuned for text generation but also works well for comprehension tasks. It matches the performance of RoBERTa on GLUE and SQuAD, and achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains of up to 3.5 ROUGE. BART also provides a 1.1 BLEU increase over a back-translation system for machine translation, with only target language pretraining. We also replicate other pretraining schemes within the BART framework, to understand their effect on end-task performance.",
"year": 2019,
"venue": "Annual Meeting of the Association for Computational Linguistics",
"authors": [
"M. Lewis",
"Yinhan Liu",
"Naman Goyal",
"Marjan Ghazvininejad",
"Abdel-rahman Mohamed",
"Omer Levy",
"Veselin Stoyanov",
"Luke Zettlemoyer"
],
"externalIds": {
"MAG": "2982399380",
"DBLP": "conf/acl/LewisLGGMLSZ20",
"ACL": "2020.acl-main.703",
"ArXiv": "1910.13461",
"DOI": "10.18653/v1/2020.acl-main.703",
"CorpusId": 204960716
},
"url": "https://www.semanticscholar.org/paper/395de0bd3837fdf4b4b5e5f04835bcc69c279481",
"referenceCount": 36,
"citationCount": 9215,
"influentialCitationCount": 1963,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Mathematics"
]
},
{
"title": "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer",
"abstract": "Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new \"Colossal Clean Crawled Corpus\", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.",
"year": 2019,
"venue": "Journal of machine learning research",
"authors": [
"Colin Raffel",
"Noam M. Shazeer",
"Adam Roberts",
"Katherine Lee",
"Sharan Narang",
"Michael Matena",
"Yanqi Zhou",
"Wei Li",
"Peter J. Liu"
],
"externalIds": {
"MAG": "2981852735",
"DBLP": "journals/corr/abs-1910-10683",
"ArXiv": "1910.10683",
"CorpusId": 204838007
},
"url": "https://www.semanticscholar.org/paper/6c4b76232bb72897685d19b3d264c6ee3005bc2b",
"referenceCount": 134,
"citationCount": 15989,
"influentialCitationCount": 2031,
"isOpenAccess": false,
"fieldsOfStudy": [
"Mathematics",
"Computer Science"
]
},
{
"title": "NeMo: a toolkit for building AI applications using Neural Modules",
"abstract": "NeMo (Neural Modules) is a Python framework-agnostic toolkit for creating AI applications through re-usability, abstraction, and composition. NeMo is built around neural modules, conceptual blocks of neural networks that take typed inputs and produce typed outputs. Such modules typically represent data layers, encoders, decoders, language models, loss functions, or methods of combining activations. NeMo makes it easy to combine and re-use these building blocks while providing a level of semantic correctness checking via its neural type system. The toolkit comes with extendable collections of pre-built modules for automatic speech recognition and natural language processing. Furthermore, NeMo provides built-in support for distributed training and mixed precision on latest NVIDIA GPUs. NeMo is open-source this https URL",
"year": 2019,
"venue": "arXiv.org",
"authors": [
"Oleksii Kuchaiev",
"Jason Li",
"Huyen Nguyen",
"Oleksii Hrinchuk",
"Ryan Leary",
"Boris Ginsburg",
"Samuel Kriman",
"Stanislav Beliaev",
"Vitaly Lavrukhin",
"Jack Cook",
"P. Castonguay",
"Mariya Popova",
"Jocelyn Huang",
"Jonathan M. Cohen"
],
"externalIds": {
"ArXiv": "1909.09577",
"DBLP": "journals/corr/abs-1909-09577",
"MAG": "2974231335",
"CorpusId": 202712805
},
"url": "https://www.semanticscholar.org/paper/3f5f0899751f4222217e1b0c39c5ee8eac527f5c",
"referenceCount": 18,
"citationCount": 244,
"influentialCitationCount": 13,
"isOpenAccess": false,
"fieldsOfStudy": [
"Mathematics",
"Computer Science",
"Engineering"
]
},
{
"title": "PubMedQA: A Dataset for Biomedical Research Question Answering",
"abstract": "We introduce PubMedQA, a novel biomedical question answering (QA) dataset collected from PubMed abstracts. The task of PubMedQA is to answer research questions with yes/no/maybe (e.g.: Do preoperative statins reduce atrial fibrillation after coronary artery bypass grafting?) using the corresponding abstracts. PubMedQA has 1k expert-annotated, 61.2k unlabeled and 211.3k artificially generated QA instances. Each PubMedQA instance is composed of (1) a question which is either an existing research article title or derived from one, (2) a context which is the corresponding abstract without its conclusion, (3) a long answer, which is the conclusion of the abstract and, presumably, answers the research question, and (4) a yes/no/maybe answer which summarizes the conclusion. PubMedQA is the first QA dataset where reasoning over biomedical research texts, especially their quantitative contents, is required to answer the questions. Our best performing model, multi-phase fine-tuning of BioBERT with long answer bag-of-word statistics as additional supervision, achieves 68.1% accuracy, compared to single human performance of 78.0% accuracy and majority-baseline of 55.2% accuracy, leaving much room for improvement. PubMedQA is publicly available at https://pubmedqa.github.io.",
"year": 2019,
"venue": "Conference on Empirical Methods in Natural Language Processing",
"authors": [
"Qiao Jin",
"Bhuwan Dhingra",
"Zhengping Liu",
"William W. Cohen",
"Xinghua Lu"
],
"externalIds": {
"ArXiv": "1909.06146",
"ACL": "D19-1259",
"MAG": "2972522091",
"DBLP": "conf/emnlp/JinDLCL19",
"DOI": "10.18653/v1/D19-1259",
"CorpusId": 202572622
},
"url": "https://www.semanticscholar.org/paper/0c3c4c88c7b07596221ac640c7b7102686e3eae3",
"referenceCount": 23,
"citationCount": 555,
"influentialCitationCount": 85,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Biology"
]
},
{
"title": "Deep learning enables rapid identification of potent DDR1 kinase inhibitors",
"abstract": null,
"year": 2019,
"venue": "Nature Biotechnology",
"authors": [
"A. Zhavoronkov",
"Y. Ivanenkov",
"A. Aliper",
"M. Veselov",
"V. Aladinskiy",
"Anastasiya V Aladinskaya",
"V. Terentiev",
"Daniil Polykovskiy",
"Maksim Kuznetsov",
"Arip Asadulaev",
"Yury Volkov",
"Artem Zholus",
"Shayakhmetov Rim",
"Alexander Zhebrak",
"L. Minaeva",
"B. Zagribelnyy",
"Lennart H Lee",
"R. Soll",
"D. Madge",
"Li Xing",
"Tao Guo",
"Alán Aspuru-Guzik"
],
"externalIds": {
"MAG": "2971690404",
"DOI": "10.1038/s41587-019-0224-x",
"CorpusId": 201716327,
"PubMed": "31477924"
},
"url": "https://www.semanticscholar.org/paper/d44ac0a7fd4734187bccafc4a2771027b8bb595e",
"referenceCount": 29,
"citationCount": 765,
"influentialCitationCount": 18,
"isOpenAccess": false,
"fieldsOfStudy": [
"Medicine",
"Biology"
]
},
{
"title": "Trends in clinical success rates and therapeutic focus",
"abstract": null,
"year": 2019,
"venue": "Nature reviews. Drug discovery",
"authors": [
"H. Dowden",
"Jamie Munro"
],
"externalIds": {
"MAG": "2944466104",
"DOI": "10.1038/d41573-019-00074-z",
"CorpusId": 164564888,
"PubMed": "31267067"
},
"url": "https://www.semanticscholar.org/paper/2d7d08db8a71301d439276616e84946b15d0ca9f",
"referenceCount": 0,
"citationCount": 342,
"influentialCitationCount": 15,
"isOpenAccess": false,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "BioBERT: a pre-trained biomedical language representation model for biomedical text mining",
"abstract": "Abstract Motivation Biomedical text mining is becoming increasingly important as the number of biomedical documents rapidly grows. With the progress in natural language processing (NLP), extracting valuable information from biomedical literature has gained popularity among researchers, and deep learning has boosted the development of effective biomedical text mining models. However, directly applying the advancements in NLP to biomedical text mining often yields unsatisfactory results due to a word distribution shift from general domain corpora to biomedical corpora. In this article, we investigate how the recently introduced pre-trained language model BERT can be adapted for biomedical corpora. Results We introduce BioBERT (Bidirectional Encoder Representations from Transformers for Biomedical Text Mining), which is a domain-specific language representation model pre-trained on large-scale biomedical corpora. With almost the same architecture across tasks, BioBERT largely outperforms BERT and previous state-of-the-art models in a variety of biomedical text mining tasks when pre-trained on biomedical corpora. While BERT obtains performance comparable to that of previous state-of-the-art models, BioBERT significantly outperforms them on the following three representative biomedical text mining tasks: biomedical named entity recognition (0.62% F1 score improvement), biomedical relation extraction (2.80% F1 score improvement) and biomedical question answering (12.24% MRR improvement). Our analysis results show that pre-training BERT on biomedical corpora helps it to understand complex biomedical texts. Availability and implementation We make the pre-trained weights of BioBERT freely available at https://github.com/naver/biobert-pretrained, and the source code for fine-tuning BioBERT available at https://github.com/dmis-lab/biobert.",
"year": 2019,
"venue": "Bioinform.",
"authors": [
"Jinhyuk Lee",
"Wonjin Yoon",
"Sungdong Kim",
"Donghyeon Kim",
"Sunkyu Kim",
"Chan Ho So",
"Jaewoo Kang"
],
"externalIds": {
"MAG": "2972964850",
"ArXiv": "1901.08746",
"DBLP": "journals/bioinformatics/LeeYKKKSK20",
"PubMedCentral": "7703786",
"DOI": "10.1093/bioinformatics/btz682",
"CorpusId": 59291975,
"PubMed": "31501885"
},
"url": "https://www.semanticscholar.org/paper/1e43c7084bdcb6b3102afaf301cce10faead2702",
"referenceCount": 45,
"citationCount": 4720,
"influentialCitationCount": 679,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Medicine"
]
},
{
"title": "Molecular Sets (MOSES): A Benchmarking Platform for Molecular Generation Models",
"abstract": "Generative models are becoming a tool of choice for exploring the molecular space. These models learn on a large training dataset and produce novel molecular structures with similar properties. Generated structures can be utilized for virtual screening or training semi-supervized predictive models in the downstream tasks. While there are plenty of generative models, it is unclear how to compare and rank them. In this work, we introduce a benchmarking platform called Molecular Sets (MOSES) to standardize training and comparison of molecular generative models. MOSES provides training and testing datasets, and a set of metrics to evaluate the quality and diversity of generated structures. We have implemented and compared several molecular generation models and suggest to use our results as reference points for further advancements in generative chemistry research. The platform and source code are available at https://github.com/molecularsets/moses.",
"year": 2018,
"venue": "Frontiers in Pharmacology",
"authors": [
"Daniil Polykovskiy",
"Alexander Zhebrak",
"Benjamín Sánchez-Lengeling",
"Sergey Golovanov",
"Oktai Tatanov",
"Stanislav Belyaev",
"Rauf Kurbanov",
"A. Artamonov",
"V. Aladinskiy",
"M. Veselov",
"Artur Kadurin",
"S. Nikolenko",
"Alán Aspuru-Guzik",
"A. Zhavoronkov"
],
"externalIds": {
"DBLP": "journals/corr/abs-1811-12823",
"MAG": "2903425689",
"PubMedCentral": "7775580",
"ArXiv": "1811.12823",
"DOI": "10.3389/fphar.2020.565644",
"CorpusId": 54434517,
"PubMed": "33390943"
},
"url": "https://www.semanticscholar.org/paper/404571933e3e87942768c9a5cde8a6285732ad6f",
"referenceCount": 124,
"citationCount": 532,
"influentialCitationCount": 77,
"isOpenAccess": true,
"fieldsOfStudy": [
"Mathematics",
"Medicine",
"Computer Science"
]
},
{
"title": "Molecular Transformer: A Model for Uncertainty-Calibrated Chemical Reaction Prediction",
"abstract": "Organic synthesis is one of the key stumbling blocks in medicinal chemistry. A necessary yet unsolved step in planning synthesis is solving the forward problem: Given reactants and reagents, predict the products. Similar to other work, we treat reaction prediction as a machine translation problem between simplified molecular-input line-entry system (SMILES) strings (a text-based representation) of reactants, reagents, and the products. We show that a multihead attention Molecular Transformer model outperforms all algorithms in the literature, achieving a top-1 accuracy above 90% on a common benchmark data set. Molecular Transformer makes predictions by inferring the correlations between the presence and absence of chemical motifs in the reactant, reagent, and product present in the data set. Our model requires no handcrafted rules and accurately predicts subtle chemical transformations. Crucially, our model can accurately estimate its own uncertainty, with an uncertainty score that is 89% accurate in terms of classifying whether a prediction is correct. Furthermore, we show that the model is able to handle inputs without a reactant–reagent split and including stereochemistry, which makes our method universally applicable.",
"year": 2018,
"venue": "ACS Central Science",
"authors": [
"P. Schwaller",
"T. Laino",
"T. Gaudin",
"P. Bolgar",
"Constantine Bekas",
"A. Lee"
],
"externalIds": {
"ArXiv": "1811.02633",
"DBLP": "journals/corr/abs-1811-02633",
"MAG": "3103092523",
"PubMedCentral": "6764164",
"DOI": "10.1021/acscentsci.9b00576",
"CorpusId": 53232749,
"PubMed": "31572784"
},
"url": "https://www.semanticscholar.org/paper/de20b6488e148a19ae6c63defbfca8a6373e4110",
"referenceCount": 61,
"citationCount": 621,
"influentialCitationCount": 27,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Physics",
"Medicine"
]
},
{
"title": "Entangled Conditional Adversarial Autoencoder for de Novo Drug Discovery.",
"abstract": "Modern computational approaches and machine learning techniques accelerate the invention of new drugs. Generative models can discover novel molecular structures within hours, while conventional drug discovery pipelines require months of work. In this article, we propose a new generative architecture, entangled conditional adversarial autoencoder, that generates molecular structures based on various properties, such as activity against a specific protein, solubility, or ease of synthesis. We apply the proposed model to generate a novel inhibitor of Janus kinase 3, implicated in rheumatoid arthritis, psoriasis, and vitiligo. The discovered molecule was tested in vitro and showed good activity and selectivity.",
"year": 2018,
"venue": "Molecular Pharmaceutics",
"authors": [
"Daniil Polykovskiy",
"Alexander Zhebrak",
"D. Vetrov",
"Y. Ivanenkov",
"V. Aladinskiy",
"Polina Mamoshina",
"M. Bozdaganyan",
"A. Aliper",
"A. Zhavoronkov",
"Artur Kadurin"
],
"externalIds": {
"MAG": "2891868449",
"DOI": "10.1021/acs.molpharmaceut.8b00839",
"CorpusId": 52157845,
"PubMed": "30180591"
},
"url": "https://www.semanticscholar.org/paper/00747ad8ecbaa2478f929747c78b312647df9e68",
"referenceCount": 61,
"citationCount": 191,
"influentialCitationCount": 4,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine",
"Computer Science"
]
},
{
"title": "A Corpus with Multi-Level Annotations of Patients, Interventions and Outcomes to Support Language Processing for Medical Literature",
"abstract": "We present a corpus of 5,000 richly annotated abstracts of medical articles describing clinical randomized controlled trials. Annotations include demarcations of text spans that describe the Patient population enrolled, the Interventions studied and to what they were Compared, and the Outcomes measured (the ‘PICO’ elements). These spans are further annotated at a more granular level, e.g., individual interventions within them are marked and mapped onto a structured medical vocabulary. We acquired annotations from a diverse set of workers with varying levels of expertise and cost. We describe our data collection process and the corpus itself in detail. We then outline a set of challenging NLP tasks that would aid searching of the medical literature and the practice of evidence-based medicine.",
"year": 2018,
"venue": "Annual Meeting of the Association for Computational Linguistics",
"authors": [
"Benjamin E. Nye",
"Junyi Jessy Li",
"Roma Patel",
"Yinfei Yang",
"I. Marshall",
"A. Nenkova",
"Byron C. Wallace"
],
"externalIds": {
"ACL": "P18-1019",
"MAG": "2798373858",
"DBLP": "conf/acl/NenkovaLYMWNP18",
"ArXiv": "1806.04185",
"DOI": "10.18653/v1/P18-1019",
"CorpusId": 48353672,
"PubMed": "30305770"
},
"url": "https://www.semanticscholar.org/paper/0a78873e41615798d09391d9f40d41666b8c9beb",
"referenceCount": 38,
"citationCount": 198,
"influentialCitationCount": 32,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Medicine"
]
},
{
"title": "SciTaiL: A Textual Entailment Dataset from Science Question Answering",
"abstract": "\n \n We present a new dataset and model for textual entailment, derived from treating multiple-choice question-answering as an entailment problem. SciTail is the first entailment set that is created solely from natural sentences that already exist independently ``in the wild'' rather than sentences authored specifically for the entailment task. Different from existing entailment datasets, we create hypotheses from science questions and the corresponding answer candidates, and premises from relevant web sentences retrieved from a large corpus. These sentences are often linguistically challenging. This, combined with the high lexical similarity of premise and hypothesis for both entailed and non-entailed pairs, makes this new entailment task particularly difficult. The resulting challenge is evidenced by state-of-the-art textual entailment systems achieving mediocre performance on SciTail, especially in comparison to a simple majority class baseline. As a step forward, we demonstrate that one can improve accuracy on SciTail by 5% using a new neural model that exploits linguistic structure.\n \n",
"year": 2018,
"venue": "AAAI Conference on Artificial Intelligence",
"authors": [
"Tushar Khot",
"Ashish Sabharwal",
"Peter Clark"
],
"externalIds": {
"DBLP": "conf/aaai/KhotSC18",
"MAG": "2788496822",
"DOI": "10.1609/aaai.v32i1.12022",
"CorpusId": 24462950
},
"url": "https://www.semanticscholar.org/paper/cf8c493079702ec420ab4fc9c0fabb56b2a16c84",
"referenceCount": 36,
"citationCount": 443,
"influentialCitationCount": 87,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Adversarial Threshold Neural Computer for Molecular de Novo Design.",
"abstract": "In this article, we propose the deep neural network Adversarial Threshold Neural Computer (ATNC). The ATNC model is intended for the de novo design of novel small-molecule organic structures. The model is based on generative adversarial network architecture and reinforcement learning. ATNC uses a Differentiable Neural Computer as a generator and has a new specific block, called adversarial threshold (AT). AT acts as a filter between the agent (generator) and the environment (discriminator + objective reward functions). Furthermore, to generate more diverse molecules we introduce a new objective reward function named Internal Diversity Clustering (IDC). In this work, ATNC is tested and compared with the ORGANIC model. Both models were trained on the SMILES string representation of the molecules, using four objective functions (internal similarity, Muegge druglikeness filter, presence or absence of sp3-rich fragments, and IDC). The SMILES representations of 15K druglike molecules from the ChemDiv collection were used as a training data set. For the different functions, ATNC outperforms ORGANIC. Combined with the IDC, ATNC generates 72% of valid and 77% of unique SMILES strings, while ORGANIC generates only 7% of valid and 86% of unique SMILES strings. For each set of molecules generated by ATNC and ORGANIC, we analyzed distributions of four molecular descriptors (number of atoms, molecular weight, logP, and tpsa) and calculated five chemical statistical features (internal diversity, number of unique heterocycles, number of clusters, number of singletons, and number of compounds that have not been passed through medicinal chemistry filters). Analysis of key molecular descriptors and chemical statistical features demonstrated that the molecules generated by ATNC elicited better druglikeness properties. We also performed in vitro validation of the molecules generated by ATNC; results indicated that ATNC is an effective method for producing hit compounds.",
"year": 2018,
"venue": "Molecular Pharmaceutics",
"authors": [
"E. Putin",
"Arip Asadulaev",
"Q. Vanhaelen",
"Y. Ivanenkov",
"Anastasia V Aladinskaya",
"A. Aliper",
"A. Zhavoronkov"
],
"externalIds": {
"MAG": "2793945656",
"DOI": "10.1021/acs.molpharmaceut.7b01137",
"CorpusId": 4475537,
"PubMed": "29569445"
},
"url": "https://www.semanticscholar.org/paper/0a983a9ac014489e6cc5457c8d5bada5aa31bec0",
"referenceCount": 3,
"citationCount": 163,
"influentialCitationCount": 2,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Medicine"
]
},
{
"title": "Generating Focused Molecule Libraries for Drug Discovery with Recurrent Neural Networks",
"abstract": "In de novo drug design, computational strategies are used to generate novel molecules with good affinity to the desired biological target. In this work, we show that recurrent neural networks can be trained as generative models for molecular structures, similar to statistical language models in natural language processing. We demonstrate that the properties of the generated molecules correlate very well with the properties of the molecules used to train the model. In order to enrich libraries with molecules active toward a given biological target, we propose to fine-tune the model with small sets of molecules, which are known to be active against that target. Against Staphylococcus aureus, the model reproduced 14% of 6051 hold-out test molecules that medicinal chemists designed, whereas against Plasmodium falciparum (Malaria), it reproduced 28% of 1240 test molecules. When coupled with a scoring function, our model can perform the complete de novo drug design cycle to generate large sets of novel molecules for drug discovery.",
"year": 2017,
"venue": "ACS Central Science",
"authors": [
"Marwin H. S. Segler",
"T. Kogej",
"C. Tyrchan",
"M. Waller"
],
"externalIds": {
"PubMedCentral": "5785775",
"MAG": "2578240541",
"DOI": "10.1021/acscentsci.7b00512",
"CorpusId": 27688749,
"PubMed": "29392184"
},
"url": "https://www.semanticscholar.org/paper/1cb789ab8925bda02758bcb69eb0ed1547b5f4b9",
"referenceCount": 76,
"citationCount": 1113,
"influentialCitationCount": 47,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Medicine"
]
},
{
"title": "BIOSSES: a semantic sentence similarity estimation system for the biomedical domain",
"abstract": "Motivation: The amount of information available in textual format is rapidly increasing in the biomedical domain. Therefore, natural language processing (NLP) applications are becoming increasingly important to facilitate the retrieval and analysis of these data. Computing the semantic similarity between sentences is an important component in many NLP tasks including text retrieval and summarization. A number of approaches have been proposed for semantic sentence similarity estimation for generic English. However, our experiments showed that such approaches do not effectively cover biomedical knowledge and produce poor results for biomedical text. Methods: We propose several approaches for sentence‐level semantic similarity computation in the biomedical domain, including string similarity measures and measures based on the distributed vector representations of sentences learned in an unsupervised manner from a large biomedical corpus. In addition, ontology‐based approaches are presented that utilize general and domain‐specific ontologies. Finally, a supervised regression based model is developed that effectively combines the different similarity computation metrics. A benchmark data set consisting of 100 sentence pairs from the biomedical literature is manually annotated by five human experts and used for evaluating the proposed methods. Results: The experiments showed that the supervised semantic sentence similarity computation approach obtained the best performance (0.836 correlation with gold standard human annotations) and improved over the state‐of‐the‐art domain‐independent systems up to 42.6% in terms of the Pearson correlation metric. Availability and implementation: A web‐based system for biomedical semantic sentence similarity computation, the source code, and the annotated benchmark data set are available at: http://tabilab.cmpe.boun.edu.tr/BIOSSES/. Contact: gizemsogancioglu@gmail.com or arzucan.ozgur@boun.edu.tr",
"year": 2017,
"venue": "Bioinform.",
"authors": [
"Gizem Sogancioglu",
"Hakime Öztürk",
"Arzucan Özgür"
],
"externalIds": {
"DBLP": "journals/bioinformatics/SoganciogluOO17",
"MAG": "2735784619",
"PubMedCentral": "5870675",
"DOI": "10.1093/bioinformatics/btx238",
"CorpusId": 3778978,
"PubMed": "28881973"
},
"url": "https://www.semanticscholar.org/paper/d01450f9f0f7603e599f4e3ed4aa3eee1468af84",
"referenceCount": 69,
"citationCount": 155,
"influentialCitationCount": 21,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Medicine"
]
},
{
"title": "Attention is All you Need",
"abstract": "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.",
"year": 2017,
"venue": "Neural Information Processing Systems",
"authors": [
"Ashish Vaswani",
"Noam M. Shazeer",
"Niki Parmar",
"Jakob Uszkoreit",
"Llion Jones",
"Aidan N. Gomez",
"Lukasz Kaiser",
"Illia Polosukhin"
],
"externalIds": {
"MAG": "2963403868",
"DBLP": "conf/nips/VaswaniSPUJGKP17",
"ArXiv": "1706.03762",
"CorpusId": 13756489
},
"url": "https://www.semanticscholar.org/paper/204e3073870fae3d05bcbc2f6a8e263d9b72e776",
"referenceCount": 41,
"citationCount": 105006,
"influentialCitationCount": 15361,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "MoleculeNet: a benchmark for molecular machine learning",
"abstract": "A large scale benchmark for molecular machine learning consisting of multiple public datasets, metrics, featurizations and learning algorithms.",
"year": 2017,
"venue": "Chemical Science",
"authors": [
"Zhenqin Wu",
"Bharath Ramsundar",
"Evan N. Feinberg",
"Joseph Gomes",
"C. Geniesse",
"Aneesh S. Pappu",
"K. Leswing",
"V. Pande"
],
"externalIds": {
"MAG": "2949858440",
"PubMedCentral": "5868307",
"ArXiv": "1703.00564",
"DBLP": "journals/corr/WuRFGGPLP17",
"DOI": "10.1039/c7sc02664a",
"CorpusId": 217680306,
"PubMed": "29629118"
},
"url": "https://www.semanticscholar.org/paper/d0ab11de3077490c80a08abd0fb8827bac84c454",
"referenceCount": 124,
"citationCount": 1488,
"influentialCitationCount": 265,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Medicine",
"Physics",
"Mathematics"
]
},
{
"title": "Automatic Chemical Design Using a Data-Driven Continuous Representation of Molecules",
"abstract": "We report a method to convert discrete representations of molecules to and from a multidimensional continuous representation. This model allows us to generate new molecules for efficient exploration and optimization through open-ended spaces of chemical compounds. A deep neural network was trained on hundreds of thousands of existing chemical structures to construct three coupled functions: an encoder, a decoder, and a predictor. The encoder converts the discrete representation of a molecule into a real-valued continuous vector, and the decoder converts these continuous vectors back to discrete molecular representations. The predictor estimates chemical properties from the latent continuous vector representation of the molecule. Continuous representations of molecules allow us to automatically generate novel chemical structures by performing simple operations in the latent space, such as decoding random vectors, perturbing known chemical structures, or interpolating between molecules. Continuous representations also allow the use of powerful gradient-based optimization to efficiently guide the search for optimized functional compounds. We demonstrate our method in the domain of drug-like molecules and also in a set of molecules with fewer that nine heavy atoms.",
"year": 2016,
"venue": "ACS Central Science",
"authors": [
"Rafael Gómez-Bombarelli",
"D. Duvenaud",
"José Miguel Hernández-Lobato",
"J. Aguilera-Iparraguirre",
"Timothy D. Hirzel",
"Ryan P. Adams",
"Alán Aspuru-Guzik"
],
"externalIds": {
"MAG": "2529996553",
"DBLP": "journals/corr/Gomez-Bombarelli16",
"ArXiv": "1610.02415",
"PubMedCentral": "5833007",
"DOI": "10.1021/acscentsci.7b00572",
"CorpusId": 3347345,
"PubMed": "29532027"
},
"url": "https://www.semanticscholar.org/paper/88427d2143d4cff357c3b393ae7580a7b6e19940",
"referenceCount": 73,
"citationCount": 2609,
"influentialCitationCount": 162,
"isOpenAccess": true,
"fieldsOfStudy": [
"Mathematics",
"Computer Science",
"Physics",
"Medicine"
]
},
{
"title": "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
"abstract": null,
"year": 2016,
"venue": "",
"authors": [
"Sobhan Yassipour Tehrani",
"S. Zschaler",
"K. Lano"
],
"externalIds": {
"MAG": "2619216210",
"DOI": "10.1007/978-3-319-42064-6_9",
"CorpusId": 30111068
},
"url": "https://www.semanticscholar.org/paper/8d7be4f06dce11741df1a64e26bac93d8a52a9bc",
"referenceCount": 0,
"citationCount": 153,
"influentialCitationCount": 17,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Deep Learning Applications for Predicting Pharmacological Properties of Drugs and Drug Repurposing Using Transcriptomic Data.",
"abstract": "Deep learning is rapidly advancing many areas of science and technology with multiple success stories in image, text, voice and video recognition, robotics, and autonomous driving. In this paper we demonstrate how deep neural networks (DNN) trained on large transcriptional response data sets can classify various drugs to therapeutic categories solely based on their transcriptional profiles. We used the perturbation samples of 678 drugs across A549, MCF-7, and PC-3 cell lines from the LINCS Project and linked those to 12 therapeutic use categories derived from MeSH. To train the DNN, we utilized both gene level transcriptomic data and transcriptomic data processed using a pathway activation scoring algorithm, for a pooled data set of samples perturbed with different concentrations of the drug for 6 and 24 hours. In both pathway and gene level classification, DNN achieved high classification accuracy and convincingly outperformed the support vector machine (SVM) model on every multiclass classification problem, however, models based on pathway level data performed significantly better. For the first time we demonstrate a deep learning neural net trained on transcriptomic data to recognize pharmacological properties of multiple drugs across different biological systems and conditions. We also propose using deep neural net confusion matrices for drug repositioning. This work is a proof of principle for applying deep learning to drug discovery and development.",
"year": 2016,
"venue": "Molecular Pharmaceutics",
"authors": [
"A. Aliper",
"S. Plis",
"Artem V. Artemov",
"Alvaro E. Ulloa",
"Polina Mamoshina",
"A. Zhavoronkov"
],
"externalIds": {
"MAG": "2397757171",
"DOI": "10.1021/acs.molpharmaceut.6b00248",
"CorpusId": 206686308,
"PubMed": "27200455"
},
"url": "https://www.semanticscholar.org/paper/61938257dcd466f868c5c81ac5b2eebec0fdd0ee",
"referenceCount": 37,
"citationCount": 410,
"influentialCitationCount": 8,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine",
"Computer Science"
]
},
{
"title": "BioCreative V CDR task corpus: a resource for chemical disease relation extraction",
"abstract": "Community-run, formal evaluations and manually annotated text corpora are critically important for advancing biomedical text-mining research. Recently in BioCreative V, a new challenge was organized for the tasks of disease named entity recognition (DNER) and chemical-induced disease (CID) relation extraction. Given the nature of both tasks, a test collection is required to contain both disease/chemical annotations and relation annotations in the same set of articles. Despite previous efforts in biomedical corpus construction, none was found to be sufficient for the task. Thus, we developed our own corpus called BC5CDR during the challenge by inviting a team of Medical Subject Headings (MeSH) indexers for disease/chemical entity annotation and Comparative Toxicogenomics Database (CTD) curators for CID relation annotation. To ensure high annotation quality and productivity, detailed annotation guidelines and automatic annotation tools were provided. The resulting BC5CDR corpus consists of 1500 PubMed articles with 4409 annotated chemicals, 5818 diseases and 3116 chemical-disease interactions. Each entity annotation includes both the mention text spans and normalized concept identifiers, using MeSH as the controlled vocabulary. To ensure accuracy, the entities were first captured independently by two annotators followed by a consensus annotation: The average inter-annotator agreement (IAA) scores were 87.49% and 96.05% for the disease and chemicals, respectively, in the test set according to the Jaccard similarity coefficient. Our corpus was successfully used for the BioCreative V challenge tasks and should serve as a valuable resource for the text-mining research community. Database URL: http://www.biocreative.org/tasks/biocreative-v/track-3-cdr/",
"year": 2016,
"venue": "Database J. Biol. Databases Curation",
"authors": [
"Jiao Li",
"Yueping Sun",
"Robin J. Johnson",
"D. Sciaky",
"Chih-Hsuan Wei",
"Robert Leaman",
"A. P. Davis",
"C. Mattingly",
"Thomas C. Wiegers",
"Zhiyong Lu"
],
"externalIds": {
"DBLP": "journals/biodb/LiSJSWLDMWL16",
"MAG": "2346452181",
"PubMedCentral": "4860626",
"DOI": "10.1093/database/baw068",
"CorpusId": 88817,
"PubMed": "27161011"
},
"url": "https://www.semanticscholar.org/paper/61322ec6cfc54fe9723d4637239b8fb9938dc501",
"referenceCount": 36,
"citationCount": 787,
"influentialCitationCount": 131,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Medicine"
]
},
{
"title": "ZINC 15 – Ligand Discovery for Everyone",
"abstract": "Many questions about the biological activity and availability of small molecules remain inaccessible to investigators who could most benefit from their answers. To narrow the gap between chemoinformatics and biology, we have developed a suite of ligand annotation, purchasability, target, and biology association tools, incorporated into ZINC and meant for investigators who are not computer specialists. The new version contains over 120 million purchasable “drug-like” compounds – effectively all organic molecules that are for sale – a quarter of which are available for immediate delivery. ZINC connects purchasable compounds to high-value ones such as metabolites, drugs, natural products, and annotated compounds from the literature. Compounds may be accessed by the genes for which they are annotated as well as the major and minor target classes to which those genes belong. It offers new analysis tools that are easy for nonspecialists yet with few limitations for experts. ZINC retains its original 3D roots – all molecules are available in biologically relevant, ready-to-dock formats. ZINC is freely available at http://zinc15.docking.org.",
"year": 2015,
"venue": "Journal of Chemical Information and Modeling",
"authors": [
"T. Sterling",
"J. Irwin"
],
"externalIds": {
"PubMedCentral": "4658288",
"MAG": "1757990252",
"DBLP": "journals/jcisd/SterlingI15",
"DOI": "10.1021/acs.jcim.5b00559",
"CorpusId": 327319,
"PubMed": "26479676"
},
"url": "https://www.semanticscholar.org/paper/5972c3d8507359a6cff6ef17c4af206ec76b32bb",
"referenceCount": 109,
"citationCount": 2459,
"influentialCitationCount": 164,
"isOpenAccess": true,
"fieldsOfStudy": [
"Biology",
"Computer Science",
"Medicine"
]
},
{
"title": "Extraction of relations between genes and diseases from text and large-scale data analysis: implications for translational research",
"abstract": null,
"year": 2014,
"venue": "BMC Bioinformatics",
"authors": [
"À. Bravo",
"J. Piñero",
"N. Queralt",
"Michael Rautschka",
"L. Furlong"
],
"externalIds": {
"MAG": "1563147074",
"PubMedCentral": "4466840",
"DBLP": "journals/bmcbi/BravoPQRF15",
"DOI": "10.1186/s12859-015-0472-9",
"CorpusId": 5978918,
"PubMed": "25886734"
},
"url": "https://www.semanticscholar.org/paper/cfb4edb7541fafcf593b466320c63ae32d27f57e",
"referenceCount": 52,
"citationCount": 245,
"influentialCitationCount": 17,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Medicine",
"Biology"
]
},
{
"title": "NCBI disease corpus: A resource for disease name recognition and concept normalization",
"abstract": null,
"year": 2014,
"venue": "Journal of Biomedical Informatics",
"authors": [
"R. Dogan",
"Robert Leaman",
"Zhiyong Lu"
],
"externalIds": {
"MAG": "2291931301",
"DBLP": "journals/jbi/DoganLL14",
"DOI": "10.1016/j.jbi.2013.12.006",
"CorpusId": 234064,
"PubMed": "24393765"
},
"url": "https://www.semanticscholar.org/paper/696753d59185436ec95ecf3021c413f353be4874",
"referenceCount": 50,
"citationCount": 762,
"influentialCitationCount": 75,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Medicine"
]
},
{
"title": "The DDI corpus: An annotated corpus with pharmacological substances and drug-drug interactions",
"abstract": null,
"year": 2013,
"venue": "Journal of Biomedical Informatics",
"authors": [
"María Herrero-Zazo",
"Isabel Segura-Bedmar",
"Paloma Martínez",
"Thierry Declerck"
],
"externalIds": {
"DBLP": "journals/jbi/Herrero-ZazoSMD13",
"MAG": "2170189740",
"DOI": "10.1016/j.jbi.2013.07.011",
"CorpusId": 23935739,
"PubMed": "23906817"
},
"url": "https://www.semanticscholar.org/paper/6ea353ada2b89763f58d8068a74b2e6def526948",
"referenceCount": 43,
"citationCount": 378,
"influentialCitationCount": 40,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine",
"Computer Science"
]
},
{
"title": "The hallmarks of cancer",
"abstract": "With the advent of next generation sequencing methods and progress in transcriptome analysis, it became obvious that the human genome contains much more than just protein-coding genes. In fact, up to 70% of our genome is transcribed into RNA that does not serve as templates for proteins. In this review, we focus on the emerging roles of these long non-coding RNAs (lncRNAs) in the field of tumor biology. Long ncRNAs were found to be deregulated in several human cancers and show tissue-specific expression. Functional studies revealed a broad spectrum of mechanisms applied by lncRNAs such as HOTAIR, MALAT1, ANRIL or lincRNA-p21 to fulfill their functions. Here, we link the cellular processes influenced by long ncRNAs to the hallmarks of cancer and therefore provide an ncRNA point-of-view on tumor biology. This should stimulate new research directions and therapeutic options considering long ncRNAs as novel prognostic markers and therapeutic targets.",
"year": 2012,
"venue": "RNA Biology",
"authors": [
"Tony Gutschner",
"S. Diederichs"
],
"externalIds": {
"MAG": "1976149758",
"PubMedCentral": "3495743",
"DOI": "10.4161/rna.20481",
"CorpusId": 2535316,
"PubMed": "22664915"
},
"url": "https://www.semanticscholar.org/paper/1470722bd776c4c5b1bc7a6cbcf9ff93c952461f",
"referenceCount": 218,
"citationCount": 21300,
"influentialCitationCount": 795,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine",
"Biology"
]
},
{
"title": "Overview of BioCreative II gene mention recognition",
"abstract": null,
"year": 2008,
"venue": "Genome Biology",
"authors": [
"Larry L. Smith",
"L. Tanabe",
"Rie Ando",
"Cheng-Ju Kuo",
"I. Chung",
"Chun-Nan Hsu",
"Yu-Shi Lin",
"Roman Klinger",
"C. Friedrich",
"Kuzman Ganchev",
"Manabu Torii",
"Hongfang Liu",
"B. Haddow",
"C. Struble",
"R. Povinelli",
"Andreas Vlachos",
"W. Baumgartner",
"L. Hunter",
"Bob Carpenter",
"Richard Tzong-Han Tsai",
"Hong-Jie Dai",
"Feng Liu",
"Yifei Chen",
"Chengjie Sun",
"S. Katrenko",
"P. Adriaans",
"C. Blaschke",
"Rafael Torres",
"M. Neves",
"Preslav Nakov",
"A. Divoli",
"M. Maña-López",
"J. Mata",
"W. Wilbur"
],
"externalIds": {
"PubMedCentral": "2559986",
"MAG": "2154142897",
"DOI": "10.1186/gb-2008-9-s2-s2",
"CorpusId": 215780186,
"PubMed": "18834493"
},
"url": "https://www.semanticscholar.org/paper/f5a0c6593ba95d23c025608ce9280848da8b929f",
"referenceCount": 54,
"citationCount": 485,
"influentialCitationCount": 42,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine",
"Biology"
]
},
{
"title": "Introduction to the Bio-entity Recognition Task at JNLPBA",
"abstract": "We describe here the JNLPBA shared task of bio-entity recognition using an extended version of the GENIA version 3 named entity corpus of MEDLINE abstracts. We provide background information on the task and present a general discussion of the approaches taken by participating systems.",
"year": 2004,
"venue": "NLPBA/BioNLP",
"authors": [
"Nigel Collier",
"Jin-Dong Kim"
],
"externalIds": {
"MAG": "2047782770",
"ACL": "W04-1213",
"DBLP": "conf/bionlp/CollierK04",
"DOI": "10.3115/1567594.1567610",
"CorpusId": 7985741
},
"url": "https://www.semanticscholar.org/paper/3bd4d2de49d8a092abb295b845dba14874f8787d",
"referenceCount": 14,
"citationCount": 689,
"influentialCitationCount": 82,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "SMILES, a chemical language and information system. 1. Introduction to methodology and encoding rules",
"abstract": "18-24.",
"year": 1988,
"venue": "Journal of chemical information and computer sciences",
"authors": [
"D. Weininger"
],
"externalIds": {
"MAG": "1975147762",
"DBLP": "journals/jcisd/Weininger88",
"DOI": "10.1021/ci00057a005",
"CorpusId": 5445756
},
"url": "https://www.semanticscholar.org/paper/3f7983818b76a5f1b5daf9b605877ed401c8e73c",
"referenceCount": 18,
"citationCount": 5268,
"influentialCitationCount": 339,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Database",
"abstract": null,
"year": 1985,
"venue": "",
"authors": [
"H. Chandler"
],
"externalIds": {
"DOI": "10.1177/002221948501800715",
"CorpusId": 220675618
},
"url": "https://www.semanticscholar.org/paper/01297b19ec00f5487a522a573ff6e0a9aeac4f05",
"referenceCount": 1,
"citationCount": 3121,
"influentialCitationCount": 226,
"isOpenAccess": true,
"fieldsOfStudy": null
},
{
"title": "Digital Discovery , 2023, 2 (3",
"abstract": null,
"year": 2024,
"venue": "Chem. Sci.",
"authors": [],
"externalIds": null,
"url": null,
"referenceCount": null,
"citationCount": null,
"influentialCitationCount": null,
"isOpenAccess": null,
"fieldsOfStudy": null
},
{
"title": "A Comprehensive Evaluation of Biomedical Entity-centric Search",
"abstract": "Biomedical information retrieval has often been studied as a task of detecting whether a system correctly detects entity spans and links these entities to concepts from a given terminology. Most academic research has focused on evaluation of named entity recognition (NER) and entity linking (EL) models which are key components to recognizing diseases and genes in PubMed abstracts. In this work, we perform a fine-grained evaluation intended to understand the efficiency of state-of-the-art BERT-based information extraction (IE) architecture as a biomedical search engine. We present a novel manually annotated dataset of abstracts for disease and gene search. The dataset contains 23K query-abstract pairs, where 152 queries are selected from logs of our target discovery platform and PubMed abstracts annotated with relevance judgments. Specifically, the query list also includes a subset of concepts with at least one ambiguous concept name. As a base-line, we use off-she-shelf Elasticsearch with BM25. Our experiments on NER, EL, and retrieval in a zero-shot setup show the neural IE architecture shows superior performance for both disease and gene concept queries.",
"year": 2022,
"venue": "Conference on Empirical Methods in Natural Language Processing",
"authors": [
"E. Tutubalina",
"Z. Miftahutdinov",
"Vladimir Muravlev",
"Anastasia Shneyderman"
],
"externalIds": {
"DBLP": "conf/emnlp/TutubalinaMMS22",
"DOI": "10.18653/v1/2022.emnlp-industry.61",
"CorpusId": 257806387
},
"url": "https://www.semanticscholar.org/paper/ac6c23a5ee616266d902cd7b67e8e70a09edc276",
"referenceCount": 37,
"citationCount": 2,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Text2Mol: Cross-Modal Molecule Retrieval with Natural Language Queries",
"abstract": "We propose a new task, Text2Mol, to retrieve molecules using natural language descriptions as queries. Natural language and molecules encode information in very different ways, which leads to the exciting but challenging problem of integrating these two very different modalities. Although some work has been done on text-based retrieval and structure-based retrieval, this new task requires integrating molecules and natural language more directly. Moreover, this can be viewed as an especially challenging cross-lingual retrieval problem by considering the molecules as a language with a very unique grammar. We construct a paired dataset of molecules and their corresponding text descriptions, which we use to learn an aligned common semantic embedding space for retrieval. We extend this to create a cross-modal attention-based model for explainability and reranking by interpreting the attentions as association rules. We also employ an ensemble approach to integrate our different architectures, which significantly improves results from 0.372 to 0.499 MRR. This new multimodal approach opens a new perspective on solving problems in chemistry literature understanding and molecular machine learning.",
"year": 2021,
"venue": "Conference on Empirical Methods in Natural Language Processing",
"authors": [
"Carl N. Edwards",
"Chengxiang Zhai",
"Heng Ji"
],
"externalIds": {
"DBLP": "conf/emnlp/EdwardsZJ21",
"ACL": "2021.emnlp-main.47",
"DOI": "10.18653/v1/2021.emnlp-main.47",
"CorpusId": 243865204
},
"url": "https://www.semanticscholar.org/paper/57651d65078818821234d13544ac1f29858dcd67",
"referenceCount": 70,
"citationCount": 93,
"influentialCitationCount": 24,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Symmetric Variational Inference with High Mutual Information",
"abstract": null,
"year": 2020,
"venue": "",
"authors": [
"M. Livne"
],
"externalIds": {
"DBLP": "phd/ca/Livne20",
"CorpusId": 251161866
},
"url": "https://www.semanticscholar.org/paper/a7cdafa4bede471636646546f6315ebd6534b20e",
"referenceCount": 0,
"citationCount": 1,
"influentialCitationCount": 0,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding",
"abstract": "We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models (Peters et al., 2018a; Radford et al., 2018), BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5 (7.7 point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).",
"year": 2019,
"venue": "North American Chapter of the Association for Computational Linguistics",
"authors": [
"Jacob Devlin",
"Ming-Wei Chang",
"Kenton Lee",
"Kristina Toutanova"
],
"externalIds": {
"MAG": "2951055169",
"ACL": "N19-1423",
"DBLP": "journals/corr/abs-1810-04805",
"ArXiv": "1810.04805",
"DOI": "10.18653/v1/N19-1423",
"CorpusId": 52967399
},
"url": "https://www.semanticscholar.org/paper/df2b0e26d0599ce3e70df8a9da02e51594e0e992",
"referenceCount": 63,
"citationCount": 81690,
"influentialCitationCount": 19054,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Language Models are Unsupervised Multitask Learners",
"abstract": "Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on taskspecific datasets. We demonstrate that language models begin to learn these tasks without any explicit supervision when trained on a new dataset of millions of webpages called WebText. When conditioned on a document plus questions, the answers generated by the language model reach 55 F1 on the CoQA dataset matching or exceeding the performance of 3 out of 4 baseline systems without using the 127,000+ training examples. The capacity of the language model is essential to the success of zero-shot task transfer and increasing it improves performance in a log-linear fashion across tasks. Our largest model, GPT-2, is a 1.5B parameter Transformer that achieves state of the art results on 7 out of 8 tested language modeling datasets in a zero-shot setting but still underfits WebText. Samples from the model reflect these improvements and contain coherent paragraphs of text. These findings suggest a promising path towards building language processing systems which learn to perform tasks from their naturally occurring demonstrations.",
"year": 2019,
"venue": "",
"authors": [
"Alec Radford",
"Jeff Wu",
"R. Child",
"D. Luan",
"Dario Amodei",
"I. Sutskever"
],
"externalIds": {
"MAG": "2955855238",
"CorpusId": 160025533
},
"url": "https://www.semanticscholar.org/paper/9405cc0d6169988371b2755e573cc28650d14dfe",
"referenceCount": 75,
"citationCount": 18460,
"influentialCitationCount": 3039,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "NeMo: A Toolkit for Conversational AI and Large Language Models",
"abstract": null,
"year": 2019,
"venue": "",
"authors": [],
"externalIds": null,
"url": null,
"referenceCount": null,
"citationCount": null,
"influentialCitationCount": null,
"isOpenAccess": null,
"fieldsOfStudy": null
},
{
"title": "Mednli-a natural language inference dataset for the clinical domain",
"abstract": null,
"year": 2019,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium",
"authors": [],
"externalIds": null,
"url": null,
"referenceCount": null,
"citationCount": null,
"influentialCitationCount": null,
"isOpenAccess": null,
"fieldsOfStudy": null
},
{
"title": "Overview of the BioCreative VI chemical-protein interaction Track",
"abstract": "—Despite the considerable number of available systems that recognize automatically mentions of genes/proteins and chemicals in text, only a limited number of attempts were made so far to extract interactions between them. Most biomedical relation extraction systems focus on the extraction of protein-protein or gene/chemical-disease relations. The detection of interactions between drugs and proteins/genes is of key relevance for pharmacological and clinical research, playing an important role for drug discovery, understanding of molecular mechanism of adverse drug reactions, describing drug metabolism or drawing regulatory networks of importance for systems pharmacology. The BioCreative VI - ChemProt track represents the first attempt to promote the development of systems for extracting chemical-protein interactions (CPIs), of relevance for precision medicine as well as for drug discovery and basic biomedical research. The novel ChemProt corpus consists of text exhaustively annotated by hand with mentions of chemical compounds/drugs and genes/proteins, as well as 22 different types of compound-protein relations. To focus on a subset of important relations, 5 relation classes were used for evaluation purposes, including agonist, antagonist, inhibitor, activator and substrate/product relations. A total of 13 participating teams returned 45 runs for this track. Despite the biological complexity of the considered relation types, top-scoring teams could obtain an F-measure across relation classes of 64.10%. Performance varied depending on the relation class: for the antagonist relation class the best team obtained an F-measure of 72.56% (precision of 80.75%, recall of 65.87%) while for inhibition/down-regulation the best value was of 71.48% (with a precision of 76.51% and a recall of 67.07%).",
"year": 2017,
"venue": "",
"authors": [
"Martin Krallinger",
"Obdulia Rabal",
"S. Akhondi",
"Martín Pérez Pérez",
"J. Santamaría",
"Gael Pérez Rodríguez",
"G. Tsatsaronis",
"Ander Intxaurrondo",
"José Antonio Baso López",
"U. Nandal",
"E. V. Buel",
"A. Chandrasekhar",
"Marleen Rodenburg",
"A. Lægreid",
"Marius A. Doornenbal",
"J. Oyarzábal",
"A. Lourenço",
"A. Valencia"
],
"externalIds": {
"MAG": "2905718646",
"CorpusId": 13690520
},
"url": "https://www.semanticscholar.org/paper/eed781f498b563df5a9e8a241c67d63dd1d92ad5",
"referenceCount": 20,
"citationCount": 232,
"influentialCitationCount": 19,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "The biocreative ii-critical assessment for information extraction in biology challenge",
"abstract": null,
"year": 2008,
"venue": "",
"authors": [],
"externalIds": null,
"url": null,
"referenceCount": null,
"citationCount": null,
"influentialCitationCount": null,
"isOpenAccess": null,
"fieldsOfStudy": null
},
{
"title": "Machine Learning and Knowledge Discovery in Databases: International Workshops of ECML PKDD 2019, Würzburg, Germany, September 16–20, 2019",
"abstract": null,
"year": null,
"venue": "Proceedings, Part II, 2020",
"authors": [],
"externalIds": null,
"url": null,
"referenceCount": null,
"citationCount": null,
"influentialCitationCount": null,
"isOpenAccess": null,
"fieldsOfStudy": null
},
{
"title": "(2022) Gatortron: A large clinical language model to unlock patient information from unstructured electronic health records",
"abstract": null,
"year": null,
"venue": "",
"authors": [],
"externalIds": null,
"url": null,
"referenceCount": null,
"citationCount": null,
"influentialCitationCount": null,
"isOpenAccess": null,
"fieldsOfStudy": null
},
{
"title": "(2022) In-boxbart: Get instructions into biomedical multi-task learning",
"abstract": null,
"year": null,
"venue": "2022 Findings of the Association for Computational Linguistics: NAACL 2022",
"authors": [],
"externalIds": null,
"url": null,
"referenceCount": null,
"citationCount": null,
"influentialCitationCount": null,
"isOpenAccess": null,
"fieldsOfStudy": null
},
{
"title": "2022) Large-scale chemical language representations capture molecular structure and properties",
"abstract": null,
"year": null,
"venue": "Nature Machine Intelligence",
"authors": [],
"externalIds": null,
"url": null,
"referenceCount": null,
"citationCount": null,
"influentialCitationCount": null,
"isOpenAccess": null,
"fieldsOfStudy": null
},
{
"title": "2022) Scaling instruction-finetuned language models",
"abstract": null,
"year": null,
"venue": "arXiv preprint",
"authors": [],
"externalIds": null,
"url": null,
"referenceCount": null,
"citationCount": null,
"influentialCitationCount": null,
"isOpenAccess": null,
"fieldsOfStudy": null
},
{
"title": "We introduce a biochemical foundation model nach0 and pre-train base and large versions of nach0 on molecular structures and textual data",
"abstract": null,
"year": null,
"venue": "from scientific articles and patents",
"authors": [],
"externalIds": null,
"url": null,
"referenceCount": null,
"citationCount": null,
"influentialCitationCount": null,
"isOpenAccess": null,
"fieldsOfStudy": null
},
{
"title": "We fine-tune nach0 in a supervised and multi-task manner, using a combination of diverse tasks specified through natural language prompts",
"abstract": null,
"year": null,
"venue": "",
"authors": [],
"externalIds": null,
"url": null,
"referenceCount": null,
"citationCount": null,
"influentialCitationCount": null,
"isOpenAccess": null,
"fieldsOfStudy": null
}
]
},
"Chemist-X: Large Language Model-empowered Agent for Reaction Condition Recommendation in Chemical Synthesis": {
"paper_title": "Chemist-X: Large Language Model-empowered Agent for Reaction Condition Recommendation in Chemical Synthesis",
"arxiv_id": "2311.10776",
"authors": [
"Kexin Chen",
"Junyou Li",
"Kunyi Wang",
"Yuyang Du",
"Jiahui Yu",
"Jiamin Lu",
"Lanqing Li",
"Jiezhong Qiu",
"Jianzhang Pan",
"Yi Huang",
"Qun Fang",
"P. Heng",
"Guangyong Chen"
],
"year": 2023,
"venue": "",
"abstract": "Recent AI research plots a promising future of automatic chemical reactions within the chemistry society. This study proposes Chemist-X, a transformative AI agent that automates the reaction condition recommendation (RCR) task in chemical synthesis with retrieval-augmented generation (RAG) technology. To emulate expert chemists' strategies when solving RCR tasks, Chemist-X utilizes advanced RAG schemes to interrogate online molecular databases and distill critical data from the latest literature database. Further, the agent leverages state-of-the-art computer-aided design (CAD) tools with a large language model (LLM) supervised programming interface. With the ability to utilize updated chemical knowledge and CAD tools, our agent significantly outperforms conventional synthesis AIs confined to the fixed knowledge within its training data. Chemist-X considerably reduces chemists' workload and allows them to focus on more fundamental and creative problems, thereby bringing closer computational techniques and chemical research and making a remarkable leap toward harnessing AI's full capabilities in scientific discovery.",
"references": [
{
"title": "LLMind: Orchestrating AI and IoT with LLM for Complex Task Execution",
"abstract": "Task-oriented communications are an important element in future intelligent IoT systems. Existing IoT systems, however, are limited in their capacity to handle complex tasks, particularly in their interactions with humans to accomplish these tasks. In this paper, we present LLMind, an LLM-based task-oriented AI agent framework that enables effective collaboration among IoT devices, with humans communicating high-level verbal instructions, to perform complex tasks. Inspired by the functional specialization theory of the brain, our framework integrates an LLM with domain-specific AI modules, enhancing its capabilities. Complex tasks, which may involve collaborations of multiple domain-specific AI modules and IoT devices, are executed through a control script generated by the LLM using a Language-Code transformation approach, which first converts language descriptions to an intermediate finite-state machine (FSM) before final precise transformation to code. Furthermore, the framework incorporates a novel experience accumulation mechanism to enhance response speed and effectiveness, allowing the framework to evolve and become progressively sophisticated through continuing user and machine interactions.",
"year": 2023,
"venue": "IEEE Communications Magazine",
"authors": [
"Hongwei Cui",
"Yuyang Du",
"Qun Yang",
"Yulin Shao",
"S. Liew"
],
"externalIds": {
"ArXiv": "2312.09007",
"DOI": "10.1109/mcom.002.2400106",
"CorpusId": 266210033
},
"url": "https://www.semanticscholar.org/paper/641df7a62bfb54e5e9d83ee679bc51e03e8d9b04",
"referenceCount": 59,
"citationCount": 13,
"influentialCitationCount": 0,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science",
"Mathematics"
]
},
{
"title": "Autonomous chemical research with large language models",
"abstract": null,
"year": 2023,
"venue": "The Naturalist",
"authors": [
"Daniil A. Boiko",
"R. MacKnight",
"Ben Kline",
"Gabe Gomes"
],
"externalIds": {
"PubMedCentral": "10733136",
"DBLP": "journals/nature/BoikoMKG23",
"DOI": "10.1038/s41586-023-06792-0",
"CorpusId": 266432059,
"PubMed": "38123806"
},
"url": "https://www.semanticscholar.org/paper/6fe3779fe5f2e9402abdd08ad8db41a0f13a99eb",
"referenceCount": 19,
"citationCount": 138,
"influentialCitationCount": 6,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Medicine"
]
},
{
"title": "Can Generalist Foundation Models Outcompete Special-Purpose Tuning? Case Study in Medicine",
"abstract": "Generalist foundation models such as GPT-4 have displayed surprising capabilities in a wide variety of domains and tasks. Yet, there is a prevalent assumption that they cannot match specialist capabilities of fine-tuned models. For example, most explorations to date on medical competency benchmarks have leveraged domain-specific training, as exemplified by efforts on BioGPT and Med-PaLM. We build on a prior study of GPT-4's capabilities on medical challenge benchmarks in the absence of special training. Rather than using simple prompting to highlight the model's out-of-the-box capabilities, we perform a systematic exploration of prompt engineering. We find that prompting innovation can unlock deeper specialist capabilities and show that GPT-4 easily tops prior leading results for medical benchmarks. The prompting methods we explore are general purpose, and make no specific use of domain expertise, removing the need for expert-curated content. Our experimental design carefully controls for overfitting during the prompt engineering process. We introduce Medprompt, based on a composition of several prompting strategies. With Medprompt, GPT-4 achieves state-of-the-art results on all nine of the benchmark datasets in the MultiMedQA suite. The method outperforms leading specialist models such as Med-PaLM 2 by a significant margin with an order of magnitude fewer calls to the model. Steering GPT-4 with Medprompt achieves a 27% reduction in error rate on the MedQA dataset over the best methods to date achieved with specialist models and surpasses a score of 90% for the first time. Beyond medical problems, we show the power of Medprompt to generalize to other domains and provide evidence for the broad applicability of the approach via studies of the strategy on exams in electrical engineering, machine learning, philosophy, accounting, law, nursing, and clinical psychology.",
"year": 2023,
"venue": "arXiv.org",
"authors": [
"Harsha Nori",
"Yin Tat Lee",
"Sheng Zhang",
"Dean Carignan",
"Richard Edgar",
"Nicoló Fusi",
"Nicholas King",
"Jonathan Larson",
"Yuanzhi Li",
"Weishung Liu",
"Renqian Luo",
"S. McKinney",
"Robert Osazuwa Ness",
"Hoifung Poon",
"Tao Qin",
"Naoto Usuyama",
"Chris White",
"Eric Horvitz"
],
"externalIds": {
"DBLP": "journals/corr/abs-2311-16452",
"ArXiv": "2311.16452",
"DOI": "10.48550/arXiv.2311.16452",
"CorpusId": 265466787
},
"url": "https://www.semanticscholar.org/paper/bde9da9a39a065588d7f4573936731510d6f4f29",
"referenceCount": 38,
"citationCount": 179,
"influentialCitationCount": 33,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Point-Bind & Point-LLM: Aligning Point Cloud with Multi-modality for 3D Understanding, Generation, and Instruction Following",
"abstract": "We introduce Point-Bind, a 3D multi-modality model aligning point clouds with 2D image, language, audio, and video. Guided by ImageBind, we construct a joint embedding space between 3D and multi-modalities, enabling many promising applications, e.g., any-to-3D generation, 3D embedding arithmetic, and 3D open-world understanding. On top of this, we further present Point-LLM, the first 3D large language model (LLM) following 3D multi-modal instructions. By parameter-efficient fine-tuning techniques, Point-LLM injects the semantics of Point-Bind into pre-trained LLMs, e.g., LLaMA, which requires no 3D instruction data, but exhibits superior 3D and multi-modal question-answering capacity. We hope our work may cast a light on the community for extending 3D point clouds to multi-modality applications. Code is available at https://github.com/ZiyuGuo99/Point-Bind_Point-LLM.",
"year": 2023,
"venue": "arXiv.org",
"authors": [
"Ziyu Guo",
"Renrui Zhang",
"Xiangyang Zhu",
"Yiwen Tang",
"Xianzheng Ma",
"Jiaming Han",
"Ke Chen",
"Peng Gao",
"Xianzhi Li",
"Hongsheng Li",
"P. Heng"
],
"externalIds": {
"ArXiv": "2309.00615",
"DBLP": "journals/corr/abs-2309-00615",
"DOI": "10.48550/arXiv.2309.00615",
"CorpusId": 261493787
},
"url": "https://www.semanticscholar.org/paper/22ebfc211d184ed615729378a43fde175bf14478",
"referenceCount": 100,
"citationCount": 80,
"influentialCitationCount": 12,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "The Power of Large Language Models for Wireless Communication System Development: A Case Study on FPGA Platforms",
"abstract": "Large language models (LLMs) have garnered significant attention across various research disciplines, including the wireless communication community. There have been several heated discussions on the intersection of LLMs and wireless technologies. While recent studies have demonstrated the ability of LLMs to generate hardware description language (HDL) code for simple computation tasks, developing wireless prototypes and products via HDL poses far greater challenges because of the more complex computation tasks involved. In this paper, we aim to address this challenge by investigating the role of LLMs in FPGA-based hardware development for advanced wireless signal processing. We begin by exploring LLM-assisted code refactoring, reuse, and validation, using an open-source software-defined radio (SDR) project as a case study. Through the case study, we find that an LLM assistant can potentially yield substantial productivity gains for researchers and developers. We then examine the feasibility of using LLMs to generate HDL code for advanced wireless signal processing, using the Fast Fourier Transform (FFT) algorithm as an example. This task presents two unique challenges: the scheduling of subtasks within the overall task and the multi-step thinking required to solve certain arithmetic problem within the task. To address these challenges, we employ in-context learning (ICL) and Chain-of-Thought (CoT) prompting techniques, culminating in the successful generation of a 64-point Verilog FFT module. Our results demonstrate the potential of LLMs for generalization and imitation, affirming their usefulness in writing HDL code for wireless communication systems. Overall, this work contributes to understanding the role of LLMs in wireless communication and motivates further exploration of their capabilities.",
"year": 2023,
"venue": "",
"authors": [
"Yuyang Du",
"S. Liew",
"Kexin Chen",
"Yulin Shao"
],
"externalIds": {
"ArXiv": "2307.07319",
"CorpusId": 259924486
},
"url": "https://www.semanticscholar.org/paper/fdb5b1ade3dfcdb3f7deb62392416942c0effe65",
"referenceCount": 47,
"citationCount": 21,
"influentialCitationCount": 2,
"isOpenAccess": false,
"fieldsOfStudy": [
"Engineering"
]
},
{
"title": "A Brief Overview of ChatGPT: The History, Status Quo and Potential Future Development",
"abstract": "ChatGPT, an artificial intelligence generated content (AIGC) model developed by OpenAI, has attracted world-wide attention for its capability of dealing with challenging language understanding and generation tasks in the form of conversations. This paper briefly provides an overview on the history, status quo and potential future development of ChatGPT, helping to provide an entry point to think about ChatGPT. Specifically, from the limited open-accessed resources, we conclude the core techniques of ChatGPT, mainly including large-scale language models, in-context learning, reinforcement learning from human feedback and the key technical steps for developing Chat-GPT. We further analyze the pros and cons of ChatGPT and we rethink the duality of ChatGPT in various fields. Although it has been widely acknowledged that ChatGPT brings plenty of opportunities for various fields, mankind should still treat and use ChatGPT properly to avoid the potential threat, e.g., academic integrity and safety challenge. Finally, we discuss several open problems as the potential development of ChatGPT.",
"year": 2023,
"venue": "IEEE/CAA Journal of Automatica Sinica",
"authors": [
"Tianyu Wu",
"Shizhu He",
"Jingping Liu",
"Siqi Sun",
"Kang Liu",
"Qing‐Long Han",
"Yang Tang"
],
"externalIds": {
"DBLP": "journals/ieeejas/WuHLSLHT23",
"DOI": "10.1109/JAS.2023.123618",
"CorpusId": 258447166
},
"url": "https://www.semanticscholar.org/paper/8cf819f6ee33909484ece40d79944c9c37f01a89",
"referenceCount": 133,
"citationCount": 364,
"influentialCitationCount": 18,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "MetaRF: attention-based random forest for reaction yield prediction with a few trails",
"abstract": null,
"year": 2023,
"venue": "Journal of Cheminformatics",
"authors": [
"Ke Chen",
"Guangyong Chen",
"Junyou Li",
"Yuansheng Huang",
"Ercheng Wang",
"Tingjun Hou",
"P. Heng"
],
"externalIds": {
"PubMedCentral": "10084704",
"DBLP": "journals/jcheminf/ChenCLHWHH23",
"DOI": "10.1186/s13321-023-00715-x",
"CorpusId": 258042733,
"PubMed": "37038222"
},
"url": "https://www.semanticscholar.org/paper/077ab79d72699dee10c663feec0a40f41bc53115",
"referenceCount": 47,
"citationCount": 15,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine",
"Computer Science"
]
},
{
"title": "PubChem 2023 update",
"abstract": "PubChem (https://pubchem.ncbi.nlm.nih.gov) is a popular chemical information resource that serves a wide range of use cases. In the past two years, a number of changes were made to PubChem. Data from more than 120 data sources was added to PubChem. Some major highlights include: the integration of Google Patents data into PubChem, which greatly expanded the coverage of the PubChem Patent data collection; the creation of the Cell Line and Taxonomy data collections, which provide quick and easy access to chemical information for a given cell line and taxon, respectively; and the update of the bioassay data model. In addition, new functionalities were added to the PubChem programmatic access protocols, PUG-REST and PUG-View, including support for target-centric data download for a given protein, gene, pathway, cell line, and taxon and the addition of the 'standardize' option to PUG-REST, which returns the standardized form of an input chemical structure. A significant update was also made to PubChemRDF. The present paper provides an overview of these changes.",
"year": 2022,
"venue": "Nucleic Acids Res.",
"authors": [
"Sunghwan Kim",
"Jie Chen",
"Tiejun Cheng",
"A. Gindulyte",
"Jia He",
"Siqian He",
"Qingliang Li",
"Benjamin A. Shoemaker",
"P. Thiessen",
"Bo Yu",
"L. Zaslavsky",
"Jian Zhang",
"Evan E. Bolton"
],
"externalIds": {
"DBLP": "journals/nar/KimCCGHHLSTYZZB23",
"DOI": "10.1093/nar/gkac956",
"CorpusId": 253182955,
"PubMed": "36305812"
},
"url": "https://www.semanticscholar.org/paper/bc9a851ac1003274d7f3c7ada5c8e35a8dd15c72",
"referenceCount": 0,
"citationCount": 992,
"influentialCitationCount": 51,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Medicine"
]
},
{
"title": "Understanding HTML with Large Language Models",
"abstract": "Large language models (LLMs) have shown exceptional performance on a variety of natural language tasks. Yet, their capabilities for HTML understanding -- i.e., parsing the raw HTML of a webpage, with applications to automation of web-based tasks, crawling, and browser-assisted retrieval -- have not been fully explored. We contribute HTML understanding models (fine-tuned LLMs) and an in-depth analysis of their capabilities under three tasks: (i) Semantic Classification of HTML elements, (ii) Description Generation for HTML inputs, and (iii) Autonomous Web Navigation of HTML pages. While previous work has developed dedicated architectures and training procedures for HTML understanding, we show that LLMs pretrained on standard natural language corpora transfer remarkably well to HTML understanding tasks. For instance, fine-tuned LLMs are 12% more accurate at semantic classification compared to models trained exclusively on the task dataset. Moreover, when fine-tuned on data from the MiniWoB benchmark, LLMs successfully complete 50% more tasks using 192x less data compared to the previous best supervised model. Out of the LLMs we evaluate, we show evidence that T5-based models are ideal due to their bidirectional encoder-decoder architecture. To promote further research on LLMs for HTML understanding, we create and open-source a large-scale HTML dataset distilled and auto-labeled from CommonCrawl.",
"year": 2022,
"venue": "Conference on Empirical Methods in Natural Language Processing",
"authors": [
"Izzeddin Gur",
"Ofir Nachum",
"Yingjie Miao",
"Mustafa Safdari",
"Austin Huang",
"Aakanksha Chowdhery",
"Sharan Narang",
"Noah Fiedel",
"Aleksandra Faust"
],
"externalIds": {
"DBLP": "conf/emnlp/GurNMSHCNFF23",
"ArXiv": "2210.03945",
"DOI": "10.48550/arXiv.2210.03945",
"CorpusId": 252780086
},
"url": "https://www.semanticscholar.org/paper/e37310ccc5f368355c19b4de45826278aeaff280",
"referenceCount": 43,
"citationCount": 47,
"influentialCitationCount": 5,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Improving the Domain Adaptation of Retrieval Augmented Generation (RAG) Models for Open Domain Question Answering",
"abstract": "Retrieval Augment Generation (RAG) is a recent advancement in Open-Domain Question Answering (ODQA). RAG has only been trained and explored with a Wikipedia-based external knowledge base and is not optimized for use in other specialized domains such as healthcare and news. In this paper, we evaluate the impact of joint training of the retriever and generator components of RAG for the task of domain adaptation in ODQA. We propose RAG-end2end, an extension to RAG that can adapt to a domain-specific knowledge base by updating all components of the external knowledge base during training. In addition, we introduce an auxiliary training signal to inject more domain-specific knowledge. This auxiliary signal forces RAG-end2end to reconstruct a given sentence by accessing the relevant information from the external knowledge base. Our novel contribution is that, unlike RAG, RAG-end2end does joint training of the retriever and generator for the end QA task and domain adaptation. We evaluate our approach with datasets from three domains: COVID-19, News, and Conversations, and achieve significant performance improvements compared to the original RAG model. Our work has been open-sourced through the HuggingFace Transformers library, attesting to our work’s credibility and technical consistency.",
"year": 2022,
"venue": "Transactions of the Association for Computational Linguistics",
"authors": [
"Shamane Siriwardhana",
"Rivindu Weerasekera",
"Elliott Wen",
"Tharindu Kaluarachchi",
"R. Rana",
"Suranga Nanayakkara"
],
"externalIds": {
"DBLP": "journals/corr/abs-2210-02627",
"ArXiv": "2210.02627",
"ACL": "2023.tacl-1.1",
"DOI": "10.1162/tacl_a_00530",
"CorpusId": 252735056
},
"url": "https://www.semanticscholar.org/paper/6fcdad7b8d6b60b23bc51859e736c29f913b249a",
"referenceCount": 45,
"citationCount": 63,
"influentialCitationCount": 2,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Chemistry-informed molecular graph as reaction descriptor for machine-learned retrosynthesis planning",
"abstract": "Significance Machine learning has achieved great success in retrosynthesis planning. We introduced chemical information, including NMR chemical shifts, bond energies, catalysts, and solvents into the descriptor of molecules and reactions and into molecular graphs to represent molecules and reactions, and constructed a retrosynthesis planning model. It was developed using five molecular graph–based neural networks and Monte Carlo tree search. Our model, trained with a dataset of 1.4 million reaction data, achieved a top 50 accuracy of 0.94 for reaction template selection, a top 10 accuracy of 0.93 for catalyst prediction, and a top 10 accuracy of 0.89 for solvent prediction. The introduction of chemical information greatly enhances the accuracy, reliability, and efficiency of both single-step and multistep path planning.",
"year": 2022,
"venue": "Proceedings of the National Academy of Sciences of the United States of America",
"authors": [
"Baicheng Zhang",
"Xiaolong Zhang",
"Wenjie Du",
"Zhaokun Song",
"Guozhen Zhang",
"Guoqing Zhang",
"Yang Wang",
"Xin Chen",
"Jun Jiang",
"Yi Luo"
],
"externalIds": {
"PubMedCentral": "9564830",
"DOI": "10.1073/pnas.2212711119",
"CorpusId": 252694879,
"PubMed": "36191228"
},
"url": "https://www.semanticscholar.org/paper/bd248383c94adcef5a15373b97d7c5fcc34c9376",
"referenceCount": 32,
"citationCount": 16,
"influentialCitationCount": 0,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "FiD-Light: Efficient and Effective Retrieval-Augmented Text Generation",
"abstract": "Retrieval-augmented generation models offer many benefits over standalone language models: besides a textual answer to a given query they provide provenance items retrieved from an updateable knowledge base. However, they are also more complex systems and need to handle long inputs. In this work, we introduce FiD-Light to strongly increase the efficiency of the state-of-the-art retrieval-augmented FiD model, while maintaining the same level of effectiveness. Our FiD-Light model constrains the information flow from the encoder (which encodes passages separately) to the decoder (using concatenated encoded representations). Furthermore, we adapt FiD-Light with re-ranking capabilities through textual source pointers, to improve the top-ranked provenance precision. Our experiments on a diverse set of seven knowledge intensive tasks (KILT) show FiD-Light consistently improves the Pareto frontier between query latency and effectiveness. FiD-Light with source pointing sets substantial new state-of-the-art results on six KILT tasks for combined text generation and provenance retrieval evaluation, while maintaining high efficiency.",
"year": 2022,
"venue": "Annual International ACM SIGIR Conference on Research and Development in Information Retrieval",
"authors": [
"Sebastian Hofstätter",
"Jiecao Chen",
"K. Raman",
"Hamed Zamani"
],
"externalIds": {
"DBLP": "journals/corr/abs-2209-14290",
"ArXiv": "2209.14290",
"DOI": "10.1145/3539618.3591687",
"CorpusId": 252568176
},
"url": "https://www.semanticscholar.org/paper/fc74fe92fb9e34d3ae1237bf9a6e718c723f4f3e",
"referenceCount": 57,
"citationCount": 57,
"influentialCitationCount": 5,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Text and Code Embeddings by Contrastive Pre-Training",
"abstract": "Text embeddings are useful features in many applications such as semantic search and computing text similarity. Previous work typically trains models customized for different use cases, varying in dataset choice, training objective and model architecture. In this work, we show that contrastive pre-training on unsupervised data at scale leads to high quality vector representations of text and code. The same unsupervised text embeddings that achieve new state-of-the-art results in linear-probe classification also display impressive semantic search capabilities and sometimes even perform competitively with fine-tuned models. On linear-probe classification accuracy averaging over 7 tasks, our best unsupervised model achieves a relative improvement of 4% and 1.8% over previous best unsupervised and supervised text embedding models respectively. The same text embeddings when evaluated on large-scale semantic search attains a relative improvement of 23.4%, 14.7%, and 10.6% over previous best unsupervised methods on MSMARCO, Natural Questions and TriviaQA benchmarks, respectively. Similarly to text embeddings, we train code embedding models on (text, code) pairs, obtaining a 20.8% relative improvement over prior best work on code search.",
"year": 2022,
"venue": "arXiv.org",
"authors": [
"Arvind Neelakantan",
"Tao Xu",
"Raul Puri",
"Alec Radford",
"Jesse Michael Han",
"Jerry Tworek",
"Qiming Yuan",
"N. Tezak",
"Jong Wook Kim",
"Chris Hallacy",
"Johannes Heidecke",
"Pranav Shyam",
"Boris Power",
"Tyna Eloundou Nekoul",
"Girish Sastry",
"Gretchen Krueger",
"D. Schnurr",
"F. Such",
"K. Hsu",
"Madeleine Thompson",
"Tabarak Khan",
"Toki Sherbakov",
"Joanne Jang",
"P. Welinder",
"Lilian Weng"
],
"externalIds": {
"ArXiv": "2201.10005",
"DBLP": "journals/corr/abs-2201-10005",
"CorpusId": 246275593
},
"url": "https://www.semanticscholar.org/paper/6d7d4fca9840504f630e9bea6acaa07322a6e889",
"referenceCount": 87,
"citationCount": 316,
"influentialCitationCount": 30,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "On the use of real-world datasets for reaction yield prediction",
"abstract": "The lack of publicly available, large, and unbiased datasets is a key bottleneck for the application of machine learning (ML) methods in synthetic chemistry. Data from electronic laboratory notebooks (ELNs) could provide less biased, large datasets, but no such datasets have been made publicly available. The first real-world dataset from the ELNs of a large pharmaceutical company is disclosed and its relationship to high-throughput experimentation (HTE) datasets is described. For chemical yield predictions, a key task in chemical synthesis, an attributed graph neural network (AGNN) performs as well as or better than the best previous models on two HTE datasets for the Suzuki–Miyaura and Buchwald–Hartwig reactions. However, training the AGNN on an ELN dataset does not lead to a predictive model. The implications of using ELN data for training ML-based models are discussed in the context of yield predictions.",
"year": 2021,
"venue": "Chemical Science",
"authors": [
"M. Saebi",
"B. Nan",
"John E. Herr",
"J. Wahlers",
"Zhichun Guo",
"A. Zurański",
"T. Kogej",
"P. Norrby",
"A. Doyle",
"O. Wiest",
"N. Chawla"
],
"externalIds": {
"PubMedCentral": "10189898",
"DOI": "10.1039/d2sc06041h",
"CorpusId": 239077670,
"PubMed": "37206399"
},
"url": "https://www.semanticscholar.org/paper/d6a40c17c04ea026c9f40a27793a5e5fd522c770",
"referenceCount": 40,
"citationCount": 18,
"influentialCitationCount": 1,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "Artificial Intelligence in Chemistry: Current Trends and Future Directions",
"abstract": "The application of artificial intelligence (AI) to chemistry has grown tremendously in recent years. In this Review, we studied the growth and distribution of AI-related chemistry publications in the last two decades using the CAS Content Collection. The volume of both journal and patent publications have increased dramatically, especially since 2015. Study of the distribution of publications over various chemistry research areas revealed that analytical chemistry and biochemistry are integrating AI to the greatest extent and with the highest growth rates. We also investigated trends in interdisciplinary research and identified frequently occurring combinations of research areas in publications. Furthermore, topic analyses were conducted for journal and patent publications to illustrate emerging associations of AI with certain chemistry research topics. Notable publications in various chemistry disciplines were then evaluated and presented to highlight emerging use cases. Finally, the occurrence of different classes of substances and their roles in AI-related chemistry research were quantified, further detailing the popularity of AI adoption in the life sciences and analytical chemistry. In summary, this Review offers a broad overview of how AI has progressed in various fields of chemistry and aims to provide an understanding of its future directions.",
"year": 2021,
"venue": "Journal of Chemical Information and Modeling",
"authors": [
"Zachary J. Baum",
"Xiang Yu",
"P. Y. Ayala",
"Yanan Zhao",
"S. P. Watkins",
"Q. Zhou"
],
"externalIds": {
"DBLP": "journals/jcisd/BaumYAZWZ21",
"DOI": "10.1021/acs.jcim.1c00619",
"CorpusId": 235907399,
"PubMed": "34264069"
},
"url": "https://www.semanticscholar.org/paper/37e52a5eb24df9fc7d8fecfaaa478195f92c2c94",
"referenceCount": 93,
"citationCount": 110,
"influentialCitationCount": 2,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine",
"Computer Science"
]
},
{
"title": "HTLM: Hyper-Text Pre-Training and Prompting of Language Models",
"abstract": "We introduce HTLM, a hyper-text language model trained on a large-scale web crawl. Modeling hyper-text has a number of advantages: (1) it is easily gathered at scale, (2) it provides rich document-level and end-task-adjacent supervision (e.g. class and id attributes often encode document category information), and (3) it allows for new structured prompting that follows the established semantics of HTML (e.g. to do zero-shot summarization by infilling title tags for a webpage that contains the input text). We show that pretraining with a BART-style denoising loss directly on simplified HTML provides highly effective transfer for a wide range of end tasks and supervision levels. HTLM matches or exceeds the performance of comparably sized text-only LMs for zero-shot prompting and fine-tuning for classification benchmarks, while also setting new state-of-the-art performance levels for zero-shot summarization. We also find that hyper-text prompts provide more value to HTLM, in terms of data efficiency, than plain text prompts do for existing LMs, and that HTLM is highly effective at auto-prompting itself, by simply generating the most likely hyper-text formatting for any available training data. We will release all code and models to support future HTLM research.",
"year": 2021,
"venue": "International Conference on Learning Representations",
"authors": [
"Armen Aghajanyan",
"Dmytro Okhonko",
"M. Lewis",
"Mandar Joshi",
"Hu Xu",
"Gargi Ghosh",
"Luke Zettlemoyer"
],
"externalIds": {
"DBLP": "conf/iclr/AghajanyanOLJ0G22",
"ArXiv": "2107.06955",
"CorpusId": 235899205
},
"url": "https://www.semanticscholar.org/paper/e596b8adbffa546dbc163e817fb3de72744ec4f6",
"referenceCount": 42,
"citationCount": 66,
"influentialCitationCount": 2,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Reaction classification and yield prediction using the differential reaction fingerprint DRFP",
"abstract": "Predicting the nature and outcome of reactions using computational methods is a crucial tool to accelerate chemical research. The recent application of deep learning-based learned fingerprints to reaction classification and reaction yield prediction has shown an impressive increase in performance compared to previous methods such as DFT- and structure-based fingerprints. However, learned fingerprints require large training data sets, are inherently biased, and are based on complex deep learning architectures. Here we present the differential reaction fingerprint DRFP. The DRFP algorithm takes a reaction SMILES as an input and creates a binary fingerprint based on the symmetric difference of two sets containing the circular molecular n-grams generated from the molecules listed left and right from the reaction arrow, respectively, without the need for distinguishing between reactants and reagents. We show that DRFP performs better than DFT-based fingerprints in reaction yield prediction and other structure-based fingerprints in reaction classification, reaching the performance of state-of-the-art learned fingerprints in both tasks while being data-independent.",
"year": 2021,
"venue": "Digital Discovery",
"authors": [
"Daniel Probst",
"P. Schwaller",
"J. Reymond"
],
"externalIds": {
"PubMedCentral": "8996827",
"MAG": "3179868148",
"DOI": "10.33774/CHEMRXIV-2021-MC870",
"CorpusId": 237753374,
"PubMed": "35515081"
},
"url": "https://www.semanticscholar.org/paper/9dcc2e0bc446ed900cfd3e1660f8d3f1bc9dc592",
"referenceCount": 20,
"citationCount": 47,
"influentialCitationCount": 5,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine",
"Computer Science"
]
},
{
"title": "Bayesian reaction optimization as a tool for chemical synthesis",
"abstract": null,
"year": 2021,
"venue": "Nature",
"authors": [
"Benjamin J. Shields",
"Jason M. Stevens",
"Jun Li",
"Marvin Parasram",
"Farhan N. Damani",
"Jesus I Martinez Alvarado",
"J. Janey",
"Ryan P. Adams",
"A. Doyle"
],
"externalIds": {
"DBLP": "journals/nature/ShieldsSLPDAJAD21",
"DOI": "10.1038/s41586-021-03213-y",
"CorpusId": 231804657,
"PubMed": "33536653"
},
"url": "https://www.semanticscholar.org/paper/ba22cfff05cad65204860e34c2bb3cdd5f5e8f22",
"referenceCount": 65,
"citationCount": 410,
"influentialCitationCount": 10,
"isOpenAccess": false,
"fieldsOfStudy": [
"Medicine",
"Computer Science"
]
},
{
"title": "Learning the Best Pooling Strategy for Visual Semantic Embedding",
"abstract": "Visual Semantic Embedding (VSE) is a dominant approach for vision-language retrieval, which aims at learning a deep embedding space such that visual data are embedded close to their semantic text labels or descriptions. Recent VSE models use complex methods to better contextualize and aggregate multi-modal features into holistic embeddings. However, we discover that surprisingly simple (but carefully selected) global pooling functions (e.g., max pooling) outperform those complex models, across different feature extractors. Despite its simplicity and effectiveness, seeking the best pooling function for different data modality and feature extractor is costly and tedious, especially when the size of features varies (e.g., text, video). Therefore, we propose a Generalized Pooling Operator (GPO), which learns to automatically adapt itself to the best pooling strategy for different features, requiring no manual tuning while staying effective and efficient. We extend the VSE model using this proposed GPO and denote it as VSE∞.Without bells and whistles, VSE∞ outperforms previous VSE methods significantly on image-text retrieval benchmarks across popular feature extractors. With a simple adaptation, variants of VSE∞ further demonstrate its strength by achieving the new state of the art on two video-text retrieval datasets. Comprehensive experiments and visualizations confirm that GPO always discovers the best pooling strategy and can be a plug-and-play feature aggregation module for standard VSE models. Code and pre-trained models are available at http://jcchen.me/vse_infty/",
"year": 2020,
"venue": "Computer Vision and Pattern Recognition",
"authors": [
"Jiacheng Chen",
"Hexiang Hu",
"Hao Wu",
"Yuning Jiang",
"C. Wang"
],
"externalIds": {
"DBLP": "conf/cvpr/ChenH0JW21",
"MAG": "3106780750",
"ArXiv": "2011.04305",
"DOI": "10.1109/CVPR46437.2021.01553",
"CorpusId": 226282316
},
"url": "https://www.semanticscholar.org/paper/608006b0c6c223ef449cbafdc064afdb52bf4410",
"referenceCount": 54,
"citationCount": 196,
"influentialCitationCount": 56,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Predicting chemical shifts with graph neural networks",
"abstract": "Inferring molecular structure from NMR measurements requires an accurate forward model that can predict chemical shifts from 3D structure. Current forward models are limited to specific molecules like proteins and state of the art models are not differentiable. Thus they cannot be used with gradient methods like biased molecular dynamics. Here we use graph neural networks (GNNs) for NMR chemical shift prediction. Our GNN can model chemical shifts accurately and capture important phenomena like hydrogen bonding induced downfield shift between multiple proteins, secondary structure effects, and predict shifts of organic molecules. Previous empirical NMR models of protein NMR have relied on careful feature engineering with domain expertise. These GNNs are trained from data alone with no feature engineering yet are as accurate and can work on arbitrary molecular structures. The models are also efficient, able to compute one million chemical shifts in about 5 seconds. This work enables a new category of NMR models that have multiple interacting types of macromolecules.",
"year": 2020,
"venue": "bioRxiv",
"authors": [
"Ziyue Yang",
"M. Chakraborty",
"A. White"
],
"externalIds": {
"PubMedCentral": "8372537",
"MAG": "3081416984",
"DOI": "10.1101/2020.08.26.267971",
"CorpusId": 221371524,
"PubMed": "34476061"
},
"url": "https://www.semanticscholar.org/paper/2b1b3df52c8f89a5c6782fa90f9c8b79bb34562f",
"referenceCount": 87,
"citationCount": 25,
"influentialCitationCount": 2,
"isOpenAccess": true,
"fieldsOfStudy": [
"Biology",
"Physics",
"Medicine"
]
},
{
"title": "Language Models are Few-Shot Learners",
"abstract": "Recent work has demonstrated substantial gains on many NLP tasks and benchmarks by pre-training on a large corpus of text followed by fine-tuning on a specific task. While typically task-agnostic in architecture, this method still requires task-specific fine-tuning datasets of thousands or tens of thousands of examples. By contrast, humans can generally perform a new language task from only a few examples or from simple instructions - something which current NLP systems still largely struggle to do. Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting. For all tasks, GPT-3 is applied without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model. GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic. At the same time, we also identify some datasets where GPT-3's few-shot learning still struggles, as well as some datasets where GPT-3 faces methodological issues related to training on large web corpora. Finally, we find that GPT-3 can generate samples of news articles which human evaluators have difficulty distinguishing from articles written by humans. We discuss broader societal impacts of this finding and of GPT-3 in general.",
"year": 2020,
"venue": "Neural Information Processing Systems",
"authors": [
"Tom B. Brown",
"Benjamin Mann",
"Nick Ryder",
"Melanie Subbiah",
"J. Kaplan",
"Prafulla Dhariwal",
"Arvind Neelakantan",
"Pranav Shyam",
"Girish Sastry",
"Amanda Askell",
"Sandhini Agarwal",
"Ariel Herbert-Voss",
"Gretchen Krueger",
"T. Henighan",
"R. Child",
"A. Ramesh",
"Daniel M. Ziegler",
"Jeff Wu",
"Clemens Winter",
"Christopher Hesse",
"Mark Chen",
"Eric Sigler",
"Ma-teusz Litwin",
"Scott Gray",
"B. Chess",
"Jack Clark",
"Christopher Berner",
"Sam McCandlish",
"Alec Radford",
"I. Sutskever",
"Dario Amodei"
],
"externalIds": {
"ArXiv": "2005.14165",
"DBLP": "conf/nips/BrownMRSKDNSSAA20",
"MAG": "3030163527",
"CorpusId": 218971783
},
"url": "https://www.semanticscholar.org/paper/90abbc2cf38462b954ae1b772fac9532e2ccd8b0",
"referenceCount": 146,
"citationCount": 30859,
"influentialCitationCount": 3528,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Prediction of organic homolytic bond dissociation enthalpies at near chemical accuracy with sub-second computational cost",
"abstract": null,
"year": 2020,
"venue": "Nature Communications",
"authors": [
"Peter C. St. John",
"Yanfei Guan",
"Yeonjoon Kim",
"Seonah Kim",
"Robert S Paton"
],
"externalIds": {
"MAG": "3021480024",
"PubMedCentral": "7214445",
"DOI": "10.1038/s41467-020-16201-z",
"CorpusId": 218572886,
"PubMed": "32393773"
},
"url": "https://www.semanticscholar.org/paper/3d27fd09132a9978ce5bb04dff5fd4560e25f563",
"referenceCount": 75,
"citationCount": 168,
"influentialCitationCount": 2,
"isOpenAccess": true,
"fieldsOfStudy": [
"Chemistry",
"Medicine"
]
},
{
"title": "Supervised Contrastive Learning",
"abstract": "Cross entropy is the most widely used loss function for supervised training of image classification models. In this paper, we propose a novel training methodology that consistently outperforms cross entropy on supervised learning tasks across different architectures and data augmentations. We modify the batch contrastive loss, which has recently been shown to be very effective at learning powerful representations in the self-supervised setting. We are thus able to leverage label information more effectively than cross entropy. Clusters of points belonging to the same class are pulled together in embedding space, while simultaneously pushing apart clusters of samples from different classes. In addition to this, we leverage key ingredients such as large batch sizes and normalized embeddings, which have been shown to benefit self-supervised learning. On both ResNet-50 and ResNet-200, we outperform cross entropy by over 1%, setting a new state of the art number of 78.8% among methods that use AutoAugment data augmentation. The loss also shows clear benefits for robustness to natural corruptions on standard benchmarks on both calibration and accuracy. Compared to cross entropy, our supervised contrastive loss is more stable to hyperparameter settings such as optimizers or data augmentations.",
"year": 2020,
"venue": "Neural Information Processing Systems",
"authors": [
"Prannay Khosla",
"Piotr Teterwak",
"Chen Wang",
"Aaron Sarna",
"Yonglong Tian",
"Phillip Isola",
"Aaron Maschinot",
"Ce Liu",
"Dilip Krishnan"
],
"externalIds": {
"MAG": "3018378048",
"ArXiv": "2004.11362",
"DBLP": "journals/corr/abs-2004-11362",
"CorpusId": 216080787
},
"url": "https://www.semanticscholar.org/paper/38643c2926b10f6f74f122a7037e2cd20d77c0f1",
"referenceCount": 75,
"citationCount": 3662,
"influentialCitationCount": 620,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science",
"Mathematics"
]
},
{
"title": "Synthetic organic chemistry driven by artificial intelligence",
"abstract": null,
"year": 2019,
"venue": "Nature Reviews Chemistry",
"authors": [
"A. Filipa de Almeida",
"R. Moreira",
"T. Rodrigues"
],
"externalIds": {
"MAG": "2969507301",
"DOI": "10.1038/s41570-019-0124-0",
"CorpusId": 201103131
},
"url": "https://www.semanticscholar.org/paper/4f726dc4be701267a46284fc1ab5cd68548b83d5",
"referenceCount": 134,
"citationCount": 175,
"influentialCitationCount": 1,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "Graph Neural Networks: A Review of Methods and Applications",
"abstract": null,
"year": 2018,
"venue": "AI Open",
"authors": [
"Jie Zhou",
"Ganqu Cui",
"Zhengyan Zhang",
"Cheng Yang",
"Zhiyuan Liu",
"Maosong Sun"
],
"externalIds": {
"MAG": "3152893301",
"DBLP": "journals/aiopen/ZhouCHZYLWLS20",
"ArXiv": "1812.08434",
"DOI": "10.1016/J.AIOPEN.2021.01.001",
"CorpusId": 56517517
},
"url": "https://www.semanticscholar.org/paper/ea5dd6a3d8f210d05e53a7b6fa5e16f1b115f693",
"referenceCount": 301,
"citationCount": 4538,
"influentialCitationCount": 184,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Mathematics"
]
},
{
"title": "SciFinder",
"abstract": "SciFinder, a resource from the Chemical Abstracts Service (CAS), is a curated database of chemical and bibliographic information that covers several scientific and biomedical fields, with an emphasis on chemistry. SciFinder is an appropriate resource to consult for literature searches and to find background information on chemicals, drugs, and substances.",
"year": 2018,
"venue": "Journal of the Medical Library Association",
"authors": [
"Stephen Gabrielson"
],
"externalIds": {
"PubMedCentral": "6148602",
"DOI": "10.5195/jmla.2018.515",
"CorpusId": 211565708
},
"url": "https://www.semanticscholar.org/paper/9bdf6d1200398c6184a5961c0655d6aa4956466d",
"referenceCount": 7,
"citationCount": 72,
"influentialCitationCount": 3,
"isOpenAccess": true,
"fieldsOfStudy": null
},
{
"title": "Predicting reaction performance in C–N cross-coupling using machine learning",
"abstract": "A guide for catalyst choice in the forest Chemists often discover reactions by applying catalysts to a series of simple compounds. Tweaking those reactions to tolerate more structural complexity in pharmaceutical research is time-consuming. Ahneman et al. report that machine learning can help. Using a high-throughput data set, they trained a random forest algorithm to predict which specific palladium catalysts would best tolerate isoxazoles (cyclic structures with an N–O bond) during C–N bond formation. The predictions also helped to guide analysis of the catalyst inhibition mechanism. Science, this issue p. 186 A random forest algorithm trained on high-throughput data predicts which catalysts best tolerate certain heterocycles. Machine learning methods are becoming integral to scientific inquiry in numerous disciplines. We demonstrated that machine learning can be used to predict the performance of a synthetic reaction in multidimensional chemical space using data obtained via high-throughput experimentation. We created scripts to compute and extract atomic, molecular, and vibrational descriptors for the components of a palladium-catalyzed Buchwald-Hartwig cross-coupling of aryl halides with 4-methylaniline in the presence of various potentially inhibitory additives. Using these descriptors as inputs and reaction yield as output, we showed that a random forest algorithm provides significantly improved predictive performance over linear regression analysis. The random forest model was also successfully applied to sparse training sets and out-of-sample prediction, suggesting its value in facilitating adoption of synthetic methodology.",
"year": 2018,
"venue": "Science",
"authors": [
"Derek T. Ahneman",
"Jesús G Estrada",
"Shishi Lin",
"S. Dreher",
"A. Doyle"
],
"externalIds": {
"MAG": "2785942661",
"DOI": "10.1126/science.aar5169",
"CorpusId": 206666015,
"PubMed": "29449509"
},
"url": "https://www.semanticscholar.org/paper/a44f2ad10815051142c97fb466c51c8c9eda1d7f",
"referenceCount": 48,
"citationCount": 584,
"influentialCitationCount": 15,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science",
"Medicine"
]
},
{
"title": "Mordred: a molecular descriptor calculator",
"abstract": null,
"year": 2018,
"venue": "Journal of Cheminformatics",
"authors": [
"H. Moriwaki",
"Yu-Shi Tian",
"N. Kawashita",
"T. Takagi"
],
"externalIds": {
"MAG": "2791355014",
"PubMedCentral": "5801138",
"DBLP": "journals/jcheminf/MoriwakiTKT18",
"DOI": "10.1186/s13321-018-0258-y",
"CorpusId": 3726588,
"PubMed": "29411163"
},
"url": "https://www.semanticscholar.org/paper/2829720b1f38ebc988c90a9da5bc1a7ed9b8c920",
"referenceCount": 35,
"citationCount": 729,
"influentialCitationCount": 47,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine",
"Computer Science"
]
},
{
"title": "A platform for automated nanomole-scale reaction screening and micromole-scale synthesis in flow",
"abstract": "A reaction screen in flowing solvent Chemists charged with manufacturing pharmaceuticals have recently been exploring the efficiency advantages of continuous flow techniques. Perera et al. now show that a flow apparatus can also accelerate reaction optimization earlier in the drug discovery process. They modified a high-performance liquid chromatography system to screen a wide variety of solvent, ligand, and base combinations to optimize carbon-carbon bond formation. Injecting stock solution aliquots of the catalyst and reactants into a carrier solvent stream let the authors vary the main solvent efficiently and scale up the optimal conditions for product isolation. Science, this issue p. 429 Chromatographic, flow-based screening of reaction conditions is demonstrated for Suzuki coupling in pharmaceutical research. The scarcity of complex intermediates in pharmaceutical research motivates the pursuit of reaction optimization protocols on submilligram scales. We report here the development of an automated flow-based synthesis platform, designed from commercially available components, that integrates both rapid nanomole-scale reaction screening and micromole-scale synthesis into a single modular unit. This system was validated by exploring a diverse range of reaction variables in a Suzuki-Miyaura coupling on nanomole scale at elevated temperatures, generating liquid chromatography–mass spectrometry data points for 5760 reactions at a rate of >1500 reactions per 24 hours. Through multiple injections of the same segment, the system directly produced micromole quantities of desired material. The optimal conditions were also replicated in traditional flow and batch mode at 50- to 200-milligram scale to provide good to excellent yields.",
"year": 2018,
"venue": "Science",
"authors": [
"D. Perera",
"Joseph W. Tucker",
"Shalini Brahmbhatt",
"Christopher J. Helal",
"Ashley Chong",
"W. Farrell",
"P. Richardson",
"N. Sach"
],
"externalIds": {
"MAG": "2784918212",
"DOI": "10.1126/science.aap9112",
"CorpusId": 5003014,
"PubMed": "29371464"
},
"url": "https://www.semanticscholar.org/paper/44f60e595cf4583a7ae0494999c739fab9cfd03a",
"referenceCount": 33,
"citationCount": 259,
"influentialCitationCount": 4,
"isOpenAccess": true,
"fieldsOfStudy": [
"Medicine"
]
},
{
"title": "XGBoost: A Scalable Tree Boosting System",
"abstract": "Tree boosting is a highly effective and widely used machine learning method. In this paper, we describe a scalable end-to-end tree boosting system called XGBoost, which is used widely by data scientists to achieve state-of-the-art results on many machine learning challenges. We propose a novel sparsity-aware algorithm for sparse data and weighted quantile sketch for approximate tree learning. More importantly, we provide insights on cache access patterns, data compression and sharding to build a scalable tree boosting system. By combining these insights, XGBoost scales beyond billions of examples using far fewer resources than existing systems.",
"year": 2016,
"venue": "Knowledge Discovery and Data Mining",
"authors": [
"Tianqi Chen",
"Carlos Guestrin"
],
"externalIds": {
"ArXiv": "1603.02754",
"DBLP": "conf/kdd/ChenG16",
"MAG": "3102476541",
"DOI": "10.1145/2939672.2939785",
"CorpusId": 4650265
},
"url": "https://www.semanticscholar.org/paper/26bc9195c6343e4d7f434dd65b4ad67efe2be27a",
"referenceCount": 26,
"citationCount": 30775,
"influentialCitationCount": 2876,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "A random forest guided tour",
"abstract": null,
"year": 2015,
"venue": "Test (Madrid)",
"authors": [
"G. Biau",
"Erwan Scornet"
],
"externalIds": {
"MAG": "2953366032",
"ArXiv": "1511.05741",
"DOI": "10.1007/s11749-016-0481-7",
"CorpusId": 14518730
},
"url": "https://www.semanticscholar.org/paper/bddf65576d9305435fac4a1da2cb1ed017718abf",
"referenceCount": 125,
"citationCount": 2197,
"influentialCitationCount": 67,
"isOpenAccess": true,
"fieldsOfStudy": [
"Mathematics"
]
},
{
"title": "Learning similarity with cosine similarity ensemble",
"abstract": null,
"year": 2015,
"venue": "Information Sciences",
"authors": [
"Peipei Xia",
"Li Zhang",
"Fanzhang Li"
],
"externalIds": {
"DBLP": "journals/isci/XiaZL15",
"MAG": "2026498605",
"DOI": "10.1016/j.ins.2015.02.024",
"CorpusId": 34578726
},
"url": "https://www.semanticscholar.org/paper/c795f7711ab4bab54574ccad83b9084e15678ad0",
"referenceCount": 31,
"citationCount": 282,
"influentialCitationCount": 4,
"isOpenAccess": false,
"fieldsOfStudy": [
"Mathematics",
"Computer Science"
]
},
{
"title": "PubMed: The Bibliographic Database",
"abstract": "PubMed comprises over 22 million citations and abstracts for biomedical literature indexed in NLM’s MEDLINE database, as well as from other life science journals and online books. PubMed citations and abstracts include the fields of biomedicine and health, and cover portions of the life sciences, behavioral sciences, chemical sciences, and bioengineering. PubMed also provides access to additional relevant websites and links to other NCBI resources, including its various molecular biology databases.",
"year": 2013,
"venue": "",
"authors": [
"Kathi Canese",
"Sarah Weis"
],
"externalIds": {
"MAG": "378747138",
"CorpusId": 59968466
},
"url": "https://www.semanticscholar.org/paper/23325e2ef18ecb799af81d39c29d5870d2c0dab4",
"referenceCount": 0,
"citationCount": 165,
"influentialCitationCount": 19,
"isOpenAccess": false,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "ChemSpider: The Free Chemical Database",
"abstract": null,
"year": 2012,
"venue": "",
"authors": [
"Meredith Ayers"
],
"externalIds": {
"MAG": "1534069151",
"DOI": "10.1108/09504121211271059",
"CorpusId": 93233072
},
"url": "https://www.semanticscholar.org/paper/3a09a3cdd7598d619b69d754c72c37d9cdd77022",
"referenceCount": 1,
"citationCount": 49,
"influentialCitationCount": 2,
"isOpenAccess": false,
"fieldsOfStudy": [
"Chemistry"
]
},
{
"title": "Similarity between Euclidean and cosine angle distance for nearest neighbor queries",
"abstract": "Understanding the relationship among different distance measures is helpful in choosing a proper one for a particular application. In this paper, we compare two commonly used distance measures in vector models, namely, Euclidean distance (EUD) and cosine angle distance (CAD), for nearest neighbor (NN) queries in high dimensional data spaces. Using theoretical analysis and experimental results, we show that the retrieval results based on EUD are similar to those based on CAD when dimension is high. We have applied CAD for content based image retrieval (CBIR). Retrieval results show that CAD works no worse than EUD, which is a commonly used distance measure for CBIR, while providing other advantages, such as naturally normalized distance.",
"year": 2004,
"venue": "ACM Symposium on Applied Computing",
"authors": [
"Gang Qian",
"S. Sural",
"Yuelong Gu",
"S. Pramanik"
],
"externalIds": {
"MAG": "2145343266",
"DBLP": "conf/sac/QianSGP04",
"DOI": "10.1145/967900.968151",
"CorpusId": 207750419
},
"url": "https://www.semanticscholar.org/paper/7635e2071dbea24e70bb170e480d24f868bbef7b",
"referenceCount": 18,
"citationCount": 314,
"influentialCitationCount": 8,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
},
{
"title": "A Survey on Context Learning",
"abstract": "Learning semantics based on context information has been researched in many research areas for decades. Context information can not only be directly used as the input data, but also sometimes used as auxiliary knowledge to improve existing models. This survey aims at providing a structured and comprehensive overview of the research on context learning. We summarize and group the existing literature into four categories, Explicit Analysis, Implicit Analysis, Neural Network Models, and Composite Models, based on the underlying techniques adopted by them. For each category, we talk about the basic idea and techniques, and also introduce how context information is utilized as the model input or incorporated into the model to enhance the performance or extend the domain of application as auxiliary knowledge. In addition, we discuss the advantages and disadvantages of each model from both the technical and practical point of view.",
"year": 2017,
"venue": "IEEE Transactions on Knowledge and Data Engineering",
"authors": [
"Guangxu Xun",
"Xiaowei Jia",
"Vishrawas Gopalakrishnan",
"Aidong Zhang"
],
"externalIds": {
"MAG": "2527310337",
"DBLP": "journals/tkde/XunJGZ17",
"DOI": "10.1109/TKDE.2016.2614508",
"CorpusId": 13490401
},
"url": "https://www.semanticscholar.org/paper/0c0a778e6fdf7e36b1750c533dcc916f86608607",
"referenceCount": 81,
"citationCount": 545,
"influentialCitationCount": 22,
"isOpenAccess": true,
"fieldsOfStudy": [
"Computer Science"
]
}
]
}
}