Duplicate from microsoft/biogpt
Co-authored-by: Kamal Raj Kanakarajan <kamalkraj@users.noreply.huggingface.co>
- .gitattributes +34 -0
- README.md +87 -0
- config.json +25 -0
- merges.txt +0 -0
- pytorch_model.bin +3 -0
- vocab.json +0 -0
.gitattributes
ADDED
@@ -0,0 +1,34 @@
+*.7z filter=lfs diff=lfs merge=lfs -text
+*.arrow filter=lfs diff=lfs merge=lfs -text
+*.bin filter=lfs diff=lfs merge=lfs -text
+*.bz2 filter=lfs diff=lfs merge=lfs -text
+*.ckpt filter=lfs diff=lfs merge=lfs -text
+*.ftz filter=lfs diff=lfs merge=lfs -text
+*.gz filter=lfs diff=lfs merge=lfs -text
+*.h5 filter=lfs diff=lfs merge=lfs -text
+*.joblib filter=lfs diff=lfs merge=lfs -text
+*.lfs.* filter=lfs diff=lfs merge=lfs -text
+*.mlmodel filter=lfs diff=lfs merge=lfs -text
+*.model filter=lfs diff=lfs merge=lfs -text
+*.msgpack filter=lfs diff=lfs merge=lfs -text
+*.npy filter=lfs diff=lfs merge=lfs -text
+*.npz filter=lfs diff=lfs merge=lfs -text
+*.onnx filter=lfs diff=lfs merge=lfs -text
+*.ot filter=lfs diff=lfs merge=lfs -text
+*.parquet filter=lfs diff=lfs merge=lfs -text
+*.pb filter=lfs diff=lfs merge=lfs -text
+*.pickle filter=lfs diff=lfs merge=lfs -text
+*.pkl filter=lfs diff=lfs merge=lfs -text
+*.pt filter=lfs diff=lfs merge=lfs -text
+*.pth filter=lfs diff=lfs merge=lfs -text
+*.rar filter=lfs diff=lfs merge=lfs -text
+*.safetensors filter=lfs diff=lfs merge=lfs -text
+saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+*.tar.* filter=lfs diff=lfs merge=lfs -text
+*.tflite filter=lfs diff=lfs merge=lfs -text
+*.tgz filter=lfs diff=lfs merge=lfs -text
+*.wasm filter=lfs diff=lfs merge=lfs -text
+*.xz filter=lfs diff=lfs merge=lfs -text
+*.zip filter=lfs diff=lfs merge=lfs -text
+*.zst filter=lfs diff=lfs merge=lfs -text
+*tfevents* filter=lfs diff=lfs merge=lfs -text
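These rules tell Git LFS to store large binaries, such as the pytorch_model.bin below, as lightweight pointer files. Consumers of the repo rarely touch LFS directly; as a minimal sketch (assuming the huggingface_hub package is installed), resolving a pointer to the actual file looks like:

```python
# Sketch: downloading an LFS-tracked file from the Hub.
# hf_hub_download resolves the LFS pointer and caches the real binary locally.
from huggingface_hub import hf_hub_download

weights_path = hf_hub_download(repo_id="microsoft/biogpt", filename="pytorch_model.bin")
print(weights_path)  # local cache path of the ~1.5 GB weights file
```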
README.md
ADDED
@@ -0,0 +1,87 @@
+---
+language: en
+license: mit
+widget:
+- text: COVID-19 is
+duplicated_from: microsoft/biogpt
+---
+
+## BioGPT
+
+Pre-trained language models have attracted increasing attention in the biomedical domain, inspired by their great success in the general natural language domain. Of the two main branches of pre-trained language models in the general language domain, i.e. BERT (and its variants) and GPT (and its variants), the first has been extensively studied in the biomedical domain, yielding models such as BioBERT and PubMedBERT. While these models have achieved great success on a variety of discriminative downstream biomedical tasks, their lack of generation ability constrains their application scope. In this paper, we propose BioGPT, a domain-specific generative Transformer language model pre-trained on large-scale biomedical literature. We evaluate BioGPT on six biomedical natural language processing tasks and demonstrate that our model outperforms previous models on most tasks. In particular, we achieve 44.98%, 38.42% and 40.76% F1 scores on the BC5CDR, KD-DTI and DDI end-to-end relation extraction tasks, respectively, and 78.2% accuracy on PubMedQA, setting a new record. Our case study on text generation further demonstrates the advantage of BioGPT in generating fluent descriptions of biomedical terms.
+
+You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
+set a seed for reproducibility:
+
+```python
+>>> from transformers import pipeline, set_seed
+>>> from transformers import BioGptTokenizer, BioGptForCausalLM
+>>> model = BioGptForCausalLM.from_pretrained("microsoft/biogpt")
+>>> tokenizer = BioGptTokenizer.from_pretrained("microsoft/biogpt")
+>>> generator = pipeline('text-generation', model=model, tokenizer=tokenizer)
+>>> set_seed(42)
+>>> generator("COVID-19 is", max_length=20, num_return_sequences=5, do_sample=True)
+[{'generated_text': 'COVID-19 is a disease that spreads worldwide and is currently found in a growing proportion of the population'},
+{'generated_text': 'COVID-19 is one of the largest viral epidemics in the world.'},
+{'generated_text': 'COVID-19 is a common condition affecting an estimated 1.1 million people in the United States alone.'},
+{'generated_text': 'COVID-19 is a pandemic, the incidence has been increased in a manner similar to that in other'},
+{'generated_text': 'COVID-19 is transmitted via droplets, air-borne, or airborne transmission.'}]
+```
+
+Here is how to use this model to get the features of a given text in PyTorch:
+
+```python
+from transformers import BioGptTokenizer, BioGptForCausalLM
+tokenizer = BioGptTokenizer.from_pretrained("microsoft/biogpt")
+model = BioGptForCausalLM.from_pretrained("microsoft/biogpt")
+text = "Replace me with any text you'd like."
+encoded_input = tokenizer(text, return_tensors='pt')
+output = model(**encoded_input)
+```
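Note that the forward pass above returns language-modeling logits; for hidden-state features, a minimal sketch (assuming the last layer's hidden states are what you want) requests them explicitly:

```python
# Sketch: extracting hidden-state features instead of logits.
import torch
from transformers import BioGptTokenizer, BioGptForCausalLM

tokenizer = BioGptTokenizer.from_pretrained("microsoft/biogpt")
model = BioGptForCausalLM.from_pretrained("microsoft/biogpt")

encoded_input = tokenizer("COVID-19 is", return_tensors="pt")
with torch.no_grad():
    output = model(**encoded_input, output_hidden_states=True)

features = output.hidden_states[-1]  # shape: (batch, seq_len, hidden_size=1024)
```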
+
+Beam-search decoding:
+
+```python
+import torch
+from transformers import BioGptTokenizer, BioGptForCausalLM, set_seed
+
+tokenizer = BioGptTokenizer.from_pretrained("microsoft/biogpt")
+model = BioGptForCausalLM.from_pretrained("microsoft/biogpt")
+
+sentence = "COVID-19 is"
+inputs = tokenizer(sentence, return_tensors="pt")
+
+set_seed(42)
+
+with torch.no_grad():
+    beam_output = model.generate(**inputs,
+                                 min_length=100,
+                                 max_length=1024,
+                                 num_beams=5,
+                                 early_stopping=True
+                                 )
+tokenizer.decode(beam_output[0], skip_special_tokens=True)
+'COVID-19 is a global pandemic caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the causative agent of coronavirus disease 2019 (COVID-19), which has spread to more than 200 countries and territories, including the United States (US), Canada, Australia, New Zealand, the United Kingdom (UK), and the United States of America (USA), as of March 11, 2020, with more than 800,000 confirmed cases and more than 800,000 deaths.'
+```
+
+## Citation
+
+If you find BioGPT useful in your research, please cite the following paper:
+
+```latex
+@article{10.1093/bib/bbac409,
+    author = {Luo, Renqian and Sun, Liai and Xia, Yingce and Qin, Tao and Zhang, Sheng and Poon, Hoifung and Liu, Tie-Yan},
+    title = "{BioGPT: generative pre-trained transformer for biomedical text generation and mining}",
+    journal = {Briefings in Bioinformatics},
+    volume = {23},
+    number = {6},
+    year = {2022},
+    month = {09},
+    abstract = "{Pre-trained language models have attracted increasing attention in the biomedical domain, inspired by their great success in the general natural language domain. Among the two main branches of pre-trained language models in the general language domain, i.e. BERT (and its variants) and GPT (and its variants), the first one has been extensively studied in the biomedical domain, such as BioBERT and PubMedBERT. While they have achieved great success on a variety of discriminative downstream biomedical tasks, the lack of generation ability constrains their application scope. In this paper, we propose BioGPT, a domain-specific generative Transformer language model pre-trained on large-scale biomedical literature. We evaluate BioGPT on six biomedical natural language processing tasks and demonstrate that our model outperforms previous models on most tasks. Especially, we get 44.98\%, 38.42\% and 40.76\% F1 score on BC5CDR, KD-DTI and DDI end-to-end relation extraction tasks, respectively, and 78.2\% accuracy on PubMedQA, creating a new record. Our case study on text generation further demonstrates the advantage of BioGPT on biomedical literature to generate fluent descriptions for biomedical terms.}",
+    issn = {1477-4054},
+    doi = {10.1093/bib/bbac409},
+    url = {https://doi.org/10.1093/bib/bbac409},
+    note = {bbac409},
+    eprint = {https://academic.oup.com/bib/article-pdf/23/6/bbac409/47144271/bbac409.pdf},
+}
+```
config.json
ADDED
@@ -0,0 +1,25 @@
+{
+  "activation_dropout": 0.0,
+  "architectures": [
+    "BioGptForCausalLM"
+  ],
+  "attention_probs_dropout_prob": 0.1,
+  "bos_token_id": 0,
+  "eos_token_id": 2,
+  "hidden_act": "gelu",
+  "hidden_dropout_prob": 0.1,
+  "hidden_size": 1024,
+  "initializer_range": 0.02,
+  "intermediate_size": 4096,
+  "layer_norm_eps": 1e-12,
+  "layerdrop": 0.0,
+  "max_position_embeddings": 1024,
+  "model_type": "biogpt",
+  "num_attention_heads": 16,
+  "num_hidden_layers": 24,
+  "pad_token_id": 1,
+  "scale_embedding": true,
+  "transformers_version": "4.25.0.dev0",
+  "use_cache": true,
+  "vocab_size": 42384
+}
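These values map onto transformers' BioGPT configuration class; a minimal sketch for inspecting them without downloading the 1.5 GB weights:

```python
# Sketch: load only the configuration to inspect the architecture.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("microsoft/biogpt")
print(config.model_type)         # "biogpt"
print(config.hidden_size)        # 1024
print(config.num_hidden_layers)  # 24
print(config.vocab_size)         # 42384
```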
merges.txt
ADDED
The diff for this file is too large to render.
pytorch_model.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:11b9d608003652a84595aecee376d5c09156779228639b0ecb147cafb43cd409
+size 1560781537
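The pointer records the SHA-256 digest and byte size of the actual weights. A standard-library sketch (the local filename is an assumption) for verifying a downloaded copy against the pointer:

```python
# Sketch: verify a downloaded pytorch_model.bin against its LFS pointer.
import hashlib
from pathlib import Path

path = Path("pytorch_model.bin")  # hypothetical local copy
expected_oid = "11b9d608003652a84595aecee376d5c09156779228639b0ecb147cafb43cd409"
expected_size = 1560781537

assert path.stat().st_size == expected_size, "size mismatch"

sha256 = hashlib.sha256()
with path.open("rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha256.update(chunk)
assert sha256.hexdigest() == expected_oid, "hash mismatch"
print("weights match the LFS pointer")
```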
vocab.json
ADDED
The diff for this file is too large to render.
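vocab.json and merges.txt above are the BPE vocabulary and merge rules that BioGptTokenizer is built from; a quick sketch showing the connection (the example sentence is arbitrary):

```python
# Sketch: the tokenizer is constructed from vocab.json and merges.txt.
from transformers import BioGptTokenizer

tokenizer = BioGptTokenizer.from_pretrained("microsoft/biogpt")
print(tokenizer.vocab_size)  # 42384, matching "vocab_size" in config.json
print(tokenizer.tokenize("COVID-19 is a pandemic"))
```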