---
license: openrail
---
# PMC_LLaMA
To obtain a foundation model for the medical domain, we propose [MedLLaMA_13B](https://huggingface.co/chaoyi-wu/MedLLaMA_13B) and PMC_LLaMA_13B.
MedLLaMA_13B is initialized from LLaMA-13B and further pretrained on a medical corpus. Despite the expert knowledge it gains, it lacks instruction-following ability.
We therefore construct an instruction-tuning dataset and evaluate the tuned model, PMC_LLaMA_13B.
As shown in the table below, PMC_LLaMA_13B achieves results comparable to ChatGPT on medical QA benchmarks.
![medical_qa](https://pic4.zhimg.com/80/v2-bf43393cd753018e11fdb1c64a1a87df.png)
## Usage
```python
import transformers
import torch

# Load the PMC_LLaMA_13B tokenizer and model from the Hugging Face Hub.
tokenizer = transformers.LlamaTokenizer.from_pretrained('axiong/PMC_LLaMA_13B')
model = transformers.LlamaForCausalLM.from_pretrained('axiong/PMC_LLaMA_13B')

sentence = 'Hello, doctor'
batch = tokenizer(
    sentence,
    return_tensors="pt",
    add_special_tokens=False,
)

# Generate up to 200 tokens with top-k sampling.
with torch.no_grad():
    generated = model.generate(
        inputs=batch["input_ids"],
        max_length=200,
        do_sample=True,
        top_k=50,
    )
print('model predict: ', tokenizer.decode(generated[0]))
```
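Since PMC_LLaMA_13B is instruction-tuned, wrapping a question in an instruction-style prompt is usually more effective than a bare sentence. The exact prompt template used during tuning is not documented in this card, so the template, question text, and generation settings below are illustrative assumptions; loading in half precision with `device_map="auto"` additionally requires the `accelerate` package.
```python
import transformers
import torch

# Half-precision loading to reduce memory for the 13B model
# (assumes a GPU and the `accelerate` package for device_map="auto").
tokenizer = transformers.LlamaTokenizer.from_pretrained('axiong/PMC_LLaMA_13B')
model = transformers.LlamaForCausalLM.from_pretrained(
    'axiong/PMC_LLaMA_13B',
    torch_dtype=torch.float16,
    device_map="auto",
)

# Hypothetical instruction-style prompt; the template actually used
# during instruction tuning is not specified in this card.
prompt = (
    "Instruction: Answer the following medical question concisely.\n\n"
    "Input: What are the common side effects of metformin?\n\n"
    "Response:"
)

batch = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(model.device)
with torch.no_grad():
    generated = model.generate(
        inputs=batch["input_ids"],
        max_new_tokens=150,
        do_sample=True,
        top_k=50,
    )
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```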