---
license: cc
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- medical
inference: false
---

# medalpaca-13B-GGML

This repo contains 4-bit, 5-bit and 8-bit quantised GGML format model files for [Medalpaca 13B](https://huggingface.co/medalpaca/medalpaca-13b).

They are the result of quantising to 4-bit, 5-bit and 8-bit GGML for CPU (+CUDA) inference using [llama.cpp](https://github.com/ggerganov/llama.cpp).

## Repositories available

* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/medalpaca-13B-GPTQ-4bit).
* [4-bit, 5-bit and 8-bit GGML models for llama.cpp CPU (+CUDA) inference](https://huggingface.co/TheBloke/medalpaca-13B-GGML).
* [medalpaca's float32 HF format repo for GPU inference and further conversions](https://huggingface.co/medalpaca/medalpaca-13b).

## THE FILES IN THE MAIN BRANCH REQUIRE THE LATEST LLAMA.CPP (May 12th 2023, commit `b9fd7ee`)!

llama.cpp recently made a breaking change to its quantisation methods.

I have quantised the GGML files in this repo with the latest version. Therefore you will require llama.cpp compiled on May 12th or later (commit `b9fd7ee` or later) to use them.
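
If you want to check which format a downloaded file actually uses, the container header can be inspected directly. The sketch below is a minimal example assuming the llama.cpp `ggjt` layout (a little-endian uint32 magic followed by a uint32 file version); the q5_0 filename is used purely as an illustration:

```python
# minimal sketch: read a GGML file's container header to check its version
# assumes the llama.cpp "ggjt" layout: uint32 magic, then uint32 file version
import struct

with open("medalpaca-13B.ggmlv2.q5_0.bin", "rb") as f:
    magic, version = struct.unpack("<II", f.read(8))

print(f"magic=0x{magic:08x} version={version}")
# files quantised with the new method should report file version 2
```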

## Provided files

| Name | Quant method | Bits | Size | RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| `medalpaca-13B.ggmlv2.q4_0.bin` | q4_0 | 4-bit | 8.14GB | 10.5GB | 4-bit. Smallest file size and lowest RAM use, with the lowest accuracy of the methods listed. |
| `medalpaca-13B.ggmlv2.q4_1.bin` | q4_1 | 4-bit | 8.14GB | 10.5GB | 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than the q5 models. |
| `medalpaca-13B.ggmlv2.q5_0.bin` | q5_0 | 5-bit | 8.95GB | 11.0GB | 5-bit. Higher accuracy than 4-bit, with higher resource usage and slower inference. |
| `medalpaca-13B.ggmlv2.q5_1.bin` | q5_1 | 5-bit | 9.76GB | 12.25GB | 5-bit. Even higher accuracy, at the cost of higher resource usage and slower inference. |
| `medalpaca-13B.ggmlv2.q8_0.bin` | q8_0 | 8-bit | 14.6GB | 17GB | 8-bit. Almost indistinguishable from float16. Very high resource use and slow. Not recommended for normal use. |
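
To fetch one of these files programmatically, the `huggingface_hub` library can be used. This is a minimal sketch; the choice of the q5_0 file is just an example:

```python
# minimal sketch: download one of the GGML files from this repo
# assumes: pip install huggingface_hub
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="TheBloke/medalpaca-13B-GGML",
    filename="medalpaca-13B.ggmlv2.q5_0.bin",
)
print(model_path)  # local path to the downloaded file
```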

## How to run in `llama.cpp`

I use the following command line; adjust for your tastes and needs:

```
./main -t 8 -m medalpaca-13B.ggmlv2.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: write a story about llamas ### Response:"
```

Change `-t 8` to the number of physical CPU cores you have.
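
If you are unsure of your physical core count, you can query it from Python; this small sketch assumes the third-party `psutil` package is installed:

```python
# count physical CPU cores (excluding hyperthreads) to pick a value for -t
# assumes: pip install psutil
import psutil

print(psutil.cpu_count(logical=False))
```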

## How to run in `text-generation-webui`

GGML models can be loaded into text-generation-webui by installing the llama.cpp module, then placing the ggml model file in a model folder as usual.

Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).

# Original model card: MedAlpaca 13b

## Table of Contents

[Model Description](#model-description)
- [Architecture](#architecture)
- [Training Data](#training-data)

[Model Usage](#model-usage)

[Limitations](#limitations)

## Model Description

### Architecture

`medalpaca-13b` is a large language model specifically fine-tuned for medical domain tasks. It is based on LLaMA (Large Language Model Meta AI) and contains 13 billion parameters. The primary goal of this model is to improve performance on medical question-answering and dialogue tasks.

### Training Data

The training data for this project was sourced from various resources. Firstly, we used Anki flashcards, automatically generating questions from the front of the cards and answers from the back. Secondly, we generated medical question-answer pairs from [Wikidoc](https://www.wikidoc.org/index.php/Main_Page). We extracted paragraphs with relevant headings and used ChatGPT (GPT-3.5) to generate questions from the headings, using the corresponding paragraphs as answers. This dataset is still under development, and we believe that approximately 70% of these question-answer pairs are factually correct. Thirdly, we used StackExchange to extract question-answer pairs, taking the top-rated questions from five categories: Academia, Bioinformatics, Biology, Fitness, and Health. Additionally, we used a dataset from [ChatDoctor](https://arxiv.org/abs/2303.14070), consisting of 200,000 question-answer pairs, available at https://github.com/Kent0n-Li/ChatDoctor.

| Source | n items |
|------------------------------|---------|
| ChatDoc large | 200,000 |
| Wikidoc | 67,704 |
| StackExchange academia | 40,865 |
| Anki flashcards | 33,955 |
| StackExchange biology | 27,887 |
| StackExchange fitness | 9,833 |
| StackExchange health | 7,721 |
| Wikidoc patient information | 5,942 |
| StackExchange bioinformatics | 5,407 |

## Model Usage

To evaluate the performance of the model on a specific dataset, you can use the Hugging Face Transformers library's built-in evaluation scripts. Please refer to the evaluation guide for more information.

### Inference

You can use the model for inference tasks like question-answering and medical dialogues using the Hugging Face Transformers library. Since medalpaca is a causal language model, question-answering works by prompting the text-generation pipeline. Here's an example of how to use the model for a question-answering task:

```python
from transformers import pipeline

# medalpaca is a causal language model, so load it with the text-generation
# pipeline; the extractive "question-answering" pipeline expects a model
# with a span-prediction head, which this model does not have
qa_pipeline = pipeline("text-generation", model="medalpaca/medalpaca-13b", tokenizer="medalpaca/medalpaca-13b")

question = "What are the symptoms of diabetes?"
context = "Diabetes is a metabolic disease that causes high blood sugar. The symptoms include increased thirst, frequent urination, and unexplained weight loss."

# pose the question by prompting with context, question, and an answer cue
answer = qa_pipeline(f"Context: {context}\n\nQuestion: {question}\n\nAnswer: ")
print(answer)
```
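
Note that loading the full-precision 13B model through `transformers` requires a GPU with substantial VRAM or a large amount of system RAM; for CPU-only inference, the quantised GGML files above are the more practical option.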

## Limitations

The model may not perform effectively outside the scope of the medical domain. The training data primarily targets the knowledge level of medical students, which may result in limitations when addressing the needs of board-certified physicians. The model has not been tested in real-world applications, so its efficacy and accuracy are currently unknown. It should never be used as a substitute for a doctor's opinion and must be treated as a research tool only.