
MedGPT-Llama3.1-8B-v.1-GGUF

  • This model is a fine-tuned version of unsloth/Meta-Llama-3.1-8B, trained on a dataset created by Valerio Job together with GPs and based on real medical data.
  • Version 1 (v.1) is the very first version of MedGPT; its training dataset was deliberately kept small and simple, with only 60 examples.
  • This repo contains the quantized models in GGUF format. A separate repo, valeriojob/MedGPT-Llama3.1-8B-BA-v.1, contains the default 16-bit version of the model as well as its LoRA adapters.
  • This model was quantized using llama.cpp.
  • This model is available in the following quantization formats (a loading sketch follows the list):
    • BF16
    • Q4_K_M
    • Q5_K_M
    • Q8_0
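
As a minimal sketch (not an official usage guide), one of the quantized files could be loaded with llama-cpp-python, which can pull GGUF files directly from the Hugging Face Hub. The filename pattern below is an assumption; check the repo's file list for the exact names.

```python
# Sketch: load a quantized GGUF file with llama-cpp-python.
# Assumes: pip install llama-cpp-python huggingface-hub
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="valeriojob/MedGPT-Llama3.1-8B-BA-v.1-GGUF",
    filename="*Q4_K_M.gguf",  # assumed pattern; Q5_K_M, Q8_0 or BF16 also listed above
    n_ctx=4096,               # context window; adjust to your hardware
)

out = llm("A patient presents with a persistent cough. ", max_tokens=128)
print(out["choices"][0]["text"])
```

Choosing Q4_K_M trades some quality for the smallest footprint of the listed quants; Q8_0 or BF16 stay closer to the original 16-bit weights at a higher memory cost.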

Model description

This model acts as a supplementary assistant to GPs, helping them with medical and administrative tasks.
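
As a purely hypothetical illustration of such a task (the actual prompt format used during fine-tuning is not documented here, and the prompt wording below is invented), a chat-style call with llama-cpp-python might look like this, reusing the `llm` object from the sketch above:

```python
# Hypothetical admin-style task; assumes a chat template is embedded
# in the GGUF file, which may not hold for this model.
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are an assistant supporting a GP."},
        {"role": "user", "content": "Draft a short referral letter for a "
                                    "patient with suspected appendicitis."},
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```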

Intended uses & limitations

The fine-tuned model should not be used in production! It was created as an initial prototype in the context of a bachelor thesis.

Training and evaluation data

The dataset (train and test splits) used for fine-tuning this model can be found here: datasets/valeriojob/BA-v.1
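
As a minimal sketch, assuming the repo linked above is a standard Hugging Face dataset, it could be inspected like this; the split names are assumptions:

```python
# Sketch: load the fine-tuning dataset from the Hugging Face Hub.
# Assumes: pip install datasets
from datasets import load_dataset

ds = load_dataset("valeriojob/BA-v.1")
print(ds)             # shows the available splits and columns
print(ds["train"][0]) # first training example (split name assumed)
```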

License

  • apache-2.0
Model details

  • Format: GGUF
  • Model size: 8.03B params
  • Architecture: llama
