Model Card for HemonChat

Model Details

Model Description

HemonChat is a fine-tuned version of Llama-2-7b-chat-hf, trained on hematology and oncology knowledge from HemOnc.org's ontology data. The model was created to expand the Llama 2 knowledge base to cover the HemOnc.org ontology.

Model Sources

Repository: https://github.com/PerifanosPrometheus/HemonChat

Uses

This model is intended for non-commercial or research use in English.

To get the expected features and performance of the chat versions, a specific formatting needs to be followed, including the [INST] and <<SYS>> tags, the BOS and EOS tokens, and the whitespace and line breaks in between (we recommend calling strip() on inputs to avoid double spaces). See the parent model meta-llama/Llama-2-7b-chat-hf for more information.
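As a minimal sketch, the model can be loaded with the transformers library and prompted in this format. The system prompt and question below are illustrative placeholders, not part of the original training setup.

```python
# Minimal sketch: load HemonChat and build a Llama-2 chat prompt.
# The system prompt and question are illustrative placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "GiorgioDiSalvo/Llama-2-7b-hemonchat-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

system_prompt = "You are a helpful assistant with knowledge of the HemOnc.org ontology."
user_message = "Which chemotherapy regimens are associated with chronic myeloid leukemia?"

# Llama-2 chat format: [INST]/[/INST] with an optional <<SYS>> block.
# Call strip() on the inputs to avoid double spaces; the tokenizer adds the BOS token itself.
prompt = (
    f"[INST] <<SYS>>\n{system_prompt.strip()}\n<</SYS>>\n\n"
    f"{user_message.strip()} [/INST]"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```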

Out-of-Scope Use

Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.

Do not under any circumstances use this model to provide direct medical advice of any kind.

Bias, Risks, and Limitations

This first version of the model was created mostly for the creator's own educational purposes. For this reason, the model has not been carefully validated and should not be trusted in a production environment. That said, feel free to experiment with it as much as you would like.

Training Details

Training Data

Training data was generated using the code in the repository: https://github.com/PerifanosPrometheus/HemonChat. The training data used is available in the data folder of the repository.
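As a rough sketch, the data can be inspected after cloning the repository; the file name train.jsonl below is a hypothetical placeholder, so check the repository's data folder for the actual file names and format.

```python
# Rough sketch: inspect a JSON Lines training file from the repository's data
# folder with the `datasets` library. "train.jsonl" is a hypothetical placeholder.
# Assumes: git clone https://github.com/PerifanosPrometheus/HemonChat
from datasets import load_dataset

dataset = load_dataset("json", data_files="HemonChat/data/train.jsonl", split="train")
print(dataset[0])
```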

Model Card Authors

Giorgio Di Salvo

Model Card Contact

disalvogiorgio97@gmail.com
