---
license: apache-2.0
base_model: NeuralNovel/Gecko-7B-v0.1
library_name: transformers
inference: false
datasets:
  - Intel/orca_dpo_pairs
---

# Gecko

**NeuralNovel/Gecko-7B-v0.1-DPO**

Designed to generate instructive and narrative text, with a focus on mathematics & numeracy.

A DPO fine-tune of NeuralNovel/Gecko-7B-v0.1, itself a full-parameter fine-tune (FFT) of Mistral-7B-Instruct-v0.2, released under the Apache-2.0 license.

You may download and use this model for research, training, and commercial purposes; it is suitable for commercial deployment.
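
A minimal loading-and-generation sketch using the `transformers` library is shown below. The repo id comes from this card; the sampling settings and the example math prompt are illustrative assumptions, not official recommendations. Note that this card sets `inference: false`, so the model is intended to be run locally.

```python
# Minimal sketch, assuming a local GPU with enough memory for a 7B model
# (device_map="auto" also requires the `accelerate` package).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NeuralNovel/Gecko-7B-v0.1-DPO"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Format the prompt with the bundled (Mistral-Instruct-style) chat template.
messages = [
    {"role": "user", "content": "A train travels 120 km in 90 minutes. What is its average speed in km/h?"}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sampling settings here are illustrative assumptions.
output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```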

## Dataset

The model was fine-tuned with DPO on the Intel/orca_dpo_pairs dataset.
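
For reference, here is a short sketch of inspecting the preference pairs with the `datasets` library; the field names (`system`, `question`, `chosen`, `rejected`) are assumed to match the published Intel/orca_dpo_pairs schema.

```python
# Sketch: inspect the DPO preference pairs used for fine-tuning.
from datasets import load_dataset

ds = load_dataset("Intel/orca_dpo_pairs", split="train")
example = ds[0]
print(example["question"])  # the prompt
print(example["chosen"])    # preferred response
print(example["rejected"])  # dispreferred response
```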

## Summary

Fine-tuned to follow prompt directions closely, which makes it better suited to math questions and problem solving.

## Out-of-Scope Use

The model may not perform well outside instructive and narrative text generation; misuse or applications beyond its designed scope may yield suboptimal results.

## Bias, Risks, and Limitations

This model may not always behave as intended; all users are encouraged to use it with caution.

This model is intended for testing and research purposes. It has reduced levels of alignment and may therefore produce NSFW or harmful content; users are responsible for its output and must use the model responsibly.

## Hardware and Training

Trained for 2 hours on a single 80GB A100 using Axolotl.
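
Axolotl was the framework actually used for training. As a rough illustration only, the sketch below shows an equivalent DPO setup using TRL's `DPOTrainer` instead of Axolotl; every hyperparameter is an assumption rather than the actual recipe, and argument names vary slightly across TRL versions.

```python
# Hedged sketch of a DPO run comparable to the one described above.
# This swaps in TRL's DPOTrainer in place of Axolotl; all values are
# illustrative assumptions, not the actual training configuration.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "NeuralNovel/Gecko-7B-v0.1"  # base model named in this card's metadata
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Map the dataset's question/chosen/rejected fields onto the
# prompt/chosen/rejected columns DPOTrainer expects.
ds = load_dataset("Intel/orca_dpo_pairs", split="train")
ds = ds.rename_column("question", "prompt")

args = DPOConfig(
    output_dir="gecko-7b-dpo",
    per_device_train_batch_size=2,  # illustrative
    gradient_accumulation_steps=8,  # illustrative
    learning_rate=5e-7,             # illustrative
    num_train_epochs=1,             # illustrative
    beta=0.1,                       # DPO temperature; illustrative
    bf16=True,
)

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=ds,
    processing_class=tokenizer,  # named `tokenizer=` in older TRL versions
)
trainer.train()
```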

Thank you to h2m for the generous funding.