---
library_name: transformers
license: apache-2.0
datasets:
- Locutusque/hercules-v4.5
language:
- en
inference:
parameters:
do_sample: true
temperature: 1
top_p: 0.7
top_k: 4
max_new_tokens: 250
repetition_penalty: 1.1
---
# Hercules-phi-2
We fine-tuned Microsoft's phi-2 on Locutusque's Hercules-v4.5.
## Model Details
### Model Description
This model has capabilities in math, coding, function calling, roleplay, and more. We fine-tuned it on every example in Hercules-v4.5.
- **Developed by:** M4-ai
- **Language(s) (NLP):** English
- **License:** apache-2.0
## Uses
General-purpose assistant, question answering, chain-of-thought reasoning, etc.
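The snippet below is a minimal sketch of running the model with the sampling parameters from this card's `inference` block. The repository id `M4-ai/Hercules-phi-2` and the plain-string prompt are assumptions, not confirmed by this card.

```python
from transformers import pipeline

# Assumed repository id; replace with the actual model path if it differs.
generator = pipeline("text-generation", model="M4-ai/Hercules-phi-2")

# Sampling parameters taken from the `inference` block above.
output = generator(
    "Write a Python function that reverses a string.",
    do_sample=True,
    temperature=1.0,
    top_p=0.7,
    top_k=4,
    max_new_tokens=250,
    repetition_penalty=1.1,
)
print(output[0]["generated_text"])
```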
## Evaluation
Coming soon
## Training Details
### Training Data
[Locutusque/hercules-v4.5](https://huggingface.co/datasets/Locutusque/hercules-v4.5)
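As a quick sanity check, the training data can be pulled directly from the Hub; the `train` split name is an assumption based on the usual layout of Hub datasets.

```python
from datasets import load_dataset

# Load the fine-tuning data; the split name "train" is assumed.
hercules = load_dataset("Locutusque/hercules-v4.5", split="train")
print(hercules)     # features and number of examples
print(hercules[0])  # inspect one training example
```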
### Training Hyperparameters
- **Training regime:** bf16 non-mixed precision
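
Since training used bf16 throughout (not bf16 mixed precision), loading the weights in bfloat16 at inference time matches the training numerics. This is a sketch, again assuming the `M4-ai/Hercules-phi-2` repository id.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# bfloat16 matches the non-mixed bf16 training regime described above.
model = AutoModelForCausalLM.from_pretrained(
    "M4-ai/Hercules-phi-2",  # assumed repository id
    torch_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained("M4-ai/Hercules-phi-2")
```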
## Technical Specifications
### Hardware
We used 8 Kaggle TPUs and trained at a global batch size of 1152.
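(With 8 devices, a global batch size of 1152 works out to 144 examples per device per optimizer step, before any gradient accumulation; per-device settings are not specified here.)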