---
license: llama3
---

# Alpesteibock-Llama-3-8B-Alpha

Alpesteibock-Llama-3-8B-Alpha is an experimental QLoRA fine-tune of [NousResearch/Hermes-2-Pro-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B) on a dataset of more than 28 million tokens of Swiss German text from multiple sources.
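
A minimal usage sketch with `transformers` follows. The repository id and the chat template are assumptions: the card does not state how the weights are published, and the ChatML format is inherited from the Hermes-2-Pro base model.

```python
# Hypothetical usage sketch, not from the model card: assumes the fine-tuned
# weights are published under this repo id and that the base model's ChatML
# chat template is preserved.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "kaizuberbuehler/Alpesteibock-Llama-3-8B-Alpha"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto"
)

# Build a ChatML prompt via the tokenizer's chat template.
messages = [{"role": "user", "content": "Verzell mer en churzi Gschicht."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```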

## Dataset

## Training Details

- **Hardware:** 1x RTX 4090
- **Duration:** 30 hours total (2 hours for the first phase, 28 hours for the second)

### Hyperparameters

- **Adapter:** QLoRA
- **Precision:** 4-bit
- **Optimizer:** adamw_bnb_8bit
- **LoRA Rank:** 256
- **LoRA Alpha:** 256
- **Learning Rate:** 1e-5
- **Context Length:** 4096 tokens
- **Batch Size:** 1
- **Gradient Accumulation Steps:** 1
- **Sample Packing:** Off for the first phase, on for the second phase (see the sketch after this list)
- **Epochs:** 2
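
The card does not state which training framework was used. The sketch below reconstructs the second-phase run from the hyperparameters above using `peft` and `trl`; the dataset path, output directory, and quantization settings beyond "4-bit" are placeholders, and `trl` argument names (e.g. `max_seq_length`) vary between versions.

```python
# Hypothetical reconstruction of the second-phase QLoRA run from the listed
# hyperparameters; the actual training framework and dataset are not stated
# on the card.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from trl import SFTConfig, SFTTrainer

base = "NousResearch/Hermes-2-Pro-Llama-3-8B"

# QLoRA: load the base model in 4-bit precision.
model = AutoModelForCausalLM.from_pretrained(
    base,
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",          # assumed; card only says "4 bit"
        bnb_4bit_compute_dtype=torch.bfloat16,
    ),
    device_map="auto",
)

# Target modules and LoRA dropout are not listed on the card; peft defaults apply.
peft_config = LoraConfig(r=256, lora_alpha=256, task_type="CAUSAL_LM")

args = SFTConfig(
    output_dir="alpesteibock-qlora",        # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=1,
    num_train_epochs=2,
    optim="adamw_bnb_8bit",
    max_seq_length=4096,
    packing=True,                           # off in the first phase, on in the second
    bf16=True,                              # assumed compute precision
)

train_dataset = load_dataset(
    "json", data_files="swiss_german_corpus.jsonl", split="train"  # placeholder
)

trainer = SFTTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    peft_config=peft_config,
)
trainer.train()
```

For the first phase, `packing` would be set to `False`; everything else in the sketch stays the same.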

## Limitations