This repo is a clone of mattshumer/Llama-3-8B-16K
This is an extended-context (16K) version of LLaMA 3, trained for five hours on 8x A6000 GPUs using the Yukang/LongAlpaca-16k-length dataset. `rope_theta` was set to 1000000.0. Trained with Axolotl.
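Raising `rope_theta` slows the rotary position embedding (RoPE) frequencies so that positions across a longer context remain distinguishable. A minimal sketch of the effect, assuming Llama 3's released default base of 500000.0 and a 128-dimensional attention head (both assumptions, not stated in this card):

```python
import math

def rope_inv_freqs(theta: float, head_dim: int = 128):
    """Inverse frequencies for rotary position embeddings (RoPE).

    Each dimension pair rotates at rate theta**(-2i/head_dim); a larger
    theta slows the rotations, stretching the usable position range.
    """
    return [theta ** (-2 * i / head_dim) for i in range(head_dim // 2)]

# Assumed Llama 3 default base vs. the raised base used for this model
default = rope_inv_freqs(500000.0)
extended = rope_inv_freqs(1000000.0)

# Wavelength (2*pi / freq) of the slowest-rotating dimension pair:
# doubling theta roughly doubles it, covering a longer context.
print(2 * math.pi / default[-1])
print(2 * math.pi / extended[-1])
```

The first dimension pair always rotates at frequency 1.0 regardless of `theta`; only the lower-frequency pairs stretch, which is why raising the base extends context with little disruption to short-range attention.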