---
base_model: meta-llama/Meta-Llama-3-70B-Instruct
library_name: transformers
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
language:
- en
pipeline_tag: text-generation
license: other
license_name: llama3
license_link: LICENSE
inference: false
model_creator: MaziyarPanahi
model_name: Llama-3-70B-Instruct-32k-v0.1
quantized_by: MaziyarPanahi
---

# Llama-3-70B-Instruct-32k-v0.1
This model is an experiment in extending the context window to 32k by setting `rope_theta` to `8M`.
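
As a quick sanity check, you can inspect the RoPE base in the published config and load the model as usual with `transformers`. A minimal sketch, assuming the repo id `MaziyarPanahi/Llama-3-70B-Instruct-32k-v0.1` and enough GPU memory for the 70B weights:

```python
import torch
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

model_id = "MaziyarPanahi/Llama-3-70B-Instruct-32k-v0.1"

# The raised RoPE base should show up directly in the config.
config = AutoConfig.from_pretrained(model_id)
print(config.rope_theta)               # expected to report the increased base (~8e6)
print(config.max_position_embeddings)  # extended context length

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Standard Llama-3 chat usage; the long-context behavior needs no extra flags.
messages = [{"role": "user", "content": "Summarize the following document: ..."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```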
## Quantized models
You can find all GGUF quantized models here: [MaziyarPanahi/Llama-3-70B-Instruct-32k-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3-70B-Instruct-32k-v0.1-GGUF)
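
If you want to pull one of the GGUF files programmatically, a minimal sketch with `huggingface_hub` is shown below; the exact `.gguf` filename is an assumption, so list the repo files first and pick the quantization level you want:

```python
from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "MaziyarPanahi/Llama-3-70B-Instruct-32k-v0.1-GGUF"

# List the available quantization variants in the repo.
for name in list_repo_files(repo_id):
    if name.endswith(".gguf"):
        print(name)

# Download a chosen variant (filename below is illustrative, not guaranteed to exist).
path = hf_hub_download(
    repo_id=repo_id,
    filename="Llama-3-70B-Instruct-32k-v0.1.Q4_K_M.gguf",
)
print(path)
```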