---
base_model: meta-llama/Meta-Llama-3-70B-Instruct
library_name: transformers
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
language:
- en
pipeline_tag: text-generation
license: other
license_name: llama3
license_link: LICENSE
inference: false
model_creator: MaziyarPanahi
model_name: Llama-3-70B-Instruct-32k-v0.1
quantized_by: MaziyarPanahi
---

<img src="./llama-3-merges.webp" alt="Llama-3 DPO Logo" width="500" style="margin-left:auto; margin-right:auto; display:block"/>


# Llama-3-70B-Instruct-32k-v0.1

This is an experimental long-context build of Meta-Llama-3-70B-Instruct, created by raising `rope_theta` to `8M` (8,000,000) to extend the usable context window to 32k tokens.
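The tweak above amounts to a single change in the model's RoPE configuration. A minimal sketch of the idea using `transformers` (the exact values here are assumptions based on the description, not a verified recipe):

```python
from transformers import LlamaConfig

# Sketch of the 32k-context tweak (assumed values): raise rope_theta
# well above Llama-3's default and widen max_position_embeddings.
config = LlamaConfig(
    rope_theta=8_000_000.0,          # "8M", the experimental value
    max_position_embeddings=32_768,  # 32k context window
)

print(config.rope_theta)  # 8000000.0
```

In practice you would load the published checkpoint directly with `AutoModelForCausalLM.from_pretrained(...)`, which already carries this configuration.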


# Quantized models

You can find all GGUF quantized models here: [MaziyarPanahi/Llama-3-70B-Instruct-32k-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3-70B-Instruct-32k-v0.1-GGUF)