---
base_model: Undi95/Meta-Llama-3-8B-hf
license: other
license_name: llama3
license_link: LICENSE
library_name: transformers
tags:
- 4-bit
- AWQ
- text-generation
- autotrain_compatible
- endpoints_compatible
- facebook
- meta
- pytorch
- llama
- llama-3
pipeline_tag: text-generation
inference: false
quantized_by: Suparious
---
# Undi95/Meta-Llama-3-8B-hf AWQ

- Original model: [Meta-Llama-3-8B-hf](https://huggingface.co/Undi95/Meta-Llama-3-8B-hf)
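This repository stores the weights in 4-bit AWQ format. As a rough illustration of why 4-bit storage roughly halves memory versus 8-bit (this is only a sketch of nibble packing, not AWQ's actual quantization kernel or scaling scheme):

```python
# Illustration only: two 4-bit integers (values 0..15) fit in one byte,
# so a 4-bit weight format needs half the bytes of an 8-bit one.
def pack_nibbles(values):
    """Pack a list of 4-bit integers (0..15) into bytes, two per byte."""
    assert len(values) % 2 == 0 and all(0 <= v < 16 for v in values)
    return bytes((hi << 4) | lo for hi, lo in zip(values[::2], values[1::2]))

def unpack_nibbles(packed):
    """Inverse of pack_nibbles: recover the original 4-bit values."""
    out = []
    for b in packed:
        out.append(b >> 4)    # high nibble
        out.append(b & 0x0F)  # low nibble
    return out

weights = [3, 15, 0, 7, 9, 1]
packed = pack_nibbles(weights)
assert len(packed) == len(weights) // 2   # half the storage
assert unpack_nibbles(packed) == weights  # lossless round-trip
```

Real AWQ additionally keeps per-group scales chosen from activation statistics, which is what preserves accuracy at 4 bits.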

## Model Summary

Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8B and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.

**Model developers** Meta

**Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants.

**Input** Models input text only.

**Output** Models generate text and code only.

**Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
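"Auto-regressive" means each token is generated conditioned only on the tokens before it. A toy sketch of that decoding loop (the vocabulary and transition table below are invented for illustration; a real LLM replaces the lookup with a transformer forward pass):

```python
# Toy auto-regressive generation: every step looks at the prefix
# produced so far and appends one more token.
# NEXT is a made-up "model" for illustration only.
NEXT = {
    "the": "cat",
    "cat": "sat",
    "sat": "down",
}

def generate(prompt_tokens, max_new_tokens=3):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        nxt = NEXT.get(tokens[-1])  # condition on the current prefix
        if nxt is None:             # no continuation: stop, like an EOS token
            break
        tokens.append(nxt)
    return tokens

print(generate(["the"]))  # ['the', 'cat', 'sat', 'down']
```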