---
license: cc-by-nc-4.0
language:
- en
library_name: transformers
tags:
- RLHF
- Nexusflow
- Athene
- Chat Model
---
# Athene-Llama3-70B-AWQ

- AWQ 4-bit version of [Nexusflow Athene-Llama3-70B](https://huggingface.co/Nexusflow/Athene-70B)
- [Quantization code](https://docs.vllm.ai/en/latest/quantization/auto_awq.html) (a reproduction sketch follows this list)
- **Note:** updated to match the original model as of commit [91ee998](https://huggingface.co/Nexusflow/Athene-70B/commit/91ee998)
- The information below is reproduced from the original model card
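
For reference, the recipe in the linked vLLM/AutoAWQ documentation looks roughly like the sketch below. This is an illustration only: the `quant_config` values are the defaults from those docs, the output directory name is hypothetical, and the exact settings used for this checkpoint are not documented here.

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "Nexusflow/Athene-70B"   # full-precision source model
quant_path = "Athene-Llama3-70B-AWQ"  # hypothetical output directory
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

# Load the full-precision model and its tokenizer.
model = AutoAWQForCausalLM.from_pretrained(model_path, low_cpu_mem_usage=True)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Calibrate and quantize the weights to 4-bit AWQ.
model.quantize(tokenizer, quant_config=quant_config)

# Persist the quantized weights and tokenizer.
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```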

## Athene-Llama3-70B

We introduce Athene-Llama3-70B, an open-weights LLM trained through RLHF on top of Llama-3-70B-Instruct. Athene-70B achieves a high score on Arena-Hard-Auto, a proxy benchmark for Chatbot Arena.

- **Developed by:** The Nexusflow Team (Evan Frick\*, Peter Jin\*, Tianle Li\*, Karthik Ganesan, Jian Zhang, Jiantao Jiao and Banghua Zhu).
- **Model type:** Chat Model
- **Finetuned from model:** [Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct).

Blog: https://nexusflow.ai/blogs/athene

| Model                           | Arena-Hard |
|---------------------------------|------------| 
| Claude-3.5-Sonnet (Proprietary) |      79.3% | 
| GPT-4o (Proprietary)            |      79.2% |  
| **Athene-70B (Open)**           |      77.8% |  
| Gemini-Pro-1.5 (Proprietary)    |      72.0% | 
| Gemma-2-27B (Open)              |      57.0% |  
| Llama-3-70B (Open)              |      46.6% |  

## Usage

Athene-70B uses the same chat template as Llama-3-70B-Instruct. Below is a simple usage example with the Transformers library.

```python
import transformers
import torch

model_id = "Nexusflow/Athene-70B"

# Load the model with bfloat16 weights, sharded across available GPUs.
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are an Athene Noctura, you can only speak with owl sounds. Whoooo whooo."},
    {"role": "user", "content": "Whooo are you?"},
]

# Stop generation at either the tokenizer's EOS token or Llama-3's
# <|end_of_text|> token.
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|end_of_text|>"),
]

outputs = pipeline(
    messages,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
# The pipeline returns the full conversation; the last message is the
# assistant's reply.
print(outputs[0]["generated_text"][-1])
```
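
Note that the example above loads the original full-precision checkpoint. To run this AWQ version instead, a minimal sketch with vLLM (the library behind the quantization docs linked above) might look like the following. The local model path is a placeholder for wherever this repository is downloaded, and a 4-bit 70B model still needs on the order of 40 GB of GPU memory, so multi-GPU hosts may also need `tensor_parallel_size`.

```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

# Placeholder path: point this at the downloaded AWQ checkpoint.
model_path = "./Athene-Llama3-70B-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_path)
llm = LLM(model=model_path, quantization="awq")

messages = [
    {"role": "user", "content": "Whooo are you?"},
]
# Athene uses the Llama-3 chat template, so apply it via the tokenizer.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

sampling_params = SamplingParams(temperature=0.6, top_p=0.9, max_tokens=256)
outputs = llm.generate([prompt], sampling_params)
print(outputs[0].outputs[0].text)
```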

## Acknowledgment

We would like to thank the [LMSYS Organization](https://lmsys.org/) for their support with the online demo and private testing. We would also like to thank Meta AI and the open-source community for their efforts in providing the datasets and base models.

## Citation

```bibtex
@misc{Athene2024,
    title = {Athene-70B: Redefining the Boundaries of Post-Training for Open Models},
    url = {https://nexusflow.ai/blogs/athene},
    author = {Frick, Evan and Jin, Peter and Li, Tianle and Ganesan, Karthik and Zhang, Jian and Jiao, Jiantao and Zhu, Banghua},    
    month = {July},
    year = {2024}
}
```