peter-jin-nexusflow committed on
Commit f0a9df6
1 Parent(s): 65b6bc1

Update README.md

Files changed (1)
  1. README.md +3 -5
README.md CHANGED
@@ -13,14 +13,10 @@ tags:
 
 We introduce Athene-Llama3-70B, an open-weights LLM trained through RLHF based on Llama-3-70B-Instruct. Athene-70B achieves a high score on Arena-Hard-Auto, a proxy benchmark for Chatbot Arena.
 
- <!-- Provide a quick summary of what the model is/does. -->
-
 - **Developed by:** The Nexusflow Team (Evan Frick\*, Peter Jin\*, Tianle Li\*, Karthik Ganesan, Jian Zhang, Jiantao Jiao and Banghua Zhu).
 - **Model type:** Chat Model
 - **Finetuned from model:** [Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct).
 
-
-
 Blog: https://nexusflow.ai/blogs/athene
 
 | Model | Arena-Hard |
@@ -33,6 +29,7 @@ Blog: https://nexusflow.ai/blogs/athene
 | Llama-3-70B (Open) | 46.6% |
 
 ## Usage
+
 Athene-70B uses the same chat template as Llama-3-70B-Instruct. Below is a simple usage example using the Transformers library.
 
 ```Python
@@ -67,13 +64,14 @@ outputs = pipeline(
     top_p=0.9,
 )
 print(outputs[0]["generated_text"][-1])
-
 ```
 
 ## Acknowledgment
+
 We would like to thank the [LMSYS Organization](https://lmsys.org/) for their support of the online demo and private testing. We would like to thank Meta AI and the open source community for their efforts in providing the datasets and base models.
 
 ## Citation
+
 ```
 @misc{Athene2024,
   title = {Athene-70B: Redefining the Boundaries of Post-Training for Open Models},
 
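Note on the Usage snippet: the Python example in the README is split across the diff hunks above, so only its last few lines (`top_p=0.9,`, the closing `)`, and the final `print`) are visible. Below is a minimal, self-contained sketch of what a complete example might look like, following the standard Transformers text-generation pipeline pattern for Llama-3-style chat models. The model ID `Nexusflow/Athene-70B`, the example `messages`, and every sampling setting other than `top_p=0.9` are assumptions for illustration, not taken from the diff.

```Python
# Minimal sketch, not the README's exact snippet. Assumptions: the model is
# published as "Nexusflow/Athene-70B" and a recent transformers release is
# installed (one that accepts chat-style message lists in the pipeline).
import torch
import transformers

model_id = "Nexusflow/Athene-70B"  # assumed repo name

# Build a text-generation pipeline; bfloat16 + device_map="auto" spreads the
# 70B weights across the available GPUs.
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

# Athene-70B shares Llama-3-70B-Instruct's chat template, so plain
# role/content messages are formatted automatically by the tokenizer.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who are you?"},
]

# Stop on either the regular EOS token or Llama-3's end-of-turn token.
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

outputs = pipeline(
    messages,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,   # assumed; only top_p=0.9 is visible in the diff
    top_p=0.9,
)
print(outputs[0]["generated_text"][-1])
```

The final `print` matches the fragment visible in the diff: with chat-style input, `generated_text` is the full message list, so `[-1]` selects the assistant's reply.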