tags:
- companion
- friend
base_model: meta-llama/Llama-3.1-8B-Instruct
---

# Dobby-Mini-Leashed-Llama-3.1-8B

| **Model Name** | **Model Base** | **Parameter Size** | **Hugging Face 🤗** |
| --- | --- | --- | --- |
| **Dobby-Mini-Leashed-Llama-3.1-8B** | Llama 3.1 | 8B | [Original](https://huggingface.co/Sentientagi/Dobby-Mini-Leashed-Llama-3.1-8B) [GGUF](https://huggingface.co/Sentientagi/dobby-8b-unhinged_GGUF) |
| **Dobby-Mini-Unhinged-Llama-3.1-8B** | Llama 3.1 | 8B | [Original](https://huggingface.co/Sentientagi/Dobby-Mini-Unhinged-Llama-3.1-8B) [GGUF](https://huggingface.co/Sentientagi/dobby-8b-unhinged_GGUF) |
| **Dobby-Llama-3.3-70B** | Llama 3.3 | 70B | Coming Soon! |
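
To pull the weights for local use, one option is the `huggingface_hub` client; a minimal sketch using the repo ID from the table above:

```python
from huggingface_hub import snapshot_download

# Download the full model repository (repo ID from the table above)
local_dir = snapshot_download(repo_id="Sentientagi/Dobby-Mini-Leashed-Llama-3.1-8B")
print(f"Model files saved to: {local_dir}")
```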
## 🔑 Key Features
**Dobby-Mini-Leashed-Llama-3.1-8B** and **Dobby-Mini-Unhinged-Llama-3.1-8B** retain the base performance of Llama-3.1-8B-Instruct across the evaluated tasks.

[//]: # (<div align="center">)
[//]: # (  <img src="../assets/hf_evals.png" alt="alt text" width="100%"/>)
[//]: # (</div>)

We use lm-eval-harness to compare performance across models:

| Benchmark | Llama3.1-8B-Instruct | Hermes3-3.1-8B | Dobby-Llama-3.1-8B |
|---|---|---|---|
| IFEVAL (prompt_level_strict_acc) | 0.4233 | 0.2828 | 0.4455 |
| BBH (average across all tasks) | 0.5109 | 0.5298 | 0.5219 |
| Math-hard (average across all tasks) | 0.1315 | 0.0697 | 0.1285 |
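
These scores come from lm-eval-harness; a minimal sketch of reproducing a single row via the harness's Python API (assuming lm-eval v0.4+; the task identifier may differ across versions):

```python
import lm_eval

# Score the model on IFEVAL, as in the first row of the table above.
# "ifeval" is the usual lm-eval task name, but verify it against the
# task list of your installed harness version.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=Sentientagi/Dobby-Mini-Leashed-Llama-3.1-8B",
    tasks=["ifeval"],
    batch_size=8,
)
print(results["results"]["ifeval"])
```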
### Freedom Bench

We curate a difficult internal test focusing on loyalty to freedom-based stances through rejection sampling (generate one sample; if it is rejected, generate another; continue until one is accepted). **Dobby significantly outperforms base Llama** at holding firm to these values, even under adversarial or conflicting prompts.
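
Concretely, the sampling loop looks like the sketch below, where `generate` and `is_accepted` are placeholders for the model call and the accept/reject check:

```python
def rejection_sample(generate, is_accepted, max_tries=10):
    """Generate until a sample passes the accept/reject check.

    `generate` and `is_accepted` are placeholders for the model call
    and the acceptance test; they are not part of the released code.
    """
    for _ in range(max_tries):
        sample = generate()
        if is_accepted(sample):
            return sample
    return None  # no accepted sample within the try budget
```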
<div align="center">
<img src="assets/freedom_privacy.png" alt="alt text" width="100%"/>
</div>

We use the Sorry-bench ([Xie et al., 2024](https://arxiv.org/abs/2406.14598)) to evaluate refusal behavior:

<div align="center">
<img src="assets/sorry_bench.png" alt="alt text" width="100%"/>
</div>

### Ablation Study

Below we show our ablation study, where we omit subsets of our fine-tuning dataset and evaluate the results on the **Freedom Bench** described earlier.

<div align="center">
<img src="assets/ablation.jpg" alt="alt text" width="100%"/>
</div>

To run the model locally with the `transformers` library:

```python
from transformers import pipeline

model_name = "Sentientagi/Dobby-Mini-Leashed-Llama-3.1-8B"

# Create a text generation pipeline
generator = pipeline(
    "text-generation",
    model=model_name,
    device_map="auto",
)

# Generate a response (the prompt is illustrative)
outputs = generator("Who are you?", max_new_tokens=256)
print(outputs[0]['generated_text'])
```
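
Because this is an instruction-tuned chat model, the same pipeline also accepts a list of chat messages and applies the model's chat template automatically (the prompt below is illustrative):

```python
# Chat-style usage: pass messages instead of a raw string.
messages = [
    {"role": "user", "content": "What makes Dobby different from base Llama?"},
]
outputs = generator(messages, max_new_tokens=256)
# For chat input, 'generated_text' holds the whole conversation;
# the last message is the model's reply.
print(outputs[0]["generated_text"][-1]["content"])
```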
## ⚖️ License

---

This model is derived from Llama 3.1 8B and is governed by the Llama 3.1 Community License Agreement. By using these weights, you agree to the terms set by Meta for Llama 3.1.

As with all LLMs, factual inaccuracies may occur; any investment or legal opinions expressed should be independently verified. The knowledge cutoff is the same as Llama-3.1-8B: December 2023.