add data section
README.md CHANGED
@@ -352,6 +352,16 @@ gen_input = tokenizer.apply_chat_template(message, return_tensors="pt")
model.generate(**gen_input)
```

+# 📊 Datasets used in this model
+
+The datasets used to train this model are listed in the metadata section of the model card.
+
+Please note that certain datasets mentioned in the metadata may have undergone filtering based on various criteria.
+
+The details of this filtering process and its outcomes are documented in the `data` folder of this repository:
+
+[Weyaxi/Einstein-v6.1-Llama3-8B/data](https://huggingface.co/Weyaxi/Einstein-v6.1-Llama3-8B/tree/main/data)
+
# 🔄 Quantizationed versions

## GGUF [@bartowski](https://huggingface.co/bartowski)
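The added section points readers to the model card's YAML metadata for the list of training datasets. As an illustration (not part of this commit), the declared dataset IDs can also be read programmatically with `huggingface_hub`; this is a minimal sketch assuming the repository's metadata includes a `datasets` field:

```python
# Illustration only (not part of this commit): read the dataset IDs declared
# in the model card's YAML metadata using huggingface_hub.
from huggingface_hub import ModelCard

card = ModelCard.load("Weyaxi/Einstein-v6.1-Llama3-8B")

# card.data holds the parsed YAML metadata; `datasets` may be None if not declared.
for dataset_id in card.data.datasets or []:
    print(dataset_id)
```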