Update README.md
README.md CHANGED
@@ -28,7 +28,7 @@ A girl of peculiar appetites and an even more peculiar imagination lived in a sm
 
-This model is the result of training on a fraction (16M tokens) of the testing data intended for [LLAMA-3_8B_Unaligned's](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned) beta.
+This model is the result of training on a fraction (16M tokens) of the testing data intended for [LLAMA-3_8B_Unaligned's](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned) upcoming beta.
 
 The base model is a merge of merges made by [Invisietch](https://huggingface.co/invisietch), named [EtherealRainbow-v0.3-8B](https://huggingface.co/invisietch/EtherealRainbow-v0.3-8B). This model's name reflects the base used for the finetune while hinting at the darker, more uncensored aspects associated with the nature of the [LLAMA-3_8B_Unaligned](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned) project.
 
 As a result of the unique data added, this model shows exceptional adherence to instructions about paragraph length and to the story-writing prompt. I would like to emphasize: **no ChatGPT / Claude** output was used for any of the additional data added in this finetune. The goal is to eventually have a model with a **minimal amount of slop**; this cannot be reliably done by relying on API models, which pollute datasets with their bias and repetitive words.