---
license: other
---

See the LICENSE file for the license. This is a collection of LLaMA models merged with my storytelling LoRAs (trained on the storytelling dataset I used for those LoRAs) and then converted to 4-bit.

UPDATE (04/04): Cleaned the data and retrained with group size 32; weights are now in safetensors format. The formatting oddities seem to have been wiped out.

Format: no special prompt template; paragraphs are separated by ***.
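A minimal sketch of working with that format (the helper names and the assumption that the separator sits on its own line are mine, not from the model card):

```python
# Paragraphs are separated by "***" in the training data.
# Assumption: the separator appears on its own line.
SEPARATOR = "\n***\n"

def join_paragraphs(paragraphs):
    """Join story paragraphs with the *** separator."""
    return SEPARATOR.join(p.strip() for p in paragraphs)

def split_paragraphs(text):
    """Split model output back into paragraphs on the *** separator."""
    return [p.strip() for p in text.split("***") if p.strip()]

story = join_paragraphs(["Once upon a time.", "The end."])
# story == "Once upon a time.\n***\nThe end."
```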