---
license: llama2
library_name: transformers
pipeline_tag: text-generation
tags:
- llama
---
Link to GGUF version: [GGUF](https://huggingface.co/androlike/TerraMix_L2_13B_16K_GGUF)
Thanks to everyone who fine-tuned the base Llama 2 model, made (Q)LoRAs, or created merge scripts: [ties-merge](https://github.com/cg123/ties-merge), [BlockMerge_Gradient](https://github.com/Gryphe/BlockMerge_Gradient), [zaraki-tools](https://github.com/zarakiquemparte/zaraki-tools)
#### Model details:
An experiment in merging models with an extended (16K) context length.
Use these RoPE scaling settings:
```
# llama.cpp
--rope-freq-base 10000 --rope-freq-scale 0.25 -c 16384
```
```
# koboldcpp
--ropeconfig 0.25 10000 --contextsize 16384
```
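For `transformers`, the same scaling can be requested at load time. This is a minimal sketch, assuming a Llama 2 era `transformers` release; a linear `rope_scaling` factor of 4.0 corresponds to the 0.25 frequency scale above (4096 × 4 = 16384):
```
# Minimal sketch: 4x linear RoPE scaling in transformers.
# A linear factor of 4.0 is the equivalent of --rope-freq-scale 0.25.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "androlike/TerraMix_L2_13B_16K"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    rope_scaling={"type": "linear", "factor": 4.0},  # 16384 / 4096 = 4
)
```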
You can use various instruct formats:
### Alpaca instruct format (Recommended):
```
### Instruction:
(your instruct prompt is here)
### Response:
```
### Vicuna 1.1 instruct format:
```
You are a helpful AI assistant.
USER: <prompt>
ASSISTANT:
```
### Metharme instruct format:
```
<|system|> (your instruct prompt)
<|user|> (user's reply)<|model|> (for model's output)
```
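As an illustration, here is a minimal generation sketch using the recommended Alpaca format; the instruction text and sampling parameters are placeholders, not tuned recommendations:
```
# Minimal sketch: generating with the Alpaca instruct format.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "androlike/TerraMix_L2_13B_16K"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = (
    "### Instruction:\n"
    "Write a short scene set on an abandoned space station.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs, max_new_tokens=256, do_sample=True, temperature=0.8
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(
    output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
))
```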
#### Models used for the merge:
### Part1:
[Airoboros L2 13B 2.1](https://huggingface.co/jondurbin/airoboros-l2-13b-2.1) + [LLAMA2 13B - Holodeck](https://huggingface.co/KoboldAI/LLAMA2-13B-Holodeck-1), merged with the Creative and Reasoning adapters from [Airoboros LMoE 13B 2.1](https://huggingface.co/jondurbin/airoboros-lmoe-13b-2.1)
### Part2:
- [Chronos 13B V2](https://huggingface.co/elinas/chronos-13b-v2) merged with [Kimiko-v2-13B](https://huggingface.co/nRuaif/Kimiko-v2-13B)
- [Nous-Hermes-Llama2-13b](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b) merged with [limarp-llama2](https://huggingface.co/lemonilia/limarp-llama2) and [limarp-llama2-v2](https://huggingface.co/lemonilia/limarp-llama2-v2)
- [Synthia-13B](https://huggingface.co/migtissera/Synthia-13B) merged with [BluemoonRP-L2-13B](https://huggingface.co/nRuaif/BluemoonRP-L2-13B-This-time-will-be-better) and [LLama-2-13b-chat-erp-lora-mk2](https://huggingface.co/PocketDoc/DansHalfbakedAdapters)
- [WizardLM-1.0-Uncensored-Llama2-13b](https://huggingface.co/ehartford/WizardLM-1.0-Uncensored-Llama2-13b) merged with [Llama-2-13B-Storywriter-LORA](https://huggingface.co/Blackroot/Llama-2-13B-Storywriter-LORA)
### Part3:
[Speechless Llama2 13B](https://huggingface.co/uukuguy/speechless-llama2-13b) + [Redmond Puffin 13B](https://huggingface.co/NousResearch/Redmond-Puffin-13B)
### Part4:
[Tsukasa 13B 16K](https://huggingface.co/ludis/tsukasa-13b-16k) (repository has since been deleted) + [EverythingLM-13b-V2-16k](https://huggingface.co/totally-not-an-llm/EverythingLM-13b-V2-16k)
### Part5:
TerraMix_L2_13B (base) was merged with [PIPPA ShareGPT Subset QLoRa 13B](https://huggingface.co/zarakiquemparte/PIPPA-ShareGPT-Subset-QLora-13b)
### Part6:
Three parts were merged into one; TsuryLM-L2-16K was then merged with TerraMix_L2_13B (base).
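The actual merging was done with the scripts linked above; as a rough illustration of the underlying idea, a plain weighted average of two checkpoints looks like the sketch below. The model names and the 0.5 ratio are placeholders, not the values used for TerraMix:
```
# Rough illustration of weight-space merging (linear interpolation).
# NOT the exact TerraMix recipe; ties-merge and BlockMerge_Gradient
# implement more sophisticated, per-tensor strategies.
import torch
from transformers import AutoModelForCausalLM

model_a = AutoModelForCausalLM.from_pretrained("model-a", torch_dtype=torch.float16)
model_b = AutoModelForCausalLM.from_pretrained("model-b", torch_dtype=torch.float16)

ratio = 0.5  # placeholder blend weight for model_a
state_a = model_a.state_dict()
state_b = model_b.state_dict()
merged = {name: ratio * state_a[name] + (1.0 - ratio) * state_b[name]
          for name in state_a}

model_a.load_state_dict(merged)  # reuse model_a's architecture for the result
model_a.save_pretrained("merged-model")
```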
The model is intended for creative purposes (roleplay). It can occasionally break formatting or misunderstand small details of the situation at hand.
That said, the model is almost free of alignment: it generates direct output, is moderate at prose, and is good at internet-RP style.
### Limitations and risks
Llama 2 and its derivatives (finetunes) are licensed under the Llama 2 Community License; individual finetunes and (Q)LoRAs may carry additional licenses depending on the datasets used to train them. This merge can generate heavily biased output that is not suitable for minors or a general audience.