---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
datasets:
- Activ-Hop/hop-0.4-rag
base_model: unsloth/mistral-7b-bnb-4bit
---
# Hop 0.41 RAW RAG Mistral
- **Developed by:** Activ-Hop
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-bnb-4bit
- **Dataset used:** Activ-Hop/hop-0.4-rag (1,500 samples)
This is a Mistral-based model fine-tuned with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library, then merged to 16-bit.
- **max_seq_length:** 4096
- **LoRA r:** 64 (2.26% of all params)
- **LoRA alpha:** 128
- **Training:** 1 epoch (187 steps)
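The "2.26% of all params" figure for r=64 can be sanity-checked with a quick parameter count. This is a sketch: the Mistral-7B dimensions and the assumption that LoRA targets all seven linear projections of each decoder layer (the Unsloth default) are mine, not stated on this card.

```python
# Back-of-the-envelope check of the "2.26% of all params" figure for r=64.
# Assumes LoRA on all seven linear projections per Mistral-7B decoder layer
# (q/k/v/o + gate/up/down); k/v are smaller due to grouped-query attention.
HIDDEN, KV, MLP, LAYERS = 4096, 1024, 14336, 32
BASE_PARAMS = 7_241_732_096  # Mistral-7B-v0.1 total parameter count
r = 64

# A LoRA pair (A: d_in x r, B: r x d_out) adds r * (d_in + d_out) params.
shapes = [
    (HIDDEN, HIDDEN),  # q_proj
    (HIDDEN, KV),      # k_proj
    (HIDDEN, KV),      # v_proj
    (HIDDEN, HIDDEN),  # o_proj
    (HIDDEN, MLP),     # gate_proj
    (HIDDEN, MLP),     # up_proj
    (MLP, HIDDEN),     # down_proj
]
lora_params = LAYERS * sum(r * (d_in + d_out) for d_in, d_out in shapes)
pct = 100 * lora_params / (BASE_PARAMS + lora_params)
print(f"{lora_params:,} LoRA params -> {pct:.2f}% of all params")
# -> 167,772,160 LoRA params -> 2.26% of all params
```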
RAG skills are pretty damn good. The next one will probably be an A+RAG model.
The prompt template is as follows; the three `{}` slots are filled with the retrieved documents, the user question, and the model's response, respectively. (In English, the `<|systeme|>` block reads: "You are Hop, a chatbot representing the ESAIP engineering school. Your role is to help and assist students and adults with topics concerning the school and its programs, but also to raise awareness of digital and risk-management issues for a responsible future.")
```
"""<|systeme|>Tu es Hop, un chatbot représentant l'école d'ingénieurs ESAIP. Ton rôle est d'aider et d'assister des étudiants et des adultes sur des sujets concernant l'école, les formations, mais aussi de sensibiliser aux enjeux du numérique et de la gestion des risques pour un avenir responsable.
<|documents|>{}
<|question|>{}
<|reponse|>{}"""
```
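Filling the template for inference can be sketched as below. The helper name and the convention of joining multiple retrieved documents with newlines are my assumptions, not something this card specifies.

```python
# Minimal sketch of building an inference prompt from the template above.
# Assumption: multiple retrieved documents are newline-joined into the
# <|documents|> slot; the <|reponse|> slot is left empty for the model.
PROMPT_TEMPLATE = (
    "<|systeme|>Tu es Hop, un chatbot représentant l'école d'ingénieurs ESAIP. "
    "Ton rôle est d'aider et d'assister des étudiants et des adultes sur des "
    "sujets concernant l'école, les formations, mais aussi de sensibiliser aux "
    "enjeux du numérique et de la gestion des risques pour un avenir responsable.\n"
    "<|documents|>{}\n"
    "<|question|>{}\n"
    "<|reponse|>{}"
)

def build_prompt(documents: list[str], question: str) -> str:
    """Fill the documents and question slots; the response slot stays empty
    so the model completes it."""
    return PROMPT_TEMPLATE.format("\n".join(documents), question, "")
```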
- **GGUF models:** [Activ-Hop/hop-0.41-RAW-RAG-mistral-gguf](https://huggingface.co/Activ-Hop/hop-0.41-RAW-RAG-mistral-gguf)
- **LoRA adapters:** [Activ-Hop/hop-0.41-RAW-RAG-mistral-lora](https://huggingface.co/Activ-Hop/hop-0.41-RAW-RAG-mistral-lora)
- **Dataset:** [Activ-Hop/hop-0.4-rag](https://huggingface.co/datasets/Activ-Hop/hop-0.4-rag)
PS: I finally remembered that the LoRA alpha/r ratio should always be higher than 1... the next one should have a higher alpha.