mistral-v0.1-7b-pippa-metharme-lora
This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the PIPPA dataset. It achieves the following results on the evaluation set:
- Loss: 1.3494
Model description
An 8-bit LoRA trained on the PygmalionAI/PIPPA dataset using axolotl.
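Since this is a LoRA adapter rather than a full model, it must be applied on top of the base model. A minimal loading sketch, assuming transformers, peft, and bitsandbytes are installed; 8-bit loading mirrors how the adapter was trained but is optional at inference time:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

BASE = "mistralai/Mistral-7B-v0.1"
ADAPTER = "Doctor-Shotgun/mistral-v0.1-7b-pippa-metharme-lora"

tokenizer = AutoTokenizer.from_pretrained(BASE)

# Load the base model in 8-bit (requires bitsandbytes), then attach the LoRA adapter.
base_model = AutoModelForCausalLM.from_pretrained(
    BASE,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, ADAPTER)
model.eval()
```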
Intended uses & limitations
PIPPA consists of just over 1 million lines of dialogue spread across roughly 26,000 conversations between users of the popular chatbot website "Character.AI" and its underlying large language model, gathered through a large community effort spanning several months. Over 1,000 unique personas, simulating both real and fictional characters, are represented in the dataset, allowing PIPPA, and LLMs fine-tuned on it, to adapt to many different roleplay domains.
⚠️ CAUTION: PIPPA contains conversations, themes, and scenarios which can be considered "not safe for work" (NSFW) and/or heavily disturbing in nature. Models trained purely on PIPPA may tend to generate X-rated output. You have been warned.
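The "metharme" in the model name indicates the Metharme (Pygmalion) prompt format, which delimits turns with <|system|>, <|user|>, and <|model|> tokens. A hedged generation sketch continuing from the loading code above; the exact template is inferred from the model name, so verify it against the adapter repository before relying on it:

```python
import torch

# Metharme-style prompt: persona/instructions go in <|system|>, the user's
# message in <|user|>, and the model completes after <|model|>.
prompt = (
    "<|system|>Enter roleplay mode. You are a helpful fictional character."
    "<|user|>Hi there! Who are you?"
    "<|model|>"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)

# Strip the prompt tokens and print only the generated reply.
reply = tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
print(reply)
```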
Training and evaluation data
The model was trained and evaluated on the PIPPA dataset (PygmalionAI/PIPPA):

```bibtex
@misc{gosling2023pippa,
      title={PIPPA: A Partially Synthetic Conversational Dataset},
      author={Tear Gosling and Alpin Dale and Yinhe Zheng},
      year={2023},
      eprint={2308.05884},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
Training procedure
Training hyperparameters
The following hyperparameters were used during training (a plain-PyTorch sketch of this setup follows the list):
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 40
- num_epochs: 3
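For reference, a sketch of the equivalent optimizer and scheduler setup in plain PyTorch/transformers. The total step count (~5,400) is inferred from the final step in the training-results table below, not a value stated on this card:

```python
import torch
from transformers import get_cosine_schedule_with_warmup

# `model` as loaded in the earlier sketch; hyperparameters copied from the list above.
optimizer = torch.optim.Adam(
    model.parameters(), lr=1e-4, betas=(0.9, 0.999), eps=1e-8
)

# Cosine decay with 40 warmup steps; num_training_steps is an inferred estimate.
scheduler = get_cosine_schedule_with_warmup(
    optimizer, num_warmup_steps=40, num_training_steps=5400
)
```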
Training results
Training Loss | Epoch | Step | Validation Loss |
---|---|---|---|
1.7313 | 0.05 | 100 | 1.7044 |
1.68 | 0.11 | 200 | 1.6176 |
1.5642 | 0.16 | 300 | 1.5538 |
1.6617 | 0.22 | 400 | 1.4986 |
1.4733 | 0.27 | 500 | 1.4723 |
1.4916 | 0.33 | 600 | 1.4427 |
1.5036 | 0.38 | 700 | 1.4271 |
1.2385 | 0.44 | 800 | 1.4109 |
1.4094 | 0.49 | 900 | 1.3968 |
1.4042 | 0.55 | 1000 | 1.3848 |
1.3946 | 0.6 | 1100 | 1.3771 |
1.2523 | 0.66 | 1200 | 1.3692 |
1.2932 | 0.71 | 1300 | 1.3648 |
1.346 | 0.77 | 1400 | 1.3609 |
1.1163 | 0.82 | 1500 | 1.3565 |
1.4656 | 0.88 | 1600 | 1.3495 |
1.2698 | 0.93 | 1700 | 1.3484 |
1.2019 | 0.99 | 1800 | 1.3454 |
1.3685 | 1.04 | 1900 | 1.3477 |
1.2248 | 1.1 | 2000 | 1.3488 |
1.2162 | 1.15 | 2100 | 1.3479 |
1.0443 | 1.21 | 2200 | 1.3491 |
1.2445 | 1.26 | 2300 | 1.3460 |
1.3229 | 1.32 | 2400 | 1.3476 |
1.3464 | 1.37 | 2500 | 1.3439 |
1.2651 | 1.43 | 2600 | 1.3439 |
1.516 | 1.48 | 2700 | 1.3424 |
1.4323 | 1.54 | 2800 | 1.3413 |
1.08 | 1.59 | 2900 | 1.3436 |
1.289 | 1.64 | 3000 | 1.3379 |
1.1221 | 1.7 | 3100 | 1.3384 |
1.1895 | 1.75 | 3200 | 1.3376 |
1.3138 | 1.81 | 3300 | 1.3358 |
1.3907 | 1.86 | 3400 | 1.3343 |
1.4544 | 1.92 | 3500 | 1.3351 |
1.25 | 1.97 | 3600 | 1.3334 |
1.2682 | 2.03 | 3700 | 1.3452 |
1.3107 | 2.08 | 3800 | 1.3471 |
1.2096 | 2.14 | 3900 | 1.3496 |
1.4503 | 2.19 | 4000 | 1.3503 |
1.142 | 2.25 | 4100 | 1.3485 |
0.8439 | 2.3 | 4200 | 1.3490 |
1.2749 | 2.36 | 4300 | 1.3508 |
0.9578 | 2.41 | 4400 | 1.3502 |
1.2203 | 2.47 | 4500 | 1.3496 |
0.9451 | 2.52 | 4600 | 1.3498 |
0.9602 | 2.58 | 4700 | 1.3491 |
0.9501 | 2.63 | 4800 | 1.3491 |
1.2062 | 2.69 | 4900 | 1.3496 |
1.1728 | 2.74 | 5000 | 1.3491 |
1.2506 | 2.8 | 5100 | 1.3494 |
1.4052 | 2.85 | 5200 | 1.3494 |
1.2012 | 2.91 | 5300 | 1.3494 |
1.3141 | 2.96 | 5400 | 1.3494 |
Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.0