# models/loras/OpenPipe/ft-development-0b0f52d6-bc53-4443-bbad-4a6103c95501-pii-7b-optimized
This model is a fine-tuned version of OpenPipe/mistral-ft-optimized-1218 (the training dataset is not specified in this card). It achieves the following results on the evaluation set:
- Loss: 0.0174
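
Since the card names only the base model, here is a minimal loading sketch. It assumes this repository hosts a standard PEFT LoRA adapter for the base model; the adapter repo id is taken from the model tree, and the prompt shown is an illustrative guess, not documented behavior.

```python
# Minimal loading sketch, not an official usage example: assumes this repo is
# a standard PEFT LoRA adapter on top of the named base model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "OpenPipe/mistral-ft-optimized-1218"
adapter_id = "OpenPipe/sample-lora-pii-redaction"  # repo id from the model tree

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)

# Attach the LoRA weights to the base model.
model = PeftModel.from_pretrained(base_model, adapter_id)

# Illustrative prompt only; the expected input format is not documented here.
prompt = "Redact all PII: My name is Jane Doe and my phone number is 555-0100."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```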
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (an equivalent `TrainingArguments` sketch follows the list):
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 2
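
As a hedged reconstruction, these settings correspond roughly to the following Hugging Face `TrainingArguments`, assuming the standard `Trainer` API was used; `output_dir` is a placeholder, and any PEFT/LoRA-specific configuration is omitted because the card does not document it.

```python
# Sketch reconstructing the listed hyperparameters as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="models/loras/pii-7b-optimized",  # placeholder path
    learning_rate=2e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    gradient_accumulation_steps=4,  # effective train batch size: 2 * 4 = 8
    adam_beta1=0.9,                 # Adam betas=(0.9, 0.999), epsilon=1e-08
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_steps=10,
    num_train_epochs=2,
)
```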
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.42          | 0.02  | 1    | 0.3989          |
| 0.0258        | 0.21  | 13   | 0.0304          |
| 0.025         | 0.43  | 26   | 0.0220          |
| 0.0146        | 0.64  | 39   | 0.0204          |
| 0.0208        | 0.85  | 52   | 0.0196          |
| 0.0136        | 1.07  | 65   | 0.0187          |
| 0.0148        | 1.28  | 78   | 0.0181          |
| 0.0178        | 1.49  | 91   | 0.0180          |
| 0.0204        | 1.7   | 104  | 0.0175          |
| 0.0128        | 1.92  | 117  | 0.0174          |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.6
- Tokenizers 0.14.1