---
language:
  - en
license: apache-2.0
tags:
  - text-generation-inference
  - transformers
  - unsloth
  - mistral
  - trl
base_model: unsloth/mistral-7b-bnb-4bit
---

# Uploaded model

- **Developed by:** priamai
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-bnb-4bit

This Mistral model was fine-tuned on a small set of ORKL examples, trained 2x faster with Unsloth and Hugging Face's TRL library.

Training run summary (Unsloth - 2x faster free finetuning):

- Num GPUs: 1
- Num examples: 6
- Num epochs: 2
- Batch size per device: 2
- Gradient accumulation steps: 4
- Total batch size: 8
- Total steps: 2
- Trainable parameters: 41,943,040
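The figures above are internally consistent; a minimal sketch of the arithmetic, assuming the usual effective-batch-size formula and that a partial final batch still counts as one optimizer step (hence the ceiling):

```python
import math

num_gpus = 1
num_examples = 6
num_epochs = 2
batch_per_device = 2
grad_accum = 4

# Effective (total) batch size = per-device batch x grad-accum steps x GPUs.
total_batch = batch_per_device * grad_accum * num_gpus
print(total_batch)  # 8, matching the log

# With only 6 examples and an effective batch of 8, each epoch is a single
# (partial) optimizer step, so total steps = 1 step/epoch x 2 epochs.
steps_per_epoch = math.ceil(num_examples / total_batch)
total_steps = steps_per_epoch * num_epochs
print(total_steps)  # 2, matching the log
```

With so few examples, each epoch contributes exactly one gradient update, which is why the run finished after only 2 steps.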