# saqr-7b-instruct
This model is a fine-tuned version of tiiuae/falcon-7b on the ultrachat_200k, UltraFeedback, and gsm8k datasets.
## Model description

saqr-7b-instruct was obtained by supervised fine-tuning of tiiuae/falcon-7b on nearly the same datasets as Zephyr-7B-beta.
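Since PEFT is listed among the framework versions below, the repository presumably hosts a PEFT adapter on top of tiiuae/falcon-7b. A minimal inference sketch under that assumption (the prompt and generation settings are illustrative, not taken from this card):

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model_id = "Menouar/saqr-7b-instruct"

# Load the adapter together with the tiiuae/falcon-7b base weights.
# Assumption: the adapter repo also ships the tokenizer files; if not,
# load the tokenizer from "tiiuae/falcon-7b" instead.
model = AutoPeftModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "How many minutes are there in a week?"  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```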
## Training and evaluation data
The evaluation performed during training can be found here.
The evaluation results are available on the Hugging Face Leaderboard here.
## Training procedure

The training procedure can be found here.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 7
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 14
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 5000
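These values map onto `transformers.TrainingArguments` roughly as follows. This is a hedged reconstruction: `output_dir` and anything not listed above are assumptions.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="saqr-7b-instruct",   # assumed output directory
    learning_rate=2e-4,
    per_device_train_batch_size=7,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,   # total train batch size: 7 * 2 = 14
    optim="adamw_torch",             # betas=(0.9, 0.999) and eps=1e-8 are the defaults
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    max_steps=5000,                  # training_steps
)
```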
### Training results

### Framework versions
- PEFT 0.8.2
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.1