|
--- |
|
license: cc-by-nc-4.0 |
|
tags: |
|
- generated_from_trainer |
|
base_model: lightblue/suzume-llama-3-8B-multilingual |
|
model-index: |
|
- name: workspace/llm_training/axolotl/llama3-multilingual-orpo/output_mitsu_half_borda |
|
results: [] |
|
--- |
|
# Suzume ORPO |
|
|
|
<p align="center"> |
|
<img width=500 src="https://cdn-uploads.huggingface.co/production/uploads/64b63f8ad57e02621dc93c8b/kWQSu02YfgYdUQqv4s5lq.png" alt="Suzume with Mitsu - a Japanese tree sparrow with honey on it"/> |
|
</p> |
|
|
|
[[Paper]](https://arxiv.org/abs/2405.18952) [[Dataset]](https://huggingface.co/datasets/lightblue/mitsu) |
|
|
|
This is Suzume ORPO, a fine-tune of the [lightblue/suzume-llama-3-8B-multilingual](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual) model trained with ORPO (Odds Ratio Preference Optimization) on our [lightblue/mitsu](https://huggingface.co/datasets/lightblue/mitsu) dataset.
|
|
|
We have trained several versions of this model using ORPO, and we recommend using the best-performing model from our tests, [lightblue/suzume-llama-3-8B-multilingual-orpo-borda-half](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual-orpo-borda-half).
|
|
|
Note that this model has a non-commercial license, as we used the Command R and Command R+ models to generate the training data for this model ([lightblue/mitsu](https://huggingface.co/datasets/lightblue/mitsu)).
|
|
|
We are currently working on developing a commercially usable model, so stay tuned for that!
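
As a quick start, below is a minimal inference sketch using the Hugging Face `transformers` library, assuming the tokenizer ships the model's chat template; the generation parameters are illustrative rather than tuned recommendations.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lightblue/suzume-llama-3-8B-multilingual-orpo-borda-half"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Format a conversation with the model's chat template
messages = [{"role": "user", "content": "Hello! Please introduce yourself."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```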
|
|
|
# Model list |
|
|
|
We have ORPO trained the following models using different proportions of the [lightblue/mitsu](https://huggingface.co/datasets/lightblue/mitsu) dataset (a sketch of the consistency-based selection idea follows this list):

* Trained on the top/bottom responses of all prompts in the dataset: [lightblue/suzume-llama-3-8B-multilingual-orpo-borda-full](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual-orpo-borda-full)

* Trained on the top/bottom responses of the 75% of prompts whose responses were most consistently ranked: [lightblue/suzume-llama-3-8B-multilingual-orpo-borda-top75](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual-orpo-borda-top75)

* Trained on the top/bottom responses of the 50% of prompts whose responses were most consistently ranked: [lightblue/suzume-llama-3-8B-multilingual-orpo-borda-half](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual-orpo-borda-half)

* Trained on the top/bottom responses of the 25% of prompts whose responses were most consistently ranked: [lightblue/suzume-llama-3-8B-multilingual-orpo-borda-top25](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual-orpo-borda-top25)
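
To build intuition for the "most consistently ranked" filtering above, here is a hypothetical sketch of consistency-based selection with Borda aggregation. All function names and the data layout are illustrative assumptions, not the actual pipeline; see the [paper](https://arxiv.org/abs/2405.18952) for how the rankings were really produced and filtered.

```python
from collections import Counter
from itertools import combinations

from scipy.stats import kendalltau


def borda_points(ranking):
    # ranking: response ids ordered best-to-worst; the best response
    # receives the most points
    n = len(ranking)
    return {resp: n - i for i, resp in enumerate(ranking)}


def ranking_consistency(rankings):
    # Mean pairwise Kendall's tau across repeated rankings of the same responses
    taus = []
    for a, b in combinations(rankings, 2):
        pos_a = {r: i for i, r in enumerate(a)}
        pos_b = {r: i for i, r in enumerate(b)}
        responses = list(pos_a)
        tau, _ = kendalltau(
            [pos_a[r] for r in responses], [pos_b[r] for r in responses]
        )
        taus.append(tau)
    return sum(taus) / len(taus)


def build_orpo_pairs(prompt_rankings, keep_fraction=0.5):
    # prompt_rankings: {prompt_id: list of repeated rankings of its responses}
    # Keep the fraction of prompts whose repeated rankings agree the most, then
    # take the Borda-best response as "chosen" and Borda-worst as "rejected".
    kept = sorted(
        prompt_rankings,
        key=lambda p: ranking_consistency(prompt_rankings[p]),
        reverse=True,
    )[: int(len(prompt_rankings) * keep_fraction)]
    pairs = {}
    for prompt_id in kept:
        totals = Counter()
        for ranking in prompt_rankings[prompt_id]:
            totals.update(borda_points(ranking))
        ordered = sorted(totals, key=totals.get, reverse=True)
        pairs[prompt_id] = (ordered[0], ordered[-1])  # (chosen, rejected)
    return pairs
```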
|
|
|
# Model results |
|
|
|
We compare MT-Bench scores across six languages for our four ORPO-trained models, as well as several baselines:
|
|
|
* [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) - The foundation model that our models are ultimately built upon |
|
* [Nexusflow/Starling-LM-7B-beta](https://huggingface.co/Nexusflow/Starling-LM-7B-beta) - The highest-performing open model of a similar size to ours on the Chatbot Arena
|
* gpt-3.5-turbo - A fairly high quality (although not state-of-the-art) proprietary LLM |
|
* [lightblue/suzume-llama-3-8B-multilingual](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual) - The base model from which we trained our ORPO fine-tunes
|
|
|
| **MT-Bench language** | **meta-llama/Meta-Llama-3-8B-Instruct** | **Nexusflow/Starling-LM-7B-beta** | **gpt-3.5-turbo** | **lightblue/suzume-llama-3-8B-multilingual** | **lightblue/suzume-llama-3-8B-multilingual-orpo-borda-full** | **lightblue/suzume-llama-3-8B-multilingual-orpo-borda-top75** | **lightblue/suzume-llama-3-8B-multilingual-orpo-borda-half** | **lightblue/suzume-llama-3-8B-multilingual-orpo-borda-top25** | |
|
|-----------------------|-----------------------------------------|-----------------------------------|-------------------|----------------------------------------------|--------------------------------------------------------------|---------------------------------------------------------------|--------------------------------------------------------------|---------------------------------------------------------------| |
|
| **Chinese π¨π³** | NaN | 6.97 | 7.55 | 7.11 | 7.65 | **7.77** | 7.74 | 7.44 | |
|
| **English πΊπΈ** | 7.98 | 7.92 | **8.26** | 7.73 | 7.98 | 7.94 | 7.98 | 8.22 | |
|
| **French π«π·** | NaN | 7.29 | 7.74 | 7.66 | **7.84** | 7.46 | 7.78 | 7.81 | |
|
| **German π©πͺ** | NaN | 6.99 | 7.68 | 7.26 | 7.28 | 7.64 | 7.7 | **7.71** | |
|
| **Japanese π―π΅** | NaN | 6.22 | **7.84** | 6.56 | 7.2 | 7.12 | 7.34 | 7.04 | |
|
| **Russian π·πΊ** | NaN | 8.28 | 7.94 | 8.19 | 8.3 | 8.74 | **8.94** | 8.81 | |
|
|
|
We can see noticeable improvements in most languages compared to the base model. Our ORPO models also achieve the highest score of all evaluated models in several languages.
|
|
|
# Training data |
|
|
|
We trained this model using the [lightblue/mitsu_tophalf_borda](https://huggingface.co/datasets/lightblue/mitsu_tophalf_borda) dataset (the top/bottom responses of the 50% of prompts whose responses were most consistently ranked, as listed above).
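
For exploration, the dataset can be loaded with the `datasets` library. The exact column names depend on the dataset's schema, so the snippet prints a row rather than assuming field names:

```python
from datasets import load_dataset

# Load the ORPO training split; printing a row reveals the actual schema
ds = load_dataset("lightblue/mitsu_tophalf_borda", split="train")
print(ds)
print(ds[0])
```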
|
|
|
# Training configuration |
|
|
|
|
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) |
|
<details><summary>See axolotl config</summary> |
|
|
|
axolotl version: `0.4.0` |
|
```yaml |
|
base_model: lightblue/suzume-llama-3-8B-multilingual |
|
model_type: LlamaForCausalLM |
|
tokenizer_type: AutoTokenizer # PreTrainedTokenizerFast |
|
|
|
load_in_8bit: false |
|
load_in_4bit: false |
|
strict: false |
|
|
|
rl: orpo |
|
orpo_alpha: 0.1 |
|
remove_unused_columns: false |
|
|
|
chat_template: chatml |
|
datasets: |
|
- path: lightblue/mitsu_tophalf_borda |
|
type: orpo.chat_template |
|
conversation: llama-3 |
|
dataset_prepared_path: /workspace/llm_training/axolotl/llama3-multilingual-orpo/prepared_mitsu_half_borda |
|
val_set_size: 0.02 |
|
output_dir: /workspace/llm_training/axolotl/llama3-multilingual-orpo/output_mitsu_half_borda |
|
|
|
sequence_len: 8192 |
|
sample_packing: false |
|
pad_to_sequence_len: true |
|
|
|
use_wandb: true |
|
wandb_project: axolotl |
|
wandb_entity: peterd |
|
wandb_name: mitsu_half_borda |
|
|
|
gradient_accumulation_steps: 8 |
|
micro_batch_size: 1 |
|
num_epochs: 1 |
|
optimizer: paged_adamw_8bit |
|
lr_scheduler: cosine |
|
learning_rate: 8e-6 |
|
|
|
train_on_inputs: false |
|
group_by_length: false |
|
bf16: auto |
|
fp16: |
|
tf32: false |
|
|
|
gradient_checkpointing: true |
|
gradient_checkpointing_kwargs: |
|
use_reentrant: false |
|
early_stopping_patience: |
|
resume_from_checkpoint: |
|
logging_steps: 1 |
|
xformers_attention: |
|
flash_attention: true |
|
|
|
warmup_steps: 10 |
|
evals_per_epoch: 20 |
|
eval_table_size: |
|
saves_per_epoch: 1 |
|
debug: |
|
deepspeed: /workspace/axolotl/deepspeed_configs/zero3_bf16.json |
|
weight_decay: 0.0 |
|
special_tokens: |
|
pad_token: <|end_of_text|> |
|
``` |
|
|
|
</details><br> |
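
For intuition on what the `rl: orpo` / `orpo_alpha: 0.1` settings optimize, here is a minimal sketch of the ORPO objective following Hong et al. (2024). It is an illustration of the loss, not axolotl's exact implementation; the log-probability inputs are assumed to be length-averaged per response.

```python
import torch
import torch.nn.functional as F


def orpo_loss(chosen_logps, rejected_logps, sft_nll_loss, alpha=0.1):
    # chosen_logps / rejected_logps: length-averaged log-probabilities of the
    # preferred and dispreferred responses under the model being trained
    # (one value per pair in the batch).
    # sft_nll_loss: the usual cross-entropy loss on the chosen responses.
    # alpha corresponds to `orpo_alpha` in the config above.

    # Log-odds ratio log[odds(chosen) / odds(rejected)], where
    # odds(y) = p(y) / (1 - p(y)), computed in log space for stability.
    log_odds = (chosen_logps - rejected_logps) - (
        torch.log1p(-torch.exp(chosen_logps))
        - torch.log1p(-torch.exp(rejected_logps))
    )
    # Penalize pairs where the rejected response has higher odds than the chosen one
    odds_ratio_loss = -F.logsigmoid(log_odds)
    return sft_nll_loss + alpha * odds_ratio_loss.mean()
```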
|
|
|
# workspace/llm_training/axolotl/llama3-multilingual-orpo/output_mitsu_half_borda |
|
|
|
This model is a fine-tuned version of [lightblue/suzume-llama-3-8B-multilingual](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual) on the [lightblue/mitsu_tophalf_borda](https://huggingface.co/datasets/lightblue/mitsu_tophalf_borda) dataset.
|
It achieves the following results on the evaluation set: |
|
- Loss: 0.0935 |
|
|
|
## Model description |
|
|
|
This is a multilingual chat model: an ORPO fine-tune of [lightblue/suzume-llama-3-8B-multilingual](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual), which is itself built on Meta's Llama 3 8B. See the introduction at the top of this card for details.
|
|
|
## Intended uses & limitations |
|
|
|
The model is intended for multilingual chat. Note that it carries a non-commercial license (CC BY-NC 4.0), as its preference training data was generated with the Command R and Command R+ models.
|
|
|
## Training and evaluation data |
|
|
|
The model was trained on the [lightblue/mitsu_tophalf_borda](https://huggingface.co/datasets/lightblue/mitsu_tophalf_borda) dataset, with 2% of the data held out for evaluation (`val_set_size: 0.02` in the config above).
|
|
|
## Training procedure |
|
|
|
### Training hyperparameters |
|
|
|
The following hyperparameters were used during training: |
|
- learning_rate: 8e-06 |
|
- train_batch_size: 1 |
|
- eval_batch_size: 1 |
|
- seed: 42 |
|
- distributed_type: multi-GPU |
|
- num_devices: 4 |
|
- gradient_accumulation_steps: 8 |
|
- total_train_batch_size: 32 |
|
- total_eval_batch_size: 4 |
|
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 |
|
- lr_scheduler_type: cosine |
|
- lr_scheduler_warmup_steps: 10 |
|
- num_epochs: 1 |
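
(The total train batch size of 32 is the product of the per-device micro batch size (1), gradient accumulation steps (8), and number of devices (4).)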
|
|
|
### Training results |
|
|
|
| Training Loss | Epoch | Step | Validation Loss | |
|
|:-------------:|:-----:|:----:|:---------------:| |
|
| 7.6299 | 0.02 | 1 | 7.7014 | |
|
| 7.041 | 0.07 | 3 | 3.9786 | |
|
| 0.6089 | 0.15 | 6 | 0.1393 | |
|
| 0.1308 | 0.22 | 9 | 0.1244 | |
|
| 0.1051 | 0.29 | 12 | 0.1112 | |
|
| 0.1021 | 0.36 | 15 | 0.1063 | |
|
| 0.0861 | 0.44 | 18 | 0.1026 | |
|
| 0.1031 | 0.51 | 21 | 0.0979 | |
|
| 0.0996 | 0.58 | 24 | 0.0967 | |
|
| 0.0923 | 0.65 | 27 | 0.0960 | |
|
| 0.1025 | 0.73 | 30 | 0.0944 | |
|
| 0.1103 | 0.8 | 33 | 0.0939 | |
|
| 0.0919 | 0.87 | 36 | 0.0937 | |
|
| 0.104 | 0.94 | 39 | 0.0935 | |
|
|
|
|
|
### Framework versions |
|
|
|
- Transformers 4.38.2 |
|
- Pytorch 2.2.1+cu121 |
|
- Datasets 2.18.0 |
|
- Tokenizers 0.15.0 |
|
|
|
# How to cite |
|
|
|
```tex |
|
@article{devine2024sure, |
|
title={Are You Sure? Rank Them Again: Repeated Ranking For Better Preference Datasets}, |
|
author={Devine, Peter}, |
|
journal={arXiv preprint arXiv:2405.18952}, |
|
year={2024} |
|
} |
|
``` |
|
|
|
# Developer |
|
|
|
Peter Devine - ([ptrdvn](https://huggingface.co/ptrdvn)) |