ymcki committed
Commit c9be736
1 Parent(s): aed2a90
README.md CHANGED
@@ -3,8 +3,7 @@ base_model: google/gemma-2-2b-jpn-it
  language:
  - multilingual
  datasets:
- - mlabonne/harmless_alpaca
- - mlabonne/harmful_behaviors
+ - mlabonne/orpo-dpo-mix-40k
  library_name: transformers
  license: gemma
  license_link: https://ai.google.dev/gemma/terms
@@ -38,8 +37,8 @@ Since [gemma-2-2b-jpn-it-abliterated-18](https://huggingface.co/ymcki/gemma-2-2b-j
 
  Using the [gemma-2-2b base model](https://huggingface.co/google/gemma-2-2b), I employed the ORPO method described by [mlabonne](https://towardsdatascience.com/fine-tune-llama-3-with-orpo-56cfab2f9ada), but the input model was loaded into VRAM by [unsloth](https://github.com/unslothai/unsloth) so that training on the full 40k dataset could run on a single 3090.
 
- Five epochs were run. The smallest eval_loss was achieved at epoch 4.96.
- The checkpoint at epoch 4.96 was used to obtain a model adapter, which was
+ Ten epochs were run. The smallest eval_loss was achieved at epoch 7.72.
+ The checkpoint at epoch 7.72 was used to obtain a model adapter, which was
  then applied to [gemma-2-2b-jpn-it-abliterated-18](https://huggingface.co/ymcki/gemma-2-2b-jpn-it-abliterated-18) to obtain this model.
 
  | Epoch | loss | eval_loss | eval_logps/rejected | eval_logps/chosen |
@@ -50,6 +49,13 @@ then applied to [gemma-2-2b-jpn-it-abliterated-18](https://huggingface.co/ymcki/gemm
  | 4.00 | 1.5293 | 1.0166 | -1.2004 | -0.7200 |
  | 4.96 | 1.2893 | 1.0077 | -1.1754 | -0.7106 |
  | 5.00 | 1.3458 | 1.0078 | -1.1730 | -0.7105 |
+ | 6.00 | 1.3807 | 0.9924 | -1.1757 | -0.6971 |
+ | 7.00 | 1.0855 | 0.9889 | -1.2634 | -0.7235 |
+ | 7.72 | 0.8720 | 0.9855 | -1.2374 | -0.7100 |
+ | 8.00 | 0.7301 | 0.9864 | -1.2406 | -0.7113 |
+ | 9.00 | 1.1939 | 0.9934 | -1.2703 | -0.6852 |
+ | 10.00 | 0.7421 | 1.0269 | -1.2552 | -0.7395 |
+
 
  This model is uploaded here to be evaluated by the Open LLM Leaderboard. Further ORPO fine-tuning is currently underway to see if it can regain its sanity. You can play with this model first or wait until I am done with the fine-tuning.
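For context, the training recipe described in this diff can be reconstructed roughly as below. This is a minimal sketch, not the author's actual script: it assumes recent unsloth/trl releases, and every hyperparameter (LoRA rank, batch size, beta, learning rate, checkpoint interval) is an illustrative guess. ORPO adds an odds-ratio preference penalty on top of the SFT loss, which is why the table above tracks eval_logps/chosen and eval_logps/rejected.

```python
# Hedged sketch of the ORPO run: unsloth loads gemma-2-2b in 4-bit so the
# full 40k-pair dataset fits on a single 3090; TRL's ORPOTrainer applies
# the odds-ratio preference loss. All hyperparameters are assumptions.
from datasets import load_dataset
from trl import ORPOConfig, ORPOTrainer
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="google/gemma-2-2b",
    max_seq_length=2048,
    load_in_4bit=True,
)
# Attach LoRA adapters; only these low-rank weights are trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# NB: a real run must first reformat the chosen/rejected chat columns with
# the tokenizer's chat template, as described in mlabonne's article.
dataset = load_dataset("mlabonne/orpo-dpo-mix-40k", split="train")
dataset = dataset.train_test_split(test_size=0.01)  # eval split for eval_loss

trainer = ORPOTrainer(
    model=model,
    args=ORPOConfig(
        output_dir="orpo-gemma-2-2b",
        num_train_epochs=10,        # the table above reaches epoch 10.00
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,
        learning_rate=8e-6,
        beta=0.1,                   # weight of the odds-ratio term
        max_length=2048,
        save_strategy="steps",      # fractional best epochs (4.96, 7.72)
        save_steps=500,             # suggest step-based checkpointing
        eval_strategy="steps",
        eval_steps=500,
    ),
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    tokenizer=tokenizer,            # `processing_class` in newer trl releases
)
trainer.train()
```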
 
@@ -60,7 +66,8 @@ Click on the model name to go to the raw score json generated by Open LLM Leaderboa
  | Model | Average | IFEval | BBH | Math Lv5 | GPQA | MUSR | MMLU-PRO |
  | ----- | ------- | ------ | --- | -------- | ---- | ---- | -------- |
  | [gemma-2-2b-jpn-it](https://huggingface.co/datasets/open-llm-leaderboard/results/blob/main/google/gemma-2-2b-jpn-it/results_2024-10-15T15-21-39.173019.json) | 30.82 | 54.11 | 41.43 | 0.0 | 27.52 | 37.17 | 24.67 |
- | gemma-2-2b-ORPO-jpn-it-abliterated-18 (5 epochs) | TBD | TBD | TBD | TBD | TBD | TBD | TBD |
+ | [gemma-2-2b-ORPO-jpn-it-abliterated-18 (5 epochs)](https://huggingface.co/datasets/open-llm-leaderboard/results/raw/main/ymcki/gemma-2-2b-ORPO-jpn-it-abliterated-18/results_2024-10-30T22-19-29.202883.json) | 29.57 | 48.05 | 41.26 | 0.0 | 27.18 | 36.51 | 24.43 |
+ | gemma-2-2b-ORPO-jpn-it-abliterated-18 (10 epochs) | TBD | TBD | TBD | TBD | TBD | TBD | TBD |
  | [gemma-2-2b-jpn-it-abliterated-17](https://huggingface.co/datasets/open-llm-leaderboard/results/raw/main/ymcki/gemma-2-2b-jpn-it-abliterated-17/results_2024-10-18T15-18-46.821674.json) | 30.29 | 52.65 | 40.46 | 0.0 | 27.18 | 36.90 | 24.55 |
  | [gemma-2-2b-jpn-it-abliterated-18](https://huggingface.co/datasets/open-llm-leaderboard/results/raw/main/ymcki/gemma-2-2b-jpn-it-abliterated-18/results_2024-10-18T15-41-42.399571.json) | 30.61 | 53.02 | 40.96 | 0.0 | 27.35 | 37.30 | 25.05 |
  | [gemma-2-2b-jpn-it-abliterated-24](https://huggingface.co/datasets/open-llm-leaderboard/results/raw/main/ymcki/gemma-2-2b-jpn-it-abliterated-24/results_2024-10-25T16-29-46.542899.json) | 30.61 | 51.37 | 40.77 | 0.0 | 27.77 | 39.02 | 24.73 |
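The grafting step described above — taking the LoRA adapter from the epoch-7.72 checkpoint and applying it to gemma-2-2b-jpn-it-abliterated-18 — would look roughly like this with peft. The checkpoint directory name is a hypothetical placeholder; merge_and_unload() folds the low-rank deltas into the base weights, so the result saves as the plain safetensors shards listed below.

```python
# Hedged sketch: apply the ORPO adapter (trained on gemma-2-2b) to the
# abliterated Japanese model, then merge and save. Paths are hypothetical.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "ymcki/gemma-2-2b-jpn-it-abliterated-18",
    torch_dtype=torch.bfloat16,
)
# Load the adapter weights saved at the best (epoch 7.72) checkpoint.
model = PeftModel.from_pretrained(base, "orpo-gemma-2-2b/checkpoint-XXXX")
model = model.merge_and_unload()  # bake the LoRA deltas into the base weights
model.save_pretrained("gemma-2-2b-ORPO-jpn-it-abliterated-18")

tok = AutoTokenizer.from_pretrained("google/gemma-2-2b-jpn-it")
tok.save_pretrained("gemma-2-2b-ORPO-jpn-it-abliterated-18")
```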
 
model-00001-of-00002.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:5860e3afb788aa147eee1bb5eed87a3b675b2bfd08f3964b4391a60c0aea4156
+ oid sha256:7d593d8b3023dd418100c91db48aad44bb3810444dbda5b77c85428538e80fe2
  size 4988034976
model-00002-of-00002.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:95a7232367325b329f211b610caa4e235aeec6d08b1f2c47b178aa2d410639ef
+ oid sha256:b87919ea30834310ead2527bf859d1409e6e52897a4863eae1512617633e91ee
  size 240691728
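The two safetensors entries above are git-LFS pointer files: only the sha256 oid changes while the byte sizes stay the same, i.e. both shards were re-uploaded with new contents. A small check — assuming the shards have been downloaded into the current directory — can verify a local copy against the new oids:

```python
# Hedged sketch: hash downloaded shards and compare with the LFS pointer oids.
import hashlib

EXPECTED = {
    "model-00001-of-00002.safetensors":
        "7d593d8b3023dd418100c91db48aad44bb3810444dbda5b77c85428538e80fe2",
    "model-00002-of-00002.safetensors":
        "b87919ea30834310ead2527bf859d1409e6e52897a4863eae1512617633e91ee",
}

for name, expected in EXPECTED.items():
    h = hashlib.sha256()
    with open(name, "rb") as f:
        # Stream in 1 MiB chunks; the first shard is ~5 GB.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    print(name, "OK" if h.hexdigest() == expected else "MISMATCH")
```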