Transformers
GGUF
Inference Endpoints
LoneStriker committed on
Commit 6a2e86b · verified · 1 Parent(s): 4938c7e

Upload folder using huggingface_hub
.gitattributes CHANGED
@@ -1,35 +1,5 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ckpt filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.mlmodel filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.npy filter=lfs diff=lfs merge=lfs -text
- *.npz filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pickle filter=lfs diff=lfs merge=lfs -text
- *.pkl filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- *.safetensors filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tar filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.wasm filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zst filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
+ OrcaGemma-2B-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+ OrcaGemma-2B-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ OrcaGemma-2B-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ OrcaGemma-2B-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ OrcaGemma-2B-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
OrcaGemma-2B-Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a522032c4511a11a32349270e9f054b4204ddedebb81de94b5be6d03a570fd8c
+ size 1465590912
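Each `ADDED` entry above is a Git LFS pointer, not the binary itself: three text lines giving the spec version, a sha256 object id, and the byte size of the real file. A minimal sketch of parsing one (the pointer text is copied verbatim from the Q3_K_L entry above):

```python
# Parse a Git LFS pointer file like the ones added in this commit.
# The pointer text below is copied from the OrcaGemma-2B-Q3_K_L.gguf entry.
pointer_text = """version https://git-lfs.github.com/spec/v1
oid sha256:a522032c4511a11a32349270e9f054b4204ddedebb81de94b5be6d03a570fd8c
size 1465590912
"""

def parse_lfs_pointer(text: str) -> dict:
    # Each line is "key value"; split on the first space only.
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, digest = fields["oid"].split(":", 1)
    return {
        "version": fields["version"],
        "hash_algo": algo,          # "sha256"
        "digest": digest,
        "size_bytes": int(fields["size"]),
    }

info = parse_lfs_pointer(pointer_text)
print(info["size_bytes"])  # 1465590912 (~1.36 GiB)
```

`parse_lfs_pointer` is a hypothetical helper, not part of any library; real LFS tooling resolves these pointers automatically on checkout.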
OrcaGemma-2B-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6ade9bf3d5f06601ee063ac142e623a8862441316d1297d826a00fb4c8381e8c
+ size 1630262400
OrcaGemma-2B-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:742c1790d043ee6aba227f9cd51ab753bcd7ee5d7c2d4b38a510f4b230942fa9
+ size 1839649920
OrcaGemma-2B-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:14b4b1eafe33f93583cf1ab6d6c41c4d9361f5f227572ec65172450a6f4fa1a1
+ size 2062124160
OrcaGemma-2B-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:87cc12ef46ed872dcce9c79a303efac2cca2ce788f9abcd191dc236d4e27648e
+ size 2669069440
README.md ADDED
@@ -0,0 +1,109 @@
+ ---
+ library_name: transformers
+ extra_gated_heading: Access Gemma on Hugging Face
+ extra_gated_prompt: >-
+   To access Gemma on Hugging Face, you’re required to review and agree to
+   Google’s usage license. To do this, please ensure you’re logged-in to Hugging
+   Face and click below. Requests are processed immediately.
+ extra_gated_button_content: Acknowledge license
+ license: other
+ license_name: gemma-terms-of-use
+ license_link: https://ai.google.dev/gemma/terms
+ base_model:
+   - google/gemma-2b
+ datasets:
+   - Open-Orca/SlimOrca-Dedup
+ ---
+
+ ![image/webp](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/Tk7qwxqKnpoxJlraiNidv.webp)
+
+ # OrcaGemma-2B
+
+ This is a gemma-2b model supervised fine-tuned on the [Open-Orca/SlimOrca-Dedup](https://huggingface.co/datasets/Open-Orca/SlimOrca-Dedup) dataset. It's not as good as [mlabonne/Gemmalpaca-2B](https://huggingface.co/mlabonne/Gemmalpaca-2B).
+
+ ## 🏆 Evaluation
+
+ ### Nous
+
+ OrcaGemma-2B outperforms gemma-2b but underperforms gemma-2b-it on Nous' benchmark suite (evaluation performed using [LLM AutoEval](https://github.com/mlabonne/llm-autoeval)). See the entire leaderboard [here](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard).
+
+ | Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
+ |---|---:|---:|---:|---:|---:|
+ | [mlabonne/Gemmalpaca-2B](https://huggingface.co/mlabonne/Gemmalpaca-2B) [📄](https://gist.github.com/mlabonne/4b638752fc3227df566f9562064cb864) | 38.39 | 24.48 | 51.22 | 47.02 | 30.85 |
+ | [google/gemma-2b-it](https://huggingface.co/google/gemma-2b-it) [📄](https://gist.github.com/mlabonne/db0761e74175573292acf497da9e5d95) | 36.1 | 23.76 | 43.6 | 47.64 | 29.41 |
+ | [**mlabonne/OrcaGemma-2B**](https://huggingface.co/mlabonne/OrcaGemma-2B) [📄](https://gist.github.com/mlabonne/c8c0914945f9c189cca74120bc834c3e) | **35.63** | **24.44** | **42.49** | **45.84** | **29.76** |
+ | [google/gemma-2b](https://huggingface.co/google/gemma-2b) [📄](https://gist.github.com/mlabonne/7df1f238c515a5f63a750c8792cef59e) | 34.26 | 22.7 | 43.35 | 39.96 | 31.03 |
+
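Assuming the Average column is the plain arithmetic mean of the four benchmark columns (an assumption; the leaderboard does not state its aggregation), the rows above can be sanity-checked:

```python
# Sanity-check the leaderboard table, assuming "Average" is the arithmetic
# mean of the four benchmark scores (an assumption, not stated in the table).
rows = {
    "mlabonne/Gemmalpaca-2B": (38.39, [24.48, 51.22, 47.02, 30.85]),
    "google/gemma-2b-it":     (36.1,  [23.76, 43.6, 47.64, 29.41]),
    "mlabonne/OrcaGemma-2B":  (35.63, [24.44, 42.49, 45.84, 29.76]),
    "google/gemma-2b":        (34.26, [22.7, 43.35, 39.96, 31.03]),
}

for model, (reported_avg, scores) in rows.items():
    mean = sum(scores) / len(scores)
    # Every reported average matches the mean to within rounding (±0.005).
    assert abs(mean - reported_avg) < 0.005, (model, mean)
```

All four rows pass, which supports the mean-of-four reading and confirms the ordering claimed in the paragraph above (gemma-2b-it > OrcaGemma-2B > gemma-2b).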
+ ## 🧩 Configuration
+
+ It was trained using [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) with the following configuration.
+
+ ```yaml
+ base_model: google/gemma-2b
+ model_type: AutoModelForCausalLM
+ tokenizer_type: AutoTokenizer
+
+ load_in_8bit: false
+ load_in_4bit: true
+ strict: false
+
+ datasets:
+   - path: Open-Orca/SlimOrca-Dedup
+     type: sharegpt
+
+ dataset_prepared_path:
+ val_set_size: 0.01
+ output_dir: ./out
+
+ sequence_len: 2048
+ sample_packing: true
+ pad_to_sequence_len: true
+
+ adapter: qlora
+ lora_model_dir:
+ lora_r: 32
+ lora_alpha: 64
+ lora_dropout: 0.05
+ lora_target_linear: true
+
+ wandb_project: axolotl
+ wandb_entity:
+ wandb_watch:
+ wandb_name:
+ wandb_log_model:
+
+ gradient_accumulation_steps: 4
+ micro_batch_size: 2
+ num_epochs: 2
+ optimizer: adamw_bnb_8bit
+ lr_scheduler: cosine
+ learning_rate: 0.0002
+
+ train_on_inputs: false
+ group_by_length: false
+ bf16: auto
+ fp16:
+ tf32: false
+
+ gradient_checkpointing: true
+ early_stopping_patience:
+ resume_from_checkpoint:
+ local_rank:
+ logging_steps: 1
+ xformers_attention:
+ flash_attention:
+
+ warmup_steps: 10
+ evals_per_epoch: 10
+ eval_table_size:
+ eval_table_max_new_tokens: 128
+ saves_per_epoch: 1
+ debug:
+ deepspeed:
+ weight_decay: 0.1
+ fsdp:
+ fsdp_config:
+ special_tokens:
+ ```
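A few numbers implied by this config, as a quick sketch (single training device is assumed here; the YAML does not state the device count):

```python
# Derived training numbers from the Axolotl YAML above.
# num_devices = 1 is an assumption; the config does not specify it.
micro_batch_size = 2
gradient_accumulation_steps = 4
num_devices = 1
sequence_len = 2048
lora_r, lora_alpha = 32, 64

effective_batch = micro_batch_size * gradient_accumulation_steps * num_devices
tokens_per_step = effective_batch * sequence_len  # upper bound, since sample packing fills sequences
lora_scaling = lora_alpha / lora_r                # standard alpha/r LoRA scaling factor

print(effective_batch, tokens_per_step, lora_scaling)  # 8 16384 2.0
```

So each optimizer step sees 8 packed sequences (up to 16,384 tokens), and the QLoRA adapters are applied with a scaling factor of 2.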
+
+ [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)