stelterlab committed on
Commit 1c8a9c5
1 parent: 24c04d3

Update README.md

Files changed (1)
  1. README.md +125 -125
README.md CHANGED
@@ -13,5 +13,5 @@
 - spectrum
 - sft
 base_model:
-- Qwen/Qwen2.5-14B
+- VAGOsolutions/SauerkrautLM-v2-14b-SFT
 ---

The only content change is the `base_model` entry in the YAML front matter; the rest of the file is identical. The resulting README.md:
 
---
license: apache-2.0
language:
- de
- en
- it
- fr
- pt
- nl
- ar
- es
tags:
- spectrum
- sft
base_model:
- VAGOsolutions/SauerkrautLM-v2-14b-SFT
---

**AWQ quantization:** INT4 GEMM, quantized by stelterlab using [AutoAWQ](https://github.com/casper-hansen/AutoAWQ/) by casper-hansen.

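The exact quantization recipe is not published here; the following is a minimal sketch of how an INT4 GEMM quant is typically produced with AutoAWQ. The group size and zero-point settings are assumptions (AutoAWQ's documented defaults), not confirmed values for this repo.

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "VAGOsolutions/SauerkrautLM-v2-14b-SFT"
quant_path = "SauerkrautLM-v2-14b-SFT-AWQ"  # local output directory

# AutoAWQ's default settings for a 4-bit GEMM quant; treat as assumptions.
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(model_path, low_cpu_mem_usage=True)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Runs activation-aware calibration on a small default dataset and
# rewrites the linear layers as INT4 GEMM weights.
model.quantize(tokenizer, quant_config=quant_config)

model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```

The GEMM kernel variant generally favors batched and prompt-heavy workloads; AutoAWQ's GEMV variant tends to be faster only at batch size 1.
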
Original weights by VAGOsolutions. The original model card follows:

![SauerkrautLM-v2-14b-SFT](https://vago-solutions.ai/wp-content/uploads/2024/11/SauerkrautLM-v2-14b-3.png "SauerkrautLM-v2-14b-SFT")
## VAGO solutions SauerkrautLM-v2-14b-SFT

**Fine-tuned Model** - *Celebrating one year of SauerkrautLM with our most advanced model yet, showcasing two-phase Spectrum Fine-Tuning*

Introducing **SauerkrautLM-v2-14b-SFT** – our latest Sauerkraut version based on [Qwen/Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B), celebrating the one-year anniversary of SauerkrautLM!

- Two-phase Spectrum Fine-Tuning approach
- Phase 1: 25% layer targeting with 0.6B tokens
- Phase 2: 20% layer targeting with 0.6B tokens
- Enhanced mathematical capabilities, function calling, and multilingual performance

# Table of Contents
1. [Overview of all SauerkrautLM-v2-14b Models](#all-sauerkrautlm-v2-14b)
2. [Model Details](#model-details)
   - [Training procedure](#training-procedure)
3. [Evaluation](#evaluation)
4. [Disclaimer](#disclaimer)
5. [Contact](#contact)
6. [Collaborations](#collaborations)
7. [Acknowledgement](#acknowledgement)

## All SauerkrautLM-v2-14b

| Model | HF | EXL2 | GGUF | AWQ |
|-------|-------|-------|-------|-------|
| SauerkrautLM-v2-14b-SFT | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-v2-14b-SFT) | coming soon | coming soon | coming soon |
| SauerkrautLM-v2-14b-DPO | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-v2-14b-DPO) | coming soon | coming soon | coming soon |

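This page is itself an AWQ build of the SFT model. A minimal inference sketch follows; the repo id is an assumption (it is not stated in the card), and loading requires the `autoawq` package and a CUDA GPU.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id for this AWQ build; adjust to the actual repository.
repo = "stelterlab/SauerkrautLM-v2-14b-SFT-AWQ"

tokenizer = AutoTokenizer.from_pretrained(repo)
# transformers detects the AWQ quantization config in the checkpoint and
# dispatches to the INT4 GEMM kernels provided by autoawq.
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

inputs = tokenizer("Warum ist Sauerkraut gesund?", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```
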
## Model Details
**SauerkrautLM-v2-14b-SFT**
- **Model Type:** SauerkrautLM-v2-14b-SFT is a fine-tuned model based on [Qwen/Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B)
- **Language(s):** German, English
- **License:** Apache 2.0
- **Contact:** [VAGO solutions](https://vago-solutions.ai)

## Training Procedure

This model represents a significant advancement in our fine-tuning methodology, utilizing a two-phase Spectrum Fine-Tuning approach (a code sketch of the layer-targeting idea follows the phase lists below):

**Phase 1 (25% Layer Targeting)**:
- Training on 0.6B tokens with four distinct components:
  1. Mathematics data (curated using a proprietary classifier)
  2. English performance data (from Sauerkraut-v1)
  3. High-quality German training data (from Sauerkraut-v1)
  4. Function calling data (from Sauerkraut-v2)

**Phase 2 (20% Layer Targeting)**:
- Training on an additional 0.6B tokens with partial overlap:
  1. New mathematics data (classifier-selected)
  2. New English performance data (from Sauerkraut-v2)
  3. New German training data (from Sauerkraut-v2)
  4. Function calling data (from Sauerkraut-v2)

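Spectrum selects a fraction of layers to train (here 25%, then 20%) based on a signal-to-noise analysis of the base model and freezes everything else. A minimal sketch of that selective-unfreezing idea is below; the module names are hypothetical examples, not the layers actually chosen for this model.

```python
# Sketch of Spectrum-style selective fine-tuning: freeze all parameters,
# then unfreeze only the modules a Spectrum SNR scan selected.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-14B", torch_dtype="auto")

# In practice this list comes from Spectrum's SNR analysis; these entries
# are made-up placeholders for illustration.
unfrozen_patterns = [
    "model.layers.4.mlp.down_proj",
    "model.layers.21.self_attn.v_proj",
    "model.layers.40.mlp.gate_proj",
]

for name, param in model.named_parameters():
    param.requires_grad = any(pat in name for pat in unfrozen_patterns)

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"Trainable: {trainable / total:.1%} of parameters")
```

Training then proceeds with a standard SFT loop over the frozen/unfrozen split, which is what keeps the approach resource-efficient relative to full fine-tuning.
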
**Dataset Composition**:
- Carefully curated mathematical content, selected using a proprietary classification model
- Premium multilingual data from both Sauerkraut-v1 and Sauerkraut-v2
- Specialized function calling training data
- High-quality German-English content across various domains

## Objective and Results

This release marks the one-year anniversary of SauerkrautLM, showcasing our most advanced training methodology to date. The two-phase Spectrum Fine-Tuning approach allows for more nuanced learning while maintaining efficiency in resource usage. The model demonstrates significant improvements in:

- Mathematical reasoning capabilities
- Function calling proficiency
- Multilingual performance
- Instruction following
- Common-sense reasoning

## Evaluation

**AGIEVAL**
![SauerkrautLM-v2-14b-SFT-AGIEVAL](https://vago-solutions.ai/wp-content/uploads/2024/11/SauerkrautLM-v2-14b-DPO-AGIEVAL.png "SauerkrautLM-v2-14b-SFT-AGIEVAL")

**GPT4ALL**
![SauerkrautLM-v2-14b-SFT-GPT4ALL](https://vago-solutions.ai/wp-content/uploads/2024/11/SauerkrautLM-v2-14b-DPO-GPT4ALL.png "SauerkrautLM-v2-14b-SFT-GPT4ALL")

**TRUTHFULQA**
![SauerkrautLM-v2-14b-SFT-TRUTHFULQA](https://vago-solutions.ai/wp-content/uploads/2024/11/SauerkrautLM-v2-14b-DPO-TRUTHFULQA.png "SauerkrautLM-v2-14b-SFT-TRUTHFULQA")

**OPENLEADERBOARD 2**
![SauerkrautLM-v2-14b-SFT-OPENLEADERBOARD](https://vago-solutions.ai/wp-content/uploads/2024/11/SauerkrautLM-v2-14b-DPO-OPENLEADERBOARD.png "SauerkrautLM-v2-14b-SFT-OPENLEADERBOARD")

**MMLU 5-shot**
![SauerkrautLM-v2-14b-SFT-MMLU-5shot](https://vago-solutions.ai/wp-content/uploads/2024/11/SauerkrautLM-v2-14b-DPO-MMLU-5shot.png "SauerkrautLM-v2-14b-SFT-MMLU-5shot")

**Berkeley Function Calling Leaderboard**
![SauerkrautLM-v2-14b-SFT-BERKELEY](https://vago-solutions.ai/wp-content/uploads/2024/11/SauerkrautLM-v2-14b-DPO-BERKELEY.png "SauerkrautLM-v2-14b-SFT-BERKELEY")

Please note that our benchmark results in absolute numbers may differ from the Hugging Face Leaderboard due to variations in benchmark evaluation pipelines. However, the relative differences remain consistent.

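Since function calling is trained in both phases and benchmarked above, here is a hedged sketch of exercising it through the chat template's tool support in recent transformers releases. It assumes this checkpoint ships a Qwen-style chat template that accepts tools; `get_weather` is a made-up example function.

```python
from transformers import AutoTokenizer

def get_weather(city: str) -> str:
    """Get the current weather for a city.

    Args:
        city: Name of the city to query.
    """
    return "sunny, 22 C"  # stub implementation for illustration

tokenizer = AutoTokenizer.from_pretrained("VAGOsolutions/SauerkrautLM-v2-14b-SFT")
messages = [{"role": "user", "content": "Wie ist das Wetter in Berlin?"}]

# The template serializes the function signature into the prompt so the
# model can emit a structured tool call.
prompt = tokenizer.apply_chat_template(
    messages, tools=[get_weather], add_generation_prompt=True, tokenize=False
)
print(prompt)
```
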
## Disclaimer
Despite our best efforts in data cleansing, the possibility of uncensored content slipping through cannot be entirely ruled out, and we cannot guarantee consistently appropriate behavior. If you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided. Please also note that the licensing of these models does not constitute legal advice, and we are not responsible for the actions of third parties who use our models.

## Contact
If you are interested in customized LLMs for business applications, please get in contact with us via our website. We are also grateful for your feedback and suggestions.

## Collaborations
We are also keenly seeking support and investment for our startup, VAGO solutions, where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us at [VAGO solutions](https://vago-solutions.ai).

## Acknowledgement
Many thanks to [Qwen](https://huggingface.co/Qwen) for providing such a valuable model to the open-source community.