---
base_model:
- microsoft/Orca-2-13b
- KoboldAI/LLaMA2-13B-Tiefighter
library_name: transformers
tags:
- mergekit
- merge
---

Imatrix compressions of the full-precision (FP) merge "D_AU-Orac-13B-Tiefighter-slerp".

"Imatrix Plus" is an upgraded form of Imatrix that uses full precision for specific parts of the compression.
As a result, all compressions will be slightly larger in size than standard 13B compressions.

This method results in a higher-quality model, especially at lower compressions.
It is applied across all compressions from IQ1 to Q8.

Even IQ1_S, the most compressed version, works well; however, IQ4/Q4 is suggested as the minimum for quality.
The highest quality will be Q6/Q8.

This merge was an experiment to combine the already established Roleplay, Fiction and Story
generation of "Tiefighter" with some of "Orca 2"'s qualities.

For Imatrix Plus, this was a test of high precision in specific areas of the model, leading to a slightly larger compression.
In addition, the Imatrix process itself used a larger "calibration" file than standard to further enhance quality.
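
As a rough sketch of how imatrix quantization works in general (not the exact commands or calibration file used for this repo), an importance matrix can be generated and applied with llama.cpp's `llama-imatrix` and `llama-quantize` tools; all file names below are placeholders:

```python
# Hedged sketch of imatrix-based quantization via llama.cpp's CLI tools.
# File names and the calibration text are placeholders, not the ones used here.
import subprocess

# 1) Measure activation importance on a calibration file.
subprocess.run(
    ["llama-imatrix",
     "-m", "model-f16.gguf",   # full-precision GGUF
     "-f", "calibration.txt",  # calibration text
     "-o", "imatrix.dat"],     # resulting importance matrix
    check=True,
)

# 2) Quantize, letting the imatrix guide which weights keep more precision.
subprocess.run(
    ["llama-quantize",
     "--imatrix", "imatrix.dat",
     "model-f16.gguf",
     "model-IQ4_XS.gguf",
     "IQ4_XS"],
    check=True,
)
```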

A blank or standard Alpaca template for text generation will work.
"ChatML" is currently untested.

Context length: 4096.
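
A minimal usage sketch with the `llama-cpp-python` bindings, assuming one of this repo's GGUF files has been downloaded locally; the file name and prompt are placeholders, and the prompt follows the standard Alpaca format mentioned above:

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
# The GGUF file name is a placeholder for whichever quant you download.
from llama_cpp import Llama

llm = Llama(
    model_path="D_AU-Orac-13B-Tiefighter-slerp-Q6_K.gguf",
    n_ctx=4096,  # matches the model's context length
)

# Standard Alpaca instruct format (no-input variant).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite the opening paragraph of a space-opera story.\n\n"
    "### Response:\n"
)

out = llm(prompt, max_tokens=256, temperature=0.8)
print(out["choices"][0]["text"])
```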

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the SLERP (spherical linear interpolation) merge method.
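
For intuition, SLERP interpolates along the arc of the hypersphere between two weight tensors rather than along the straight line between them. A minimal sketch of the formula, not mergekit's actual implementation:

```python
# Minimal sketch of spherical linear interpolation (SLERP) between two
# flattened weight tensors. Illustrative only; not mergekit's code.
import numpy as np

def slerp(v0: np.ndarray, v1: np.ndarray, t: float, eps: float = 1e-8) -> np.ndarray:
    """Interpolate from v0 (t=0) to v1 (t=1) along the unit-sphere arc."""
    a = v0 / (np.linalg.norm(v0) + eps)
    b = v1 / (np.linalg.norm(v1) + eps)
    omega = np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))  # angle between tensors
    if omega < eps:  # nearly parallel: fall back to linear interpolation
        return (1.0 - t) * v0 + t * v1
    s = np.sin(omega)
    return (np.sin((1.0 - t) * omega) / s) * v0 + (np.sin(t * omega) / s) * v1
```

At `t = 0` the result is one parent and at `t = 1` the other; mergekit applies a separate `t` per tensor according to the configuration shown below.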

### Models Merged

The following models were included in the merge:
* [microsoft/Orca-2-13b](https://huggingface.co/microsoft/Orca-2-13b)
* [KoboldAI/LLaMA2-13B-Tiefighter](https://huggingface.co/KoboldAI/LLaMA2-13B-Tiefighter)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: KoboldAI/LLaMA2-13B-Tiefighter
        layer_range: [0, 40]
      - model: microsoft/Orca-2-13b
        layer_range: [0, 40]
merge_method: slerp
base_model: microsoft/Orca-2-13b
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
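
To reproduce the merge, this config can be run with mergekit. A sketch using its Python API, adapted from mergekit's own README; paths are placeholders, and the exact API surface may vary between mergekit versions:

```python
# Sketch: run the SLERP merge from the YAML config above with mergekit
# (pip install mergekit). Paths are placeholders; API details may vary
# by mergekit version.
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./D_AU-Orac-13B-Tiefighter-slerp",
    options=MergeOptions(cuda=True, copy_tokenizer=True),
)
```

The `mergekit-yaml` command-line entry point runs the same configuration without any Python code.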