This model was merged using the SLERP merge method (sketched briefly below).

The merge was made from two unreleased models:

- rheumistral-sft was trained from the original Mistral checkpoint in two stages: 1) "continued pretraining" on a large, curated dataset of rheumatology and immunology texts; 2) supervised fine-tuning on a combination of synthetic and human-generated QA pairs and chat logs.
- biorheumistral-sft was trained the same way as rheumistral-sft, except that it started from the [BioMistral-7B](https://huggingface.co/BioMistral/BioMistral-7B) checkpoint.
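
SLERP (spherical linear interpolation) blends two checkpoints along the arc between their weight tensors rather than along a straight line, which tends to preserve the scale and direction of each parent's weights better than plain averaging. As a rough illustration, here is a minimal NumPy sketch of the idea (illustrative only, not mergekit's actual implementation, which also handles per-tensor shapes and dtypes):

```python
# Minimal SLERP sketch: t=0 returns v0, t=1 returns v1.
# Illustrative only; mergekit's implementation differs in detail.
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    a = v0.ravel() / (np.linalg.norm(v0) + eps)   # unit direction of v0
    b = v1.ravel() / (np.linalg.norm(v1) + eps)   # unit direction of v1
    omega = np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))  # angle between the tensors
    if omega < eps:  # nearly parallel: fall back to plain linear interpolation
        return (1.0 - t) * v0 + t * v1
    s = np.sin(omega)
    return (np.sin((1.0 - t) * omega) / s) * v0 + (np.sin(t * omega) / s) * v1
```

The interpolation factor `t` is what the `parameters.t` section of the configuration below controls, per layer and per module type.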
### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: rheumistral-sft
        layer_range: [0, 32]
      - model: biorheumistral-sft
        layer_range: [0, 32]
merge_method: slerp
base_model: /mnt/hdd/projects/rheum_llm/alignment-handbook/rheumistral-sft-merged-final
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
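
In the `parameters` block, `t` is the interpolation factor (by mergekit's convention, 0 keeps the `base_model` tensor and 1 takes the other model's). A list value defines a gradient across the layer range, so here the self-attention tensors lean toward one parent at different depths than the MLP tensors do, while the bare `value: 0.5` averages all remaining tensors. A rough sketch of how such a five-point gradient could map onto the 32 layers (assumed linear interpolation, shown with `numpy.interp`; consult mergekit for the exact mapping):

```python
# Sketch of spreading the five anchor values across 32 layers.
# The linear mapping here is an assumption for illustration.
import numpy as np

n_layers = 32
anchors = np.linspace(0.0, 1.0, num=5)   # positions of the 5 anchor values
depth = np.linspace(0.0, 1.0, n_layers)  # relative depth of each layer

self_attn_t = np.interp(depth, anchors, [0, 0.5, 0.3, 0.7, 1])
mlp_t = np.interp(depth, anchors, [1, 0.5, 0.7, 0.3, 0])

for layer, (ta, tm) in enumerate(zip(self_attn_t, mlp_t)):
    print(f"layer {layer:2d}: t(self_attn)={ta:.3f}  t(mlp)={tm:.3f}")
```

A configuration like this is normally executed with mergekit's `mergekit-yaml` command, e.g. `mergekit-yaml config.yml ./merged-model` (output path illustrative).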