Update README.md
README.md CHANGED
@@ -39,6 +39,8 @@ After I'm done training this I will probably try do continued pre-training on Ge
Or actually I'll train Viking-7B again, with basically the same mix of datasets as this one but using the smaller version of the SlimSonnet dataset, since it was supposedly filtered to have the most varied examples. Training on bigger datasets would probably make more sense once I get access to more compute.

+ Actually, scratch all of that: [a new, actually multilingual model](https://huggingface.co/utter-project/EuroLLM-9B-Instruct) was released recently, so I'll probably try fine-tuning that instead.
+
# Uploaded Ahma-SlimInstruct-LoRA-V0.1-7B model

- **Developed by:** mpasila
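Since the card names an uploaded LoRA adapter, a minimal loading sketch may help while the README is in flux. Note the assumptions: the repo id `mpasila/Ahma-SlimInstruct-LoRA-V0.1-7B` is pieced together from the heading and developer name (not confirmed by this commit), and the adapter's `adapter_config.json` is assumed to record its base model, which `AutoPeftModelForCausalLM` uses to fetch the base weights. Treat this as an illustration, not the card's official usage snippet.

```python
# Sketch: load the LoRA adapter with PEFT and run one generation.
# Assumptions (not confirmed by the card): the adapter lives at
# "mpasila/Ahma-SlimInstruct-LoRA-V0.1-7B", ships a tokenizer, and its
# adapter_config.json points at the correct base model.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

repo_id = "mpasila/Ahma-SlimInstruct-LoRA-V0.1-7B"  # assumed repo id

# AutoPeftModelForCausalLM reads the adapter config, downloads the base
# model it names, and attaches the LoRA weights on top of it.
model = AutoPeftModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(repo_id)

prompt = "Kerro lyhyesti suomen kielestä."  # "Briefly tell me about the Finnish language."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

If the adapter was trained with a chat template, formatting the prompt through `tokenizer.apply_chat_template` would likely give better results than a raw string.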