Update README.md
README.md (CHANGED)
@@ -54,7 +54,7 @@ The details of the guanaco dataset and parameters of the LoRa that Tim Dettmers'
 </table>
 </body>
 </html>
-
+More information can be found here and below: https://huggingface.co/datasets/JosephusCheung/GuanacoDataset
 Below is a description of Guanaco from https://guanaco-model.github.io/:
 
 Guanaco is an advanced instruction-following language model built on Meta's LLaMA 13B model. Expanding upon the initial 52K dataset from the Alpaca model, an additional 534,530 entries have been incorporated, covering English, Simplified Chinese, Traditional Chinese (Taiwan), Traditional Chinese (Hong Kong), Japanese, Deutsch, and various linguistic and grammatical tasks. This wealth of data enables Guanaco to perform exceptionally well in multilingual environments.
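For readers of the updated README, the linked dataset can be pulled directly from the Hugging Face Hub. Below is a minimal sketch, assuming the `datasets` library is installed and using the dataset ID `JosephusCheung/GuanacoDataset` from the link added above; the split and field names shown are not guaranteed by this diff and depend on the dataset's actual schema.

```python
# Minimal sketch: load the Guanaco dataset referenced in the README.
# Assumes `pip install datasets`; the dataset ID comes from the link above.
from datasets import load_dataset

guanaco = load_dataset("JosephusCheung/GuanacoDataset")

# Inspect the available splits and one example record
# (field names depend on the dataset's schema).
print(guanaco)
print(guanaco["train"][0])
```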