Ariel Lee committed
Commit 8cb3d80 • 1 Parent(s): feb7b4c
Update README.md
README.md CHANGED
@@ -13,7 +13,7 @@ metrics:
 
 # 🥳 Platypus-30B has arrived!
 
-Platypus-30B is an instruction fine-tuned model based on the LLaMA-30B transformer architecture and takes advantage of
+Platypus-30B is an instruction fine-tuned model based on the LLaMA-30B transformer architecture and takes advantage of LoRA.
 
 | Metric | Value |
 |-----------------------|-------|
@@ -47,7 +47,7 @@ Dataset of highly filtered and curated question and answer pairs. Release TBD.
 
 ## Limitations and bias
 
-The base LLaMA model is trained on various data, some of which may contain offensive, harmful, and biased content that can lead to toxic behavior. See Section 5.1 of the LLaMA
+The base LLaMA model is trained on various data, some of which may contain offensive, harmful, and biased content that can lead to toxic behavior. See Section 5.1 of the LLaMA paper. We have not performed any studies to determine how fine-tuning on the aforementioned datasets affects the model's behavior and toxicity. Do not treat chat responses from this model as a substitute for human judgment or as a source of truth. Please use responsibly.
 
 ## Citations
 
@@ -58,4 +58,17 @@ The base LLaMA model is trained on various data, some of which may contain offen
 journal={arXiv preprint arXiv:2302.13971},
 year={2023}
 }
+@article{DBLP:journals/corr/abs-2106-09685,
+  author  = {Edward J. Hu and
+             Yelong Shen and
+             Phillip Wallis and
+             Zeyuan Allen{-}Zhu and
+             Yuanzhi Li and
+             Shean Wang and
+             Weizhu Chen},
+  title   = {LoRA: Low-Rank Adaptation of Large Language Models},
+  journal = {CoRR},
+  year    = {2021},
+  url     = {https://arxiv.org/abs/2106.09685},
+}
 ```
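
As context for the first hunk: "instruction fine-tuned ... takes advantage of LoRA" typically means the base LLaMA-30B weights stay frozen while small low-rank adapter matrices are trained. A minimal sketch with the Hugging Face peft library, assuming a generic LLaMA-30B checkpoint; the repo id and LoRA hyperparameters below are illustrative assumptions, not the authors' actual recipe:

```python
# Sketch of LoRA instruction fine-tuning setup (illustrative, not the
# authors' exact configuration).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_id = "huggyllama/llama-30b"  # assumption: any LLaMA-30B checkpoint

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# LoRA freezes the base weights and trains small low-rank adapters instead.
config = LoraConfig(
    r=16,                                 # rank of the low-rank update (assumed)
    lora_alpha=32,                        # scaling factor (assumed)
    target_modules=["q_proj", "v_proj"],  # attention projections, a typical choice
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```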
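The LoRA paper cited in the final hunk reparameterizes a frozen weight matrix W0 with a trainable low-rank product, so the effective weight is W0 + (α/r)·B·A. A toy sketch of that update; the dimensions and variable names are made up for illustration:

```python
# Toy illustration of the LoRA low-rank update from Hu et al. (2021).
import torch

d, r, alpha = 512, 8, 16
W0 = torch.randn(d, d)              # frozen pretrained weight (never updated)
A = 0.01 * torch.randn(r, d)        # trainable, small Gaussian init
B = torch.zeros(d, r)               # trainable, zero init: no change at step 0
A.requires_grad_(True)
B.requires_grad_(True)

x = torch.randn(d)
h = W0 @ x + (alpha / r) * (B @ (A @ x))  # LoRA forward: h = W0·x + (α/r)·B·A·x
```

Because B starts at zero, the model's outputs are initially identical to the base model, and only the r·(2d) adapter parameters receive gradients.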