# LLaMA-30b-instruct model card

**Model Developers**
- Upstage [[1]]

**Backbone Model**
- LLaMA [[2]]

**Variations**
- The model comes in different parameter sizes and maximum sequence lengths: 30B/1024 [[3]], 30B/2048 [[4]], and 65B/1024 [[5]].

**Input**
- The model processes textual input only.

**Output**
- The model generates textual output only.
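
To illustrate the text-in/text-out interface, here is a minimal generation sketch using the Transformers library. The prompt template, device settings, and generation parameters are illustrative assumptions, not an official recommendation:

```python
# Minimal generation sketch (assumes access to the converted weights
# and enough GPU memory for a 30B-parameter model).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "upstage/llama-30b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # requires `accelerate`; shards weights across GPUs
    torch_dtype="auto",  # load in the checkpoint's native precision
)

# Assumed instruction-style prompt format; adjust to your own template.
prompt = "### User:\nExplain what a language model is.\n\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```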

**License**
- This model is under a Non-commercial Bespoke License and governed by the Meta license. You should only use this repository if you have been granted access to the model by filling out this form [[6]], but have either lost your copy of the weights or encountered issues converting them to the Transformers format.

**Where to send comments**
- To provide feedback or comments on the model, open an issue in the Hugging Face community tab of the model repository [[7]].

## Dataset Details

**Used Datasets**
- openbookqa [[8]]
- sciq [[9]]
- Open-Orca/OpenOrca [[10]]
- metaeval/ScienceQA_text_only [[11]]
- GAIR/lima [[12]]

## Hardware and Software

**Hardware**
- We used an NVIDIA A100 for training our model.

**Training Factors**
- We fine-tuned this model using a combination of the DeepSpeed library [[13]] and the Hugging Face Trainer [[14]], as sketched below.
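
The following is a minimal sketch of how DeepSpeed plugs into the Hugging Face Trainer; the base checkpoint path, hyperparameters, toy dataset, and `ds_config.json` file are illustrative assumptions, not our actual training configuration:

```python
# Illustrative fine-tuning sketch, not our exact recipe.
# Launch with e.g. `deepspeed train.py` so the DeepSpeed engine is active.
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base_model = "path/to/llama-30b-hf"  # placeholder for the converted backbone
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# Toy stand-in for the instruction data listed under "Used Datasets".
def tokenize(example):
    ids = tokenizer(example["text"], truncation=True, max_length=1024)
    ids["labels"] = ids["input_ids"].copy()  # causal LM: labels mirror inputs
    return ids

train_dataset = Dataset.from_dict(
    {"text": ["### User:\nHello\n\n### Assistant:\nHi there!"]}
).map(tokenize, remove_columns=["text"])

args = TrainingArguments(
    output_dir="llama-30b-instruct",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    num_train_epochs=3,
    bf16=True,
    deepspeed="ds_config.json",  # ZeRO config file consumed by the Trainer
)

Trainer(model=model, args=args, train_dataset=train_dataset).train()
```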

## Evaluation Results

**Overview**
- We evaluated performance on the tasks covered by the Open LLM Leaderboard [[15]]: four benchmark datasets comprising ARC-Challenge, HellaSwag, MMLU, and TruthfulQA. We used the lm-evaluation-harness repository, specifically commit b281b0921b636bc36ad05c0b0b0763bd6dd43463. The evaluation environment can be reproduced with the commands below:
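
These setup commands are reconstructed from the repository name and commit hash above; the editable install step is an assumption about the harness's usual workflow:

```bash
# Clone the evaluation harness and pin it to the commit used for our runs.
git clone https://github.com/EleutherAI/lm-evaluation-harness.git
cd lm-evaluation-harness
git checkout b281b0921b636bc36ad05c0b0b0763bd6dd43463
pip install -e .  # assumed install step for the harness's dependencies
```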

**Main Results**

| Model                          | Average  | ARC      | HellaSwag | MMLU     | TruthfulQA |
|--------------------------------|----------|----------|-----------|----------|------------|
| llama-30b-instruct-2048 (Ours) | **64.7** | 58.3     | 82.5      | 61.4     | **56.5**   |
| falcon-40b-instruct            | 63.4     | **61.6** | **84.3**  | 55.4     | 52.5       |
| llama-30b-instruct (Ours)      | 63.2     | 56.7     | 84.0      | 59.0     | 53.1       |
| llama-65b                      | 62.1     | 57.6     | **84.3**  | **63.4** | 43.0       |

*Experimental results based on the Open LLM Leaderboard*

## Ethical Issues

**Ethical Considerations**
- No ethical concerns were identified, as we did not include the benchmark test sets or their training sets in the model's training data.

## Contact Us

**Why Upstage LLM?**
- Upstage's [[1]] LLM research has yielded remarkable results. Our 30B-parameter model outperforms all models worldwide with fewer than 65B parameters, establishing itself as the leading performer. Recognizing the immense potential of private LLM adoption within companies, we invite you to implement a private LLM and fine-tune it with your own data. For a seamless and tailored solution, please don't hesitate to reach out to us [[16]].

[1]: https://en.upstage.ai
[2]: https://github.com/facebookresearch/llama/tree/llama_v1
[3]: https://huggingface.co/upstage/llama-30b-instruct
[4]: https://huggingface.co/upstage/llama-30b-instruct-2048
[5]: https://huggingface.co/upstage/llama-65b-instruct
[6]: https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform
[7]: https://huggingface.co/upstage/llama-30b-instruct-2048/discussions
[8]: https://huggingface.co/datasets/openbookqa
[9]: https://huggingface.co/datasets/sciq
[10]: https://huggingface.co/datasets/Open-Orca/OpenOrca
[11]: https://huggingface.co/datasets/metaeval/ScienceQA_text_only
[12]: https://huggingface.co/datasets/GAIR/lima
[13]: https://github.com/microsoft/DeepSpeed
[14]: https://huggingface.co/docs/transformers/main_classes/trainer
[15]: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
[16]: mailto:contact@upstage.ai