---
datasets:
- sciq
- metaeval/ScienceQA_text_only
- GAIR/lima
- Open-Orca/OpenOrca
- openbookqa
language:
- en
tags:
- upstage
- llama
- instruct
- instruction
pipeline_tag: text-generation
---
# LLaMa-2-70b-instruct-1024 model card

## Model Details
- Developed by: Upstage
- Backbone Model: LLaMA-2
- Language(s): English
- Library: HuggingFace Transformers
- License: Fine-tuned checkpoints are licensed under the Non-Commercial Creative Commons license (CC BY-NC-4.0)
- Where to send comments: Feedback or comments on the model can be submitted by opening an issue in the Hugging Face community tab of the model repository
- Contact: For questions and comments about the model, please email contact@upstage.ai
## Dataset Details

### Used Datasets
No other data was used except for the datasets listed in the metadata above (a loading sketch follows below).
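For reference, all of the listed datasets are hosted on the HuggingFace Hub and can be pulled with the `datasets` library. The snippet below is a minimal sketch for downloading them; how the datasets were mixed, filtered, and formatted for fine-tuning is not specified in this card.

```python
# Minimal sketch: download the datasets listed in the metadata above.
# The mixing/filtering recipe used for fine-tuning is not published here.
# Note: some of these (e.g. GAIR/lima) may require accepting a license
# on the Hub before they can be downloaded.
from datasets import load_dataset

for name in [
    "sciq",
    "metaeval/ScienceQA_text_only",
    "GAIR/lima",
    "Open-Orca/OpenOrca",
    "openbookqa",
]:
    ds = load_dataset(name, split="train")
    print(f"{name}: {len(ds)} training examples")
```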
### Prompt Template

```
### System:
{System}

### User:
{User}

### Assistant:
{Assistant}
```
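To illustrate the template, here is a minimal generation sketch using HuggingFace Transformers. The repository id is assumed from the model name, and the system/user messages are placeholders.

```python
# Minimal usage sketch (assumed, not from the card): load the model with
# HuggingFace Transformers and generate a reply using the template above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "upstage/Llama-2-70b-instruct-1024"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    torch_dtype=torch.float16,
)

# Fill the "### System / ### User / ### Assistant" template from the card.
prompt = (
    "### System:\n"
    "You are a helpful assistant.\n\n"
    "### User:\n"
    "Explain what instruction tuning is in one sentence.\n\n"
    "### Assistant:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)

# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```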
## Hardware and Software

- Hardware: We utilized an A100×8 setup (eight NVIDIA A100 GPUs) for training our model
- Training Factors: We fine-tuned this model using a combination of the DeepSpeed library and the HuggingFace Trainer (a hedged sketch of such a setup follows below)
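The card does not publish the training recipe beyond these factors. A minimal sketch of a DeepSpeed-backed HuggingFace Trainer run might look like the following; the hyperparameters, the `ds_config_zero3.json` file, and the preprocessing are illustrative assumptions, not Upstage's actual configuration.

```python
# Hypothetical sketch of the described recipe (HuggingFace Trainer + DeepSpeed).
# Hyperparameters, the DeepSpeed config file, and preprocessing are illustrative.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

backbone = "meta-llama/Llama-2-70b-hf"  # the stated LLaMA-2 backbone
tokenizer = AutoTokenizer.from_pretrained(backbone)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(backbone)

# One of the listed instruction datasets, tokenized to 1024 tokens
# (the "-1024" suffix in the model name suggests this context length).
dataset = load_dataset("GAIR/lima", split="train")

def tokenize(example):
    # LIMA stores each example's turns as a list of strings (assumed schema).
    text = "\n".join(example["conversations"])
    return tokenizer(text, truncation=True, max_length=1024)

train_dataset = dataset.map(tokenize, remove_columns=dataset.column_names)

args = TrainingArguments(
    output_dir="llama-2-70b-instruct-1024",
    per_device_train_batch_size=1,      # illustrative values only
    gradient_accumulation_steps=8,
    num_train_epochs=1,
    bf16=True,
    deepspeed="ds_config_zero3.json",   # hypothetical DeepSpeed ZeRO-3 config
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Such a script would typically be launched with `deepspeed --num_gpus 8 train.py` to use all eight A100s.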
## Evaluation Results

### Overview
- We conducted a performance evaluation based on the tasks evaluated on the Open LLM Leaderboard. We evaluated our model on four benchmark datasets: ARC-Challenge, HellaSwag, MMLU, and TruthfulQA. We used the lm-evaluation-harness repository, specifically commit b281b0921b636bc36ad05c0b0b0763bd6dd43463.
### Main Results

| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA |
| --- | --- | --- | --- | --- | --- |
| Llama-2-70b-instruct-1024 (Ours, Local Reproduction) | 72.0 | 70.7 | 87.4 | 69.3 | 60.7 |
| Llama-2-70b-hf | 67.3 | 67.3 | 87.3 | 69.8 | 44.9 |
| llama-65b-instruct (Ours, Local Reproduction) | 69.4 | 67.6 | 86.5 | 64.9 | 58.8 |
| llama-30b-instruct-2048 (Ours, Open LLM Leaderboard) | 67.0 | 64.9 | 84.9 | 61.9 | 56.3 |
| Llama-2-70b-chat-hf | 66.8 | 64.6 | 85.9 | 63.9 | 52.8 |
| llama-30b-instruct (Ours, Open LLM Leaderboard) | 65.2 | 62.5 | 86.2 | 59.4 | 52.8 |
| falcon-40b-instruct | 63.4 | 61.6 | 84.3 | 55.4 | 52.5 |
| llama-65b | 62.1 | 57.6 | 84.3 | 63.4 | 43.0 |
### Scripts

- Prepare evaluation environments:

```bash
# clone the repository
git clone https://github.com/EleutherAI/lm-evaluation-harness.git

# change to the repository directory
cd lm-evaluation-harness

# check out the specific commit
git checkout b281b0921b636bc36ad05c0b0b0763bd6dd43463
```
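After installing the checkout (e.g. `pip install -e .`), the benchmarks can be run through the harness's Python entry point. The sketch below is an assumption-laden example, not Upstage's actual evaluation script: the model id is a placeholder, and the few-shot counts follow the Open LLM Leaderboard conventions (25 for ARC, 10 for HellaSwag, 5 for MMLU, 0 for TruthfulQA).

```python
# Sketch of running the Open LLM Leaderboard tasks with the harness at
# the commit above. The model id is a placeholder; few-shot settings
# mirror the leaderboard (ARC 25, HellaSwag 10, MMLU 5, TruthfulQA 0).
from lm_eval import evaluator

MODEL_ARGS = "pretrained=<your-model-id>"  # placeholder Hub id or local path

for task, shots in [
    ("arc_challenge", 25),
    ("hellaswag", 10),
    ("truthfulqa_mc", 0),
]:
    results = evaluator.simple_evaluate(
        model="hf-causal",       # HuggingFace causal-LM adapter in this version
        model_args=MODEL_ARGS,
        tasks=[task],
        num_fewshot=shots,
        batch_size=1,
    )
    print(task, results["results"][task])

# MMLU is split into many "hendrycksTest-*" subtasks in this harness
# version; each is run the same way with num_fewshot=5.
```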
## Ethical Issues

### Ethical Considerations

- There were no ethical issues involved, as we did not include the benchmark test sets or their training sets in the model's training process.
## Contact Us

### Why Upstage LLM?

- Upstage's LLM research has yielded remarkable results. Our 30B model outperforms all models around the world, positioning itself as the leading performer. Recognizing the immense potential of applying private LLMs to real businesses, we invite you to adopt a private LLM and fine-tune it with your own data. For a seamless and tailored solution, please do not hesitate to reach out to us (contact@upstage.ai).