---
language:
- en
datasets:
- kyujinpy/Open-platypus-Commercial
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
---
# **SOLAR-Platypus-10.7B-v1**
## Model Details
**Model Developers** Kyujin Han (kyujinpy)
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture**
SOLAR-Platypus-10.7B-v1 is an auto-regressive language model based on the Llama2 architecture.
**Base Model**
[upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0)
**Training Dataset**
[kyujinpy/Open-platypus-Commercial](https://huggingface.co/datasets/kyujinpy/Open-platypus-Commercial).
## Notice
Training used LoRA, with `lora_r` set to 16; the full Q-LoRA configuration is listed below.
## Q-LoRA config
- LoRA_r: 16
- LoRA_alpha: 16
- LoRA_dropout: 0.05
- LoRA_target_modules: [gate_proj, up_proj, down_proj]
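The training script itself was not released; as a minimal sketch, these hyperparameters map onto a `peft` `LoraConfig` as follows (the `bias` and `task_type` values are assumptions, not from the original card):

```python
# Sketch only: reconstructs the listed Q-LoRA hyperparameters with peft.
# bias and task_type are assumed; the author's exact training script is unpublished.
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,                # LoRA_r
    lora_alpha=16,       # LoRA_alpha
    lora_dropout=0.05,   # LoRA_dropout
    target_modules=["gate_proj", "up_proj", "down_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)
```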
## Prompt
- Alpaca instruction template (sketched below).
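For reference, a minimal sketch of the prompt format, assuming the standard Stanford Alpaca instruction template:

```python
# Assumed: the common Stanford Alpaca single-turn instruction format.
ALPACA_TEMPLATE = """Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
"""
```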
# **Model Benchmark**
## Open leaderboard
- Results tracked on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
| --- | --- | --- | --- | --- | --- | --- | --- |
| SOLAR-Platypus-10.7B-v1 | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
| SOLAR-Platypus-10.7B-v2 | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
| [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0) | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
# Implementation Code
```python
# Load SOLAR-Platypus-10.7B-v1 in half precision, sharded across available devices
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "kyujinpy/SOLAR-Platypus-10.7B-v1"
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```
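A minimal usage sketch combining the model with the Alpaca template above (the instruction text and generation settings are illustrative, not from the original card):

```python
# Illustrative generation example; the instruction and max_new_tokens are arbitrary choices.
prompt = ALPACA_TEMPLATE.format(instruction="Summarize what LoRA fine-tuning does.")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```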