This is a LLaMAfied version of the Qwen-14B-Chat model by Alibaba Cloud.
The weights were converted using the script at https://github.com/hiyouga/LLaMA-Factory/blob/main/tests/llamafy_qwen.py.
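As a rough sketch, the conversion might be invoked as below; the `--input_dir` and `--output_dir` flag names are assumptions about the script's interface, so check `llamafy_qwen.py` itself for the actual arguments:

```bash
# Hypothetical invocation; verify the real argument names in llamafy_qwen.py.
python tests/llamafy_qwen.py \
    --input_dir path/to/Qwen-14B-Chat \
    --output_dir path/to/Qwen-14B-Chat-LLaMAfied
```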
The tokenizer is borrowed from https://huggingface.co/CausalLM/72B-preview-llamafied-qwen-llamafy
You may use this model for fine-tuning on downstream tasks; we recommend our efficient fine-tuning toolkit: https://github.com/hiyouga/LLaMA-Factory
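For example, a LoRA-based SFT run with LLaMA-Factory might look like the sketch below; the dataset name, LoRA targets, and hyperparameters are illustrative placeholders rather than recommended settings, and the flag set follows the `train_bash.py` interface of that era, so consult the LLaMA-Factory README for the current one:

```bash
# Illustrative LoRA fine-tuning sketch; dataset and hyperparameters are placeholders.
python src/train_bash.py \
    --stage sft \
    --do_train \
    --model_name_or_path hiyouga/Qwen-14B-Chat-LLaMAfied \
    --dataset alpaca_gpt4_en \
    --template qwen \
    --finetuning_type lora \
    --lora_target q_proj,v_proj \
    --output_dir qwen14b-lora-sft \
    --per_device_train_batch_size 2 \
    --gradient_accumulation_steps 8 \
    --learning_rate 5e-5 \
    --num_train_epochs 3.0 \
    --fp16
```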
- Developed by: Alibaba Cloud.
- Language(s) (NLP): Chinese/English
- License: Tongyi Qianwen License
Usage:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

tokenizer = AutoTokenizer.from_pretrained("hiyouga/Qwen-14B-Chat-LLaMAfied")
model = AutoModelForCausalLM.from_pretrained(
    "hiyouga/Qwen-14B-Chat-LLaMAfied", torch_dtype="auto", device_map="auto"
)
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

messages = [
    {"role": "user", "content": "Who are you?"}
]

# Build the prompt with the chat template and move it to the GPU.
inputs = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
inputs = inputs.to("cuda")

# Stream the reply to stdout; max_new_tokens caps the response length.
generate_ids = model.generate(inputs, streamer=streamer, max_new_tokens=256)
```
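If you prefer to capture the full reply as a string instead of streaming it, a minimal variant using the same standard `transformers` calls is:

```python
output_ids = model.generate(inputs, max_new_tokens=256)
# Skip the prompt tokens and decode only the newly generated portion.
response = tokenizer.decode(output_ids[0][inputs.shape[-1]:], skip_special_tokens=True)
print(response)
```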
Alternatively, you can launch a CLI demo with the script in LLaMA-Factory:

```bash
python src/cli_demo.py --template qwen --model_name_or_path hiyouga/Qwen-14B-Chat-LLaMAfied
```
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 61.60 |
| AI2 Reasoning Challenge (25-Shot) | 57.51 |
| HellaSwag (10-Shot) | 82.11 |
| MMLU (5-Shot) | 65.57 |
| TruthfulQA (0-shot) | 51.99 |
| Winogrande (5-shot) | 72.93 |
| GSM8k (5-shot) | 39.50 |