---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- chat
---

This is the LLaMAfied version of the [Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct) model by Alibaba Cloud.

The original conversion script can be found at https://github.com/hiyouga/LLaMA-Factory/blob/main/tests/llamafy_qwen.py.

I have made modifications to the conversion script to make it compatible with Qwen2.

This model was converted with https://github.com/Minami-su/character_AI_open/tree/main/Qwen2_llamafy_Mistralfy.

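Because the conversion remaps the weights onto the LLaMA architecture, the checkpoint should also load through the LLaMA-specific classes. The sketch below is an assumption for sanity-checking the conversion, not part of the original scripts:

```python
from transformers import AutoTokenizer, LlamaForCausalLM

# Assumption: the LLaMAfied checkpoint exposes a LLaMA-compatible config,
# so the architecture-specific class should accept it directly.
tokenizer = AutoTokenizer.from_pretrained("Minami-su/Qwen2-7B-Instruct-llama")
model = LlamaForCausalLM.from_pretrained(
    "Minami-su/Qwen2-7B-Instruct-llama", torch_dtype="auto", device_map="auto"
)

print(model.config.model_type)  # expected to report "llama" after conversion
```
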
Usage:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

# Load the LLaMAfied checkpoint and its tokenizer.
tokenizer = AutoTokenizer.from_pretrained("Minami-su/Qwen2-7B-Instruct-llama")
model = AutoModelForCausalLM.from_pretrained(
    "Minami-su/Qwen2-7B-Instruct-llama", torch_dtype="auto", device_map="auto"
)

# Stream generated tokens to stdout, hiding the prompt and special tokens.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

messages = [
    {"role": "user", "content": "Who are you?"}
]

# Build the chat-formatted prompt and move it to the GPU.
inputs = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
)
inputs = inputs.to("cuda")

generate_ids = model.generate(inputs, max_length=2048, streamer=streamer)
```
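
The streamer prints the reply as it is generated; if you also need the completed text as a string, you can decode the returned ids yourself (a small follow-up sketch, not part of the original example):

```python
# Decode only the newly generated tokens, skipping the prompt portion.
response = tokenizer.decode(generate_ids[0][inputs.shape[-1]:], skip_special_tokens=True)
print(response)
```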