Ninja
This Mistral-7B-based Large Language Model (LLM) is a version of Mistral-7B-v0.1 fine-tuned on a dataset of novels.
Ninja-128k has the following changes compared to Mistral-7B-v0.1.
This model was created with the help of GPUs from the first LocalAI hackathon.
We would like to take this opportunity to thank
Ninja adopts the Vicuna prompt format and supports multi-turn conversation. The prompt should look like the following:
USER: Hi ASSISTANT: Hello.</s>
USER: Who are you?
ASSISTANT: I am ninja.</s>
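The multi-turn format above can be assembled programmatically. A minimal sketch, where `build_prompt` is an illustrative helper and not part of the model's tooling:

```python
def build_prompt(turns):
    """Assemble a Vicuna-style multi-turn prompt.

    turns: list of (user_message, assistant_reply) pairs; use None as the
    reply for the final turn. Completed assistant replies are closed with
    the </s> end-of-sequence token, and the prompt ends with "ASSISTANT:"
    so the model generates the next reply.
    """
    parts = []
    for user, assistant in turns:
        parts.append(f"USER: {user} ASSISTANT:")
        if assistant is None:
            break
        parts.append(f" {assistant}</s>")
    return "".join(parts)

prompt = build_prompt([("Hi", "Hello."), ("Who are you?", None)])
# → "USER: Hi ASSISTANT: Hello.</s>USER: Who are you? ASSISTANT:"
```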
BAD: あなたは○○として振る舞います ("You will act as ○○")
GOOD: あなたは○○です ("You are ○○")
BAD: あなたは○○ができます ("You can do ○○")
GOOD: あなたは○○をします ("You do ○○")
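Following the GOOD pattern, a role is stated declaratively rather than as something the model "acts as". A small sketch; `make_system_prompt` is a hypothetical helper, not part of the model card:

```python
def make_system_prompt(role: str) -> str:
    """Build a system prompt in the declarative GOOD style:
    "あなたは○○です" ("You are ○○"), avoiding the discouraged
    "振る舞います" ("acts as") phrasing."""
    return f"あなたは{role}です。\n"

print(make_system_prompt("プロの小説家"))  # "You are a professional novelist."
```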
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "Local-Novel-LLM-project/Ninja-v1-128k"
new_tokens = 1024

# float16 weights with FlashAttention 2 keep the 128k-context model usable
# on a single GPU; device_map="auto" places layers across available devices.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    torch_dtype=torch.float16,
    attn_implementation="flash_attention_2",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# "You are a professional novelist.\nPlease write a novel."
system_prompt = "あなたはプロの小説家です。\n小説を書いてください\n-------- "

prompt = input("Enter a prompt: ")
system_prompt += prompt + "\n-------- "

model_inputs = tokenizer([system_prompt], return_tensors="pt").to("cuda")
generated_ids = model.generate(**model_inputs, max_new_tokens=new_tokens, do_sample=True)
print(tokenizer.batch_decode(generated_ids)[0])
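Note that `generate` returns the prompt tokens followed by the continuation, so `batch_decode` echoes the system prompt back. A minimal sketch of slicing off the prompt, with plain lists standing in for the real tensors (the id values are assumed, not produced by the model):

```python
# Stand-ins for the real tokenizer / generate outputs:
input_ids = [101, 202, 303]             # ids of the encoded prompt
generated_ids = [101, 202, 303, 7, 8]   # generate() echoes the prompt first
continuation = generated_ids[len(input_ids):]  # keep only the new tokens
print(continuation)  # [7, 8]
```

With the real objects this becomes `generated_ids[0][model_inputs["input_ids"].shape[1]:]`, passed to `tokenizer.decode(..., skip_special_tokens=True)`.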
The characteristics of each model are as follows.