
Our Models

Model Card for Ninja-v1-NSFW-128k

Ninja-v1-NSFW-128k is a Mistral-7B-based Large Language Model (LLM), fine-tuned from Mistral-7B-v0.1 on a novel dataset.

Ninja-v1-NSFW-128k has the following changes compared to Mistral-7B-v0.1:

  • 128k context window (8k context in v0.1)
  • High-quality generation in both Japanese and English
  • Retains earlier context even during long-context generation
  • Can generate NSFW content

This model was created with the help of GPUs from the first LocalAI hackathon.

We would like to take this opportunity to thank everyone involved.

Creation methods

  • Chat Vector applied across multiple models (see the sketch after this list)
  • Simple linear merging of the resulting models
  • Domain and sentence enhancement with LoRA
  • Context expansion with YaRN
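
To illustrate the Chat Vector step: the weight delta between a chat-tuned model and its base is added onto another model of the same architecture. Below is a minimal sketch of the idea, not the project's actual script; the chat and target model IDs are placeholders, and loading three 7B models needs considerable CPU RAM.

import torch
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", torch_dtype=torch.bfloat16)
chat = AutoModelForCausalLM.from_pretrained("a-chat-tuned-mistral", torch_dtype=torch.bfloat16)  # placeholder ID
target = AutoModelForCausalLM.from_pretrained("a-japanese-mistral", torch_dtype=torch.bfloat16)  # placeholder ID

base_sd, chat_sd, target_sd = base.state_dict(), chat.state_dict(), target.state_dict()

# Chat Vector: (chat - base) captures the chat tuning; adding it to the
# target transfers that behavior to a model trained on other data.
for name in target_sd:
    if name in base_sd and base_sd[name].shape == target_sd[name].shape:
        target_sd[name] = target_sd[name] + (chat_sd[name] - base_sd[name])

target.load_state_dict(target_sd)
target.save_pretrained("chat-vector-merged")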

Instruction format

Ninja adopts the prompt format from Vicuna and supports multi-turn conversation. The prompt should look as follows:

USER: Hi ASSISTANT: Hello.</s>
USER: Who are you?
ASSISTANT: I am ninja.</s>
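
A small helper for assembling this format programmatically might look like the following. This is our illustrative sketch; build_vicuna_prompt is not part of the model's tooling.

def build_vicuna_prompt(turns):
    """turns: list of (user_message, assistant_reply or None) pairs.
    Completed assistant turns end with </s>; a None reply leaves the
    prompt open for the model to continue."""
    parts = []
    for user, assistant in turns:
        if assistant is None:
            parts.append(f"USER: {user} ASSISTANT:")
        else:
            parts.append(f"USER: {user} ASSISTANT: {assistant}</s>")
    return "\n".join(parts)

print(build_vicuna_prompt([("Hi", "Hello."), ("Who are you?", None)]))
# USER: Hi ASSISTANT: Hello.</s>
# USER: Who are you? ASSISTANT: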

Example prompt improvements (Japanese)

  • BAD: ใ‚ใชใŸใฏ〇〇ใจใ—ใฆๆŒฏใ‚‹่ˆžใ„ใพใ™ ("You will behave as 〇〇")

  • GOOD: ใ‚ใชใŸใฏ〇〇ใงใ™ ("You are 〇〇")

  • BAD: ใ‚ใชใŸใฏ〇〇ใŒใงใใพใ™ ("You are able to do 〇〇")

  • GOOD: ใ‚ใชใŸใฏ〇〇ใ‚’ใ—ใพใ™ ("You do 〇〇")

Performing inference

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "Local-Novel-LLM-project/Ninja-v1-NSFW-128k"
new_tokens = 1024

# trust_remote_code=True is required for the YaRN context expansion;
# attn_implementation="flash_attention_2" requires the flash-attn package.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    torch_dtype=torch.float16,
    attn_implementation="flash_attention_2",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# "You are a professional novelist.\nPlease write a novel"
system_prompt = "ใ‚ใชใŸใฏใƒ—ใƒญใฎๅฐ่ชฌๅฎถใงใ™ใ€‚\nๅฐ่ชฌใ‚’ๆ›ธใ„ใฆใใ ใ•ใ„\n-------- "

prompt = input("Enter a prompt: ")
system_prompt += prompt + "\n-------- "
model_inputs = tokenizer([system_prompt], return_tensors="pt").to(model.device)

generated_ids = model.generate(**model_inputs, max_new_tokens=new_tokens, do_sample=True)
print(tokenizer.batch_decode(generated_ids)[0])

Merge recipe

Each "A - B" entry below denotes a Chat Vector: the weight delta of model A relative to model B.

  • WizardLM2 - mistralai/Mistral-7B-v0.1
  • NousResearch/Yarn-Mistral-7b-128k - mistralai/Mistral-7B-v0.1
  • Elizezen/Antler-7B - stabilityai/japanese-stablelm-instruct-gamma-7b
  • Elizezen/LewdSniffyOtter-7B - Elizezen/SniffyOtter-7B
  • NTQAI/chatntq-ja-7b-v1.0

The characteristics of each model are as follows.

  • WizardLM2: High-quality multitasking model
  • Yarn-Mistral-7b-128k: Mistral model with a 128k context window
  • Antler-7B: Model specialized for novel writing
  • NTQAI/chatntq-ja-7b-v1.0: High-quality Japanese-specialized model
  • Elizezen/LewdSniffyOtter-7B: Japanese NSFW-specialized model
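
The "simple linear merging" step can be pictured as parameter-wise weight averaging. Below is a minimal sketch assuming two models of the same architecture and an equal 0.5/0.5 split; the model IDs, weights, and output path are illustrative, not the project's actual configuration.

import torch
from transformers import AutoModelForCausalLM

model_a = AutoModelForCausalLM.from_pretrained("model-a", torch_dtype=torch.bfloat16)  # placeholder ID
model_b = AutoModelForCausalLM.from_pretrained("model-b", torch_dtype=torch.bfloat16)  # placeholder ID

sd_a, sd_b = model_a.state_dict(), model_b.state_dict()

# Equal-weight linear merge: merged = 0.5 * A + 0.5 * B, parameter by parameter.
merged = {name: 0.5 * sd_a[name] + 0.5 * sd_b[name] for name in sd_a}

model_a.load_state_dict(merged)
model_a.save_pretrained("linear-merged")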

Other points to keep in mind

  • The training data may be biased; review generated text carefully.
  • Set trust_remote_code to True for context expansion with YaRN.
  • Memory usage can be high for long-context inference.
  • If possible, we recommend running inference with llama.cpp rather than Transformers (see the sketch below).
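
As one way to follow the llama.cpp recommendation, here is a minimal sketch using the llama-cpp-python bindings. It assumes a GGUF quantization of this model is available locally; the file name and context size are placeholders.

from llama_cpp import Llama

llm = Llama(
    model_path="./Ninja-v1-NSFW-128k.Q4_K_M.gguf",  # placeholder GGUF file name
    n_ctx=32768,  # raise toward 128k as memory allows
)

prompt = "USER: ่‡ช己็ดนไป‹ใ‚’ใ—ใฆใใ ใ•ใ„ ASSISTANT:"  # "Please introduce yourself"
out = llm(prompt, max_tokens=256)
print(out["choices"][0]["text"])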