
karasu-chatvector-mlx_lm-chatalpaca

Karasu model fine-tuned with the LoRA method on alpaca_cleaned_ja_json.

  • Base model: niryuu/Karasu-1.1b-chat-vector
  • Training dataset: shi3z/alpaca_cleaned_ja_json, formatted with mlx_lm's chat template (see the sketch below)
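
The card does not show how the dataset was converted before training. As a rough, illustrative sketch (the record contents and field handling here are assumptions, not the author's actual preprocessing), each Alpaca-style record can be rendered through the base model's chat template to produce one training text:

from transformers import AutoTokenizer

# Illustrative only: turn one Alpaca-style record into chat-template text.
record = {
    "instruction": "次の文章を要約してください。",  # "Summarize the following text."
    "input": "大規模言語モデルは...",
    "output": "...",
}

tokenizer = AutoTokenizer.from_pretrained("niryuu/Karasu-1.1b-chat-vector")
messages = [
    {"role": "user", "content": record["instruction"] + "\n" + record["input"]},
    {"role": "assistant", "content": record["output"]},
]
text = tokenizer.apply_chat_template(messages, tokenize=False)
print(text)  # one formatted training example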

πŸ’» Usage

!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "aipib/karasu-chatvector-mlx_lm-chatalpaca"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
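
Since the model targets MLX (mlx_lm), it can also be run with the mlx_lm Python API. The snippet below is a minimal sketch, assuming the repository loads directly with mlx_lm.load; generation parameters are illustrative:

from mlx_lm import load, generate

# Load model and tokenizer with mlx_lm (assumes MLX-compatible weights in the repo)
model, tokenizer = load("aipib/karasu-chatvector-mlx_lm-chatalpaca")

messages = [{"role": "user", "content": "What is a large language model?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

response = generate(model, tokenizer, prompt=prompt, max_tokens=256)
print(response)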
Model size: 1.1B params · Tensor type: BF16 · Format: Safetensors
