
Haary/haryra-7b-id


Haary/haryra-7b-id is a QLoRA-quantized version of Ichsan2895/Merak-7B-v3.

Install the necessary packages

Requires: Transformers installed from source (only needed for versions <= v4.34, since tokenizer chat templates were introduced in v4.34).

# Install transformers from source - only needed for versions <= v4.34
!pip install git+https://github.com/huggingface/transformers.git
!pip install accelerate
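If you are unsure whether your installed Transformers release already supports chat templates, a plain version comparison is enough. The helper below, `supports_chat_templates`, is a hypothetical illustration and not part of the Transformers API:

```python
# Hypothetical helper: returns True when a Transformers version string
# is at least 4.34, the release that introduced tokenizer chat templates.
def supports_chat_templates(version: str) -> bool:
    major, minor = (int(part) for part in version.split(".")[:2])
    return (major, minor) >= (4, 34)

print(supports_chat_templates("4.33.3"))  # False: install from source instead
print(supports_chat_templates("4.35.2"))  # True: the pip release is enough
```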

Example Python code

import torch
from transformers import pipeline

pipe = pipeline("text-generation", model="Haary/haryra-7b-id", torch_dtype=torch.bfloat16, device_map="auto")

# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
    {
        "role": "system",
        "content": "Anda adalah chatbot ramah yang selalu merespons dengan singkat dan jelas",
    },
    {"role": "user", "content": "Apa bedanya antara raspberry pi dan esp32?"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
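Note that `generated_text` contains the formatted prompt followed by the model's reply. If you only want the reply, you can slice the prompt off the front; the sketch below uses a made-up prompt string (the real formatting comes from the tokenizer's chat template), and `extract_reply` is a hypothetical helper, not a library function:

```python
def extract_reply(generated_text: str, prompt: str) -> str:
    # The pipeline echoes the prompt at the start of "generated_text";
    # slice it off to keep only the newly generated reply.
    if generated_text.startswith(prompt):
        return generated_text[len(prompt):].strip()
    return generated_text.strip()

# Dummy strings only, so this runs without downloading the model.
prompt = "<|user|>\nHalo\n<|assistant|>\n"
generated = prompt + "Halo! Ada yang bisa saya bantu?"
print(extract_reply(generated, prompt))  # Halo! Ada yang bisa saya bantu?
```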

Credits

Ichsan2895/Merak-7B-v3 for the base model.

Image source: pixabay.com

Model details

Model size: 6.74B params
Tensor type: FP16 (Safetensors)
