
Sidrap-7B-v2-GPTQ-4bit

Sidrap-7B-v2-GPTQ-4bit is a 4-bit quantized version of Sidrap-7B-v2, one of the best open Indonesian-language (bahasa Indonesia) LLMs available today. The model has been quantized with AutoGPTQ to produce a smaller model that can run in lower-resource environments with faster inference. The quantization uses a random subset of the original training data to "calibrate" the weights, resulting in an optimally compact model with minimal loss in accuracy.
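The core idea behind calibrated 4-bit quantization can be sketched in plain Python. This is a simplified illustration of the concept only, not the AutoGPTQ/GPTQ algorithm itself: each group of weights is mapped to 16 integer levels using a per-group scale derived from the weights' observed range (GPTQ additionally uses second-order calibration statistics).

```python
# Simplified sketch of grouped 4-bit quantization (illustrative only;
# AutoGPTQ's real GPTQ algorithm also uses Hessian-based calibration).

def quantize_4bit(weights, group_size=4):
    """Quantize a list of floats to 4-bit signed integers (-7..7)
    with one scale per group of `group_size` weights."""
    quantized, scales = [], []
    for start in range(0, len(weights), group_size):
        group = weights[start:start + group_size]
        # Scale so the largest magnitude in the group maps to +/-7.
        scale = max(abs(w) for w in group) / 7 or 1.0
        scales.append(scale)
        quantized.append([round(w / scale) for w in group])
    return quantized, scales

def dequantize_4bit(quantized, scales):
    """Recover approximate float weights from quantized groups."""
    return [q * s for group, s in zip(quantized, scales) for q in group]
```

Each weight now needs only 4 bits plus a shared per-group scale, which is where the memory savings come from; the small rounding error is what calibration tries to minimize.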

Usage

The fastest way to use this model is through the AutoGPTQ-API server:

python -m gptqapi.server robinsyihab/Sidrap-7B-v2-GPTQ-4bit

Or use AutoGPTQ directly:

from transformers import AutoTokenizer, pipeline
from auto_gptq import AutoGPTQForCausalLM

model_id = "robinsyihab/Sidrap-7B-v2-GPTQ-4bit"

tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)

model = AutoGPTQForCausalLM.from_quantized(model_id,
                                           device="cuda:0", 
                                           inject_fused_mlp=True,
                                           inject_fused_attention=True,
                                           trust_remote_code=True)

chat = pipeline("text-generation", 
                model=model,
                tokenizer=tokenizer, 
                device_map="auto")

prompt = ("<s>[INST] <<SYS>>\n"
    "Anda adalah asisten yang suka membantu, penuh hormat, dan jujur. Selalu jawab semaksimal mungkin, sambil tetap aman. Jawaban Anda tidak boleh berisi konten berbahaya, tidak etis, rasis, seksis, beracun, atau ilegal. Harap pastikan bahwa tanggapan Anda tidak memihak secara sosial dan bersifat positif.\n"
    "Jika sebuah pertanyaan tidak masuk akal, atau tidak koheren secara faktual, jelaskan alasannya daripada menjawab sesuatu yang tidak benar. Jika Anda tidak mengetahui jawaban atas sebuah pertanyaan, mohon jangan membagikan informasi palsu.\n"
    "<</SYS>>\n\n"
    "Siapa penulis kitab alfiyah? [/INST]\n"
    )

sequences = chat(prompt,
                 num_beams=2,
                 max_length=512,  # adjust to your desired output length
                 top_k=10,
                 num_return_sequences=1)
print(sequences[0]['generated_text'])
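If you send multiple questions, it can be convenient to build the Llama-2-style chat prompt shown above with a small helper. This helper is not part of the model's own code, just a sketch that reproduces the template used in the example:

```python
# Hypothetical helper (not from the model repo): builds a Llama-2-style
# chat prompt in the same format as the example above.

DEFAULT_SYSTEM = (
    "Anda adalah asisten yang suka membantu, penuh hormat, dan jujur. "
    "Selalu jawab semaksimal mungkin, sambil tetap aman."
)

def build_prompt(user_message, system=DEFAULT_SYSTEM):
    """Wrap a user message in the <s>[INST] <<SYS>> ... [/INST] template."""
    return (f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n"
            f"{user_message} [/INST]\n")
```

For example, `build_prompt("Siapa penulis kitab alfiyah?")` produces the same structure as the hand-written prompt above and can be passed directly to the pipeline.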

License

Sidrap-7B-v2-GPTQ-4bit is licensed under the Apache 2.0 License.

Author

Robin Syihab (@anvie)
