# Model Card for alibidaran/Gemma2_Farsi

## Model Details

### Model Description
This model is a Persian question-answering (Q&A) model fine-tuned from Google's open-source Gemma model. It can answer general questions and can serve as a base for chatbot applications or for further fine-tuning on other datasets.
- Developed by: Ali Bidaran
- Language(s) (NLP): Persian (Farsi)
- Finetuned from model: Gemma-2b
## Uses
This model can be used to develop chatbot applications, for Q&A and instruction-following tasks, and as a base for fine-tuning on other Persian datasets.
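For the fine-tuning use case, parameter-efficient methods such as LoRA are a common choice for Gemma-sized models. The sketch below is a configuration fragment only; the use of the `peft` library and every hyperparameter value are assumptions for illustration, not taken from this card:

```python
# Hypothetical LoRA configuration for further fine-tuning on a Persian
# dataset. All values below are illustrative assumptions, not the
# settings used to train alibidaran/Gemma2_Farsi.
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,                       # LoRA rank
    lora_alpha=32,              # scaling factor
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
```

Such a config would typically be passed to `peft.get_peft_model` together with the loaded base model before training.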
### Direct Use
```python
import os

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

model_id = "alibidaran/Gemma2_Farsi"

# Load the model in 4-bit (NF4) to reduce GPU memory usage.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id, token=os.environ['HF_TOKEN'])
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map={"": 0},
    token=os.environ['HF_TOKEN'],
)

prompt = "چند روش برای کاهش چربی بدن ارائه نمایید؟"  # "Suggest some ways to reduce body fat."
# Keep the template spelling ("###Asistant") as written; it presumably
# matches the prompt format used during fine-tuning.
text = f"<s> ###Human: {prompt} ###Asistant: "
inputs = tokenizer(text, return_tensors='pt').to('cuda')

with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=400,
        do_sample=True,
        top_p=0.99,
        top_k=10,
        temperature=0.7,
    )
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
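The prompt template and output decoding above can be factored into small string helpers, which is convenient when wiring the model into a chatbot. The helper names (`build_prompt`, `extract_answer`) are illustrative and not part of this card:

```python
def build_prompt(question: str) -> str:
    """Wrap a user question in the ###Human/###Asistant template from this card."""
    return f"<s> ###Human: {question} ###Asistant: "

def extract_answer(decoded: str) -> str:
    """Return only the text after the assistant marker in a decoded generation."""
    return decoded.split("###Asistant:")[-1].strip()

# Example: feed build_prompt(...) to the tokenizer, then pass the decoded
# generation through extract_answer(...) to drop the echoed prompt.
```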