---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
base_model: unsloth/mistral-7b-bnb-4bit
---
# Nepali GPT

Nepali GPT is a Nepali-language model fine-tuned from Mistral-7B. The fine-tuning process uses Unsloth, which speeds up training for better efficiency. A minimal fine-tuning sketch is included below, after the model description.
## Model Description

- Model type: A 7B-parameter fine-tuned model
- Primary language(s): Nepali
- License: Apache 2.0
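
## Fine-tuning Sketch

The exact training data and hyperparameters are not documented in this card. The snippet below is only a minimal sketch of how a Mistral-7B base can be fine-tuned with Unsloth's LoRA adapters and TRL's `SFTTrainer` (it assumes a TRL version whose `SFTTrainer` still accepts `dataset_text_field`/`max_seq_length`). Run the Installation step below first; the dataset name and hyperparameters here are placeholders, not the actual recipe used for this model.

```python
# Minimal Unsloth LoRA fine-tuning sketch (placeholders, not the exact recipe).
import torch
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

max_seq_length = 2048

# Load the 4-bit base model listed in the metadata above.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/mistral-7b-bnb-4bit",
    max_seq_length = max_seq_length,
    dtype = None,
    load_in_4bit = True,
)

# Attach LoRA adapters so only a small set of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    lora_alpha = 16,
    lora_dropout = 0,
    bias = "none",
    use_gradient_checkpointing = True,
    random_state = 3407,
)

# Hypothetical Nepali instruction dataset with a pre-formatted "text" column.
dataset = load_dataset("your-username/nepali-instructions", split = "train")

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",
    max_seq_length = max_seq_length,
    args = TrainingArguments(
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4,
        max_steps = 60,                     # placeholder
        learning_rate = 2e-4,               # placeholder
        fp16 = not torch.cuda.is_bf16_supported(),
        bf16 = torch.cuda.is_bf16_supported(),
        output_dir = "outputs",
    ),
)
trainer.train()
```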
## Installation

Install Unsloth and its dependencies (the snippet below is intended for a Colab notebook):

```python
%%capture
import torch
major_version, minor_version = torch.cuda.get_device_capability()
# Must install separately since Colab has torch 2.2.1, which breaks packages
!pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
if major_version >= 8:
    # Use this for new GPUs like Ampere, Hopper GPUs (RTX 30xx, RTX 40xx, A100, H100, L40)
    !pip install --no-deps packaging ninja einops flash-attn xformers trl peft accelerate bitsandbytes
else:
    # Use this for older GPUs (V100, Tesla T4, RTX 20xx)
    !pip install --no-deps xformers trl peft accelerate bitsandbytes
pass
```
## Model Loading

```python
from unsloth import FastLanguageModel
import torch

max_seq_length = 2048
dtype = None  # None for auto-detection. Float16 for Tesla T4/V100, Bfloat16 for Ampere+
load_in_4bit = True  # Use 4-bit quantization to reduce memory usage. Can be False.

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "Heem2/NEPALIGPT-1.0",
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
)

# Alpaca-style prompt template: an instruction, an optional input, and the response slot.
prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""
```
## Inference

```python
FastLanguageModel.for_inference(model)  # Enable Unsloth's faster inference mode

inputs = tokenizer(
    [
        prompt.format(
            "नेपालको बारेमा व्याख्या गर्नुहोस्?",  # instruction: "Explain about Nepal?"
            "संस्कृति, भाषा, भूगोल, राजनीति, जलवायु",  # input: "culture, language, geography, politics, climate"
            "",  # output - leave this blank for generation!
        )
    ], return_tensors = "pt").to("cuda")

outputs = model.generate(**inputs, max_new_tokens = 1000, use_cache = True)
tokenizer.batch_decode(outputs)
```
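
The call above decodes the full sequence (prompt included) only after generation finishes. For interactive use, tokens can also be printed as they are produced; the sketch below uses `transformers.TextStreamer`, which is not part of the original card but works with the `model` and `inputs` defined above.

```python
from transformers import TextStreamer

# Stream generated tokens to stdout instead of waiting for the full output.
text_streamer = TextStreamer(tokenizer, skip_prompt = True)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 1000, use_cache = True)
```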
## Citation Information

If you find this model useful, please consider giving 👏 and citing the author, @heem2.
## Contributions

- This model was developed by Hem Bahadur Gurung. Feel free to DM him if you have any questions.