---
language:
- tr
- en
- es
license: apache-2.0
library_name: transformers
tags:
- Generative AI
- text-generation-inference
- text-generation
- peft
- unsloth
---

# Model Trained By Meforgers

*This model was trained by Meforgers for futuristic projects.*

- # *Installation*

- If you want to use Unsloth with PyTorch 2.3.0, install one of the builds below. Pick the line that matches your CUDA version (cu118 = CUDA 11.8, cu121 = CUDA 12.1), and use an "ampere" build for RTX 30xx-series GPUs or newer; a quick way to check your setup is sketched after the commands.

```bash
# Pick ONE of the following, depending on your CUDA version and GPU generation.

# PyTorch 2.3.0, CUDA 11.8
pip install "unsloth[cu118-torch230] @ git+https://github.com/unslothai/unsloth.git"

# PyTorch 2.3.0, CUDA 12.1
pip install "unsloth[cu121-torch230] @ git+https://github.com/unslothai/unsloth.git"

# PyTorch 2.3.0, CUDA 11.8, Ampere (RTX 30xx) or newer GPUs
pip install "unsloth[cu118-ampere-torch230] @ git+https://github.com/unslothai/unsloth.git"

# PyTorch 2.3.0, CUDA 12.1, Ampere (RTX 30xx) or newer GPUs
pip install "unsloth[cu121-ampere-torch230] @ git+https://github.com/unslothai/unsloth.git"
```
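If you are not sure which of the commands above matches your machine, the snippet below is a minimal sketch (assuming `torch` is already installed) that prints the CUDA version your PyTorch build was compiled against and whether your GPU is Ampere-class or newer:

```python
import torch

# CUDA toolkit version this PyTorch build was compiled against, e.g. "11.8" or "12.1"
print("CUDA version:", torch.version.cuda)

if torch.cuda.is_available():
    # Ampere (RTX 30xx) and newer GPUs report compute capability 8.0 or higher
    major, minor = torch.cuda.get_device_capability()
    print("GPU:", torch.cuda.get_device_name(0))
    print("Use an 'ampere' build:", major >= 8)
else:
    print("No CUDA GPU detected.")
```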
- You can also install Unsloth for other PyTorch and CUDA combinations; check the Unsloth repository for the install command that matches your system.

- # *Usage*

```python
from unsloth import FastLanguageModel
import torch

# Variable side
max_seq_length = 512   # maximum context length used for this example
dtype = torch.float16  # computation dtype; set to None to auto-detect
load_in_4bit = True    # load the weights in 4-bit to reduce VRAM usage

# Alpaca prompt
alpaca_prompt = """### Instruction:
{0}

### Input:
{1}

### Response:
{2}
"""

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Meforgers/Aixr",
    max_seq_length=max_seq_length,
    dtype=dtype,
    load_in_4bit=load_in_4bit,
)

# Switch the model into Unsloth's faster inference mode
FastLanguageModel.for_inference(model)

inputs = tokenizer(
    [
        alpaca_prompt.format(
            "Can you write me some basic Python code?",  # instruction (change this to your own prompt)
            "",  # input
            "",  # output - leave this blank for generation!
        )
    ],
    return_tensors="pt",
).to("cuda")

outputs = model.generate(**inputs, max_new_tokens=128, use_cache=True)
print(tokenizer.batch_decode(outputs))
```
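If you would like to see the answer while it is being generated instead of waiting for `generate` to finish, you can stream tokens with the `transformers` `TextStreamer`. This is a minimal sketch that reuses the `model`, `tokenizer`, and `inputs` from the example above:

```python
from transformers import TextStreamer

# Print tokens to stdout as they are generated; skip_prompt hides the echoed prompt
streamer = TextStreamer(tokenizer, skip_prompt=True)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=128, use_cache=True)
```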