---
language:
- tr
- en
- es
license: apache-2.0
library_name: transformers
tags:
- Generative AI
- text-generation-inference
- text-generation
- peft
- unsloth
---
# Model Trained By Meforgers
*This model was trained by Meforgers for futuristic projects.*
- # *Firstly*
- If you want to use Unsloth with PyTorch 2.3.0, install one of the builds below. Use an "ampere" build for newer GPUs (RTX 30xx series or higher).
```bash
# CUDA 11.8, PyTorch 2.3.0
pip install "unsloth[cu118-torch230] @ git+https://github.com/unslothai/unsloth.git"
# CUDA 12.1, PyTorch 2.3.0
pip install "unsloth[cu121-torch230] @ git+https://github.com/unslothai/unsloth.git"
# CUDA 11.8, PyTorch 2.3.0, Ampere or newer GPUs
pip install "unsloth[cu118-ampere-torch230] @ git+https://github.com/unslothai/unsloth.git"
# CUDA 12.1, PyTorch 2.3.0, Ampere or newer GPUs
pip install "unsloth[cu121-ampere-torch230] @ git+https://github.com/unslothai/unsloth.git"
```
- You can also install Unsloth on other setups; pick the build that matches your CUDA and PyTorch versions, as shown in the sketch below.
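A quick way to decide which build you need is to inspect your local PyTorch/CUDA environment. This is only a helper sketch, not part of the original instructions:

```python
# Helper sketch (not from the model card): inspect the local environment
# to pick the matching unsloth build.
import torch

print(torch.__version__)                    # e.g. "2.3.0+cu121" -> pick a *torch230* build
print(torch.version.cuda)                   # e.g. "12.1" -> cu121, "11.8" -> cu118
print(torch.cuda.get_device_capability())   # (8, 0) or higher means Ampere+ -> pick an *ampere* build
```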
- # *Usage*
```python
from unsloth import FastLanguageModel
import torch
# Configuration
max_seq_length = 512   # maximum context length for tokenization and generation
dtype = torch.float16
load_in_4bit = True    # load the model in 4-bit quantization to save VRAM
# Alpaca prompt
alpaca_prompt = """### Instruction:
{0}
### Input:
{1}
### Response:
{2}
"""
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Meforgers/Aixr",
    max_seq_length=max_seq_length,
    dtype=dtype,
    load_in_4bit=load_in_4bit,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference mode
inputs = tokenizer(
    [
        alpaca_prompt.format(
            "Can you write me some basic Python code?",  # instruction (change this to your own prompt)
            "",  # input
            "",  # output - leave this blank for generation!
        )
    ],
    return_tensors="pt",
).to("cuda")
outputs = model.generate(**inputs, max_new_tokens=128, use_cache=True)
print(tokenizer.batch_decode(outputs))
```
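Optionally, you can stream tokens to the console as they are generated by passing a `TextStreamer` from `transformers` to `generate`. This is a generic Transformers feature shown here only as a sketch; it is not specific to this model:

```python
from transformers import TextStreamer

# Print tokens to stdout as they are generated, skipping the prompt itself.
streamer = TextStreamer(tokenizer, skip_prompt=True)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=128, use_cache=True)
```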