---
language:
- tr
- en
- es
license: apache-2.0
library_name: transformers
tags:
- Generative AI
- text-generation-inference
- text-generation
- peft
- unsloth
- medical
- biology
- code
- space
---
# Model Trained By Meforgers

*This model, named 'Aixr', is designed for science and artificial intelligence development. You can use it as a foundation for scientific projects and new ideas. In short, Aixr is an artificial intelligence model focused on futurism and innovation.*

# *Installation*
- If you intend to use Unsloth with PyTorch 2.3.0, pick the command below that matches your CUDA version, and use the "ampere" variants for newer GPUs (RTX 30xx or higher).

```bash
# CUDA 11.8, PyTorch 2.3.0
pip install "unsloth[cu118-torch230] @ git+https://github.com/unslothai/unsloth.git"

# CUDA 12.1, PyTorch 2.3.0
pip install "unsloth[cu121-torch230] @ git+https://github.com/unslothai/unsloth.git"

# CUDA 11.8, PyTorch 2.3.0, Ampere or newer GPUs (RTX 30xx+)
pip install "unsloth[cu118-ampere-torch230] @ git+https://github.com/unslothai/unsloth.git"

# CUDA 12.1, PyTorch 2.3.0, Ampere or newer GPUs (RTX 30xx+)
pip install "unsloth[cu121-ampere-torch230] @ git+https://github.com/unslothai/unsloth.git"
```
- You can also install Unsloth on other CUDA/PyTorch combinations; see the Unsloth repository (https://github.com/unslothai/unsloth) for the full list of install options. A quick way to check which command matches your machine is shown below.
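As a quick check, the snippet below (a minimal sketch, assuming a CUDA build of PyTorch is already installed) prints the values that determine which install tag to use:

```python
# Minimal check to pick the matching install tag (assumes a CUDA-enabled PyTorch).
import torch

print(torch.__version__)                    # e.g. "2.3.0"  -> use a *-torch230 tag
print(torch.version.cuda)                   # "11.8" -> cu118, "12.1" -> cu121
print(torch.cuda.get_device_capability())   # (8, 0) or higher -> Ampere or newer, use an "ampere" tag
```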
# *Usage*
```python
from unsloth import FastLanguageModel
import torch

# Basic settings
max_seq_length = 512
dtype = torch.float16   # use torch.bfloat16 on Ampere or newer GPUs if supported
load_in_4bit = True     # load in 4-bit to reduce VRAM usage

# Alpaca-style prompt template
alpaca_prompt = """### Instruction:
{0}

### Input:
{1}

### Response:
{2}
"""

# Load the model and tokenizer
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Meforgers/Aixr",
    max_seq_length=max_seq_length,
    dtype=dtype,
    load_in_4bit=load_in_4bit,
)

# Enable Unsloth's faster inference mode
FastLanguageModel.for_inference(model)

inputs = tokenizer(
    [
        alpaca_prompt.format(
            "Can you write me a basic Python example?",  # instruction (replace with your own prompt)
            "",  # input
            "",  # output - leave this blank for generation!
        )
    ],
    return_tensors="pt",
).to("cuda")

outputs = model.generate(**inputs, max_new_tokens=128, use_cache=True)
print(tokenizer.batch_decode(outputs))
```
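For longer generations you may prefer to see tokens as they are produced instead of waiting for the full decode. A minimal sketch using the `TextStreamer` class from transformers, reusing the `model`, `tokenizer`, and `inputs` prepared above:

```python
from transformers import TextStreamer

# Stream generated tokens to stdout as they are produced; skip_prompt hides the echoed prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=128, use_cache=True)
```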