---
license: apache-2.0
---
## Simple Use Case
This section demonstrates a simple way to interact with our model so that it solves problems in a step-by-step, friendly manner.
### Define the Function
We define a function `get_completion` that takes user input, combines it with a predefined system prompt, and sends the combined prompt to our model. The model's response is then printed out.
Here's how the function is implemented:
```python
import torch
from transformers import pipeline

# Load the model in bfloat16 and let transformers place it on the available device(s)
test_pipeline = pipeline(model="zaursamedov1/FIxtral",
                         torch_dtype=torch.bfloat16,
                         trust_remote_code=True,
                         device_map="auto")

# Define the function
def get_completion(user_input):
    # Combine the predefined system prompt with the user's input
    system = "Think step by step and solve the problem in a friendly way."
    prompt = f"#### System: {system}\n#### User: \n{user_input}\n\n#### Response from FIxtral model:"
    print(prompt)
    # Generate up to 500 new tokens and return the full generated text
    fixtral_prompt = test_pipeline(prompt, max_new_tokens=500)
    return fixtral_prompt[0]["generated_text"]

# Let's prompt (replace "problem" with your own question)
prompt = "problem"
print(get_completion(prompt))
```
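Note that, by default, a text-generation pipeline returns the full text, i.e. the prompt followed by the model's continuation. If you only want the model's answer, you can ask the pipeline for the continuation alone via `return_full_text=False`. The sketch below assumes the model resolves to a text-generation pipeline; the helper name `get_answer` and the example problem are illustrative, not part of the model card:

```python
def get_answer(user_input):
    # Same prompt format as get_completion above
    system = "Think step by step and solve the problem in a friendly way."
    prompt = f"#### System: {system}\n#### User: \n{user_input}\n\n#### Response from FIxtral model:"
    # return_full_text=False drops the echoed prompt and keeps only the new tokens
    result = test_pipeline(prompt, max_new_tokens=500, return_full_text=False)
    return result[0]["generated_text"]

# Example call with an illustrative word problem
print(get_answer("A train travels 60 km in 45 minutes. What is its average speed in km/h?"))
```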