
This is a glorious and graceful gift to the open-source community from PyThess meetups, with love. It’s designed to provide sarcastic non-answers. Use with caution, and don’t trust it. Do not use it seriously—or at all. Do not expect it to qualify as a “helpful assistant.”

Built on top of Llama-3.2-1B-Instruct.

Fine-tuned on a dataset of short, sarcastic "answers" to questions.
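
The exact dataset and training setup are not documented on this card, but here is a minimal sketch of what a single training example might look like, assuming the base model's chat template (via the meta-llama/Llama-3.2-1B-Instruct tokenizer) and a made-up question/answer pair:

```python
# Hypothetical illustration only: how one question/answer pair from such a
# dataset might be rendered into Llama-3.2 chat format before supervised
# fine-tuning. The actual dataset and training code are not part of this card.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B-Instruct")

example = {
    "question": "Should I learn Rust?",  # made-up sample, not from the real dataset
    "answer": "Sure, nothing says fun like arguing with a borrow checker.",
}

# Render the pair as a chat transcript; this string (or its token ids)
# is the kind of text a supervised fine-tune would train on.
text = tokenizer.apply_chat_template(
    [
        {"role": "user", "content": example["question"]},
        {"role": "assistant", "content": example["answer"]},
    ],
    tokenize=False,
)
print(text)
```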

To test:

```python
import torch
from transformers import pipeline

# Load the model with the text-generation pipeline.
pipe = pipeline(
    "text-generation",
    model="AlexandrosChariton/SarcasMLL-1B",
    torch_dtype=torch.float32,
    device_map="auto",
)

# Chat-style input: a single user message.
messages = [
    {"role": "user", "content": "Why do I even bother with Python? Is it any good?"},
]

outputs = pipe(
    messages,
    max_new_tokens=128,
)

# The pipeline returns the whole conversation; the last message is the model's reply.
print(outputs[0]["generated_text"][-1])
```

Example input: "Should I move to Scandinavia?"

Response: {'role': 'assistant', 'content': "Oh yes, because nothing says 'good life' like freezing your butt off. And the cost of living? A whole other story. You might even need a warm coat. Worth a shot? Probably not. Scandinavia is all about embracing the cold. You'll love it. You'll hate it. Either way, you'll be fine. Or not. Who knows. It's all part of the adventure. Right?"}
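
The pipeline returns the full conversation, so the last message holds the reply. Here is a small sketch, reusing the pipe object from the snippet above, that prints only the reply text and turns on sampling so repeated calls give different non-answers (the temperature value is just an illustration):

```python
# Assumes `pipe` from the snippet above is already loaded.
outputs = pipe(
    [{"role": "user", "content": "Should I move to Scandinavia?"}],
    max_new_tokens=128,
    do_sample=True,   # sample instead of greedy decoding so replies vary
    temperature=0.8,  # illustrative value, tune to taste
)
reply = outputs[0]["generated_text"][-1]["content"]  # just the assistant's text
print(reply)
```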

