How does this model work?
#1 — opened by EmptyBucket
I'm working on a project that uses Estonian as a test-bed. I'm trying to get reasonably reliable grammar error detection and thought I'd give this model a try. However, I don't know what instructions it responds to or what input format it expects.
Currently I'm getting some odd output from the model. Do I need a custom system prompt to give it instructions?
Thank you for any information.
Hi! Thank you for bringing this up. We've updated the README for this model to include instructions on how to use the model.
The simplest way to use the model is the following:
```python
from transformers import pipeline
import torch

gec_pipe = pipeline(
    "text-generation",
    model="tartuNLP/Llammas-base-p1-llama-errors-p2-GEC",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    do_sample=False, num_beams=4, temperature=None, top_p=None
)
gec_pipe.tokenizer.pad_token_id = gec_pipe.tokenizer.eos_token_id
gec_pipe.tokenizer.padding_side = "left"

# Input sentence here:
input_sentence = "Ma läheb koju"
gec_pipe([{"role": "user", "content": input_sentence}], max_new_tokens=300)[0]["generated_text"][-1]["content"]
```
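If you want to correct more than one sentence, a small wrapper around the call above keeps the chat-message plumbing in one place. This is just a sketch, not part of the model card: the helper name `correct_sentences` is ours, and it assumes the pipeline returns the chat history with the correction as the last message, as in the snippet above.

```python
from typing import Callable, List


def correct_sentences(gec_pipe: Callable, sentences: List[str],
                      max_new_tokens: int = 300) -> List[str]:
    """Run each sentence through the GEC pipeline and collect the corrections.

    `gec_pipe` is expected to accept a list of chat messages and return the
    full chat history under "generated_text"; the correction is assumed to be
    the final message's "content", matching the README snippet.
    """
    corrections = []
    for sentence in sentences:
        out = gec_pipe([{"role": "user", "content": sentence}],
                       max_new_tokens=max_new_tokens)
        corrections.append(out[0]["generated_text"][-1]["content"])
    return corrections
```

For example, `correct_sentences(gec_pipe, ["Ma läheb koju", "Nemad on õpilane"])` would return a list of corrected sentences in the same order.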
Feel free to let us know if you keep having issues with the output.