
Model Description:

The t5-base-c4jfleg model was created by fine-tuning the T5-base model on the JFLEG dataset and the C4 200M dataset, taking roughly 3,000 examples from each, with the objective of grammar correction.

Google's original T5-base model was pre-trained on the C4 dataset.

The T5 model was presented in *Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer* by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu.

Prefix:

The model uses "grammar: " as the prefix for the input text when performing grammatical correction.

Usage:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a text-to-text generation pipeline.
checkpoint = "team-writing-assistant/t5-base-c4jfleg"
model = pipeline("text2text-generation", model=checkpoint)

# Prepend the "grammar: " prefix the model was fine-tuned with.
text = "Speed of light is fastest then speed of sound"
text = "grammar: " + text

output = model(text)
print("Result:", output[0]["generated_text"])
# Result: Speed of light is faster than speed of sound.
```
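The pipeline forwards generation keyword arguments to the underlying `generate` method, so decoding can be tuned at call time. A minimal sketch; the `num_beams` and `max_length` values below are illustrative assumptions, not settings published for this model:

```python
# Beam search often yields more fluent corrections than greedy decoding.
# These values are illustrative, not recommended settings for this checkpoint.
output = model(text, num_beams=5, max_length=64)
print("Result:", output[0]["generated_text"])
```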

Other Examples:

Input: My grammar are bad.
Output: My grammar is bad.

Input: Who are the president?
Output: Who is the president?
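The pipeline also accepts a list of inputs, which is convenient for correcting several sentences at once. A minimal sketch, reusing the pipeline loaded above; the sentences are the examples shown here:

```python
# Batch correction: prefix each sentence, then pass the whole list to the pipeline.
sentences = ["My grammar are bad.", "Who are the president?"]
inputs = ["grammar: " + s for s in sentences]

for result in model(inputs):
    print(result["generated_text"])
```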
