
English Grammar Error Correction with T5

Overview

This repository contains a T5 model fine-tuned for English grammar error correction with Hugging Face's Transformers library. The model uses a sequence-to-sequence (seq2seq) architecture and was fine-tuned on the C4 dataset for grammar correction.
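For context, the sketch below shows how a seq2seq fine-tuning setup of this kind typically looks with Transformers. It is illustrative only: the base checkpoint (t5-base), the training pair, and the hyperparameters are assumptions, not the actual training configuration of this model.

# Illustrative fine-tuning sketch; t5-base, train_pairs, and the learning rate are hypothetical
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

# Hypothetical (incorrect, corrected) sentence pairs; a real run would use a C4-derived grammar corpus
train_pairs = [("He are an teachers.", "He is a teacher.")]

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
model.train()
for incorrect, corrected in train_pairs:
    inputs = tokenizer(incorrect, return_tensors="pt", truncation=True, max_length=64)
    labels = tokenizer(corrected, return_tensors="pt", truncation=True, max_length=64).input_ids
    loss = model(**inputs, labels=labels).loss  # seq2seq cross-entropy loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()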

Model Details

  • Model Name: english-grammar-error-correction-t5-seq2seq
  • Tokenizer: T5Tokenizer
  • Model Architecture: T5ForConditionalGeneration
  • Training Data: Fine-tuned on the C4 dataset for grammar error correction.
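Because the card names T5Tokenizer and T5ForConditionalGeneration, the checkpoint can also be loaded with those classes directly; this is equivalent to the Auto* classes used in the Usage section below.

from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("thenHung/english-grammar-error-correction-t5-seq2seq")
model = T5ForConditionalGeneration.from_pretrained("thenHung/english-grammar-error-correction-t5-seq2seq")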

Usage

# Load model directly
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

torch_device = 'cuda' if torch.cuda.is_available() else 'cpu'

tokenizer = AutoTokenizer.from_pretrained("thenHung/english-grammar-error-correction-t5-seq2seq")
model = AutoModelForSeq2SeqLM.from_pretrained("thenHung/english-grammar-error-correction-t5-seq2seq").to(torch_device)


def correct_grammar(input_text, num_return_sequences):
  # Tokenize the input sentence and move it to the same device as the model
  batch = tokenizer([input_text], truncation=True, padding='max_length', max_length=64, return_tensors="pt").to(torch_device)
  # Beam search over 4 beams; note that temperature is ignored unless do_sample=True is set
  translated = model.generate(**batch, max_length=64, num_beams=4, num_return_sequences=num_return_sequences, temperature=1.5)
  # Decode the generated token ids back into text
  tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)
  return tgt_text

input_text = """
He are an teachers.
"""
num_return_sequences = 3
corrected_texts = correct_grammar(input_text, num_return_sequences)
print(corrected_texts)

# output:
# ['He is a teacher.', 'He is an educator.', 'He is one of the teachers.']
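Several sentences can also be corrected in one call by passing a list to the tokenizer; the example sentences below are illustrative.

# Batched correction (example sentences are hypothetical)
sentences = ["She go to school yesterday.", "They has finished the work."]
batch = tokenizer(sentences, truncation=True, padding=True, max_length=64, return_tensors="pt").to(torch_device)
outputs = model.generate(**batch, max_length=64, num_beams=4)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))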