Dataset: grammarly/coedit
How to use jbochi/coedit-base with Transformers:

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("jbochi/coedit-base")
model = AutoModelForSeq2SeqLM.from_pretrained("jbochi/coedit-base")
```

This model is a fine-tuned version of google/flan-t5-base on the CoEdIT dataset.
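A minimal inference sketch: CoEdIT models are instruction-tuned, so the edit instruction is prepended to the input text in natural language. The instruction wording and example sentence below are illustrative, not taken from this card.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("jbochi/coedit-base")
model = AutoModelForSeq2SeqLM.from_pretrained("jbochi/coedit-base")

# The edit instruction is part of the input string itself (illustrative prompt).
text = "Fix grammatical errors in this sentence: I has a apple."
inputs = tokenizer(text, return_tensors="pt")

# Generate the edited sentence; max_length bounds the output (mean Gen Len is ~17 tokens).
outputs = model.generate(inputs.input_ids, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Other edit intents (e.g. paraphrasing or simplification) use the same pattern with a different instruction prefix.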
It achieves the following results on the evaluation set (final epoch, from the training table below):
- Loss: 0.5978
- Rouge1: 60.5931
- Rouge2: 48.0165
- Rougel: 57.8997
- Rougelsum: 57.9335
- Gen Len: 16.6729

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
The following results were obtained during training (the training hyperparameters were not recorded in this card):
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|---|---|---|---|---|---|---|---|---|
| 0.7478 | 1.0 | 6908 | 0.6452 | 59.7569 | 46.3099 | 56.4301 | 56.4464 | 16.6268 |
| 0.7127 | 2.0 | 13816 | 0.6086 | 60.2082 | 47.27 | 57.2356 | 57.2531 | 16.6513 |
| 0.7136 | 3.0 | 20724 | 0.6059 | 60.3747 | 47.6257 | 57.595 | 57.6184 | 16.6349 |
| 0.7038 | 4.0 | 27632 | 0.5999 | 60.5075 | 47.7856 | 57.7316 | 57.7698 | 16.6735 |
| 0.6911 | 5.0 | 34540 | 0.5978 | 60.5931 | 48.0165 | 57.8997 | 57.9335 | 16.6729 |
Base model: google/flan-t5-base