# TSAR 2025 Shared Task on RCTS (CEFR Evaluators)

This model is a fine-tuned version of answerdotai/ModernBERT-base; the fine-tuning dataset is not specified in this card. Per-epoch results on the evaluation set are reported in the training results table below.
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
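## How to use

The card does not include a usage example, so the following is a minimal sketch. It assumes the model is a standard `transformers` sequence classifier over CEFR levels (consistent with the collection name and the F1 metric reported below); the repository id, the example sentence, and the label mapping are placeholders, not taken from this card.

```python
# Minimal usage sketch. The repository id below is a placeholder; replace it
# with this model's actual Hugging Face repo id.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "your-org/modernbert-cefr-evaluator"  # placeholder, not the real id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "The cat sat quietly on the warm windowsill."  # illustrative input
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = int(logits.argmax(dim=-1))
# id2label comes from the model config; for a CEFR evaluator it would
# presumably map to levels such as A1-C2 (an assumption, not stated here).
print(model.config.id2label[predicted_id])
```

Note that ModernBERT-based checkpoints require a recent `transformers` release (v4.48 or later).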
## Training procedure

### Training hyperparameters

The hyperparameters used during training are not listed in this card; a hypothetical setup is sketched after the results table below.

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 |
|---|---|---|---|---|
| 15.3569 | 1.0 | 281 | 1.0453 | 0.4886 |
| 13.4535 | 2.0 | 562 | 0.7099 | 0.7080 |
| 9.956 | 3.0 | 843 | 0.7002 | 0.7299 |
| 3.4868 | 4.0 | 1124 | 0.8621 | 0.7453 |
| 2.4503 | 5.0 | 1405 | 0.7991 | 0.8158 |
| 1.4969 | 6.0 | 1686 | 1.0259 | 0.7871 |
| 1.4578 | 7.0 | 1967 | 1.1622 | 0.7562 |
| 0.6609 | 8.0 | 2248 | 1.0912 | 0.8218 |
| 0.4203 | 9.0 | 2529 | 1.2711 | 0.8231 |
| 0.0011 | 10.0 | 2810 | 1.3272 | 0.8373 |
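
The sketch below shows one typical `Trainer` setup that would produce per-epoch validation loss and F1 like the table above. Only the base model and the 10 training epochs come from this card; the dataset id, column names, number of labels, learning rate, batch size, and F1 averaging are assumptions.

```python
# Hypothetical fine-tuning sketch. Values marked "placeholder" are NOT the
# ones used for this model; they only illustrate a comparable setup.
import numpy as np
from datasets import load_dataset
from sklearn.metrics import f1_score
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base_model = "answerdotai/ModernBERT-base"
num_labels = 6  # e.g. CEFR levels A1-C2 (assumption)

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(
    base_model, num_labels=num_labels
)

# Placeholder dataset id; assumes "text" and "label" columns.
dataset = load_dataset("your-org/cefr-labelled-texts")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True)

tokenized = dataset.map(tokenize, batched=True)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # Weighted F1 is a common choice for imbalanced CEFR labels; the
    # averaging used for this card's F1 column is not stated.
    return {"f1": f1_score(labels, preds, average="weighted")}

args = TrainingArguments(
    output_dir="modernbert-cefr",
    num_train_epochs=10,             # matches the 10 epochs in the table above
    learning_rate=2e-5,              # placeholder
    per_device_train_batch_size=16,  # placeholder
    eval_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="f1",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    processing_class=tokenizer,   # enables dynamic padding via the default collator
    compute_metrics=compute_metrics,
)

trainer.train()
```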
## Citation

```bibtex
@inproceedings{alva-manchego-etal-2025-findings,
    title = "Findings of the {TSAR} 2025 Shared Task on Readability-Controlled Text Simplification",
    author = "Alva-Manchego, Fernando and Stodden, Regina and Imperial, Joseph Marvin and Barayan, Abdullah and North, Kai and Tayyar Madabushi, Harish",
    editor = "Shardlow, Matthew and Alva-Manchego, Fernando and North, Kai and Stodden, Regina and Saggion, Horacio and Khallaf, Nouran and Hayakawa, Akio",
    booktitle = "Proceedings of the Fourth Workshop on Text Simplification, Accessibility and Readability (TSAR 2025)",
    month = nov,
    year = "2025",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.tsar-1.8/",
    doi = "10.18653/v1/2025.tsar-1.8",
    pages = "116--130",
    isbn = "979-8-89176-176-6"
}
```