---
license: cc-by-nc-sa-4.0
language:
  - eu
pretty_name: BL2MP
size_categories:
  - 1K<n<10K
---

# BL2MP (Basque L2 student-based Minimal Pairs)

BL2MP is a test set designed to assess the grammatical knowledge of language models in Basque, inspired by the BLiMP benchmark. The BL2MP dataset includes examples sourced from the bai&by language academy, derived from essays written by students enrolled at the academy. These instances provide a wealth of authentic, natural grammatical errors made by real learners, and thus offer a realistic reflection of real-world language errors.

We randomly selected 1,800 sentences from student essays provided by the bai&by academy, consistently adhering to the "minimal pairs" criterion. To ensure balanced diversity, we kept an equal distribution of examples across three proficiency levels (A: Beginner, B: Intermediate, C: Advanced) and three error types (E1: Declension, E2: Verb, E3: Structure and Order), as shown in the table below. This way the dataset represents a variety of proficiency levels and error types.
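The dataset is distributed as JSON and can be loaded with the 🤗 Datasets library. The sketch below shows one way to inspect the level/error-type balance described above; the repository id, split name, and column names are assumptions, so substitute the actual values from this page.

```python
from datasets import load_dataset

# Hypothetical repository id and split; use the actual ones from this page.
ds = load_dataset("orai-nlp/bl2mp", split="train")

# Inspect the schema and the level / error-type balance described above.
df = ds.to_pandas()
print(df.columns.tolist())
# Column names ("error_type", "level") are assumptions about the schema.
print(df.groupby(["error_type", "level"]).size())
```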

See our paper *How Well Can BERT Learn the Grammar of an Agglutinative and Flexible-Order Language? The Case of Basque*, accepted at LREC-COLING 2024, and check our GitHub for more.

| Type                     | Level | # of sentences |
|--------------------------|-------|----------------|
| E1: Declension           | A     | 200            |
|                          | B     | 200            |
|                          | C     | 200            |
| E2: Verb                 | A     | 200            |
|                          | B     | 200            |
|                          | C     | 200            |
| E3: Structure and Order  | A     | 200            |
|                          | B     | 200            |
|                          | C     | 200            |
| **Total**                |       | **1,800**      |
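As a quick illustration of how BLiMP-style minimal pairs are typically used (not necessarily the exact setup of our paper), the sketch below scores both sentences of a pair with a masked language model's pseudo-log-likelihood (Salazar et al., 2020) and counts the pair as solved when the grammatical sentence scores higher. The model id and the example pair are placeholders.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Assumed stand-in model; the paper evaluates BERT models for Basque.
model_name = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name).eval()

def pseudo_log_likelihood(sentence: str) -> float:
    """Sum log P(token | rest), masking each token in turn."""
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    for i in range(1, len(ids) - 1):  # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total

# Illustrative pair (singular subject with plural verb), not from the dataset.
good, bad = "Etxea handia da.", "Etxea handia dira."
print(pseudo_log_likelihood(good) > pseudo_log_likelihood(bad))
```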

## Authors

Gorka Urbizu [1] [2], Muitze Zulaika [1], Xabier Saralegi [1], Ander Corral [1]

Affiliations of the authors:

[1] Orai NLP Technologies

[2] University of the Basque Country

## Licensing

Copyright (C) by Orai NLP Technologies.

The corpora, datasets and models created in this work are licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0). To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-sa/4.0/.

## Acknowledgements

If you use these corpora, datasets, or models, please cite the following paper:

- G. Urbizu, M. Zulaika, X. Saralegi, A. Corral. How Well Can BERT Learn the Grammar of an Agglutinative and Flexible-Order Language? The Case of Basque. The 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024). May 2024, Torino, Italy.

## Contact information

Gorka Urbizu, Muitze Zulaika: {g.urbizu,m.zulaika}@orai.eus