# Dataset Card for "lince"
## Table of Contents
- Dataset Description
- Dataset Structure
- Dataset Creation
- Considerations for Using the Data
- Additional Information
## Dataset Description
- Homepage: http://ritual.uh.edu/lince
- Repository: More Information Needed
- Paper: https://www.aclweb.org/anthology/2020.lrec-1.223
- Point of Contact: More Information Needed
- Size of downloaded dataset files: 8.67 MB
- Size of the generated dataset: 53.81 MB
- Total amount of disk used: 62.48 MB
### Dataset Summary
LinCE is a centralized Linguistic Code-switching Evaluation benchmark (https://ritual.uh.edu/lince/) that contains data for training and evaluating NLP systems on code-switching tasks.
### Supported Tasks
### Languages
## Dataset Structure
We show detailed information for the five configurations of the dataset.
### Data Instances
#### lid_hineng
- Size of downloaded dataset files: 0.41 MB
- Size of the generated dataset: 2.28 MB
- Total amount of disk used: 2.69 MB
An example of 'validation' looks as follows.
{
"idx": 0,
"lid": ["other", "other", "lang1", "lang1", "lang1", "other", "lang1", "lang1", "lang1", "lang1", "lang1", "lang1", "lang1", "mixed", "lang1", "lang1", "other"],
"words": ["@ZahirJ", "@BinyavangaW", "Loved", "the", "ending", "!", "I", "could", "have", "offered", "you", "some", "ironic", "chai-tea", "for", "it", ";)"]
}
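Each instance pairs a `words` list with a token-aligned `lid` list (one language-ID tag per word). A minimal sanity check, sketched here using the validation example shown above, confirms the alignment and tallies the label distribution:

```python
from collections import Counter

# The lid_hineng validation example shown above.
example = {
    "idx": 0,
    "lid": ["other", "other", "lang1", "lang1", "lang1", "other", "lang1",
            "lang1", "lang1", "lang1", "lang1", "lang1", "lang1", "mixed",
            "lang1", "lang1", "other"],
    "words": ["@ZahirJ", "@BinyavangaW", "Loved", "the", "ending", "!", "I",
              "could", "have", "offered", "you", "some", "ironic", "chai-tea",
              "for", "it", ";)"],
}

# Labels are token-aligned: exactly one LID tag per word.
assert len(example["words"]) == len(example["lid"])

label_counts = Counter(example["lid"])
print(label_counts)  # → Counter({'lang1': 12, 'other': 4, 'mixed': 1})
```

The same check applies to every LID configuration, since they all share this schema.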
#### lid_msaea
- Size of downloaded dataset files: 0.77 MB
- Size of the generated dataset: 4.66 MB
- Total amount of disk used: 5.43 MB
An example of 'train' looks as follows.
This example was too long and was cropped:
{
"idx": 0,
"lid": ["ne", "lang2", "other", "lang2", "lang2", "other", "other", "lang2", "lang2", "lang2", "lang2", "lang2", "lang2", "lang2", "lang2", "lang2", "lang2", "lang2", "lang2", "lang2", "lang2", "other", "lang2", "lang2", "lang2", "ne", "lang2", "lang2"],
"words": "[\"علاء\", \"بخير\", \"،\", \"معلوماته\", \"كويسة\", \".\", \"..\", \"اسخن\", \"حاجة\", \"بس\", \"ان\", \"كل\", \"واحد\", \"منهم\", \"بيقول\", \"مفهوم\", \"عليه\"..."
}
#### lid_nepeng
- Size of downloaded dataset files: 0.52 MB
- Size of the generated dataset: 3.06 MB
- Total amount of disk used: 3.58 MB
An example of 'validation' looks as follows.
{
"idx": 1,
"lid": ["other", "lang2", "lang2", "lang2", "lang2", "lang1", "lang1", "lang1", "lang1", "lang1", "lang2", "lang2", "other", "mixed", "lang2", "lang2", "other", "other", "other", "other"],
"words": ["@nirvikdada", "la", "hamlai", "bhetna", "paayeko", "will", "be", "your", "greatest", "gift", "ni", "dada", ";P", "#TreatChaiyo", "j", "hos", ";)", "@zappylily", "@AsthaGhm", "@ayacs_asis"]
}
#### lid_spaeng
- Size of downloaded dataset files: 1.13 MB
- Size of the generated dataset: 6.51 MB
- Total amount of disk used: 7.64 MB
An example of 'train' looks as follows.
{
"idx": 0,
"lid": ["other", "other", "lang1", "lang1", "lang1", "other", "lang1", "lang1"],
"words": ["11:11", ".....", "make", "a", "wish", ".......", "night", "night"]
}
#### ner_hineng
- Size of downloaded dataset files: 0.13 MB
- Size of the generated dataset: 0.75 MB
- Total amount of disk used: 0.88 MB
An example of 'train' looks as follows.
{
"idx": 1,
"lid": ["en", "en", "en", "en", "en", "en", "hi", "hi", "hi", "hi", "hi", "hi", "hi", "en", "en", "en", "en", "rest"],
"ner": ["O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-PERSON", "I-PERSON", "O", "O", "O", "B-PERSON", "I-PERSON"],
"words": ["I", "liked", "a", "@YouTube", "video", "https://t.co/DmVqhZbdaI", "Kabhi", "Palkon", "Pe", "Aasoon", "Hai-", "Kishore", "Kumar", "-Vocal", "Cover", "By", "Stephen", "Qadir"]
}
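The NER configuration adds a BIO-tagged `ner` field on top of the `lid` annotations. Extracting entity spans from such tags can be sketched with a small hypothetical helper, `bio_spans`, applied here to the tagged tail of the train example shown above:

```python
def bio_spans(words, tags):
    """Collect (entity_type, text) spans from token-aligned BIO tags."""
    spans, current, etype = [], [], None
    for word, tag in zip(words, tags):
        if tag.startswith("B-"):
            if current:                      # close any span still open
                spans.append((etype, " ".join(current)))
            current, etype = [word], tag[2:]
        elif tag.startswith("I-") and current:
            current.append(word)             # continue the open span
        else:                                # "O" (or stray "I-") closes the span
            if current:
                spans.append((etype, " ".join(current)))
            current, etype = [], None
    if current:                              # flush a span that reaches the end
        spans.append((etype, " ".join(current)))
    return spans

# Tail of the ner_hineng train example above (abbreviated to the tagged region).
words = ["Hai-", "Kishore", "Kumar", "-Vocal", "Cover", "By", "Stephen", "Qadir"]
tags  = ["O", "B-PERSON", "I-PERSON", "O", "O", "O", "B-PERSON", "I-PERSON"]
print(bio_spans(words, tags))
# → [('PERSON', 'Kishore Kumar'), ('PERSON', 'Stephen Qadir')]
```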
### Data Fields
The data fields are the same among all splits.
#### lid_hineng
- `idx`: a `int32` feature.
- `words`: a `list` of `string` features.
- `lid`: a `list` of `string` features.
#### lid_msaea
- `idx`: a `int32` feature.
- `words`: a `list` of `string` features.
- `lid`: a `list` of `string` features.
#### lid_nepeng
- `idx`: a `int32` feature.
- `words`: a `list` of `string` features.
- `lid`: a `list` of `string` features.
#### lid_spaeng
- `idx`: a `int32` feature.
- `words`: a `list` of `string` features.
- `lid`: a `list` of `string` features.
#### ner_hineng
- `idx`: a `int32` feature.
- `words`: a `list` of `string` features.
- `lid`: a `list` of `string` features.
- `ner`: a `list` of `string` features.
### Data Splits Sample Size
| name | train | validation | test |
|---|---|---|---|
| lid_hineng | 4823 | 744 | 1854 |
| lid_msaea | 8464 | 1116 | 1663 |
| lid_nepeng | 8451 | 1332 | 3228 |
| lid_spaeng | 21030 | 3332 | 8289 |
| ner_hineng | 1243 | 314 | 522 |
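The per-configuration totals implied by the table can be tallied directly; a minimal sketch using the split sizes listed above:

```python
# Split sizes from the table above, as (train, validation, test).
splits = {
    "lid_hineng": (4823, 744, 1854),
    "lid_msaea": (8464, 1116, 1663),
    "lid_nepeng": (8451, 1332, 3228),
    "lid_spaeng": (21030, 3332, 8289),
    "ner_hineng": (1243, 314, 522),
}

# Total examples per configuration across all three splits.
totals = {name: sum(sizes) for name, sizes in splits.items()}
print(totals)
# → {'lid_hineng': 7421, 'lid_msaea': 11243, 'lid_nepeng': 13011,
#    'lid_spaeng': 32651, 'ner_hineng': 2079}
```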
## Dataset Creation
### Curation Rationale
### Source Data
### Annotations
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Citation Information
@inproceedings{molina-etal-2016-overview,
title = "Overview for the Second Shared Task on Language Identification in Code-Switched Data",
author = "Molina, Giovanni and
AlGhamdi, Fahad and
Ghoneim, Mahmoud and
Hawwari, Abdelati and
Rey-Villamizar, Nicolas and
Diab, Mona and
Solorio, Thamar",
booktitle = "Proceedings of the Second Workshop on Computational Approaches to Code Switching",
month = nov,
year = "2016",
address = "Austin, Texas",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/W16-5805",
doi = "10.18653/v1/W16-5805",
pages = "40--49",
}
@inproceedings{aguilar-etal-2020-lince,
title = "{L}in{CE}: A Centralized Benchmark for Linguistic Code-switching Evaluation",
author = "Aguilar, Gustavo and
Kar, Sudipta and
Solorio, Thamar",
booktitle = "Proceedings of The 12th Language Resources and Evaluation Conference",
month = may,
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://www.aclweb.org/anthology/2020.lrec-1.223",
pages = "1803--1813",
language = "English",
ISBN = "979-10-95546-34-4",
}
Note that each LinCE dataset has its own citation. Please see the source for the
correct citation of each contained dataset.
### Contributions
Thanks to @lhoestq, @thomwolf, @gaguilar for adding this dataset.