The dataset contains articles from Indonesian Wikipedia (Wikipedia Bahasa Indonesia) that fulfill these conditions:

  • The pages contain many noun phrases. The authors subjectively picked pages of three kinds: (i) fictional plots, e.g., of films, TV show episodes, and novels; (ii) biographies (including fictional characters); and (iii) historical or otherwise important events.
  • The pages contain significant variation in pronouns and named entities. The number of first-, second-, and third-person pronouns and of clitic pronouns in each document was counted by string matching (a rough sketch of this step follows below), and the number of named entities was estimated with the Stanford CoreNLP NER tagger (Manning et al., 2014), using a model trained on the Indonesian corpus of Alfina et al. (2016).

The selected Wikipedia texts are 500 to 2,000 words long. The authors sampled 201 pages from the subset of filtered Wikipedia pages and hired five annotators, all undergraduate students in the Linguistics department and native speakers of Indonesian. Annotation was carried out with the Script d'Annotation des Chaînes de Référence (SACR), a web-based coreference annotation tool developed by Oberlé (2018). Across the 201 texts, the annotators tagged 16,460 mentions.
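
As a rough illustration of the pronoun string-matching step, here is a minimal sketch; the pronoun lists, clitic handling, and matching rules are assumptions for illustration, not the authors' exact implementation:

import re

# Illustrative Indonesian pronoun lists (not the authors' exact ones).
PRONOUNS = {
    "first": ["saya", "aku", "kami", "kita"],
    "second": ["kamu", "anda", "engkau", "kalian"],
    "third": ["dia", "ia", "beliau", "mereka"],
}
CLITICS = ["ku", "mu", "nya"]  # e.g., "rumahku", "bukumu", "mobilnya"

def pronoun_counts(text):
    text = text.lower()
    counts = {
        person: sum(len(re.findall(rf"\b{form}\b", text)) for form in forms)
        for person, forms in PRONOUNS.items()
    }
    # Clitic pronouns attach to the end of the preceding word, so match
    # word endings. This is approximate: free pronouns such as "aku" and
    # unrelated nouns such as "saku" also end in "ku".
    counts["clitic"] = sum(len(re.findall(rf"\B{c}\b", text)) for c in CLITICS)
    return counts

print(pronoun_counts("Aku melihat dia dan temannya di rumahku."))

Pages whose pronoun and named-entity counts are high enough would then be kept as candidates for sampling.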

Languages

ind

Supported Tasks

Coreference Resolution

Dataset Usage

Using the datasets library

from datasets import load_dataset
dset = load_dataset("SEACrowd/indocoref", trust_remote_code=True)
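
The returned object is a standard DatasetDict; a quick way to inspect it (the "train" split name below is an assumption, check dset.keys() for the actual splits):

print(dset)              # available splits and row counts
print(dset["train"][0])  # first example; "train" is assumed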

Using the seacrowd library

import seacrowd as sc

# Load the dataset using the default config
dset = sc.load_dataset("indocoref", schema="seacrowd")
# Check all available subsets (config names) of the dataset
print(sc.available_config_names("indocoref"))
# Load the dataset using a specific config
dset = sc.load_dataset_by_config_name(config_name="<config_name>")
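
SEACrowd dataloaders generally also expose the dataset's original field layout through a source schema, in addition to the unified seacrowd schema used above; a minimal sketch, assuming indocoref follows this convention:

# Load with the original (source) field layout instead of the unified schema
dset_source = sc.load_dataset("indocoref", schema="source")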

More details on how to install and use the seacrowd library can be found in the SEACrowd documentation.

Dataset Homepage

https://github.com/valentinakania/indocoref/

Dataset Version

Source: 1.0.0. SEACrowd: 2024.06.20.

Dataset License

MIT

Citation

If you are using the Indocoref dataloader in your work, please cite the following:

@inproceedings{artari-etal-2021-multi,
    title        = {{A Multi-Pass Sieve Coreference Resolution for Indonesian}},
    author       = {Artari, Valentina Kania Prameswara  and Mahendra, Rahmad  and Jiwanggi, Meganingrum Arista  and Anggraito, Adityo  and Budi, Indra},
    year         = 2021,
    month        = {Sep},
    booktitle    = {Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021)},
    publisher    = {INCOMA Ltd.},
    address      = {Held Online},
    pages        = {79--85},
    url          = {https://aclanthology.org/2021.ranlp-1.10},
    abstract     = {Coreference resolution is an NLP task to find out whether the set of referring expressions belong to the same concept in discourse. A multi-pass sieve is a deterministic coreference model that implements several layers of sieves, where each sieve takes a pair of correlated mentions from a collection of non-coherent mentions. The multi-pass sieve is based on the principle of high precision, followed by increased recall in each sieve. In this work, we examine the portability of the multi-pass sieve coreference resolution model to the Indonesian language. We conduct the experiment on 201 Wikipedia documents and the multi-pass sieve system yields 72.74{\%} of MUC F-measure and 52.18{\%} of BCUBED F-measure.}
}


@article{lovenia2024seacrowd,
    title={SEACrowd: A Multilingual Multimodal Data Hub and Benchmark Suite for Southeast Asian Languages}, 
    author={Holy Lovenia and Rahmad Mahendra and Salsabil Maulana Akbar and Lester James V. Miranda and Jennifer Santoso and Elyanah Aco and Akhdan Fadhilah and Jonibek Mansurov and Joseph Marvin Imperial and Onno P. Kampman and Joel Ruben Antony Moniz and Muhammad Ravi Shulthan Habibi and Frederikus Hudi and Railey Montalan and Ryan Ignatius and Joanito Agili Lopo and William Nixon and Börje F. Karlsson and James Jaya and Ryandito Diandaru and Yuze Gao and Patrick Amadeus and Bin Wang and Jan Christian Blaise Cruz and Chenxi Whitehouse and Ivan Halim Parmonangan and Maria Khelli and Wenyu Zhang and Lucky Susanto and Reynard Adha Ryanda and Sonny Lazuardi Hermawan and Dan John Velasco and Muhammad Dehan Al Kautsar and Willy Fitra Hendria and Yasmin Moslem and Noah Flynn and Muhammad Farid Adilazuarda and Haochen Li and Johanes Lee and R. Damanhuri and Shuo Sun and Muhammad Reza Qorib and Amirbek Djanibekov and Wei Qi Leong and Quyet V. Do and Niklas Muennighoff and Tanrada Pansuwan and Ilham Firdausi Putra and Yan Xu and Ngee Chia Tai and Ayu Purwarianti and Sebastian Ruder and William Tjhi and Peerat Limkonchotiwat and Alham Fikri Aji and Sedrick Keh and Genta Indra Winata and Ruochen Zhang and Fajri Koto and Zheng-Xin Yong and Samuel Cahyawijaya},
    year={2024},
    eprint={2406.10118},
    journal={arXiv preprint arXiv: 2406.10118}
}