---
license: cc-by-4.0
task_categories:
  - translation
  - text-generation
language:
  - vmw
  - pt
dataset_info:
  features:
    - name: filename
      dtype: string
    - name: img_pt
      dtype: image
    - name: img_vmw
      dtype: image
    - name: first_pass_pt_gv
      dtype: string
    - name: first_pass_pt_tesseract
      dtype: string
    - name: post_correction_pt
      dtype: string
    - name: first_pass_vmw_gv
      dtype: string
    - name: first_pass_vmw_tesseract
      dtype: string
    - name: post_correction_vmw
      dtype: string
  splits:
    - name: train
      num_bytes: 19780110
      num_examples: 369
  download_size: 19678096
  dataset_size: 19780110
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

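The dataset can be loaded with the 🤗 `datasets` library using the configuration above. The snippet below is a minimal sketch: the repository identifier is a placeholder (replace it with this dataset's path on the Hub), and the column names follow the `dataset_info` features listed above.

```python
from datasets import load_dataset

# Load the single "train" split (369 examples) from the default config.
# NOTE: "<org>/<dataset-name>" is a placeholder; use this repository's Hub path.
ds = load_dataset("<org>/<dataset-name>", split="train")

example = ds[0]

# Text columns: first-pass OCR outputs (`*_gv`, `*_tesseract`) and manual
# post-corrections, for both Portuguese (pt) and Emakhuwa (vmw).
print(example["filename"])
print(example["first_pass_pt_tesseract"][:200])
print(example["post_correction_pt"][:200])
print(example["post_correction_vmw"][:200])

# Image columns are decoded as PIL images.
example["img_pt"].save("page_pt.png")
example["img_vmw"].save("page_vmw.png")
```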
BibTeX:

The dataset paper was published at EMNLP 2024. Please cite it as follows:

```bibtex
@inproceedings{ali-etal-2024-building,
    title = "Building Resources for Emakhuwa: Machine Translation and News Classification Benchmarks",
    author = "Ali, Felermino D. M. A.  and
      Lopes Cardoso, Henrique  and
      Sousa-Silva, Rui",
    editor = "Al-Onaizan, Yaser  and
      Bansal, Mohit  and
      Chen, Yun-Nung",
    booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2024",
    address = "Miami, Florida, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.emnlp-main.824",
    pages = "14842--14857",
    abstract = "This paper introduces a comprehensive collection of NLP resources for Emakhuwa, Mozambique{'}s most widely spoken language. The resources include the first manually translated news bitext corpus between Portuguese and Emakhuwa, news topic classification datasets, and monolingual data. We detail the process and challenges of acquiring this data and present benchmark results for machine translation and news topic classification tasks. Our evaluation examines the impact of different data types{---}originally clean text, post-corrected OCR, and back-translated data{---}and the effects of fine-tuning from pre-trained models, including those focused on African languages. Our benchmarks demonstrate good performance in news topic classification and promising results in machine translation. We fine-tuned multilingual encoder-decoder models using real and synthetic data and evaluated them on our test set and the FLORES evaluation sets. The results highlight the importance of incorporating more data and potential for future improvements. All models, code, and datasets are available in the \url{https://huggingface.co/LIACC} repository under the CC BY 4.0 license.",
}
```