# fiNERweb

## Dataset Description

fiNERweb is a multilingual named entity recognition dataset containing annotated text in multiple languages. Each example contains:
- Original text
- Tokenized text
- BIO tags
- Character spans for entities
- Token spans for entities
## Languages

Currently supported languages:
- vi: Vietnamese
- ta: Tamil
- or: Odia (Oriya)
- sk: Slovak
- af: Afrikaans
- cs: Czech
- ga: Irish
- pt: Portuguese
- so: Somali
- sl: Slovenian
- cy: Welsh
- fy: Western Frisian
- uk: Ukrainian
- is: Icelandic
- la: Latin
- hy: Armenian
- bg: Bulgarian
- tr: Turkish
- uz: Uzbek
- nl: Dutch
- ps: Pashto
- be: Belarusian
- en: English
- xh: Xhosa
- jv: Javanese
- hi: Hindi
- my: Burmese
- br: Breton
- ur: Urdu
- sr: Serbian
- zh: Chinese (Mandarin)
- ka: Georgian
- hr: Croatian
- ml: Malayalam
- km: Khmer
- te: Telugu
- ru: Russian
- ar: Arabic
- de: German
- fr: French
- om: Oromo
- sw: Swahili
- az: Azerbaijani
- gl: Galician
- ko: Korean
- sd: Sindhi
- fi: Finnish
- lv: Latvian
- eo: Esperanto
- kk: Kazakh
- lt: Lithuanian
- mk: Macedonian
- eu: Basque
- am: Amharic
- he: Hebrew
- si: Sinhala
- ne: Nepali
- yi: Yiddish
- sq: Albanian
- it: Italian
- kn: Kannada
- mn: Mongolian
- ja: Japanese
- gu: Gujarati
- su: Sundanese
- ro: Romanian
- sa: Sanskrit
- ku: Kurdish
- ky: Kyrgyz
- ug: Uyghur
- gd: Scottish Gaelic
- es: Spanish
- et: Estonian
- th: Thai
- sv: Swedish
- hu: Hungarian
- bs: Bosnian
- bn: Bengali
- ca: Catalan
- mr: Marathi
- da: Danish
- pl: Polish
- el: Greek
- ms: Malay
- mg: Malagasy
- pa: Punjabi
- lo: Lao
- fa: Persian
- tl: Tagalog
- as: Assamese
- id: Indonesian
## Dataset Structure

Each example contains:

```python
{
    "text": str,              # Original text
    "tokens": List[str],      # Tokenized text
    "bio_tags": List[str],    # BIO tags for NER
    "char_spans": List[Dict], # Character-level entity spans
    "token_spans": List[Dict] # Token-level entity spans
}
```
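For orientation, the record below is a purely illustrative sketch: the sentence, the entity labels, and the `start`/`end`/`label` keys inside `char_spans` and `token_spans` are assumptions rather than values documented by the dataset, so print a real example to confirm the actual schema.

```python
# Hypothetical record for illustration only; entity labels and span keys are assumptions.
example = {
    "text": "Barack Obama visited Berlin.",
    "tokens": ["Barack", "Obama", "visited", "Berlin", "."],
    "bio_tags": ["B-PER", "I-PER", "O", "B-LOC", "O"],
    "char_spans": [
        {"start": 0, "end": 12, "label": "PER"},   # "Barack Obama"
        {"start": 21, "end": 27, "label": "LOC"},  # "Berlin"
    ],
    "token_spans": [
        {"start": 0, "end": 2, "label": "PER"},    # tokens 0-1
        {"start": 3, "end": 4, "label": "LOC"},    # token 3
    ],
}
```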
## Usage

```python
from datasets import load_dataset

# Load a specific language
dataset = load_dataset("whoisjones/fiNERweb", "am")  # For Amharic
# or
dataset = load_dataset("whoisjones/fiNERweb", "en")  # For English

# Access the data
print(dataset["train"][0])
```
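Building on that, the sketch below shows one way to read entities back out of an example and to derive a label inventory for token-classification training. It assumes each `char_spans` dict carries `start`/`end` character offsets into `text` (and optionally a `label` key); that key layout is an assumption about the schema, so adapt it to what the loaded examples actually contain.

```python
from datasets import load_dataset

dataset = load_dataset("whoisjones/fiNERweb", "en")  # English configuration
example = dataset["train"][0]

# Assumption: each char_span dict has "start"/"end" character offsets into "text"
# (and possibly a "label" key). Adjust the keys if the real schema differs.
for span in example["char_spans"]:
    surface = example["text"][span["start"]:span["end"]]
    print(span.get("label", "?"), surface)

# Collect the BIO tag inventory, e.g. to build label/id mappings for a
# token-classification model.
labels = sorted({tag for ex in dataset["train"] for tag in ex["bio_tags"]})
label2id = {label: i for i, label in enumerate(labels)}
id2label = {i: label for label, i in label2id.items()}
print(label2id)
```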
## Citation

If you use this dataset, please cite:

```bibtex
@misc{fiNERweb,
  author       = {Jonas Golde},
  title        = {fiNERweb: Multilingual Named Entity Recognition Dataset},
  year         = {2024},
  publisher    = {HuggingFace},
  journal      = {HuggingFace Datasets},
  howpublished = {\url{https://huggingface.co/datasets/whoisjones/fiNERweb}}
}
```