---
dataset_info:
  features:
  - name: pageName
    dtype: string
  - name: text
    dtype: string
  - name: lang_identity
    dtype: string
  - name: __index_level_0__
    dtype: int64
  splits:
  - name: train
    num_bytes: 2573053
    num_examples: 5401
  download_size: 1273799
  dataset_size: 2573053
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: apache-2.0
task_categories:
- text-generation
language:
- ar
size_categories:
- 1K<n<10K
---
# Darija Facebook Posts Dataset

## Dataset Details

### Dataset Description
This dataset consists of more than 5k public posts collected from Facebook. Each post contains text content along with metadata. In total, the dataset contains more than 400K Darija tokens.

- **Curated by:** @abdeljalilELmajjodi
- **Language(s) (NLP):** Arabic, primarily Moroccan Arabic (Darija)
## Uses

This dataset can be used for:
- Training and testing language models on social media content
- Analyzing social media posting patterns
- Studying conversation structures and reply networks
- Research on social media content moderation
- Natural language processing tasks on social media data
## Dataset Structure

Each post contains the following fields:

- **text**: The main content of the post
- **pageName**: Name of the Facebook page that published the post
- **lang_identity**: Language identification code (e.g. ary_Arab, ara_Arab, ...)
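Given the fields above, a record can be treated as a simple mapping from field names to values. The snippet below is a minimal sketch of that schema; the sample rows and page names are invented placeholders, not real dataset content:

```python
# Minimal sketch of the record schema described above.
# Each row mirrors the dataset's fields: pageName, text, lang_identity.
# The rows themselves are hypothetical examples, not taken from the dataset.
sample = [
    {"pageName": "ExamplePage", "text": "مرحبا بيكم", "lang_identity": "ary_Arab"},
    {"pageName": "NewsPage", "text": "أهلا وسهلا", "lang_identity": "ara_Arab"},
]

# Keep only the posts identified as Moroccan Darija (ary_Arab).
darija_posts = [row for row in sample if row["lang_identity"] == "ary_Arab"]
print(len(darija_posts))  # → 1
```

The same filter applies unchanged to the real data once loaded, e.g. via `datasets.load_dataset` with this repository's id and `Dataset.filter`.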
## Other

To identify the language of each post, we used the Gherbal classifier by @Sawalni.
## Bias, Risks, and Limitations
The goal of this dataset is for you to have fun :)