---
dataset_info:
features:
- name: pageName
dtype: string
- name: text
dtype: string
- name: lang_identity
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 2573053
num_examples: 5401
download_size: 1273799
dataset_size: 2573053
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: apache-2.0
task_categories:
- text-generation
language:
- ar
size_categories:
- 1K<n<10K
---
# Darija Facebook Posts Dataset
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
This dataset consists of more than 5k public posts collected from Facebook. Each post includes its text content along with basic metadata.
In total, the dataset contains more than 400K Darija tokens.
- **Curated by:** @abdeljalilELmajjodi
- **Language(s) (NLP):** Multiple (primarily Moroccan Arabic / Darija)
## Uses
This dataset could be used for the following (a loading sketch follows the list):
* Training and testing language models on social media content
* Analyzing social media posting patterns
* Studying conversation structures and reply networks
* Research on social media content moderation
* Natural language processing tasks using social media data
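To get started quickly, here is a minimal loading sketch using the 🤗 `datasets` library. The repository id `abdeljalilELmajjodi/darija-facebook-posts` is a placeholder assumption; substitute the actual repo id of this dataset.

```python
from datasets import load_dataset

# Placeholder repo id -- replace with the actual Hugging Face repo id of this dataset.
ds = load_dataset("abdeljalilELmajjodi/darija-facebook-posts", split="train")

print(ds)              # features: pageName, text, lang_identity, __index_level_0__
print(ds[0]["text"])   # raw text of the first post
```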
## Dataset Structure
Each post contains the following fields:
* text: The main content of the post
* pageName: Name of the Facebook page the post was collected from
* lang_identity: Detected language code (ary_Arab, ara_Arab, ...)
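As an illustration of working with these fields, the sketch below keeps only the posts tagged as Moroccan Darija (`ary_Arab`). It assumes `ds` has already been loaded as in the snippet above.

```python
# Keep only posts whose detected language is Moroccan Darija (ary_Arab).
darija_only = ds.filter(lambda row: row["lang_identity"] == "ary_Arab")

print(f"{len(darija_only)} / {len(ds)} posts tagged as ary_Arab")
```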
## Other
To identify the language of each post, we used the Gherbal classifier by @Sawalni, which produced the following results:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f5c3528fb2b1535728138f/GgzsN2ds0_5qOKv5A03Xp.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f5c3528fb2b1535728138f/mGWQJSwnlah19onq7Lot6.png)
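As a quick sanity check against the plots above, you can tally the stored `lang_identity` labels directly (again assuming `ds` from the loading snippet):

```python
from collections import Counter

# Count how many posts fall under each detected language code.
lang_counts = Counter(ds["lang_identity"])
for code, count in lang_counts.most_common():
    print(f"{code}: {count}")
```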
## Bias, Risks, and Limitations
The goal of this dataset is for you to have fun :)