omarkamali committed · Commit 2d757c2 · Parent(s): cf03591 · Update README.md

README.md CHANGED

@@ -56,22 +56,13 @@ task_categories:
 language:
 - ary
 ---
-# Gherbal’ing Multilingual
-Following up on their previous release, the fineweb team has been hard at work on their upcoming multilingual fineweb dataset which contains a massive collection of 50M+ sentences across 100+ languages. The data, sourced from the Common Crawl corpus, has been classified into these languages using GlotLID, a model able to recognize more than
-[…]
-of the
-[…]
-the dataset using our Gherbal language detection model, which we recently released and
-made available on our API platform. Our performance on several low-resource languages,
-notably Moroccan, Persian and Swahili, is quite impressive, and the hope is to expand the
-available resources for these underserved communities. We gladly accepted the challenge and
-were given access to the pre-release dataset, with GlotLID classifications, and were tasked
-with identifying the sentences that GlotLID misclassified.
-As a first step, we chose to focus on Moroccan Arabic, a language spoken by millions of
-people in Morocco and people of Moroccan descent in Europe. This report will detail the
-process we went through to achieve our goal, and the results we obtained.
+# Gherbal’ing Multilingual Fineweb 2 🍵
+Following up on their previous release, the fineweb team has been hard at work on their upcoming multilingual fineweb dataset which contains a massive collection of 50M+ sentences across 100+ languages. The data, sourced from the Common Crawl corpus, has been classified into these languages using GlotLID, a model able to recognize more than 2000 languages. The performance of GlotLID is quite impressive, considering the complexity of the task. It is able to correctly identify the language of a sentence with a decent degree of accuracy. However, especially for low-resource languages, it still makes some mistakes, and some languages are more difficult for it to identify than others.
+
+This caught our interest and we wanted to see if we could help improve the quality of the dataset using our Gherbal language identification model, which we recently released and made available on our API platform. Our performance on several low-resource languages, notably Moroccan, Persian and Swahili, is quite impressive, and the hope is to expand the available resources for these underserved communities.
+
+We gladly took on the challenge and as a first step, we chose to focus on Moroccan Arabic, a language spoken by millions of people in Morocco and people of Moroccan descent in Europe. This report will detail the process we went through to achieve our goal, and the results we obtained.
+
 # The Dataset
 The dataset we were given is a bunch of parquet files containing 50M+ sentences, with the
 following columns:
@@ -165,4 +156,14 @@ able to recover a significant amount of data that was previously quite unusable
 noise. We also observe that this dataset contains the most variations in the actual language as
 classified by Gherbal, which confirms that the arb_Latn label from GlotLID is not a good
 proxy for high quality Arabic data in latin script. The resulting dataset will be made available
-as an additional configuration on the same dataset here when the analysis is complete.
+as an additional configuration on the same dataset here when the analysis is complete.
+
+
+## Team
+
+This project was conducted by Omneity Labs:
+- [Omar Kamali](https://huggingface.co/omarkamali)
+- [Abdeljalil Elmajjodi](https://huggingface.co/abdeljalilELmajjodi)
+- [Mohamed Abchir](https://huggingface.co/SweeToxin)
+
+Omneity Labs ([Sawalni](https://sawalni.com) team) is a Moroccan private R&D lab specialized in low-resource languages and cultural alignment. We build AI tools and products for low-resource languages and underserved communities.
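For readers who want to reproduce the GlotLID labelling step described in the introduction above, here is a minimal sketch using the publicly released GlotLID fastText model on the Hugging Face Hub. The repo id and file name are those of the public cis-lmu/glotlid release; the example sentence is our own illustration, and the fineweb team's exact pipeline may differ.

```python
# Minimal sketch: language identification with GlotLID (public cis-lmu/glotlid
# fastText release). Not necessarily the fineweb team's exact setup.
import fasttext
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(repo_id="cis-lmu/glotlid", filename="model.bin")
model = fasttext.load_model(model_path)

# GlotLID labels combine language and script, e.g. "__label__ary_Arab"
# for Moroccan Arabic written in Arabic script.
labels, scores = model.predict("واش غادي تجي معانا غدا؟", k=3)  # top-3 candidates
for label, score in zip(labels, scores):
    print(label.removeprefix("__label__"), f"{score:.3f}")
```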
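The introduction also says Gherbal is served through our API platform. The endpoint is not documented in this diff, so the sketch below is purely hypothetical: the URL, request fields, and response handling are placeholders, not the real Gherbal API; consult the Sawalni platform documentation for the actual interface.

```python
# Purely hypothetical sketch of a language-identification call over HTTP.
# The URL, payload fields, and response keys are placeholders, NOT the
# documented Gherbal API.
import requests

API_URL = "https://api.example.com/v1/identify-language"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"  # placeholder credential

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"text": "wach ghadi tji m3ana ghedda?"},  # Moroccan Arabic in Latin script
    timeout=30,
)
response.raise_for_status()
print(response.json())  # e.g. a language code plus a confidence score
```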
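The Dataset section notes that the data arrives as parquet files with a fixed set of columns; the column list itself falls outside the context shown in this diff. As a sketch, one way to inspect a shard and discover its schema (the file name here is a placeholder):

```python
# Sketch: inspect one parquet shard of the dataset. The file name is a
# placeholder, and the column list is not shown in this diff, so we read
# the schema from the file itself.
import pyarrow.parquet as pq

table = pq.read_table("shard-00000.parquet")
print(table.schema)                    # column names and types
print(table.num_rows)                  # number of sentences in this shard
print(table.slice(0, 3).to_pydict())   # peek at the first few rows
```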