abdeljalilELmajjodi committed
Commit cf03591
1 Parent(s): 9b18231

Update README.md

README.md CHANGED
@@ -51,4 +51,118 @@ configs:
  data_files:
  - split: train
    path: ary_Arab/train-*
task_categories:
- text-generation
language:
- ary
---
# Gherbal’ing Multilingual Fineweb2 🍵

Following up on their previous release, the FineWeb team has been hard at work on their upcoming multilingual FineWeb dataset, a massive collection of 50M+ documents across 100+ languages. The data, sourced from the Common Crawl corpus, was classified into these languages using GlotLID, a model able to recognize more than 2,000 languages. GlotLID's performance is impressive given the complexity of the task, but it still makes mistakes, especially on low-resource languages, and some languages are harder for it to identify than others.

We were approached by the FineWeb team to see whether our Gherbal language detection model, which we recently released and made available on our API platform, could help improve the quality of the dataset. Gherbal performs particularly well on several low-resource languages, notably Moroccan Arabic, Persian, and Swahili, and the hope is to expand the available resources for these underserved communities. We gladly accepted the challenge, were given access to the pre-release dataset with its GlotLID classifications, and were tasked with identifying the sentences that GlotLID had misclassified.

As a first step, we chose to focus on Moroccan Arabic, a language spoken by millions of people in Morocco and by people of Moroccan descent in Europe. This report details the process we followed and the results we obtained.
# The Dataset

The dataset we were given is a collection of Parquet files containing 50M+ documents, with the following columns:

* **id:** a unique identifier for the document
* **text:** the document itself, extracted from a webpage
* **metadata:** a JSON column containing metadata about the document, including the URL of the page it was found on, the date of the page, and the page's previous classification by GlotLID

The dataset contains several configurations, each corresponding to a different language. We focused our attention on the arb_Arab_dedup and ary_Arab_dedup configurations, which we will refer to as the “Arabic” and “Moroccan” datasets respectively.
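To make the schema concrete, here is a minimal sketch of inspecting one shard with pandas. The shard path is illustrative (the pre-release files are not public), and we assume the metadata column is stored as a JSON string; if it is a struct, the `json.loads` step is unnecessary.

```python
import json

import pandas as pd

# Illustrative shard path; the pre-release Parquet files are not public.
df = pd.read_parquet("ary_Arab_dedup/train-00000-of-00042.parquet")
print(df.columns.tolist())  # expected: ['id', 'text', 'metadata']

# Parse the JSON metadata to reach the source URL, the page date,
# and the original GlotLID classification.
meta = json.loads(df.loc[0, "metadata"])
print(meta.get("url"), meta.get("date"))
```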
# Our Approach

## Dataset Processing

To tackle this challenge, we developed a systematic pipeline to process and analyze each dataset. First, we performed a thorough text cleanup to remove unwanted artifacts and standardize the content. This ensures we are working with clean, natural-language text, which matters because webpage content can be quite noisy.

Next, we used the Natural Language Toolkit (NLTK) to break the documents down into individual sentences. This is not a perfect solution: noisy content can make sentence boundaries hard to detect, particularly for languages NLTK does not support. It is, however, a good enough approximation for our purposes, which are to reduce the variance within a webpage and to avoid confusing the model with extremely long mixed-language content. This step was crucial, as it let us analyze the text at a more granular level.
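Below is a minimal sketch of this step, with whitespace normalization standing in for our full cleanup rules:

```python
import re

import nltk
from nltk.tokenize import sent_tokenize

nltk.download("punkt")  # Punkt sentence models ("punkt_tab" on newer NLTK)


def clean_text(text: str) -> str:
    """Simplified cleanup: collapse runs of whitespace."""
    return re.sub(r"\s+", " ", text).strip()


def split_sentences(text: str) -> list[str]:
    # NLTK ships no Arabic-specific Punkt model, so this falls back to the
    # default (English) model -- an approximation, as noted above.
    return sent_tokenize(clean_text(text))


print(split_sentences("First sentence. Second   sentence!"))
```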
With our sentences prepared, we ran each one through our Gherbal language detection model. The model evaluated each sentence and returned a confidence score across the 33 languages Gherbal supports. We then aggregated these sentence-level results by averaging the classification scores, which gave us a comprehensive picture of the dominant language patterns within each document. A more fine-grained analysis at the sentence level would have yielded more data of higher quality, but it was ultimately postponed to a later release given time and resource constraints.
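The aggregation itself is straightforward. In the sketch below, `detect_language` is a stand-in for a call to the Gherbal API; the real client and its response format are not shown here.

```python
from collections import defaultdict


def detect_language(sentence: str) -> dict[str, float]:
    """Stand-in for a Gherbal API call returning a confidence score
    for each of the 33 supported languages."""
    raise NotImplementedError("replace with a real Gherbal client call")


def classify_document(sentences: list[str]) -> tuple[str, dict[str, float]]:
    # Average the per-sentence confidence scores over the document.
    totals: dict[str, float] = defaultdict(float)
    for sentence in sentences:
        for lang, score in detect_language(sentence).items():
            totals[lang] += score
    averaged = {lang: total / len(sentences) for lang, total in totals.items()}
    # The document-level label is the language with the highest mean score.
    return max(averaged, key=averaged.get), averaged
```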
Finally, we applied a filtering step to keep only content classified as Moroccan Arabic in Arabic script (ary_Arab). The resulting dataset is available on Hugging Face at **sawalni-ai/ml-fw-darija**.
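The released dataset can be loaded with the `datasets` library; the split name below is an assumption, so check the dataset card for the exact configurations:

```python
from datasets import load_dataset

# Split name assumed; see the dataset card for available configs and splits.
ds = load_dataset("sawalni-ai/ml-fw-darija", split="train")
print(ds)
print(ds[0]["text"][:200])
```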

## Dataset Analysis

We used our Klimat library to analyze the dataset. Klimat is a tool we developed for statistical analysis of language datasets, and it can generate a number of interesting insights into the data. We will share more about Klimat in a future blog post; for now, we focus on the results we obtained for the FineWeb dataset.
## Website Analysis

We also analyzed the websites from which the data classified by Gherbal as Moroccan Arabic was sourced in multilingual FineWeb. This gave us interesting insight into where Moroccan Arabic is used on the web, which could be useful for increasing the quantity of high-quality data for the language. We broke the data down by multiple criteria, including the top-level domain, how long the website was online (based on Common Crawl accessing it), and more.

We restricted the analysis to high-confidence samples and filtered to the top 1,000 websites by quantity of data.
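Here is a simplified sketch of that breakdown. It assumes each record carries its source URL and a Gherbal confidence score in the metadata column; the 0.9 threshold and the field names are illustrative, not the exact values we used.

```python
import json
from collections import Counter
from urllib.parse import urlparse

CONFIDENCE_THRESHOLD = 0.9  # illustrative cutoff


def breakdown(rows):
    """Count documents per website host and per top-level domain."""
    sites, tlds = Counter(), Counter()
    for row in rows:
        meta = json.loads(row["metadata"])
        if meta.get("score", 0.0) < CONFIDENCE_THRESHOLD:
            continue  # keep high-confidence samples only
        host = urlparse(meta["url"]).netloc
        sites[host] += 1
        tlds[host.rsplit(".", 1)[-1]] += 1  # naive TLD extraction
    return sites, tlds


rows = [{"metadata": json.dumps({"url": "https://example.ma/p", "score": 0.97})}]
sites, tlds = breakdown(rows)
top_sites = sites.most_common(1000)  # top 1,000 websites by document count
```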
# Our Results

Let’s start by looking at the results for the Moroccan dataset.

* Original count in ary_Arab: 5.8M documents
* Resulting count after filtering: 37,352 (0.64% of the original)
* Number of tokens in ary_Arab: 2.8B (estimated with tiktoken, chosen for its multilingual coverage)
* Number of tokens in the filtered dataset: 75.3M

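The token estimates above were produced with tiktoken; here is a sketch of the computation, with `cl100k_base` as an assumed encoding choice:

```python
import tiktoken

# Encoding choice is an assumption; cl100k_base handles multilingual text
# well enough for rough token estimates.
enc = tiktoken.get_encoding("cl100k_base")


def count_tokens(texts) -> int:
    return sum(len(enc.encode(text)) for text in texts)


print(count_tokens(["شي جملة بالدارجة المغربية"]))  # a sample Darija sentence
```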
## False Positives

A manual review of the filtered dataset showed that human judgments were consistent with Gherbal’s results, and that the filtered dataset should be a good resource for training and evaluating models for Moroccan Arabic, despite the small sample size. It is worth noting that Algerian and Tunisian Arabic were also misclassified as Moroccan Arabic, owing to the high mutual intelligibility among the three. This is a known limitation of Gherbal, which currently supports only the Moroccan and Egyptian varieties of Arabic; it should be addressed in future releases.
## False Negatives

Looking at our Gherbal paper (pending publication), specifically at the benchmark results on the flores-200 devtest set, we can estimate the false negative rate from Standard Arabic (arb_Arab) to Moroccan Arabic (ary_Arab) at around 10%. Extrapolating from this figure, we estimate that roughly 37,352 × 0.1 ≈ 3,735 Moroccan Arabic sentences were incorrectly filtered out.

## Other dataset configurations

We also applied the same process to the other dataset configurations, namely the Arabic (arb_Arab) and the Latin-script Arabic (arb_Latn) configurations. While the results are not yet complete, we can already draw some interesting observations.

- Arabic (arb_Arab)
  * Original count in arb_Arab: 24.2M
  * Resulting count after filtering: 0

  While some samples (<100 in total) were classified as Moroccan Arabic, a manual review revealed that these were all incorrect classifications by Gherbal and that the filtered dataset is indeed empty. This might change as we process the rest of the dataset, or as we improve Gherbal’s performance on Arabic and its related languages. The resulting dataset will be made available as an additional configuration on the same dataset here when the processing is complete.

- Arabic in Latin script (arb_Latn)
  * Original count in arb_Latn: 600K
  * Resulting count after filtering: 15K (2.5% of the original)

  This configuration is labeled arb_Latn by GlotLID and shows extreme variance, since Arabic can be transliterated in many different ways. Because Gherbal identifies ary_Latn (Moroccan Arabic in Latin script) with a high degree of accuracy, we were able to recover a significant amount of data that was previously unusable amid the noise. We also observed that this configuration contains the most variation in the actual language as classified by Gherbal, which confirms that GlotLID’s arb_Latn label is not a good proxy for high-quality Arabic data in Latin script. The resulting dataset will be made available as an additional configuration on the same dataset here when the analysis is complete.