omkarenator committed
Merge branch 'main' of hf.co:spaces/LLM360/TxT360-New
web.py CHANGED
@@ -251,41 +251,41 @@ def web_data():
         P("Our filtering rate is illustrated below. Before deduplication, our filtering rate is comparable to RefinedWeb. During global deduplication, we removed approximately 85.89% of the data, significantly higher than previous works, indicating a large number of duplicates across dumps. "),
         Img(src="images/filter_rate.jpg", height = "300", width = "600" ),
         P("Note: All percentages are based on the number of documents. The gray bars represent the relative percentages of removed documents at each step, while the colorful bars represent the percentages of retained documents relative to the total number of documents in the raw Common Crawl."),
-        H3("TxT360 Filter Summary"),
-        P("This section provides highlevel details into the filtering that is applied to CommonCrawl in TxT360. Each decision listed is discussed in detail further on in this section."),
-        P("We adopt rules from RefinedWeb [1] to remove lines if they satisfy any of the following criteria:"),
-        Ul(
-            Li("the line is only composed of uppercase characters", style = "margin-bottom: 5px"),
-            Li("the line is only composed of numerical characters", style = "margin-bottom: 5px"),
-            Li("the line matches the pattern “r'^\d+\s+likes$", style = "margin-bottom: 5px"),
-            Li("the line only contains one word.", style = "margin-bottom: 5px"),
-        ),
-        P("We summarize other statistics-based rules originated from Gopher [7] in this section. The statistics can be used include:"),
-        Ul(
-            Li("the word count in the document", style = "margin-bottom: 5px"),
-            Li("the mean word length", style = "margin-bottom: 5px"),
-            Li("the number of sentences", style = "margin-bottom: 5px"),
-            Li("the symbol-to-word ratio", style = "margin-bottom: 5px"),
-            Li("the fraction of alphabetic words", style = "margin-bottom: 5px"),
-            Li("and the number of stop words", style = "margin-bottom: 5px"),
-        ),
-        P("Specifically, we remove any document which satisfies any of the following criteria:"),
-        Ul(
-            Li("it contains less than 50 words or more than 100,000 words", style = "margin-bottom: 5px"),
-            Li("its mean word length is outside the range of 3 to 10", style = "margin-bottom: 5px"),
-            Li("it contains less than 3 sentences", style = "margin-bottom: 5px"),
-            Li("its symbol-to-word ratio is greater than 0.1", style = "margin-bottom: 5px"),
-            Li("the words that contain at least one alphabetic character are less than 80% of the whole words", style = "margin-bottom: 5px"),
-            Li("it contains less than two of the stop words (the, be, to, of, and, that, have, with", style = "margin-bottom: 5px"),
-        ),
+        # H3("TxT360 Filter Summary"),
+        # P("This section provides highlevel details into the filtering that is applied to CommonCrawl in TxT360. Each decision listed is discussed in detail further on in this section."),
+        # P("We adopt rules from RefinedWeb [1] to remove lines if they satisfy any of the following criteria:"),
+        # Ul(
+        #     Li("the line is only composed of uppercase characters", style = "margin-bottom: 5px"),
+        #     Li("the line is only composed of numerical characters", style = "margin-bottom: 5px"),
+        #     Li("the line matches the pattern “r'^\d+\s+likes$", style = "margin-bottom: 5px"),
+        #     Li("the line only contains one word.", style = "margin-bottom: 5px"),
+        # ),
+        # P("We summarize other statistics-based rules originated from Gopher [7] in this section. The statistics can be used include:"),
+        # Ul(
+        #     Li("the word count in the document", style = "margin-bottom: 5px"),
+        #     Li("the mean word length", style = "margin-bottom: 5px"),
+        #     Li("the number of sentences", style = "margin-bottom: 5px"),
+        #     Li("the symbol-to-word ratio", style = "margin-bottom: 5px"),
+        #     Li("the fraction of alphabetic words", style = "margin-bottom: 5px"),
+        #     Li("and the number of stop words", style = "margin-bottom: 5px"),
+        # ),
+        # P("Specifically, we remove any document which satisfies any of the following criteria:"),
+        # Ul(
+        #     Li("it contains less than 50 words or more than 100,000 words", style = "margin-bottom: 5px"),
+        #     Li("its mean word length is outside the range of 3 to 10", style = "margin-bottom: 5px"),
+        #     Li("it contains less than 3 sentences", style = "margin-bottom: 5px"),
+        #     Li("its symbol-to-word ratio is greater than 0.1", style = "margin-bottom: 5px"),
+        #     Li("the words that contain at least one alphabetic character are less than 80% of the whole words", style = "margin-bottom: 5px"),
+        #     Li("it contains less than two of the stop words (the, be, to, of, and, that, have, with", style = "margin-bottom: 5px"),
+        # ),
 
-
+        # P("Following C4, we remove any page where the phrase “lorem ipsum” appears since some pages have placeholder “lorem ipsum” text."),
 
 
-        H2("1
+        H2("Stage 1: Document Preparation"),
 
-
-        P("""
+
+        P(B("Text Extraction: ")), """
         Common Crawl provides webpage texts via two formats: WARC (Web ARChive format) and WET (WARC Encapsulated Text).
         WARC files contain the raw data from the crawl, which store the full HTTP response and request metadata.
         WET files contain plaintexts extracted by Common Crawl. In line with previous works ([1], [2], [3], [4]),
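
The commented-out summary in the hunk above describes RefinedWeb-style line removal rules in prose. As a rough sketch of how such checks could be applied (not the TxT360 implementation; whitespace tokenization and the exact predicates are assumptions read off the strings in the diff):

import re

LIKES_RE = re.compile(r"^\d+\s+likes$")

def should_remove_line(line: str) -> bool:
    """Return True if the line trips any of the four rules quoted in the summary."""
    stripped = line.strip()
    words = stripped.split()
    if stripped and all(c.isupper() for c in stripped if not c.isspace()):
        return True   # the line is only composed of uppercase characters
    if stripped and all(c.isdigit() for c in stripped if not c.isspace()):
        return True   # the line is only composed of numerical characters
    if LIKES_RE.match(stripped):
        return True   # the line matches r'^\d+\s+likes$', e.g. "23 likes"
    if len(words) == 1:
        return True   # the line only contains one word
    return False

def drop_bad_lines(text: str) -> str:
    return "\n".join(line for line in text.splitlines() if not should_remove_line(line))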
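
The Gopher-derived document statistics and the C4 "lorem ipsum" rule quoted in the diff can be sketched the same way. The thresholds are the ones stated in the strings; the whitespace tokenizer, the naive sentence split, the symbol list, and counting distinct stop words are assumptions for illustration, not details taken from TxT360:

import re

STOP_WORDS = {"the", "be", "to", "of", "and", "that", "have", "with"}
SYMBOLS = ("#", "...", "…")  # assumed symbol set; the summary only names a "symbol-to-word ratio"

def should_remove_document(text: str) -> bool:
    """Document-level thresholds paraphrased from the Gopher-style summary in the diff."""
    if "lorem ipsum" in text.lower():
        return True                                    # C4-style placeholder-page check
    words = text.split()
    n_words = len(words)
    if n_words < 50 or n_words > 100_000:              # word count
        return True
    mean_len = sum(len(w) for w in words) / n_words
    if mean_len < 3 or mean_len > 10:                  # mean word length
        return True
    sentences = [s for s in re.split(r"[.!?]", text) if s.strip()]
    if len(sentences) < 3:                             # number of sentences (naive split)
        return True
    n_symbols = sum(text.count(s) for s in SYMBOLS)
    if n_symbols / n_words > 0.1:                      # symbol-to-word ratio
        return True
    alpha_words = sum(any(c.isalpha() for c in w) for w in words)
    if alpha_words / n_words < 0.8:                    # fraction of words with an alphabetic character
        return True
    if len({w.lower().strip(".,!?") for w in words} & STOP_WORDS) < 2:
        return True                                    # fewer than two distinct stop words
    return False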
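
The paragraph at the end of the hunk (cut off by the diff boundary) contrasts the two Common Crawl formats. For orientation only, a minimal sketch of reading both with the warcio library; the choice of warcio and the record handling below are assumptions, and the hunk ends before stating which extraction tooling TxT360 actually uses:

from warcio.archiveiterator import ArchiveIterator

def iter_warc_html(path):
    """Yield (url, raw HTML bytes) from a WARC file, which stores full HTTP responses plus metadata."""
    with open(path, "rb") as f:
        for record in ArchiveIterator(f):
            if record.rec_type == "response":
                url = record.rec_headers.get_header("WARC-Target-URI")
                yield url, record.content_stream().read()  # HTML still needs an extractor to become text

def iter_wet_text(path):
    """Yield (url, plain text) from a WET file, where text was already extracted by Common Crawl."""
    with open(path, "rb") as f:
        for record in ArchiveIterator(f):
            if record.rec_type == "conversion":
                url = record.rec_headers.get_header("WARC-Target-URI")
                yield url, record.content_stream().read().decode("utf-8", errors="ignore")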