---
language:
- de
- es
- fr
- pt
- it
- nl
- el
- pl
- cs
- sk
task_categories:
- text-generation
pretty_name: Occiglot Fineweb v1.0
size_categories:
- 10B<n<100B
extra_gated_prompt: By filling out the form below I understand that occiglot-fineweb is a derivative collection of multiple datasets which use individual licenses and their respective terms and conditions apply. I understand that all uses of the textual content in occiglot-fineweb are subject to the terms of use. I understand that reusing the textual content in occiglot-fineweb might not be legal in all countries/regions and for all use cases. I understand that occiglot-fineweb is mainly targeted towards researchers and meant to be used in research. Occiglot reserves the right to revoke my access to this data. Occiglot reserves the right to modify this data at any time in accordance with takedown requests.
extra_gated_fields:
  Name: text
  Email: text
  Affiliation: text
  Country: text
  Usecase: text
  I have explicitly checked that downloading occiglot-fineweb is legal in my jurisdiction, in the country/region where I am located right now, and for the use case that I have described above, I have also read and accepted the relevant Terms of Use: checkbox
---

# Occiglot Fineweb v1.0

We present a more mature version of the multilingual Occiglot Fineweb corpus. In its current form, the dataset contains roughly 430M heavily cleaned documents from 10 languages.
Occiglot Fineweb builds on our existing collection of curated datasets and pre-filtered web data.
Subsequently, all documents were filtered with language-specific derivatives of the FineWeb processing pipeline and deduplicated at different levels.

We provide the data at 3 levels of processing (see the loading sketch below):

1. After filtering
2. After local deduplication (within data sources)
3. After global deduplication (for each language)

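As a rough illustration of how the data could be accessed, here is a minimal sketch using the `datasets` library. The repository ID, the per-language config name, and the way the three processing levels are exposed are assumptions for illustration only; please check the dataset viewer for the actual subset layout, and note that access is gated (log in via `huggingface-cli login` first).

```python
from datasets import load_dataset

# Hypothetical repository ID and config name -- check the dataset page
# for the actual subset layout (and processing-level variants) before running this.
REPO_ID = "occiglot/occiglot-fineweb-v1.0"

# Load the German subset in streaming mode to avoid downloading
# hundreds of gigabytes up front.
ds = load_dataset(
    REPO_ID,
    name="de",        # assumed: one config per language code
    split="train",
    streaming=True,
)

for doc in ds.take(3):
    print(doc["text"][:200])  # assumed field name "text"
```
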
We are actively working on extending this dataset with more data and further languages. For more information, please refer to our [blog post](https://occiglot.eu/posts/occiglot-fineweb/) or join our [Discord server](https://discord.gg/wUpvYs4XvM).

**Unfortunately, some of the datasets we used do not allow for re-distribution. Consequently, we had to exclude those from this version of our dataset. We are exploring different avenues to make this data available to the public as well.**

## Datasources
We mainly relied on two sources of data.

### 1. LLM-Datasets
From [LLM-Datasets](https://github.com/malteos/llm-datasets) we took all available datasets for the considered languages (excluding OSCAR). This collection of data for LLM training is curated from various sources and contains multiple high-quality datasets.

### 2. Web-Data
We sourced web-crawled data from our [Community-Oscar](https://huggingface.co/datasets/oscar-corpus/community-oscar) dataset.

## Filtering

All data was rigorously filtered using language-specific pipelines built upon [Hugging Face's FineWeb filters](https://github.com/huggingface/datatrove/blob/main/examples/fineweb.py).
In addition to some minor hyper-parameter adjustments, we mainly modified 3 aspects to ensure language-specific quality filtering (a simplified sketch follows the list):

1. Adjusted average word-length filters according to the linguistic characteristics of each language
2. Added language-specific stop words
3. Added a language-specific filter for policy and cookie notices

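The snippet below is a minimal, self-contained sketch of the kind of language-specific checks described above. It is not the actual datatrove pipeline; the thresholds, stop-word lists, and cookie/policy phrases are illustrative assumptions only.

```python
import re

# Illustrative per-language settings -- NOT the values used for the corpus.
LANG_SETTINGS = {
    "de": {
        "avg_word_len": (4.0, 14.0),   # German compounds push the average up
        "stop_words": {"der", "die", "das", "und", "nicht"},
        "policy_phrases": [r"cookie[- ]richtlinie", r"datenschutzerklärung"],
    },
    "it": {
        "avg_word_len": (3.0, 12.0),
        "stop_words": {"il", "la", "che", "e", "non"},
        "policy_phrases": [r"informativa sui cookie", r"privacy policy"],
    },
}

def passes_filters(text: str, lang: str) -> bool:
    cfg = LANG_SETTINGS[lang]
    words = text.split()
    if not words:
        return False
    # 1. Average word length must fall inside the language-specific band.
    avg_len = sum(len(w) for w in words) / len(words)
    lo, hi = cfg["avg_word_len"]
    if not lo <= avg_len <= hi:
        return False
    # 2. Natural-language documents should contain at least a few stop words.
    if sum(w.lower() in cfg["stop_words"] for w in words) < 2:
        return False
    # 3. Drop cookie-banner / policy boilerplate.
    if any(re.search(p, text, re.IGNORECASE) for p in cfg["policy_phrases"]):
        return False
    return True

print(passes_filters("Die Katze schläft und der Hund bellt nicht im Garten.", "de"))
```
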
Compared to our [prior version](https://huggingface.co/datasets/occiglot/occiglot-fineweb-v0.5), we improved the configuration of the filtering settings, cleaned up the encoding of every document using ftfy, and ran an additional language-identification filtering step for data sources from countries with multiple official languages (e.g., Belgium).

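As a rough sketch of these two extra cleaning steps, the snippet below combines ftfy's encoding repair with a fastText language-ID check. The model file name and the confidence threshold are assumptions, not the exact settings used for the corpus.

```python
import ftfy
import fasttext  # pip install fasttext; requires the lid.176.bin model file

# fastText language-identification model (downloaded separately from the
# fastText website); the confidence threshold is an illustrative assumption.
LID_MODEL = fasttext.load_model("lid.176.bin")
MIN_CONFIDENCE = 0.65

def clean_and_check(text: str, expected_lang: str) -> str | None:
    # Repair mojibake and broken encodings (e.g. "fÃ¼r" -> "für").
    fixed = ftfy.fix_text(text)
    # Keep the document only if the detected language matches the subset.
    labels, probs = LID_MODEL.predict(fixed.replace("\n", " "), k=1)
    lang = labels[0].replace("__label__", "")
    if lang == expected_lang and probs[0] >= MIN_CONFIDENCE:
        return fixed
    return None  # e.g. French documents in a Belgian "Dutch" source get dropped

print(clean_and_check("Dit is een korte Nederlandse zin over taalherkenning.", "nl"))
```
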

## Deduplication

We performed MinHash deduplication on all data of each language.

Importantly, for the globally deduplicated dataset we always retain the copy that is not contained in the web-crawled data. For example, if a Wikipedia page is also contained in OSCAR, we drop the OSCAR duplicate, thus keeping the Wikipedia subset complete.
This dataset structure allows the custom subsets to be reliably over- or undersampled.

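To make the deduplication step more concrete, here is a minimal near-duplicate sketch using the `datasketch` library together with the source-preference rule described above (keep the curated copy, drop the web copy). This is an illustration only, not the pipeline used to build the corpus; the shingle size, permutation count, and similarity threshold are assumed values.

```python
from datasketch import MinHash, MinHashLSH

NUM_PERM = 128  # assumed number of hash permutations

def minhash(text: str) -> MinHash:
    """Build a MinHash signature from word 5-gram shingles (assumed shingle size)."""
    words = text.lower().split()
    m = MinHash(num_perm=NUM_PERM)
    for i in range(max(len(words) - 4, 1)):
        m.update(" ".join(words[i:i + 5]).encode("utf-8"))
    return m

# Toy corpus: (doc_id, source, text); "web" marks web-crawled (OSCAR-like) data.
docs = [
    ("web-1", "web", "the quick brown fox jumps over the lazy dog near the river bank today"),
    ("wiki-1", "wikipedia", "the quick brown fox jumps over the lazy dog near the river bank"),
    ("web-2", "web", "an entirely different document about multilingual language models"),
]

# Process curated sources before web data, so that whenever a near-duplicate
# pair exists the curated copy is inserted first and the web copy is dropped.
docs.sort(key=lambda d: d[1] == "web")

lsh = MinHashLSH(threshold=0.7, num_perm=NUM_PERM)  # assumed Jaccard threshold
kept = []
for doc_id, source, text in docs:
    m = minhash(text)
    if lsh.query(m):          # near-duplicate already kept -> drop this copy
        continue
    lsh.insert(doc_id, m)
    kept.append(doc_id)

print(sorted(kept))  # ['web-2', 'wiki-1'] -- the web near-duplicate is gone
```
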

## Statistics

For the globally deduplicated set:

| Language | Language code | # Documents | # Tokens (Llama-3) |
| -- | -- | -- | -- |
| German | de | 82.60M | 135.46B |
| Spanish | es | 91.89M | 108.15B |
| French | fr | 61.80M | 87.61B |
| Portuguese | pt | 46.97M | 54.87B |
| Italian | it | 37.14M | 58.24B |
| Dutch | nl | 29.00M | 33.78B |
| Greek | el | 17.55M | 24.21B |
| Polish | pl | 21.43M | 35.35B |
| Czech | cs | 38.98M | 25.23B |
| Slovak | sk | 4.18M | 11.13B |
| **Total** | | **431.53M** | **574.03B** |

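The token counts above refer to a Llama-3 tokenizer. The snippet below shows how such a count could be reproduced for a handful of documents, under the assumption that the `meta-llama/Meta-Llama-3-8B` tokenizer (a gated checkpoint) is the one meant by the table.

```python
from transformers import AutoTokenizer

# Assumed tokenizer checkpoint -- the table only says "Llama-3"; access to the
# official meta-llama repositories is gated and requires an HF access token.
tok = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")

docs = [
    "Ein kurzes deutsches Beispieldokument.",
    "Un document d'exemple en français.",
]

# Count tokens without adding special tokens, then sum over the documents.
total = sum(len(tok(doc, add_special_tokens=False)["input_ids"]) for doc in docs)
print(f"{total} Llama-3 tokens across {len(docs)} documents")
```
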
## Acknowledgements

The dataset creation was supported by a compute grant at the [42 supercomputer](https://hessian.ai/), which is a central component in the development of [hessian AI](https://hessian.ai/), the [AI Innovation Lab](https://hessian.ai/infrastructure/ai-innovationlab/) (funded by the [Hessian Ministry of Higher Education, Research and the Arts (HMWK)](https://wissenschaft.hessen.de) & the [Hessian Ministry of the Interior, for Security and Homeland Security (HMinD)](https://innen.hessen.de)) and the [AI Service Centers](https://hessian.ai/infrastructure/ai-service-centre/) (funded by the [German Federal Ministry for Economic Affairs and Climate Action (BMWK)](https://www.bmwk.de/Navigation/EN/Home/home.html)).
Some preliminary computations were conducted on the [DFKI Pegasus Cluster](https://www.dfki.de/en/web).
Parts of the preliminary data curation were funded by the [German Federal Ministry for Economic Affairs and Climate Action (BMWK)](https://www.bmwk.de/Navigation/EN/Home/home.html)
through the project [OpenGPT-X](https://opengpt-x.de/en/) (project no. 68GX21007D).