sleeping4cat committed
Commit 13d7ca0 • 1 parent: ecaa232
added intro

README.md CHANGED
@@ -1,98 +1,108 @@
 ---
 license: mit
 dataset_info:
   features:
   - name: URL
     dtype: string
   - name: Wiki
     dtype: string
   - name: Language
     dtype: string
   - name: Title
     dtype: string
   - name: Abstract
     dtype: string
   - name: Version Control
     dtype: string
   splits:
   - name: data
     num_bytes: 3849026435
     num_examples: 6575217
   - name: data1
     num_bytes: 1339234317
     num_examples: 2565263
   - name: data2
     num_bytes: 299141019
     num_examples: 345314
   - name: data3
     num_bytes: 121110572
     num_examples: 194533
   - name: data4
     num_bytes: 763143777
     num_examples: 1474721
   - name: data5
     num_bytes: 4402642
     num_examples: 4238
   - name: data6
     num_bytes: 921270984
     num_examples: 1777137
   - name: data7
     num_bytes: 634758702
     num_examples: 1080168
   - name: data8
     num_bytes: 1490198322
     num_examples: 2501467
   - name: data9
     num_bytes: 1116300754
     num_examples: 1890629
   - name: data10
     num_bytes: 568620663
     num_examples: 1315710
   - name: data11
     num_bytes: 1555258518
     num_examples: 1854603
   - name: data12
     num_bytes: 787383297
     num_examples: 1197801
   - name: data13
     num_bytes: 160792644
     num_examples: 153847
   - name: data14
     num_bytes: 3861239238
     num_examples: 6082427
   download_size: 8114428353
   dataset_size: 17471881884
 configs:
 - config_name: default
   data_files:
   - split: data
     path: data/data-*
   - split: data1
     path: data/data1-*
   - split: data2
     path: data/data2-*
   - split: data3
     path: data/data3-*
   - split: data4
     path: data/data4-*
   - split: data5
     path: data/data5-*
   - split: data6
     path: data/data6-*
   - split: data7
     path: data/data7-*
   - split: data8
     path: data/data8-*
   - split: data9
     path: data/data9-*
   - split: data10
     path: data/data10-*
   - split: data11
     path: data/data11-*
   - split: data12
     path: data/data12-*
   - split: data13
     path: data/data13-*
   - split: data14
     path: data/data14-*
 ---
+
+Introducing Wikipedia-X, a comprehensive dataset of abstracts, complete articles, and a popularity-score index for both widely spoken and lesser-known Wikipedia subsets. Wikipedia-X is maintained as a centralized Wikipedia dataset that is updated regularly and processed to a consistent standard.
+
+A central focus of our effort was to include languages that often lack up-to-date Wikipedia dumps, or have no dumps at all. Languages such as Hebrew, Urdu, Bengali, Aramaic, Uighur, and Polish were prioritized so that high-quality, processed Wikipedia data is available for languages with substantial speaker bases. This initiative aims to enable Artificial Intelligence to thrive across all languages, breaking down language barriers and fostering inclusivity.
+
+Notice: We update this dataset every 8 months as part of a broader effort at LAION AI dedicated to textual embeddings. If you would like to see a specific language added, please reach out to us.
+
+(Please note that this dataset is under active development.)
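
For reference, the card above declares one default config whose splits (data through data14) each carry the six string features listed under `features`. Below is a minimal sketch of loading one split with the Hugging Face `datasets` library; the repository id is a placeholder assumption, not something stated in this commit.

```python
from datasets import load_dataset

# Placeholder Hub id -- an assumption for illustration only; substitute the
# actual repository path of this dataset.
REPO_ID = "laion/Wikipedia-X"

# Each subset is exposed as its own split (data, data1, ..., data14) under the
# default config, per the `configs` section of the card's YAML front matter.
ds = load_dataset(REPO_ID, split="data")

# Every row carries the six string features declared in the card:
# URL, Wiki, Language, Title, Abstract, Version Control.
print(ds.features)
print(ds[0]["Title"])
```

Since the card reports a download size of roughly 8 GB, passing `streaming=True` to `load_dataset` is a reasonable way to inspect a split without materializing it locally.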