---
license: mit
dataset_info:
  features:
    - name: URL
      dtype: string
    - name: Wiki
      dtype: string
    - name: Language
      dtype: string
    - name: Title
      dtype: string
    - name: Abstract
      dtype: string
    - name: Version Control
      dtype: string
  splits:
    - name: data
      num_bytes: 3849026435
      num_examples: 6575217
    - name: data1
      num_bytes: 1339234317
      num_examples: 2565263
    - name: data2
      num_bytes: 299141019
      num_examples: 345314
    - name: data3
      num_bytes: 121110572
      num_examples: 194533
    - name: data4
      num_bytes: 763143777
      num_examples: 1474721
    - name: data5
      num_bytes: 4402642
      num_examples: 4238
    - name: data6
      num_bytes: 921270984
      num_examples: 1777137
    - name: data7
      num_bytes: 634758702
      num_examples: 1080168
    - name: data8
      num_bytes: 1490198322
      num_examples: 2501467
    - name: data9
      num_bytes: 1116300754
      num_examples: 1890629
    - name: data10
      num_bytes: 568620663
      num_examples: 1315710
    - name: data11
      num_bytes: 1555258518
      num_examples: 1854603
    - name: data12
      num_bytes: 787383297
      num_examples: 1197801
    - name: data13
      num_bytes: 160792644
      num_examples: 153847
    - name: data14
      num_bytes: 3861239238
      num_examples: 6082427
    - name: data15
      num_bytes: 1143640444
      num_examples: 2522834
    - name: data16
      num_bytes: 244946
      num_examples: 727
  download_size: 8522442108
  dataset_size: 18615767274
configs:
  - config_name: default
    data_files:
      - split: data
        path: data/data-*
      - split: data1
        path: data/data1-*
      - split: data2
        path: data/data2-*
      - split: data3
        path: data/data3-*
      - split: data4
        path: data/data4-*
      - split: data5
        path: data/data5-*
      - split: data6
        path: data/data6-*
      - split: data7
        path: data/data7-*
      - split: data8
        path: data/data8-*
      - split: data9
        path: data/data9-*
      - split: data10
        path: data/data10-*
      - split: data11
        path: data/data11-*
      - split: data12
        path: data/data12-*
      - split: data13
        path: data/data13-*
      - split: data14
        path: data/data14-*
      - split: data15
        path: data/data15-*
      - split: data16
        path: data/data16-*
---

Introducing Wikipedia-X, a comprehensive dataset of abstracts, complete articles, and a popularity-score index for both widely spoken and lesser-resourced Wikipedia subsets. Wikipedia-X is intended as a centralized Wikipedia dataset that is updated on a regular schedule and processed to a consistent standard.
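
The metadata above declares the schema (URL, Wiki, Language, Title, Abstract, Version Control; all strings) and seventeen splits named `data` through `data16`. A minimal loading sketch with the Hugging Face `datasets` library might look like the following; the repo ID below is a placeholder, so substitute this repository's actual Hugging Face path:

```python
from datasets import load_dataset

# Load a single split; split names follow the metadata above
# ("data", "data1", ..., "data16"). The repo ID is a placeholder;
# substitute this repository's actual Hugging Face path.
ds = load_dataset("laion/Wikipedia-X", split="data")

# Every row exposes the declared string features.
row = ds[0]
print(row["Language"], "|", row["Title"])
print(row["Abstract"][:200])
```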

A central focus of our effort is covering languages that often lack up-to-date processed Wikipedia dumps, or any dumps at all. Languages such as Hebrew, Urdu, Bengali, Arabic, Uighur, and Polish were prioritized so that high-quality processed Wikipedia datasets are accessible for these languages with substantial speaker bases. This initiative aims to enable Artificial Intelligence to thrive across all languages, breaking down language barriers and fostering inclusivity.

Notice: We update this dataset every 8 months as part of a broader effort at LAION AI dedicated to textual embeddings. If you'd like to see a specific language added, please don't hesitate to reach out to us.

(Please note that this dataset is actively under development.)

| Language   | Code |
|------------|------|
| English    | en   |
| German     | de   |
| Polish     | pl   |
| Spanish    | es   |
| Hebrew     | he   |
| French     | fr   |
| Chinese    | zh   |
| Italian    | it   |
| Russian    | ru   |
| Urdu       | ur   |
| Portuguese | pt   |
| Arabic     | ar   |
| Cebuano    | ceb  |
| Swedish    | sv   |
| Uighur     | ug   |
| Bengali    | bn   |
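
To pull out one language, you can filter on the `Language` column using the codes in the table. A hedged sketch follows: it streams to avoid downloading a full split up front, the repo ID is again a placeholder, and since the docs do not state whether a language is confined to a single split, larger jobs may need to scan several splits.

```python
from datasets import load_dataset

# Stream a split and keep only Bengali rows ("bn" per the table above).
# Streaming avoids materializing the full ~3.8 GB "data" split locally.
# Repo ID is a placeholder for this repository's actual path.
ds = load_dataset("laion/Wikipedia-X", split="data", streaming=True)
bengali = ds.filter(lambda row: row["Language"] == "bn")

# Peek at the first few matches.
for row in bengali.take(3):
    print(row["Title"], "::", row["Abstract"][:80])
```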