Modalities: Tabular, Text
Formats: arrow
Libraries: Datasets
License: cc-by-sa-3.0
Commit 0a7ecc2 by luckycat37 (1 parent: 867ed69)

Update README.md

Files changed (1): README.md (+40 -2)

README.md (updated):
---
license: cc-by-sa-3.0
---
# Dataset Card for the Wikipedia Snapshot (correct to 31-12-2014)

This is a Wikipedia dataset correct to 31-12-2014 (31 December 2014).

## Dataset Details

### Dataset Description

WikiMedia routinely publishes dumps of Wikipedia, each containing the full revision history of every article. We first identify the relevant revision of each page before extracting the article text: specifically, we select the most recent revision as of December 31st of each year. Consequently, some revisions in our datasets date back several years before the target date, because those pages had not been edited since. While the inclusion of older revisions might initially appear problematic, these are precisely the versions of the Wikipedia pages that were live at the cutoff date, and their content was considered current enough at that time. This approach ensures that our training datasets reflect the most up-to-date information available on Wikipedia at each year's end, providing a realistic snapshot of knowledge for that specific point in time.
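
For illustration, below is a minimal sketch of this selection step using only the Python standard library: it streams a full-history dump and keeps, for each page, the newest revision dated on or before the cutoff. The dump path and XML namespace version are placeholders, and this is a sketch of the idea rather than the exact code used to build the dataset; because MediaWiki timestamps are ISO 8601 strings in UTC, they can be compared lexicographically without any date parsing.

```python
# Sketch only: select, for each page in a full-history dump, the most recent
# revision with a timestamp on or before the cutoff date.
import xml.etree.ElementTree as ET

NS = "{http://www.mediawiki.org/xml/export-0.10/}"  # namespace varies with dump version
CUTOFF = "2014-12-31T23:59:59Z"                      # ISO timestamps sort lexicographically

def latest_revisions(dump_path, cutoff=CUTOFF):
    """Yield (title, timestamp, wikitext) for the newest revision of each page
    whose timestamp is on or before the cutoff."""
    for _, elem in ET.iterparse(dump_path):
        if elem.tag != NS + "page":
            continue
        title = elem.findtext(NS + "title")
        best_ts, best_text = None, None
        for rev in elem.iter(NS + "revision"):
            ts = rev.findtext(NS + "timestamp")
            if ts is not None and ts <= cutoff and (best_ts is None or ts > best_ts):
                best_ts, best_text = ts, rev.findtext(NS + "text")
        if best_text:
            yield title, best_ts, best_text
        elem.clear()  # free memory while streaming through a large dump
```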

Once each relevant revision has been identified, we clean the page using the code from [wiki-dump-reader](https://github.com/CyberZHG/wiki-dump-reader/tree/master), which parses the wikitext and outputs clean text. During the cleaning phase a number of unwanted features and attributes are removed: file links, emphasis markup, comments, indents, HTML, references, etc.
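
Below is a sketch of the cleaning call based on wiki-dump-reader's documented `Cleaner` interface; the sample wikitext is a toy stand-in for a revision selected in the previous step, not actual data from this dataset.

```python
# Cleaning sketch with wiki-dump-reader (pip install wiki-dump-reader).
from wiki_dump_reader import Cleaner

# Toy stand-in for the raw wikitext of a selected revision.
raw_wikitext = "'''Oxford''' is a city in [[England]].<ref>a reference</ref>"

cleaner = Cleaner()
text = cleaner.clean(raw_wikitext)       # strips emphasis, references, HTML, file links, etc.
text, links = cleaner.build_links(text)  # replaces [[...]] wiki links with their surface text
print(text)
```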

- **Language(s):** English
- **License:** cc-by-sa-3.0

## Uses

Intended uses include diachronic studies of Wikipedia, historical LLM pre-training, and any task that requires strict temporal partitioning of data.

## Dataset Structure

The dataset is stored in Apache Arrow format, which supports fast loading of large files and is directly compatible with the Hugging Face `datasets` library.
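
For example, the snapshot can be loaded with the `datasets` library as sketched below; the repository id is a placeholder, since this card does not state the exact id.

```python
# Minimal loading sketch; replace the placeholder id with this dataset's actual id.
from datasets import load_dataset

ds = load_dataset("luckycat37/wikipedia-2014", split="train")
print(ds)      # features and number of rows
print(ds[0])   # first article record
```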

## Bias, Risks, and Limitations

This dataset includes all Wikipedia articles, some of which may not be useful to the end user. Filtering for relevant articles may therefore be necessary for downstream tasks, as in the sketch below.
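
As a hypothetical example of such filtering, the `filter` method of the `datasets` library keeps only rows matching a predicate; the repository id and the `text` column name below are assumptions about the schema.

```python
# Hypothetical filtering sketch: keep only articles that mention a keyword.
from datasets import load_dataset

ds = load_dataset("luckycat37/wikipedia-2014", split="train")   # placeholder id
subset = ds.filter(lambda row: "economics" in row["text"].lower())
print(len(ds), "articles ->", len(subset), "after filtering")
```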

## Dataset Card Contact

felix.drinkall@eng.ox.ac.uk

## Acknowledgments

We are grateful to Graphcore, and their team, for their support in providing us with compute for this project. The first author was funded by the Economic and Social Research Council of the UK via the Grand Union DTP. This work was supported in part by a grant from the Engineering and Physical Sciences Research Council (EP/T023333/1). We are also grateful to the Oxford-Man Institute of Quantitative Finance and the Oxford e-Research Centre for their support.

## Citation

Please refer to and cite the following paper when using this dataset in any downstream applications.

**BibTeX:**

@inproceedings{drinkall-tima-2024,
    title = "Time Machine GPT",
    author = "Drinkall, Felix and Zohren, Stefan and Pierrehumbert, Janet",
    booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
    month = jun,
    year = "2024",
    publisher = "Association for Computational Linguistics"
}