---
task_categories:
- text2text-generation
license: cc-by-nc-4.0
language:
- de
---
# DTA reviEvalCorpus (v1.0)

## Dataset Description

### Dataset Summary
The DTA reviEvalCorpus is a revised version of the DTA EvalCorpus. It is a parallel corpus of German texts from the period between 1780 and 1899 that aligns historical spellings with normalizations. The normalization of a sentence is a modified version of the original text, adapted to modern spelling conventions. The corpus can be used to train and evaluate models for normalizing historical German text. The DTA EvalCorpus was created in 2013 as part of the Deutsches Textarchiv (German Text Archive). The revised DTA reviEvalCorpus contains improved normalizations; the revision was done in the context of Text+. For more information on the corpus, its creation and its revision, see Dataset Creation.
### Supported Tasks

- `text2text-generation`: The dataset can be used to train a sentence-level seq2seq model for historical text normalization of German, i.e. the conversion of text in historical spelling to the contemporary spelling.
### Languages
German (historical and contemporary)
## Dataset Structure

### Data Instances
A sample instance from the dataset:

```json
{
  "basename": "kotzebue_menschenhass_1790",
  "par_idx": 2139,
  "orig": "nicht durch trockene Wortzaͤnckerey, durch That will ich widerlegen, was der Graf da eben herdeclamirte.",
  "norm": "nicht durch trockene Wortzänkerei, durch Tat will ich widerlegen, was der Graf da eben herdeklamierte."
}
```
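A minimal sketch of loading the corpus with the Hugging Face `datasets` library; the repository id is taken from the citation URL below and is an assumption about where the data is hosted, so adjust it if needed.

```python
from datasets import load_dataset

# Repository id assumed from the citation URL at the end of this card.
dataset = load_dataset("ybracke/dta-reviEvalCorpus-v1")

# Inspect one sentence-level alignment from the training split.
example = dataset["train"][0]
print(example["orig"])  # sentence in historical spelling
print(example["norm"])  # sentence in normalized spelling
```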
### Data Fields

- `basename`: str, identifier of the work in the Deutsches Textarchiv
- `par_idx`: int, index of the sentence within the work
- `orig`: str, sentence as spelled in the original text
- `norm`: str, sentence in normalized spelling

Use `orig` as the input and `norm` as the output to train a sentence-level normalizer. (Reverse input and output to train a 'historicizer'.)

Taken together, `basename` and `par_idx` are the unique identifier for a sentence in the corpus.
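As an illustration of this setup (continuing from the loading sketch above; the choice of the byte-level model ByT5 and the tokenization parameters are assumptions, not part of the dataset), the sentence pairs could be prepared for seq2seq fine-tuning like this:

```python
from transformers import AutoTokenizer

# Illustrative model choice; any encoder-decoder model that handles
# character-level changes well could be used instead.
tokenizer = AutoTokenizer.from_pretrained("google/byt5-small")

def preprocess(batch):
    # Historical spelling is the input, normalized spelling the target.
    model_inputs = tokenizer(batch["orig"], truncation=True, max_length=512)
    labels = tokenizer(text_target=batch["norm"], truncation=True, max_length=512)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset.map(preprocess, batched=True)
```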
### Data Splits

The dataset is split into a train, a validation and a test split. Split sizes are listed in the table below. Each split contains only entire documents, i.e. all sentences from a given document. The validation set consists of the same documents used for testing in Jurish & Ast (2015). Authors that are included in the train split are not included in the validation or test set. Some authors overlap between the validation and the test set: the test set contains 10 works by authors that also occur in the validation set, plus two additional documents, chosen so that the test and validation sets contain a similar number of tokens from different decades. Compared to the train set, the test and validation sets contain more documents from earlier decades within the time frame of the DTA EvalCorpus. This should make the validation and test sets "harder" than the training set.

|           | train | validation | test |
|-----------|-------|------------|------|
| Documents | 96    | 13         | 12   |
| Sentences | 204K  | 18K        | 22K  |
| Tokens    | 4.6M  | 405K       | 381K |
## Dataset Creation

### Creation of the DTA EvalCorpus (2013)

The DTA EvalCorpus was compiled for training and evaluating tools for historical text normalization. A parallel corpus was created by aligning digitizations of historical prints in original spelling with editions of the same works in contemporary orthography. The historical works come from the Deutsches Textarchiv (German Text Archive); the modern editions come from the TextGrid Repository and Project Gutenberg. The alignment was performed semi-automatically, then checked manually and corrected where necessary. The corpus contains a total of 121 works from the period between 1780 and 1899 by 53 authors (50 men, 3 women) and comprises fiction (novels, poetry, drama, etc.) and non-fiction (scientific writing). The DTA EvalCorpus was released in a custom XML format with word-level alignments.
More details on the rationale for the compilation and on the creation process can be found in Jurish et al. (2013), Jurish & Ast (2015), and in the documentation of the 2013 DTA EvalCorpus.
### Creation of the DTA reviEvalCorpus (2024)

The DTA reviEvalCorpus is a version of the DTA EvalCorpus with revised and improved normalizations. The normalizations in the 2013 corpus contain some inconsistencies and errors, because they were taken from modern editions that were created at different points in time and by different editors, and thus not with a single, fixed normalization standard in mind. In particular, normalizations sometimes follow the rules of German orthography from before the 1996 orthography reform (e.g., "Kuß" instead of "Kuss"). For the DTA reviEvalCorpus, this was revised as far as possible by applying type replacements and selected LanguageTool rules, focusing on compound spellings (e.g., "so lange" -> "solange"). Furthermore, the revision fixed other inconsistent, questionable or erroneous normalizations (e.g., "Sprüchwort", "totmüde", "zusehn").
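To illustrate the idea behind type replacements, here is a minimal sketch; the replacement table, the function name and the example sentence are hypothetical and not taken from the actual revision pipeline, which additionally relied on LanguageTool rules for context-dependent cases.

```python
import re

# Hypothetical excerpt of a type-replacement table: pre-reform or otherwise
# inconsistent normalizations mapped to the intended modern forms.
TYPE_REPLACEMENTS = {
    "Kuß": "Kuss",
    "so lange": "solange",
}

def apply_type_replacements(norm: str) -> str:
    """Apply whole-word replacements to a normalized sentence.

    Note: real compound spellings such as "so lange"/"solange" depend on
    context, which is why rule-based checks (LanguageTool) were also used.
    """
    for old, new in TYPE_REPLACEMENTS.items():
        norm = re.sub(rf"\b{re.escape(old)}\b", new, norm)
    return norm

print(apply_type_replacements("Ein Kuß, so lange es dauert."))
# -> "Ein Kuss, solange es dauert."
```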
The DTA reviEvalCorpus is released in a machine learning-friendly format (JSONL), where each record represents a sentence-level alignment (sentence in original spelling, sentence in normalized spelling).
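Because the release format is plain JSONL, the files can also be read without any special tooling; a minimal sketch, assuming a local file named `train.jsonl`:

```python
import json

# Read sentence-level alignments from a local JSONL file (filename assumed).
with open("train.jsonl", encoding="utf-8") as f:
    records = [json.loads(line) for line in f]

print(records[0]["orig"], "->", records[0]["norm"])
```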
## Additional Information

### Dataset Curators
The DTA EvalCorpus was created by Bryan Jurish, Henriette Ast, Marko Drotschmann and Christian Thomas at the Berlin-Brandenburgische Akademie der Wissenschaften.
The revisions that led to the DTA reviEvalCorpus were done by Yannic Bracke as part of Text+ at the Berlin-Brandenburgische Akademie der Wissenschaften.
### Licensing Information
This corpus is licensed under the CC BY-NC 4.0 license.
The original DTA EvalCorpus has the following licensing information:
- DTA EvalCorpus by Bryan Jurish, Henriette Ast, Marko Drotschmann and Christian Thomas, licensed under the CC BY-NC 3.0 license,
- historical source text by the Deutsches Textarchiv, licensed under the CC BY-NC 3.0 license,
- contemporary target text by TextGrid, licensed under the CC BY 3.0 license,
- contemporary target text by Project Gutenberg, licensed under the Project Gutenberg License.
### References

Jurish, Bryan, and Henriette Ast. 2015. "Using an Alignment-based Lexicon for Canonicalization of Historical Text." In J. Gippert & R. Gehrke (editors), Historical Corpora: Challenges and Perspectives, volume 5 of Corpus Linguistics and Interdisciplinary Perspectives on Language (CLIP), pages 197-208. Narr, Tübingen.

Jurish, Bryan, Marko Drotschmann, and Henriette Ast. 2013. "Constructing a Canonicalized Corpus of Historical German Text by Text Alignment." In P. Bennett, M. Durrell, S. Scheible, and R. J. Whitt (editors), New Methods in Historical Corpora, volume 3 of Corpus Linguistics and Interdisciplinary Perspectives on Language (CLIP), pages 221-234. Narr, Tübingen.
### Citation Information

```bibtex
@misc{dta_revieval,
  author  = {Yannic Bracke},
  title   = {DTA reviEvalCorpus},
  year    = {2024},
  version = {1.0},
  url     = {https://huggingface.co/datasets/ybracke/dta-reviEvalCorpus-v1}
}
```