---
license: other
size_categories:
- 1M<n<10M
configs:
- config_name: default
  data_files:
  - split: train
    path: 'data/*/*.parquet'
- config_name: retsinformationdk
  data_files:
  - split: train
    path: data/retsinformationdk/*.parquet
- config_name: ep
  data_files:
  - split: train
    path: data/ep/*.parquet
- config_name: ft
  data_files:
  - split: train
    path: data/ft/*.parquet
- config_name: wikisource
  data_files:
  - split: train
    path: data/wikisource/*.parquet
- config_name: spont
  data_files:
  - split: train
    path: data/spont/*.parquet
- config_name: tv2r
  data_files:
  - split: train
    path: data/tv2r/*.parquet
- config_name: adl
  data_files:
  - split: train
    path: data/adl/*.parquet
- config_name: hest
  data_files:
  - split: train
    path: data/hest/*.parquet
- config_name: skat
  data_files:
  - split: train
    path: data/skat/*.parquet
- config_name: dannet
  data_files:
  - split: train
    path: data/dannet/*.parquet
- config_name: retspraksis
  data_files:
  - split: train
    path: data/retspraksis/*.parquet
- config_name: wikibooks
  data_files:
  - split: train
    path: data/wikibooks/*.parquet
- config_name: jvj
  data_files:
  - split: train
    path: data/jvj/*.parquet
- config_name: gutenberg
  data_files:
  - split: train
    path: data/gutenberg/*.parquet
- config_name: botxt
  data_files:
  - split: train
    path: data/botxt/*.parquet
- config_name: depbank
  data_files:
  - split: train
    path: data/depbank/*.parquet
- config_name: naat
  data_files:
  - split: train
    path: data/naat/*.parquet
- config_name: synne
  data_files:
  - split: train
    path: data/synne/*.parquet
- config_name: wiki
  data_files:
  - split: train
    path: data/wiki/*.parquet
- config_name: nordjyllandnews
  data_files:
  - split: train
    path: data/nordjyllandnews/*.parquet
- config_name: relig
  data_files:
  - split: train
    path: data/relig/*.parquet
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- da
multilinguality:
- monolingual
source_datasets:
- original
task_categories:
- text-generation
task_ids:
- language-modeling
pretty_name: Danish Dynaword
language_bcp47:
- da
- da-bornholm
- da-synnejyl
---
<!--
readme structure is inspired by:
https://github.com/huggingface/datasets/blob/main/templates/README_guide.md -->
# 🧨 Danish Dynaword

|              |                                                                                                                                                                 |
| ------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Language** | dan, dansk, Danish                                                                                                                                              |
| **License**  | Permissive; see the respective datasets                                                                                                                         |
| **Models**   | For models trained on this data, see [danish-foundation-models](https://huggingface.co/danish-foundation-models)                                                |
| **Contact**  | If you have questions about this project, please create an issue [here](https://huggingface.co/datasets/danish-foundation-models/danish-gigaword-2/discussions) |
## Table of Contents

- [🧨 Danish Dynaword](#-danish-dynaword)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
    - [Dataset Summary](#dataset-summary)
    - [Loading the dataset](#loading-the-dataset)
    - [Languages:](#languages)
  - [Dataset Structure](#dataset-structure)
    - [Data Instances](#data-instances)
    - [Data Fields](#data-fields)
    - [Data Splits](#data-splits)
  - [Dataset Creation](#dataset-creation)
    - [Curation Rationale](#curation-rationale)
    - [Annotations](#annotations)
    - [Source Data](#source-data)
  - [Additional Information](#additional-information)
    - [Contributing to the dataset](#contributing-to-the-dataset)
    - [Citation Information](#citation-information)
## Dataset Description

### Dataset Summary

Danish Dynaword is a continually developed collection of Danish free-form text datasets from various domains, intended to be updated with new data sources over time. If you would like to contribute a dataset, see the [contribute section](#contributing-to-the-dataset).
### Loading the dataset

```py
from datasets import load_dataset

name = "danish-foundation-models/danish-dynaword"
ds = load_dataset(name, split="train")
sample = ds[1]  # see "Data Instances" below
```
or load it by streaming the data:

```py
ds = load_dataset(name, split="train", streaming=True)
dataset_iter = iter(ds)
sample = next(dataset_iter)
```
You can also load a single subset at a time:

```py
ds = load_dataset(name, "adl", split="train")
```
As Danish Dynaword is continually expanded and curated, you can make sure that you get the same dataset every time by specifying the revision:

```py
ds = load_dataset(name, revision="{desired revision}")
```
### Languages:

This dataset includes the following languages:

- dan-Latn
- dan-Latn-bornholm
- dan-Latn-synnejyl

Language is denoted using [BCP-47](https://en.wikipedia.org/wiki/IETF_language_tag), combining the ISO 639-3 language code with the ISO 15924 script code. The last element denotes the regional variant.
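These tags can be decomposed programmatically. A minimal sketch (`parse_tag` is a hypothetical helper for illustration, not part of the dataset or the `datasets` library):

```python
def parse_tag(tag: str) -> dict:
    """Split a BCP-47-style tag into language, script, and variant parts."""
    parts = tag.split("-")
    return {
        "language": parts[0],                            # ISO 639-3 code
        "script": parts[1] if len(parts) > 1 else None,  # ISO 15924 code
        "variant": parts[2] if len(parts) > 2 else None, # regional variant
    }

print(parse_tag("dan-Latn-bornholm"))
# {'language': 'dan', 'script': 'Latn', 'variant': 'bornholm'}
```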
## Dataset Structure

The dataset contains text from different sources which are thoroughly defined in [Source Data](#source-data).
### Data Instances

Each entry in the dataset consists of a single text with associated metadata:

```py
{
    "text": "SAMLEDE VÆRKER\n\nJEPPE AAKJÆR GYLDENDALSKE BOGHANDEL...",
    "source": "adl",
    "id": "adl_aakjaer06val",
    "added": "2020-09-14",
    "created": "1700-01-01, 2022-01-01",
    "license": "Creative Commons Legal Code\n\nCC0 1.0 Universal",
    "domain": "Wiki & Books",
    "metadata": {"source-pretty": "Archive for Danish Literature"},
}
```
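The `created` field stores a date range as a single comma-separated string. A minimal sketch of parsing it with the standard library (`parse_created` is a hypothetical helper, not part of the dataset tooling):

```python
from datetime import date

def parse_created(created: str) -> tuple[date, date]:
    """Parse the comma-separated 'created' date range into a pair of dates."""
    start, end = (date.fromisoformat(part.strip()) for part in created.split(","))
    return start, end

start, end = parse_created("1700-01-01, 2022-01-01")
print(start.year, end.year)  # 1700 2022
```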
### Data Fields

An entry in the dataset consists of the following fields:

- `text` (`str`): The content of the document.
- `source` (`str`): The source of the document (see [Source Data](#source-data)).
- `id` (`str`): A unique identifier for each document.
- `added` (`str`): The date when the document was added to this collection.
- `created` (`str`): The date range for when the document was originally created.
- `license` (`str`): The license of the document. The licenses vary according to the source.
- `domain` (`str`): The domain of the source.
- `metadata/source-pretty` (`str`): The long-form version of the short-form source name.
- `metadata/*`: Potentially additional metadata.
### Data Splits

The entire corpus is provided in the `train` split.
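Since only a `train` split is shipped, downstream users who need a held-out set must derive one themselves. A minimal, deterministic sketch over document ids (the ids, fraction, and `holdout_ids` helper below are illustrative assumptions, not part of the dataset):

```python
import random

def holdout_ids(ids: list[str], fraction: float = 0.1, seed: int = 42) -> set[str]:
    """Deterministically select a fraction of document ids as a held-out set."""
    rng = random.Random(seed)
    shuffled = sorted(ids)  # sort first so the result is independent of input order
    rng.shuffle(shuffled)
    n_holdout = max(1, int(len(shuffled) * fraction))
    return set(shuffled[:n_holdout])

doc_ids = [f"adl_doc{i:03d}" for i in range(100)]
held_out = holdout_ids(doc_ids)
print(len(held_out))  # 10
```

Splitting on stable document ids (rather than row positions) keeps the held-out set consistent across dataset revisions, as long as the ids persist.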
## Dataset Creation

### Curation Rationale

These datasets were collected and curated with the intention of making large quantities of Danish text data available. While they were collected with the intention of developing language models, they are likely to have multiple other uses, such as examining language development and differences across domains.
### Annotations

This data generally contains no annotation besides the metadata attached to each sample, such as what domain it belongs to.
### Source Data

Below follows a brief overview of the sources in the corpus along with their individual licenses.
| Source            | License                                                  |
| ----------------- | -------------------------------------------------------- |
| adl               | Creative Commons Legal Code 1.0 Universal                |
| botxt             | Creative Commons Legal Code 1.0 Universal                |
| dannet            | [dannet license]                                         |
| depbank           | Attribution-ShareAlike 4.0 International                 |
| ep                | Creative Commons Legal Code 1.0 Universal                |
| ft                | Creative Commons Legal Code 1.0 Universal                |
| gutenberg         | [gutenberg license]                                      |
| hest              | Creative Commons Legal Code 1.0 Universal                |
| jvj               | Attribution-ShareAlike 4.0 International                 |
| naat              | Creative Commons Legal Code 1.0 Universal                |
| relig             | Creative Commons Legal Code 1.0 Universal                |
| retsinformationdk | [Other (Danish Law)]                                     |
| retspraksis       | Creative Commons Legal Code 1.0 Universal                |
| skat              | Creative Commons Legal Code 1.0 Universal                |
| spont             | Creative Commons Legal Code 1.0 Universal                |
| synne             | Creative Commons Legal Code 1.0 Universal                |
| tv2r              | [Custom, Creative Commons Attribution 4.0 International] |
| wiki              | Creative Commons Legal Code 1.0 Universal                |
| wikibooks         | Creative Commons Legal Code 1.0 Universal                |
| wikisource        | Creative Commons Legal Code 1.0 Universal                |
[Custom, Creative Commons Attribution 4.0 International]: https://huggingface.co/datasets/danish-foundation-models/danish-gigaword-2/blob/main/data/tv2r/tv2r.md#license-information
[gutenberg license]: https://www.gutenberg.org/policy/license.html
[dannet license]: https://cst.ku.dk/projekter/dannet/license.txt
[Other (Danish Law)]: https://huggingface.co/datasets/danish-foundation-models/danish-gigaword-2/blob/main/data/retsinformationdk/retsinformationdk.md#license-information
## Additional Information

### Contributing to the dataset

We welcome contributions to the dataset, such as new sources, better data filtering, and so on. To get started on contributing, please see [the contribution guidelines](CONTRIBUTING.md).
### Citation Information

This version expands upon existing dataset sources such as [Danish Gigaword](https://huggingface.co/danish-foundation-models). We recommend that you cite the original source of the data when using this dataset.