---
license: pddl
language:
- en
tags:
- fiction
pretty_name: Robot E Howard
size_categories:
- n<1K
---

# Dataset Card for Robot E Howard

This dataset contains 94 works of fiction by Robert E. Howard.

## Dataset Details

### Dataset Description

This dataset contains 94 works of fiction by Robert E. Howard: every public domain story, novella, and novel I could find, minus the Breckenridge Elkins stories. I left those out because I didn't want their extreme dialect and slang to affect any models trained on this data.

This second version of the dataset revises the texts to more closely match the style of the "chosen" (read: original from the book) texts in [gutenberg-dpo-v0.1](https://huggingface.co/datasets/jondurbin/gutenberg-dpo-v0.1).

At some point, I will update and release a third version, maybe as a second dataset, that fully matches the schema of gutenberg-dpo-v0.1.

- **Curated by:** leftyfeep
- **Language(s) (NLP):** en
- **License:** Public Domain

### Dataset Sources

The dataset was created by pulling public domain texts from gutenberg.org and wikisource.org, then cleaning them up and merging them.

## Uses

Training or fine-tuning an LLM on long-form pulp fiction prose.
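
A minimal loading sketch with the Hugging Face `datasets` library; the repo id `leftyfeep/robot-e-howard` and the `text` column name are assumptions here, so substitute the actual values:

```python
# Minimal loading sketch -- the repo id and column name are assumptions.
from datasets import load_dataset

ds = load_dataset("leftyfeep/robot-e-howard", split="train")
print(ds[0]["text"][:200])  # peek at the start of the first chunk
```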

## Dataset Structure

Each story, novella, or novel was split into chunks: scene breaks were converted to chapter breaks, and the text was then split on chapters. All newline characters within each chunk were removed so that every chunk sits on a single line, with `<|endoftext|>` lines separating the chunks.
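
As a rough illustration, here is a sketch of that pipeline in Python. The scene-break marker (`* * *`) and the chapter-heading pattern are assumptions for illustration, not the exact rules used to build the dataset:

```python
import re

# Sketch of the chunking described above; the break markers are assumptions.
SCENE_BREAK = re.compile(r"^\s*\*(\s*\*)*\s*$", re.MULTILINE)  # e.g. "* * *"
CHAPTER = re.compile(r"^\s*(?:Chapter|CHAPTER)\b.*$", re.MULTILINE)

def chunk_story(text: str) -> list[str]:
    # Treat scene breaks like chapter breaks, then split on chapters.
    text = SCENE_BREAK.sub("<BREAK>", text)
    text = CHAPTER.sub("<BREAK>", text)
    chunks = [c.strip() for c in text.split("<BREAK>") if c.strip()]
    # Collapse each chunk onto a single line.
    return [" ".join(c.split()) for c in chunks]

def write_corpus(stories: list[str], path: str) -> None:
    lines = [chunk for story in stories for chunk in chunk_story(story)]
    with open(path, "w", encoding="utf-8") as f:
        # One chunk per line, with <|endoftext|> lines between chunks.
        f.write("\n<|endoftext|>\n".join(lines))
```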


## Dataset Creation

### Curation Rationale

Every public domain Robert E. Howard story I could find, except for the Breckenridge Elkins stories. Those are written in heavy dialect and slang that I didn't want creeping into text generations.

### Source Data

As described under Dataset Sources, the texts were pulled from gutenberg.org and wikisource.org, then cleaned up and merged.
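
For reference, a rough sketch of the kind of pull-and-clean step involved; the URL is a placeholder, and the `*** START OF` / `*** END OF` markers are Project Gutenberg's standard boilerplate sentinels:

```python
import requests

def fetch_gutenberg_text(url: str) -> str:
    """Fetch a plain-text ebook and strip the Project Gutenberg boilerplate."""
    raw = requests.get(url, timeout=30).text
    # Keep only the text between the standard start/end markers.
    body = raw.split("*** START OF", 1)[-1].split("\n", 1)[-1]
    body = body.split("*** END OF", 1)[0]
    return body.strip()
```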

## Bias, Risks, and Limitations

Text generated with an LLM trained on this might be too awesome. More seriously: these are pulp stories from the 1920s and 1930s, and some contain period-typical attitudes and depictions that models trained on this data may reproduce.