# Elliquiy roleplaying forum data
A collection of 6,640,593 posts from 112,328 mostly high-effort _erotic_ roleplaying forum threads from Elliquiy, spanning April 2005 through April 2023. About 9 GB of uncompressed text data (including formatting tags). The data comes from the larger [raw Forum RP dataset](https://huggingface.co/datasets/lemonilia/Roleplay-Forums_2023-04) I also uploaded.
Basic automated cleaning was performed, but the messages are still (by deliberate choice) in HTML format, with the notable exception that line breaks were converted into `\n`.
In addition to the messages, some metadata is provided for convenience, as well as alternative usernames (in the format `User0`, `User1` ... `UserN`) that can be used in place of the real ones. These aliases are unique per thread, but not globally.
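The per-thread alias scheme can be sketched in a few lines. Note that the field names below are hypothetical illustrations, not the dataset's actual column names:

```python
# Assign User0, User1, ... aliases in order of first appearance within a
# single thread. Aliases are NOT consistent across threads: the same person
# may be User0 in one thread and User3 in another.
def alias_usernames(messages):
    aliases = {}
    for msg in messages:
        name = msg["username"]
        if name not in aliases:
            aliases[name] = f"User{len(aliases)}"
    return [{**msg, "username": aliases[msg["username"]]} for msg in messages]

thread = [
    {"username": "Alice", "message": "Hi"},
    {"username": "Bob", "message": "Hello"},
    {"username": "Alice", "message": "Bye"},
]
print([m["username"] for m in alias_usernames(thread)])  # -> ['User0', 'User1', 'User0']
```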
Consider this a **work in progress**. I might update the dataset in the future as I improve the cleaning procedure or the data format.
# Limitations and issues
During the scraping procedure (performed in April 2023), some information, such as text color and links, was lost.
Most of the data is adult-themed, and since usernames still appear inside the post bodies, it would be best not to use them directly when training a model on this data.
Given the rich text formatting used by many users, a complete and thorough conversion to Markdown seems very difficult without losing information or introducing formatting problems.
Due to the nested structure of the data, I had to split the original parquet file into smaller ones in order to avoid issues when loading them with `pandas` in Python (via PyArrow). This appears to be a [documented problem](https://github.com/apache/arrow/issues/21526#issuecomment-2350741670).
# Basic usage
Loading the files with `pandas` requires PyArrow, installable from `pip`; FastParquet will not work properly due to the nested data structure.
```python
import pandas
```
## NOT done
- Replacing HTML escape characters
- Balancing quote marks and other characters that are supposed to be paired
- Changing fancy punctuation to ASCII punctuation
- Removing usernames entirely
  - Removing them from the message bodies themselves might not be easy.
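Some of the steps not done in the dataset are easy to apply downstream if desired. A minimal sketch for unescaping HTML entities and normalizing fancy punctuation (this is not part of the dataset's own cleaning pipeline):

```python
import html

# Map common "fancy" punctuation to ASCII equivalents.
FANCY_TO_ASCII = str.maketrans({
    "\u2018": "'", "\u2019": "'",   # curly single quotes
    "\u201c": '"', "\u201d": '"',   # curly double quotes
    "\u2013": "-", "\u2014": "-",   # en/em dashes
    "\u2026": "...",                # horizontal ellipsis
})

def normalize(text: str) -> str:
    """Unescape HTML entities, then replace fancy punctuation."""
    return html.unescape(text).translate(FANCY_TO_ASCII)

print(normalize("&quot;It&#39;s here\u2026&quot;"))  # -> "It's here..."
```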