Update README.md

path: "*.parquet"
---
Here are the _mostly_ original files for some of the forums I scraped in the past, repacked as HTML strings + some metadata on a **per-thread** basis instead of a per-message basis, which should make them more convenient to handle. Unlike [the other archive](https://huggingface.co/datasets/lemonilia/Roleplay-Forums_2023-04), they shouldn't have issues with spaces between adjacent HTML tags, which were introduced by mistake in an intermediate step where single messages were extracted from the pages. Unfortunately, I no longer have the original files for most of the forums scraped in 2023; those would need to be scraped again.
## Scraping details
Most forums were scraped page-by-page using the Firefox extension [Web Scraper](https://addons.mozilla.org/en-US/firefox/addon/web-scraper/), generally only picking the top-level thread message container instead of the entire page. The scraper was configured to:

For the process to work properly, it was important not to scrape too many threads per run and to use Firefox instead of Chrome; otherwise the process could fail (very quickly in the case of Chrome). Newer versions of Web Scraper solved some reliability issues with Firefox and made exporting the data to `.csv` format much quicker.

Using Python (pandas) on the exported data, pages from the same thread were grouped together and concatenated with the string `\n<!-- [NEW PAGE STARTS BELOW] -->\n`, without any further processing, obtaining one row of data per thread. The records were then saved into Parquet files (`compression='zstd', compression_level=7, row_group_size=20`). It's important to make sure that the row group size is not too large, or issues can arise when loading the files later on, both with pandas and in the Hugging Face dataset viewer (which supports a maximum row group size of 286 MB). As some of the RP threads are huge (more than 10 megabytes in size), I had to reduce the row group size to 20 rows (about 200 MB per group) to avoid issues.
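For reference, here is a minimal sketch of that repacking step. The `pages.csv` file and the `thread_url`/`page_no`/`html` column names are only examples; the actual column names depend on how each Web Scraper sitemap was configured, so adjust accordingly.

```python
# Rough sketch of the repacking step; file and column names are examples only.
import pandas as pd

PAGE_SEPARATOR = "\n<!-- [NEW PAGE STARTS BELOW] -->\n"

# One row per scraped page, as exported from Web Scraper.
pages = pd.read_csv("pages.csv")

# Concatenate the pages of each thread with the page marker,
# obtaining one row (one big HTML string) per thread.
threads = (
    pages.sort_values(["thread_url", "page_no"])
         .groupby("thread_url", as_index=False)
         .agg(html=("html", PAGE_SEPARATOR.join))
)

# With threads that can exceed 10 MB each, 20 rows per group keeps row groups
# around 200 MB, under the dataset viewer's 286 MB limit.
threads.to_parquet(
    "threads.parquet",
    engine="pyarrow",
    compression="zstd",
    compression_level=7,
    row_group_size=20,
)
```

The extra keyword arguments are passed through to `pyarrow.parquet.write_table`, which is where `compression_level` and `row_group_size` actually take effect.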
## Usage notes
The data is not directly usable for finetuning as-is. You will need to split the threads into messages, extract metadata, clean/convert the HTML, etc. For the NSFW forums it will probably be best to somehow avoid using usernames directly, just to be nice(r).
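As a rough starting point, here is a sketch of splitting one thread back into individual messages, reusing the example file and `html` column from above. The `.message` CSS selector is a placeholder: every forum uses different markup, so the selector and cleaning rules have to be written per forum.

```python
# Rough sketch of splitting one thread back into messages; the ".message"
# selector is a placeholder and must be adapted to each forum's markup.
import pandas as pd
from bs4 import BeautifulSoup

PAGE_SEPARATOR = "\n<!-- [NEW PAGE STARTS BELOW] -->\n"

threads = pd.read_parquet("threads.parquet")
thread_html = threads.iloc[0]["html"]  # first thread, one big HTML string

messages = []
for page in thread_html.split(PAGE_SEPARATOR):
    soup = BeautifulSoup(page, "html.parser")
    for post in soup.select(".message"):  # forum-specific selector
        # Plain-text extraction only; real cleaning would preserve formatting
        # (quotes, italics, etc.) and pull out usernames/dates as metadata.
        messages.append(post.get_text("\n", strip=True))

print(f"Extracted {len(messages)} messages from the first thread")
```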
Only the roleplay sections were scraped. Note that the SFW forums can have censo
| Elliquiy | NSFW | 2023-03 to 2023-04 | "Print" version, lacks some formatting |
| Giant in the Playground | SFW | 2025-01 | |
| Lolicit | NSFW (Lolisho) | 2023-03 | Defunct website |
| Menewsha | SFW | 2025-01 | No activity at the time of scraping. Generally low-quality postings. |
## Future plans
I might add more SFW forums in the future. For NSFW, what I included probably already covers most bases. Every forum needs its own cleaning process, so adding a large number of small forums, unless needed, isn't very time-efficient.