---
license: other
license_name: agpl-3.0
license_link: https://www.gnu.org/licenses/agpl-3.0.txt
language:
  - en
pretty_name: elliquiy-rp_2023-04
size_categories:
  - 100K<n<1M
configs:
  - config_name: default
    data_files:
      - split: train
        path: elliquiy-rp_2023-04_*.parquet
---

# Elliquiy roleplaying forum data

A collection of 6,640,593 posts across 112,328 mostly high-effort adult roleplaying threads from the Elliquiy forums, spanning April 2005 through April 2023, amounting to about 9 GB of uncompressed text (including formatting tags). The data is derived from the larger raw Forum RP dataset I also uploaded.

Basic automated cleaning was performed, but the messages are still (by deliberate choice) in HTML format, with the notable exception of linebreaks, which were converted into `\n`.

In addition to the messages, some metadata is provided for convenience, as well as alternative names that can be used instead of the actual usernames (in the form User0, User1 ... UserN). These alternative names are unique per thread, but not globally.
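
For example, the per-thread mapping between real usernames and their alternative names can be rebuilt directly from the messages. A minimal sketch, assuming the files have been loaded as shown in the Basic usage section below:

```python
import pandas

df = pandas.read_parquet('elliquiy-rp_2023-04_train-00000-of-00006.parquet')

# Per-thread mapping of real usernames to their User0...UserN aliases
row = df.iloc[2350]
name_map = {m['from']: m['from-alternative'] for m in row.messages}
print(name_map)  # e.g. {'OrdinaryName': 'User0', ...}
```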

Consider this a work in progress. I might update the dataset in the future as I improve the cleaning procedure or the data format.

## Limitations and issues

During the scraping procedure (performed in April 2023), some information, such as text color and links, was lost.

Most of the data is adult-themed, and since usernames still appear inside the posts, it is best not to use them directly when training a model on this data.
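
One way to follow this advice is to scrub each thread's participant names from the post text using the per-thread aliases. A rough sketch with a simple regex pass; `post_html` is a placeholder for the HTML body of a single message, and `name_map` is the real-name → alias mapping shown earlier:

```python
import re

def scrub_usernames(post_html, name_map):
    """Replace participants' real usernames with their per-thread aliases."""
    for real, alias in name_map.items():
        # Word boundaries avoid touching substrings of longer words
        post_html = re.sub(r'\b%s\b' % re.escape(real), alias, post_html)
    return post_html
```

Note that this only covers the usernames of the thread's own participants, not other usernames mentioned in the posts.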

Given the text formatting used by many users, a complete and thorough conversion to Markdown seems very difficult without either losing information or causing formatting problems.
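
If a lossy plain-text rendering is acceptable despite that, a rough pass with a generic HTML parser is possible. A sketch using BeautifulSoup (not part of the dataset's own cleaning; emphasis, spoilers and other formatting are simply discarded):

```python
from bs4 import BeautifulSoup  # pip install beautifulsoup4

def html_to_text(post_html):
    """Lossy fallback: strip all remaining tags, keeping only the text content."""
    return BeautifulSoup(post_html, 'html.parser').get_text()
```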

Due to the nested structure of the data, I had to split the original parquet file into smaller ones in order to avoid issues when loading them with pandas in Python (via PyArrow). This appears to be a documented problem.

## Basic usage

Loading the files with pandas requires PyArrow (installable from pip); FastParquet will not work properly due to the nested data structure.

```python
import pandas

# Load a parquet file into one DataFrame
df = pandas.read_parquet('elliquiy-rp_2023-04_train-00000-of-00006.parquet')

# Load the shareGPT-like message group from one specific row into a standard Python list
messages = list(df.iloc[2350].messages)
```

Consolidate the parquet files into one large DataFrame (requires large amounts of memory):

```python
import glob
import pandas

filenames = sorted(glob.glob('*.parquet'))
parquets = []

# Read the parquet files one by one
for file in filenames:
    parquets.append(pandas.read_parquet(file))

# Concatenate the parquet files into one DataFrame
full_df = pandas.concat(parquets)
```
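
The shards can also be read through the Hugging Face `datasets` library's generic Parquet loader. A sketch (not verified against the nested message structure, which may or may not load cleanly):

```python
from datasets import load_dataset

# The glob matches every shard of the train split
ds = load_dataset('parquet', data_files='elliquiy-rp_2023-04_*.parquet', split='train')
print(ds[2350]['thread-title'])
```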

Showing thread metadata from one specific row after loading the data:

```
In [2]: df.iloc[2350]
Out[2]:
thread-id                                                        11897
thread-title                           The League of Extraordinary ...
category-id                                                         65
category-name                             Noncon: Human-Freeform Solos
participant-count                                                    3
message-count                                                      242
word-count-total                                                 35197
word-count-median                                                136.0
messages         {'from': 'OrdinaryName', 'from-alternative': 'User...
Name: 2350, dtype: object
```
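
The metadata columns also make it easy to select threads before touching the message bodies. For example, to keep two-participant threads with reasonably long posts (thresholds chosen arbitrarily for illustration):

```python
# Filter threads on the metadata columns only
subset = df[(df['participant-count'] == 2) & (df['word-count-median'] >= 100)]
print(len(subset), 'threads selected')
```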

## Dataset field explanation

### Threads

| Field | Explanation |
|-------|-------------|
| thread-id | The thread id assigned by the forum software |
| thread-title | The user-given thread title |
| category-id | The id of the subforum where the thread was posted |
| category-name | The full name of the subforum. "Small Groups" subforums are dedicated to roleplays with more than two participants; "Solo" subforums are generally for two participants, although threads with more may appear there as well |
| participant-count | The number of users writing in the thread |
| message-count | The total number of messages in the thread |
| word-count-total | The total number of whitespace-separated words in the thread, as counted with Python's `split()`, including HTML tags |
| word-count-median | The median message length in words, counted the same way, including HTML tags |

### Messages

| Field | Explanation |
|-------|-------------|
| index | Message number, starting from zero at the beginning of the thread. Added mainly for debugging purposes |
| from | The name of the user who wrote the message. Avoid using it if possible |
| from-alternative | An alternative, locally-unique name for the user, in the form User0 ... UserN |
| timestamp | Message timestamp in ISO format (UTC) |
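
Since the timestamps are plain ISO strings, they can be parsed with standard tooling, for example to measure how long a thread ran (a sketch, reusing the `df` loaded in the Basic usage section):

```python
import pandas

msgs = list(df.iloc[2350].messages)
# Each message carries an ISO-formatted UTC timestamp
first = pandas.to_datetime(msgs[0]['timestamp'])
last = pandas.to_datetime(msgs[-1]['timestamp'])
print('Thread span:', last - first)
```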

## Cleaning procedure details

### At the HTML element level

- Simplified blockquotes
- Removed all attributes from most tags
- Consolidated cosmetic font sizes into three categories: `<small>`, normal, `<big>` (deprecated tag)
- Removed font changes
- Removed special CSS effects
- Changed background-colored text into `<mark>`
- Converted spoiler tags into `<details><summary>` blocks
  - However, inline spoilers don't work well with this; to be checked at a later time
- Removed left/right "floating" `<div>` elements
- Removed left/right/justify text-alignment `<div>` elements
- Changed center-alignment `<div>` elements to `<center>` (deprecated tag)
- Recomposed URLs and their associated text into `<a>` elements, when possible
  - The data was originally scraped using a forum function that decomposed `<a>` links into text + URL
- Tried to reduce the number of `<table>` elements inappropriately used for presentation purposes
  - More work needed
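
To check which tags actually remain in a given post after this element-level cleaning, a small helper along these lines can be used (`post_html` again stands for one message's HTML body):

```python
from collections import Counter
from bs4 import BeautifulSoup

def tag_histogram(post_html):
    """Count the HTML tags that survive the cleaning in a single post."""
    soup = BeautifulSoup(post_html, 'html.parser')
    return Counter(tag.name for tag in soup.find_all(True))
```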

### At the text level

- Converted post dates to ISO format
- Removed non-standard Unicode spaces
- Changed generic spoiler text
- Removed some leftover BB tags (most often the result of user error)
  - More work needed
- Shortened some bare URLs
- Changed elliquiy.com URLs into example.com
- Removed some site-internal URLs
- Converted all smilies into emoji
- Removed excessive newlines and leading/trailing spaces
- Fixed some HTML element spacing issues
  - More work needed

### NOT done

- Replacing HTML escape characters (see the sketch after this list)
- Turning image URLs into `<img>` elements
- Balancing quote marks and other characters that are supposed to be paired
- Changing fancy punctuation to ASCII punctuation
- Removing usernames entirely from the dataset
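
For instance, HTML escape sequences are still present in the text; if needed, they can be decoded with the Python standard library:

```python
import html

# HTML escapes such as &amp; and &quot; were deliberately left in place
print(html.unescape('Tom &amp; Jerry say &quot;hello&quot;'))
# Tom & Jerry say "hello"
```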