---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
- config_name: embedding_all-MiniLM-L12-v2
  data_files:
  - split: train
    path: embedding_all-MiniLM-L12-v2/train-*
  - split: validation
    path: embedding_all-MiniLM-L12-v2/validation-*
  - split: test
    path: embedding_all-MiniLM-L12-v2/test-*
- config_name: embedding_all-mpnet-base-v2
  data_files:
  - split: train
    path: embedding_all-mpnet-base-v2/train-*
  - split: validation
    path: embedding_all-mpnet-base-v2/validation-*
  - split: test
    path: embedding_all-mpnet-base-v2/test-*
- config_name: embedding_multi-qa-mpnet-base-dot-v1
  data_files:
  - split: train
    path: embedding_multi-qa-mpnet-base-dot-v1/train-*
  - split: validation
    path: embedding_multi-qa-mpnet-base-dot-v1/validation-*
  - split: test
    path: embedding_multi-qa-mpnet-base-dot-v1/test-*
dataset_info:
- config_name: default
  features:
  - name: id
    dtype: string
  - name: text
    dtype: string
  - name: labels
    dtype:
      class_label:
        names:
          '0': non
          '1': tox
  - name: uid
    dtype: int64
  splits:
  - name: train
    num_bytes: 55430581
    num_examples: 127656
  - name: validation
    num_bytes: 13936861
    num_examples: 31915
  - name: test
    num_bytes: 27474227
    num_examples: 63978
  download_size: 62548640
  dataset_size: 96841669
- config_name: embedding_all-MiniLM-L12-v2
  features:
  - name: uid
    dtype: int64
  - name: embedding_all-MiniLM-L12-v2
    sequence: float32
  splits:
  - name: train
    num_bytes: 197611488
    num_examples: 127656
  - name: validation
    num_bytes: 49404420
    num_examples: 31915
  - name: test
    num_bytes: 99037944
    num_examples: 63978
  download_size: 484421377
  dataset_size: 346053852
- config_name: embedding_all-mpnet-base-v2
  features:
  - name: uid
    dtype: int64
  - name: embedding_all-mpnet-base-v2
    sequence: float32
  splits:
  - name: train
    num_bytes: 393691104
    num_examples: 127656
  - name: validation
    num_bytes: 98425860
    num_examples: 31915
  - name: test
    num_bytes: 197308152
    num_examples: 63978
  download_size: 827919212
  dataset_size: 689425116
- config_name: embedding_multi-qa-mpnet-base-dot-v1
  features:
  - name: uid
    dtype: int64
  - name: embedding_multi-qa-mpnet-base-dot-v1
    sequence: float32
  splits:
  - name: train
    num_bytes: 393691104
    num_examples: 127656
  - name: validation
    num_bytes: 98425860
    num_examples: 31915
  - name: test
    num_bytes: 197308152
    num_examples: 63978
  download_size: 827907964
  dataset_size: 689425116
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- found
license:
- cc0-1.0
multilinguality:
- monolingual
pretty_name: Toxic Wikipedia Comments
size_categories:
- 100K<n<1M
source_datasets:
- extended|other
tags:
- wikipedia
- toxicity
- toxic comments
task_categories:
- text-classification
task_ids:
- hate-speech-detection
---
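As a quick consistency check on the metadata: each config's `dataset_size` is the sum of its splits' `num_bytes`, and the per-row byte counts of the embedding configs line up with the known output dimensions of the sentence-transformers models (384 for `all-MiniLM-L12-v2`, 768 for the two mpnet models). A sketch with the numbers copied from the metadata; the 12-byte per-row overhead (int64 `uid` plus Arrow bookkeeping) is an assumption, not something stated in the card:

```python
# Split sizes for the `default` config, copied from the metadata.
default_splits = {"train": 55430581, "validation": 13936861, "test": 27474227}
assert sum(default_splits.values()) == 96841669  # matches dataset_size

# Infer the embedding dimension from each embedding config's train split.
# Assumed per-row layout: dim * 4 bytes (float32) + 8 (int64 uid) + 4 (overhead).
rows = 127656
for name, train_bytes, expected_dim in [
    ("all-MiniLM-L12-v2", 197611488, 384),
    ("all-mpnet-base-v2", 393691104, 768),
    ("multi-qa-mpnet-base-dot-v1", 393691104, 768),
]:
    dim = (train_bytes // rows - 12) // 4
    assert dim == expected_dim, name
```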
This is the same dataset as [OxAISH-AL-LLM/wiki_toxic](https://huggingface.co/datasets/OxAISH-AL-LLM/wiki_toxic). The only differences are:

- the addition of a unique identifier column, `uid`;
- the addition of three columns with the embeddings of three different sentence-transformers models: `all-mpnet-base-v2`, `multi-qa-mpnet-base-dot-v1`, and `all-MiniLM-L12-v2`;
- the renaming of the `label` column to `labels`, for easier compatibility with the `transformers` library.
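Since each embedding config carries only `uid` plus the precomputed vector, it is meant to be joined back onto the `default` config through the shared `uid` column. A minimal sketch of that join with toy in-memory rows (in practice the real splits would come from `datasets.load_dataset` with the config names above; the 2-dimensional vectors here are for brevity only, the real `all-MiniLM-L12-v2` embeddings are 384-dimensional):

```python
# Toy rows standing in for the `default` config (id/text/labels/uid).
default_rows = [
    {"uid": 0, "text": "a harmless comment", "labels": 0},  # 0 = "non"
    {"uid": 1, "text": "a hateful comment", "labels": 1},   # 1 = "tox"
]

# Toy rows standing in for an embedding config (uid + float32 vector),
# deliberately out of order to show the join is keyed, not positional.
embedding_rows = [
    {"uid": 1, "embedding_all-MiniLM-L12-v2": [0.3, 0.4]},
    {"uid": 0, "embedding_all-MiniLM-L12-v2": [0.1, 0.2]},
]

# Join the embeddings onto the texts via the shared `uid` key.
uid_to_vec = {r["uid"]: r["embedding_all-MiniLM-L12-v2"] for r in embedding_rows}
joined = [dict(r, embedding=uid_to_vec[r["uid"]]) for r in default_rows]
# joined[0]["embedding"] -> [0.1, 0.2]
```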