---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
- config_name: embedding_all-MiniLM-L12-v2
  data_files:
  - split: train
    path: embedding_all-MiniLM-L12-v2/train-*
  - split: validation
    path: embedding_all-MiniLM-L12-v2/validation-*
  - split: test
    path: embedding_all-MiniLM-L12-v2/test-*
- config_name: embedding_all-mpnet-base-v2
  data_files:
  - split: train
    path: embedding_all-mpnet-base-v2/train-*
  - split: validation
    path: embedding_all-mpnet-base-v2/validation-*
  - split: test
    path: embedding_all-mpnet-base-v2/test-*
- config_name: embedding_multi-qa-mpnet-base-dot-v1
  data_files:
  - split: train
    path: embedding_multi-qa-mpnet-base-dot-v1/train-*
  - split: validation
    path: embedding_multi-qa-mpnet-base-dot-v1/validation-*
  - split: test
    path: embedding_multi-qa-mpnet-base-dot-v1/test-*
dataset_info:
- config_name: default
  features:
  - name: id
    dtype: string
  - name: text
    dtype: string
  - name: labels
    dtype:
      class_label:
        names:
          '0': non
          '1': tox
  - name: uid
    dtype: int64
  splits:
  - name: train
    num_bytes: 55430581
    num_examples: 127656
  - name: validation
    num_bytes: 13936861
    num_examples: 31915
  - name: test
    num_bytes: 27474227
    num_examples: 63978
  download_size: 62548640
  dataset_size: 96841669
- config_name: embedding_all-MiniLM-L12-v2
  features:
  - name: uid
    dtype: int64
  - name: embedding_all-MiniLM-L12-v2
    sequence: float32
  splits:
  - name: train
    num_bytes: 197611488
    num_examples: 127656
  - name: validation
    num_bytes: 49404420
    num_examples: 31915
  - name: test
    num_bytes: 99037944
    num_examples: 63978
  download_size: 484421377
  dataset_size: 346053852
- config_name: embedding_all-mpnet-base-v2
  features:
  - name: uid
    dtype: int64
  - name: embedding_all-mpnet-base-v2
    sequence: float32
  splits:
  - name: train
    num_bytes: 393691104
    num_examples: 127656
  - name: validation
    num_bytes: 98425860
    num_examples: 31915
  - name: test
    num_bytes: 197308152
    num_examples: 63978
  download_size: 827919212
  dataset_size: 689425116
- config_name: embedding_multi-qa-mpnet-base-dot-v1
  features:
  - name: uid
    dtype: int64
  - name: embedding_multi-qa-mpnet-base-dot-v1
    sequence: float32
  splits:
  - name: train
    num_bytes: 393691104
    num_examples: 127656
  - name: validation
    num_bytes: 98425860
    num_examples: 31915
  - name: test
    num_bytes: 197308152
    num_examples: 63978
  download_size: 827907964
  dataset_size: 689425116
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- found
license:
- cc0-1.0
multilinguality:
- monolingual
pretty_name: Toxic Wikipedia Comments
size_categories:
- 100K<n<1M
source_datasets:
- extended|other
tags:
- wikipedia
- toxicity
- toxic comments
task_categories:
- text-classification
task_ids:
- hate-speech-detection
---
This is the same dataset as [`OxAISH-AL-LLM/wiki_toxic`](https://huggingface.co/datasets/OxAISH-AL-LLM/wiki_toxic).
The only differences are:
1. Addition of a unique identifier column, `uid`, shared across all configurations
1. Addition of precomputed embeddings: three extra configurations, each pairing `uid` with the embeddings produced by a different sentence-transformers model (see the loading sketch below)
   - `all-mpnet-base-v2`
   - `multi-qa-mpnet-base-dot-v1`
   - `all-MiniLM-L12-v2`
1. Renaming of the `label` column to `labels`, for easier compatibility with the `transformers` library
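
As a rough sketch of how the configurations fit together, the snippet below loads the default configuration and attaches one of the precomputed embedding columns through the shared `uid`. The repository id `user/toxic-wiki-comments` is a placeholder, not the actual path of this dataset.

```python
from datasets import load_dataset

# Placeholder repository id -- substitute the actual id of this dataset.
REPO_ID = "user/toxic-wiki-comments"

# Default config: id, text, labels (0 = non, 1 = tox) and the unique identifier `uid`.
ds = load_dataset(REPO_ID, "default")
print(ds["train"][0])

# Embedding config: `uid` plus one float32 vector per example.
emb = load_dataset(REPO_ID, "embedding_all-MiniLM-L12-v2")

# Join the embeddings onto the text split through the shared `uid` column.
uid_to_vec = {row["uid"]: row["embedding_all-MiniLM-L12-v2"] for row in emb["train"]}
train_with_vectors = ds["train"].map(lambda row: {"embedding": uid_to_vec[row["uid"]]})
```

Because every configuration carries the same `uid` values and split sizes, the same join works for the validation and test splits and for the other two embedding models.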