---
license: cc-by-sa-4.0
tags:
- korean
- hate-speech
- hate-sentence
---

# SJ-Donald/kor-hate-sentence

SJ-Donald/kor-hate-sentence is a dataset merged from the following sources.

## Datasets

* [smilegate-ai/kor_unsmile](https://huggingface.co/datasets/smilegate-ai/kor_unsmile)
* [korean-hate-speech](https://github.com/kocohub/korean-hate-speech/tree/master)
* [Curse-detection-data](https://github.com/2runo/Curse-detection-dat)
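
The card does not describe how the sources were combined; a minimal sketch of merging several labeled sources while dropping duplicate sentences (the rows below are made-up placeholders, not actual dataset content) might look like:

```python
# Made-up (sentence, label) pairs standing in for the three source datasets;
# in practice each source would first be normalized to a shared schema.
unsmile = [("문장 A", 1), ("문장 B", 0)]
khs = [("문장 B", 0), ("문장 C", 1)]
curse = [("문장 D", 1)]

merged = {}
for source in (unsmile, khs, curse):
    for sentence, label in source:
        # Keep the first occurrence of each sentence to avoid duplicates.
        merged.setdefault(sentence, label)

print(len(merged))  # 4 unique sentences
```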

## How to use

```python
from datasets import load_dataset

ds = load_dataset("SJ-Donald/kor-hate-sentence")
print(ds)

DatasetDict({
    train: Dataset({
        features: ['문장', 'hate', 'clean', 'labels'],
        num_rows: 26347
    })
    test: Dataset({
        features: ['문장', 'hate', 'clean', 'labels'],
        num_rows: 6587
    })
})
```
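
Once loaded, each row exposes the columns shown above. A quick sanity check of the label balance can be done by counting the `labels` column; the snippet below uses a small in-memory stand-in with the same features ('문장' is the sentence text) so it runs without downloading anything, but with the real dataset you would iterate over `ds["train"]` instead:

```python
from collections import Counter

# Toy rows mimicking the dataset's features; the text and labels are
# made up for illustration, not taken from the dataset itself.
rows = [
    {"문장": "예시 1", "hate": 1, "clean": 0, "labels": 1},
    {"문장": "예시 2", "hate": 0, "clean": 1, "labels": 0},
    {"문장": "예시 3", "hate": 1, "clean": 0, "labels": 1},
]

label_counts = Counter(row["labels"] for row in rows)
print(label_counts)  # Counter({1: 2, 0: 1})
```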