---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- ko
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: laion2B-multi-korean-subset
size_categories:
- 10M<n<100M
task_categories:
- feature-extraction
---
# laion2B-multi-korean-subset
## Dataset Description
- **Homepage:** [laion-5b](https://laion.ai/blog/laion-5b/)
- **Huggingface:** [laion/laion2B-multi](https://huggingface.co/datasets/laion/laion2B-multi)
## About the dataset
A subset of [laion/laion2B-multi](https://huggingface.co/datasets/laion/laion2B-multi) containing only the Korean data.
### License
CC-BY-4.0
## Data Structure
### Data Instance
```py
>>> from datasets import load_dataset
>>> dataset = load_dataset("Bingsu/laion2B-multi-korean-subset")
>>> dataset
DatasetDict({
    train: Dataset({
        features: ['SAMPLE_ID', 'URL', 'TEXT', 'HEIGHT', 'WIDTH', 'LICENSE', 'LANGUAGE', 'NSFW', 'similarity'],
        num_rows: 11376263
    })
})
```
```py
>>> dataset["train"].features
{'SAMPLE_ID': Value(dtype='int64', id=None),
 'URL': Value(dtype='string', id=None),
 'TEXT': Value(dtype='string', id=None),
 'HEIGHT': Value(dtype='int32', id=None),
 'WIDTH': Value(dtype='int32', id=None),
 'LICENSE': Value(dtype='string', id=None),
 'LANGUAGE': Value(dtype='string', id=None),
 'NSFW': Value(dtype='string', id=None),
 'similarity': Value(dtype='float32', id=None)}
```
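Since the full download is a few GiB, the dataset can also be read in streaming mode; a minimal sketch using the standard `datasets` streaming API:
```py
from datasets import load_dataset

# Iterate over rows without downloading the whole dataset first.
ds = load_dataset("Bingsu/laion2B-multi-korean-subset", split="train", streaming=True)
print(next(iter(ds)))
```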
### Data Size
download: 1.56 GiB<br>
generated: 2.37 GiB<br>
total: 3.93 GiB
### Data Fields
- 'SAMPLE_ID': `int`
- 'URL': `string`
- 'TEXT': `string`
- 'HEIGHT': `int`
- 'WIDTH': `int`
- 'LICENSE': `string`
- 'LANGUAGE': `string`
- 'NSFW': `string`
- 'similarity': `float`
### Data Splits
| | train |
| --------- | -------- |
| # of data | 11376263 |
## Note
### Height, Width
The image's horizontal dimension appears to be stored in `HEIGHT` and the vertical dimension in `WIDTH`, i.e., the two seem to be swapped.
```pycon
>>> dataset["train"][98]
{'SAMPLE_ID': 2937471001780,
 'URL': 'https://image.ajunews.com/content/image/2019/04/12/20190412175643597949.png',
 'TEXT': '인천시교육청, 인천 시군구발전협의회 위원진과의 간담회 개최',
 'HEIGHT': 640,
 'WIDTH': 321,
 'LICENSE': '?',
 'LANGUAGE': 'ko',
 'NSFW': 'UNLIKELY',
 'similarity': 0.33347243070602417}
```
![image](https://image.ajunews.com/content/image/2019/04/12/20190412175643597949.png)
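A quick way to check this is to compare the stored values with the actual file; a minimal sketch, assuming `requests` and `Pillow` are installed (neither is required by the dataset itself):
```py
import io

import requests
from PIL import Image

row = dataset["train"][98]
img = Image.open(io.BytesIO(requests.get(row["URL"], timeout=10).content))
print(img.size)                     # PIL reports (width, height)
print(row["WIDTH"], row["HEIGHT"])  # compare with the stored values
```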
### csv file, pandas
```py
# pip install zstandard
import pandas as pd
from huggingface_hub import hf_hub_url

url = hf_hub_url("Bingsu/laion2B-multi-korean-subset", filename="laion2B-multi-korean-subset.csv.zst", repo_type="dataset")
# url = "https://huggingface.co/datasets/Bingsu/laion2B-multi-korean-subset/resolve/main/laion2B-multi-korean-subset.csv.zst"

# pandas infers zstd compression from the .zst extension (needs the zstandard package).
df = pd.read_csv(url)
```
<https://huggingface.co/datasets/Bingsu/laion2B-multi-korean-subset/resolve/main/laion2B-multi-korean-subset.csv.zst> (778 MB)
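If the decompressed table is too large to hold comfortably in memory, the file can also be read in pieces; a minimal sketch reusing the `url` from the snippet above, with an arbitrary example `chunksize`:
```py
import pandas as pd

# Process the csv chunk by chunk instead of all at once.
n_rows = 0
for chunk in pd.read_csv(url, chunksize=1_000_000):
    n_rows += len(chunk)  # replace with your own per-chunk processing
print(n_rows)
```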
### Code used to generate
```py
import csv
import re

from datasets import load_dataset
from tqdm import tqdm

# Matches any precomposed Hangul syllable.
pattern = re.compile(r"[가-힣]")

def quote(s: str) -> str:
    # Strip triple double-quotes, which break csv quoting.
    s = s.replace('"""', "")
    return s

def filter_func(example) -> bool:
    lang = example.get("LANGUAGE")
    text = example.get("TEXT")
    if not isinstance(lang, str) or not isinstance(text, str):
        return False
    # Keep rows tagged as Korean, or whose caption contains Hangul.
    return lang == "ko" or pattern.search(text) is not None

header = [
    "SAMPLE_ID",
    "URL",
    "TEXT",
    "HEIGHT",
    "WIDTH",
    "LICENSE",
    "LANGUAGE",
    "NSFW",
    "similarity",
]

# Stream the full train split and filter it on the fly.
ds = load_dataset("laion/laion2B-multi", split="train", streaming=True)
dsf = ds.filter(filter_func)

with open("./laion2B-multi_korean_subset.csv", "w", encoding="utf-8", newline="") as file:
    writer = csv.DictWriter(file, fieldnames=header)
    writer.writeheader()
    for data in tqdm(dsf):  # total=11378843
        data["TEXT"] = quote(data.get("TEXT", ""))
        if data["TEXT"]:
            writer.writerow(data)

print("Done!")
```
The run took about 8 hours. Afterwards, rows whose `HEIGHT` or `WIDTH` was None were removed, and the result was uploaded.
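That removal step is not part of the script above; a minimal sketch of what it could have looked like (an assumption, not the author's exact code):
```py
import pandas as pd

df = pd.read_csv("./laion2B-multi_korean_subset.csv")
# Drop rows with a missing HEIGHT or WIDTH, then write the cleaned csv.
df = df.dropna(subset=["HEIGHT", "WIDTH"])
df.to_csv("./laion2B-multi_korean_subset.csv", index=False)
```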
### img2dataset
The images behind the URLs can be turned into an actual image dataset with [img2dataset](https://github.com/rom1504/img2dataset), as sketched below.
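A minimal sketch using img2dataset's Python API; the parameter values here are illustrative assumptions, not tested settings:
```py
from img2dataset import download

# Download the images referenced by URL and pack them as webdataset shards.
download(
    url_list="./laion2B-multi_korean_subset.csv",
    input_format="csv",
    url_col="URL",
    caption_col="TEXT",
    output_format="webdataset",
    output_folder="./laion2B-ko-images",
    image_size=256,     # resize target; pick what your task needs
    processes_count=8,  # tune to your CPU
    thread_count=32,    # tune to your bandwidth
)
```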