---
size_categories:
- 10K<n<100K
pretty_name: OKReddit Visionary
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
source_datasets:
- original
language:
- en
---

<div>
  <a href="https://soundcloud.com/lemmino/biosignature"><img src="https://cdn-uploads.huggingface.co/production/uploads/633e85093a17ab61de8d9073/jh7lskqN9TnF53HmKnFlh.png" title="&quot;We've switched style models from 1.5 to SDXL! Yay! And yes, it's a Style lora once more.&quot;" style="margin-left:auto;margin-right:auto"></a>
</div>

# Dataset Summary

OKReddit Visionary is a **50 GiB** collection of roughly 74K image question-and-answer pairs sourced from Reddit. The dataset has been prepared for research and archival purposes.

As the name suggests, the dataset covers a filtered selection of subreddits.

- **Curated by:** KaraKaraWitch
- **Funded by:** Recursal.ai
- **Shared by:** KaraKaraWitch
- **Language(s) (NLP):** Mainly English; other languages appear in much smaller quantities.
- **License:** `Scripts` folder are Apache 2.0. Refer to [Licensing Information](#licensing-information) for data license.

### Dataset Sources

- **Source Data:** [Academic Torrents](https://academictorrents.com/details/9c263fc85366c1ef8f5bb9da0203f4c8c8db75f4) (by stuck_in_the_matrix, Watchful1, RaiderBDev, and the Pushshift team)

## Languages

At this dataset size, all questions and answers should be in English.

## Dataset Structure

### Data Instances

The dataset can be loaded with `webdataset`. Note that images may carry any of several extensions (`jpg`, `jpeg`, or `png`); files have not been re-encoded, in order to preserve the original uploads from Reddit.

```py
import webdataset as wds

# Path to one of the packed tar files. After concatenating the shards,
# the result can be used like a regular dataset.
tar_file = "PackedTar.tar"

# Decode image payloads (jpg/jpeg/png) into PIL images.
hf_dataset = wds.WebDataset(tar_file).decode("pil")
```
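
Since samples keep their original extensions, downstream code should probe all three image keys when iterating. A minimal sketch, continuing from the block above (key names other than the image extensions are assumptions; inspect the tar contents for the actual layout):

```py
for sample in hf_dataset:
    # Each sample keeps its original extension, so check all three keys.
    image = next(sample[ext] for ext in ("jpg", "jpeg", "png") if ext in sample)
    # "__key__" is the sample's base name within the tar archive.
    print(sample["__key__"], image.size)
```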