AngelBottomless committed on
Commit 7dc9fef
1 Parent(s): 8d48473

Update README.md

Files changed (1): README.md (+95, −2)
source_datasets:
- danbooru
---

# 🎨 Danbooru2024 Dataset

![Dataset Size](https://img.shields.io/badge/Images-6.5M-blue)
![Language](https://img.shields.io/badge/Languages-EN%20|%20JA-green)
![Category](https://img.shields.io/badge/Category-Image%20Classification-orange)

## 📊 Dataset Overview

The **Danbooru2024** dataset is a comprehensive collection of animation and illustration artwork derived from the official Danbooru platform. It contains **6,457,843 high-quality, user-annotated images** (~6.5 million) with corresponding tags and textual descriptions.

The dataset was filtered down from an original set of **8.3 million entries** by excluding NSFW-rated and **opt-out** entries, creating a more accessible and audience-friendly resource. By offering a curated, well-structured snapshot, it also spares the booru databases from being crawled over and over.

## ✨ Features

### 📋 Metadata Support
Post metadata is provided as a Parquet file (`metadata.parquet`).

Example usage:

```python
# Install the necessary packages; you can choose pyarrow or fastparquet:
# %pip install pandas pyarrow

from tqdm.auto import tqdm
import pandas as pd

tqdm.pandas()  # register progress_apply

# Read the Parquet file
df = pd.read_parquet('metadata.parquet')
print(df.head())  # check the first 5 rows

# print(df.columns)  # inspect the available columns
necessary_columns = [
    "created_at", "score", "rating", "tag_string", "up_score",
    "down_score", "fav_count"
]
df = df[necessary_columns]  # keep only the columns we need
df['created_at'] = pd.to_datetime(df['created_at'])  # convert to datetime

# Select posts created in 2007
datetime_start = pd.Timestamp('2007-01-01', tz='UTC')
datetime_end = pd.Timestamp('2008-01-01', tz='UTC')
subdf = df[(df['created_at'] >= datetime_start) &
           (df['created_at'] < datetime_end)]

# Count posts per rating
print(subdf['rating'].value_counts())
# Export the sub-dataframe
subdf.to_parquet('metadata-2007.parquet')
```
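
For a quick sense of tag frequencies, the space-separated `tag_string` column can be split, exploded, and counted. A minimal sketch, using a small hypothetical in-memory frame in place of `metadata.parquet` (same column name as above):

```python
import pandas as pd

# Hypothetical stand-in for metadata.parquet rows (tag_string as above)
df = pd.DataFrame({
    "tag_string": [
        "solo 1girl long_hair",
        "solo 2girls",
        "1girl smile",
    ]
})

# tag_string is a space-separated tag list; split and count occurrences
tag_counts = (
    df["tag_string"]
    .str.split()       # split each row into a list of tags
    .explode()         # one row per tag
    .value_counts()    # frequency of each tag
)
print(tag_counts.head(10))
```

On the full metadata this yields a global tag-frequency table, useful for deciding which tags are common enough to train on.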

### 📥 Partial Downloads
To simplify downloading specific entries, use the **[CheeseChaser](https://github.com/deepghs/cheesechaser)** library:

```python
from cheesechaser.datapool import DanbooruNewestDataPool
from cheesechaser.query import DanbooruIdQuery

pool = DanbooruNewestDataPool()
# my_waifu_ids = DanbooruIdQuery(['surtr_(arknights)', 'solo'])
# The query above only works while Danbooru itself is accessible;
# if it is not, look the ids up in the local metadata instead:
import pandas as pd

# Read only the necessary columns from the Parquet file
df = pd.read_parquet('metadata.parquet',
                     columns=['id', 'tag_string'])

# 'surtr_(arknights)' is interpreted as a regex, so escape the brackets
subdf = df[df['tag_string'].str.contains('surtr_\\(arknights\\)') &
           df['tag_string'].str.contains('solo')]
ids = subdf['id'].tolist()  # use the id column, not the row index
print(ids[:5])  # check the first 5 ids

# Download danbooru images tagged surtr + solo to directory /data/exp2_surtr
pool.batch_download_to_directory(
    resource_ids=ids,
    dst_dir='/data/exp2_surtr',
    max_workers=12,
)
```
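
Before downloading, the candidate list can be narrowed further with the other metadata columns, e.g. by rating and score. A minimal sketch with a hypothetical stand-in frame (the value `'g'` as the general-rating label is an assumption about the rating scheme; adjust to the values `value_counts()` actually reports):

```python
import pandas as pd

# Hypothetical stand-in for metadata.parquet rows (columns as listed above)
df = pd.DataFrame({
    "id": [1, 2, 3, 4],
    "rating": ["g", "e", "g", "s"],
    "score": [50, 120, 3, 80],
})

# Keep only general-rated posts with a score of at least 10
# ('g' for general-rated is an assumed label)
filtered = df[(df["rating"] == "g") & (df["score"] >= 10)]
ids = filtered["id"].tolist()
print(ids)
```

The resulting `ids` list can be passed to `batch_download_to_directory` exactly as in the example above.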

## 🏷️ Dataset Information

- **License**: Other
- **Task Categories**:
  - Image Classification
  - Zero-shot Image Classification
  - Text-to-Image
- **Languages**:
  - English
  - Japanese
- **Tags**:
  - Art
  - Anime
- **Size Category**: 1M < n < 10M
- **Annotation Creators**: No annotation
- **Source Datasets**: [Danbooru](https://danbooru.donmai.us)

---
*Note: This dataset is provided for research and development purposes. Please ensure compliance with all applicable usage terms and conditions.*