Danbooru 2024 tags only in 10k tar
A dedicated dataset aligned with deepghs/danbooru2024-webp-4Mpixel.
How to use / why I created this: my speedrun to build the dataset
How to build the "dataset" with speed
Get at least 4 TB of storage and around 75 GB of RAM. Always make a separate venv / conda environment for each task.
(Optional) Download this directly: metadata.parquet
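If you only want to inspect the metadata before committing to the full download, a minimal sketch like this works (it assumes metadata.parquet is in your working directory; the actual column names depend on the file itself):

```python
# Minimal sketch: peek at metadata.parquet before the big downloads.
import pandas as pd

df = pd.read_parquet("metadata.parquet")
print(df.shape)              # number of rows / columns
print(df.columns.tolist())   # see which tag / metadata fields are available
print(df.head())
```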
Download all 10k tar files of WebP images via dl-booru2024-hfhub.py.
Rerun that script against this repo (another 10k tar files).
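For reference, a bulk download of the tar shards can also be done with plain huggingface_hub. This is only a sketch under my own assumptions (local path, worker count), not the contents of dl-booru2024-hfhub.py:

```python
# Sketch of a bulk tar download with huggingface_hub (resumable, parallel).
# Local path and worker count are assumptions; adapt to your setup.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="deepghs/danbooru2024-webp-4Mpixel",
    repo_type="dataset",
    allow_patterns=["*.tar"],                 # only grab the tar shards
    local_dir="H:/danbooru2024-webp-4Mpixel",
    max_workers=8,                            # tune to your bandwidth
)
```

Running the same call again with repo_id pointed at this repo pulls the matching tag tars.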
(Optional) Otherwise, build this dataset yourself via metadata-booru2024-tags-parallel.py.
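The rough idea of that step is to write one tag .txt per image id from the metadata. The sketch below is not the author's script, and the column names `id` and `tag_string` are hypothetical, so check the real parquet schema first:

```python
# Rough sketch: one <id>.txt per image, containing its tags, so tools can
# pair <id>.webp with <id>.txt. Column names "id" and "tag_string" are
# hypothetical -- verify against metadata.parquet.
import os

import pandas as pd

OUT_DIR = "./tags_out"
os.makedirs(OUT_DIR, exist_ok=True)

df = pd.read_parquet("metadata.parquet")
for row in df.itertuples():
    with open(os.path.join(OUT_DIR, f"{row.id}.txt"), "w", encoding="utf-8") as f:
        f.write(str(row.tag_string))
```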
Run extract-booru2024-parallel.py to extract all tars into a single directory.
> python extract-booru2024-parallel.py
100%|██████████████████████████████████████| 1000/1000 [6:48:15<00:00, 24.50s/it]
Extracted: 1000 iters
Delta: 0 files
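For context, the extraction step boils down to something like the sketch below (not the actual extract-booru2024-parallel.py; paths and worker count are assumptions):

```python
# Sketch: extract every downloaded tar shard into one flat directory in parallel.
import glob
import os
import tarfile
from concurrent.futures import ProcessPoolExecutor

from tqdm import tqdm

SRC = "H:/danbooru2024-webp-4Mpixel"   # folder with the downloaded *.tar shards
DST = "./khoyas_finetune"              # single flat output directory

def extract_one(tar_path: str) -> int:
    with tarfile.open(tar_path) as tf:
        tf.extractall(DST)
        return len(tf.getnames())

def main():
    os.makedirs(DST, exist_ok=True)
    tars = sorted(glob.glob(os.path.join(SRC, "**", "*.tar"), recursive=True))
    with ProcessPoolExecutor(max_workers=8) as pool:
        total = sum(tqdm(pool.map(extract_one, tars), total=len(tars)))
    print("Extracted files:", total)

if __name__ == "__main__":
    main()
```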
PS H:\danbooru2024-webp-4Mpixel> node
Welcome to Node.js v20.15.0.
Type ".help" for more information.
> const fs = require('fs');
> console.log(fs.readdirSync("./khoyas_finetune").length);
16010020
(Done?) Finally, instead of the official guide (which is a bit messy), follow this Reddit post to make the metadata JSON file (with ARB, i.e. aspect ratio bucketing) and start finetuning.
(Optional) meta_cap_dd.json has been added so you can skip the preprocessing. It should be enough to start finetuning directly.
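A quick way to sanity-check the bundled metadata JSON before training (the exact schema follows kohya-ss sd-scripts conventions; treat the printed structure as the source of truth):

```python
# Peek at meta_cap_dd.json: count entries and print one to see its structure.
import json

with open("meta_cap_dd.json", "r", encoding="utf-8") as f:
    meta = json.load(f)

print(len(meta), "entries")
first_key = next(iter(meta))
print(first_key, "->", meta[first_key])
```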
(In progress) You can use my training guide as a reference. Note: this is just the simplest approach; good results require DYOR.
python sdxl_train.py \
  --in_json "meta_cap_dd.json" \
  --train_data_dir="folder_extracted_dataset" \
  --output_dir="model_out_folder"