
# Danbooru 2024 tags only, in 10k tar files

## How to build the "dataset" quickly

```
> python extract-booru2024-parallel.py
100%|██████████████████████████████████████| 1000/1000 [6:48:15<00:00, 24.50s/it]
Extracted: 1000 iters
Delta: 0 files
```
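For reference, here is a minimal sketch of what a parallel extractor like `extract-booru2024-parallel.py` can look like (the actual script ships with this repo; the paths, names, and layout below are assumptions, not its real contents). It fans the downloaded `.tar` files out over a process pool and extracts everything into one flat output directory:

```python
import tarfile
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

from tqdm import tqdm

SRC = Path(".")                   # directory holding the downloaded *.tar files
DST = Path("./khoyas_finetune")   # flat output directory used later for training

def extract_one(tar_path: Path) -> int:
    """Extract a single tar into DST; return the number of members."""
    with tarfile.open(tar_path) as tf:
        members = tf.getmembers()
        tf.extractall(DST)
        return len(members)

if __name__ == "__main__":
    DST.mkdir(exist_ok=True)
    tars = sorted(SRC.glob("*.tar"))
    total = 0
    # One process per core; extraction is mostly I/O-bound, but parallelism
    # still pays off when the archives contain millions of small files.
    with ProcessPoolExecutor() as pool:
        for n in tqdm(pool.map(extract_one, tars), total=len(tars)):
            total += n
    print(f"Extracted: {len(tars)} iters")
    print(f"Files: {total}")
```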
```
PS H:\danbooru2024-webp-4Mpixel> node
Welcome to Node.js v20.15.0.
Type ".help" for more information.
> const fs = require('fs');
> console.log(fs.readdirSync("./khoyas_finetune").length);
16010020
```
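The Node.js session above just verifies the extraction by counting the files (16,010,020). An equivalent check in Python, for reference; `os.scandir` streams directory entries instead of building a 16M-element list, which matters at this scale:

```python
import os

# Count files in the extraction directory without materializing a huge list.
count = sum(1 for _ in os.scandir("./khoyas_finetune"))
print(count)  # expected: 16010020
```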
- (Done?) Finally, instead of the official guide (a bit messy), follow this Reddit post to build the metadata JSON file (with ARB, aspect-ratio bucketing) and start finetuning.

- (Optional) `meta_cap_dd.json` has been added so that you can skip the preprocessing; it should be enough to start finetuning directly. A sketch of its expected layout follows this list.

- (In progress) You can use my training guide as a reference. Note that it covers only the simplest approach; good results require DYOR.
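If you are unsure what the metadata file contains, the snippet below loads `meta_cap_dd.json` and prints one entry. The field names in the comments follow the usual kohya-ss sd-scripts metadata layout (a caption and a tag string per image key, plus bucket resolutions once latents are prepared); treat them as an assumption about this particular file, not a spec:

```python
import json

# Assumed kohya-ss sd-scripts metadata layout:
#   { "<image key>": { "caption": "...", "tags": "tag1, tag2, ..." }, ... }
# After prepare_buckets_latents.py, entries typically also carry
# "train_resolution" for aspect-ratio bucketing (ARB).
with open("meta_cap_dd.json", "r", encoding="utf-8") as f:
    meta = json.load(f)

print(f"{len(meta)} entries")
key, entry = next(iter(meta.items()))
print(key, json.dumps(entry, indent=2)[:500])
```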

```
python sdxl_train.py \
    --in_json "meta_cap_dd.json" \
    --train_data_dir="folder_extracted_dataset" \
    --output_dir="model_out_folder"
```
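Note that this shows only the dataset and output plumbing: kohya's `sdxl_train.py` also expects `--pretrained_model_name_or_path` pointing at an SDXL base checkpoint, and in practice you will set options such as `--resolution`, `--train_batch_size`, `--learning_rate`, and `--mixed_precision` to match your hardware. See the training guide above for a full invocation.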