Dataset preview: an image column (widths 517–1.51k px) and a class label with 2 classes: orig, white.

Genshin_IP Dataset

The dataset contains images of 64 characters captured manually from the game Genshin Impact at different angles, with 20 pictures per character.

The dataset is intended for training Genshin character LoRA models.

You can also get the dataset from Google Drive.

Dataset Details

Character Proportion

The character should occupy a large proportion of the image. There should not be too much background; ideally, the image is cropped tightly around the body (screenshots can be taken if necessary).
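As a rough sanity check, the character's share of the frame can be estimated from a binary foreground mask (for example, one produced by a matting tool). This is a minimal pure-Python sketch; the mask format (a 2D list of 0/1 values) is an assumption for illustration.

```python
def foreground_ratio(mask):
    """Estimate how much of the frame the character occupies.

    mask: 2D list of 0/1 values (1 = character pixel), e.g. derived
    from a matting tool. Returns the fraction of foreground pixels.
    """
    total = sum(len(row) for row in mask)
    foreground = sum(sum(row) for row in mask)
    return foreground / total


# A 4x4 frame where the character fills only the central 2x2 region.
mask = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
print(foreground_ratio(mask))  # 0.25 -> too much background; crop tighter
```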

Character Features

  1. Character features (such as hair, eyes, and clothing) should be consistent across images;
  2. Limbs should be extended naturally, without extra movements;
  3. Clothing should be simple and clean, without occlusions (such as leaves or starlight effects);
  4. Reflections and shadows should not be included.

Image Format

  1. Acceptable formats are .jpg, .jpeg, and non-transparent .png.
  2. Do not use .webp or transparent .png.
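The format rules above can be enforced with a small filter before training. This is a sketch; the helper name is hypothetical, and note it checks the extension only (transparent PNGs still need to be flattened onto a white background separately).

```python
import os

# Formats accepted by the dataset guidelines above.
ALLOWED_EXTENSIONS = {".jpg", ".jpeg", ".png"}


def is_valid_format(filename):
    """Accept only .jpg/.jpeg/.png files; reject .webp and anything else."""
    ext = os.path.splitext(filename)[1].lower()
    return ext in ALLOWED_EXTENSIONS


print(is_valid_format("ganyu_01.PNG"))   # True
print(is_valid_format("ganyu_01.webp"))  # False
```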

Image Resolution

The image resolution should be as high as possible; at least 512×1024 pixels is ideal (use Gigapixel AI for upscaling if necessary). Note that higher resolutions require higher parameter settings and more computational power.
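The resolution guideline can be checked with a small helper before collecting images into the dataset; this sketch assumes the (512, 1024) pair above means (width, height).

```python
def meets_resolution(size, min_size=(512, 1024)):
    """Return True if an image size (width, height) meets the minimum.

    min_size defaults to the (512, 1024) guideline; images below it
    should be upscaled before use.
    """
    width, height = size
    min_w, min_h = min_size
    return width >= min_w and height >= min_h


print(meets_resolution((600, 1200)))  # True
print(meets_resolution((517, 900)))   # False: height below 1024, upscale first
```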

Image Quantity

  1. The number of images to extract for each character: at most 20, ideally 16-20 (usually even 5-10 images are sufficient). More is not necessarily better; choose the optimal number.

    Note: The generated images are influenced by angle-related prompts such as "looking at viewer", "side standing", and "looking behind", and also by the proportion of each angle in the dataset.

  2. Ideally, full-body shots are preferred, but half-body shots are acceptable as well. For the proportions, you can refer to my settings:
  • total images: 20
    • front full-body: 15
      • top-down left full-body: 1
      • top-down full-body: 1
      • top-down right full-body: 1
      • left diagonal full-body: 2
      • front full-body: 5
      • right diagonal full-body: 2
      • bottom-up left full-body: 1
      • bottom-up full-body: 1
      • bottom-up right full-body: 1
    • left-side full-body: 1
    • right-side full-body: 1
    • back full-body: 1
    • front half-body: 2
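The breakdown above can be encoded as a checklist to verify that a character's folder covers every angle. The counts below are the example settings from the list; the constant name is hypothetical.

```python
# Target number of shots per angle for one character (20 images total),
# taken from the example proportions above. The nine "front full-body"
# sub-angles are listed individually.
SHOT_PLAN = {
    "top-down left full-body": 1,
    "top-down full-body": 1,
    "top-down right full-body": 1,
    "left diagonal full-body": 2,
    "front full-body": 5,
    "right diagonal full-body": 2,
    "bottom-up left full-body": 1,
    "bottom-up full-body": 1,
    "bottom-up right full-body": 1,
    "left-side full-body": 1,
    "right-side full-body": 1,
    "back full-body": 1,
    "front half-body": 2,
}

print(sum(SHOT_PLAN.values()))  # 20 images per character
```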

Background

  1. If the images come from the same source and have similar backgrounds, edit them to have a white background and set prompt.txt as:

    1[Gender], [EN_name], solo, white_background
    

    You can use the anime character matting project anime-segmentation to change the background to white; its file "inference.py" may need to be modified to:

            if opt.only_matted:
                # Original transparent-background output:
                # img = np.concatenate((mask * img + 1 - mask, mask * 255), axis=2).astype(np.uint8)
                # img = cv2.cvtColor(img, cv2.COLOR_RGBA2BGRA)
                # cv2.imwrite(f'{opt.out}/{i:06d}.png', img)

                # Change the background to white instead. The result is
                # opaque, so no alpha channel is appended and a plain
                # RGB -> BGR conversion suffices.
                white_mask = np.ones_like(mask) * 255
                img = (mask * img + white_mask * (1 - mask)).astype(np.uint8)
                img = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)
                cv2.imwrite(f'{opt.out}/{i:06d}.png', img)
    
  2. If the images come from different sources and have varying backgrounds, prompt.txt can be set as:

    1[Gender], [EN_name], solo
    

Adjustments for LoRA training

If you use the https://github.com/Akegarasu/lora-scripts code for training, the following settings are also required:

  1. By default, each image is repeated 6 times per epoch; the repeat count is set by the numeric prefix of the training folder name. More repetitions help the model learn the images, but take more time and can lead to overfitting. For example, use a folder name like "20_conan" (20 repeats); for real people, you can use even more, like "100_conan".
  2. The prompt.txt for each image should be set as:
    1[Gender], [EN_name], solo, white_background
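Writing prompt.txt for every image by hand is tedious; the caption line can be generated instead. This is a sketch, assuming the placeholder convention above (i.e. "1[Gender]" expands to "1girl" or "1boy"); the function name is hypothetical.

```python
def build_prompt(gender, en_name, white_background=True):
    """Build the prompt.txt content for one training image.

    gender: "girl" or "boy" (expands the 1[Gender] placeholder).
    en_name: the character's English name, e.g. "ganyu".
    white_background: set True for the white-background dataset variant.
    """
    tags = [f"1{gender}", en_name, "solo"]
    if white_background:
        tags.append("white_background")
    return ", ".join(tags)


print(build_prompt("girl", "ganyu"))
# 1girl, ganyu, solo, white_background
print(build_prompt("boy", "xiao", white_background=False))
# 1boy, xiao, solo
```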
    