---
license: cc-by-nc-sa-4.0
annotations_creators:
  - machine-generated
language:
  - en
language_creators:
  - other
multilinguality:
  - monolingual
pretty_name: One Piece BLIP captions
size_categories:
  - n<1K
source_datasets:
  - YaYaB/onepiece-blip-captions
tags: []
task_categories:
  - text-to-image
task_ids: []
---

## Disclaimer

This dataset was inspired by https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions.

# Dataset Card for One Piece BLIP captions

Dataset used to train a One Piece text-to-image model.

BLIP-generated captions for One Piece images collected from the web. The original images were obtained from Anime Characters and captioned with the pre-trained BLIP model.
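
Below is a minimal sketch of how such captions can be produced with a pre-trained BLIP checkpoint via the Hugging Face `transformers` library. The checkpoint name (`Salesforce/blip-image-captioning-base`), the use of `transformers` rather than the original BLIP codebase, and the `example.jpg` path are assumptions for illustration, not the exact pipeline used to build this dataset.

```python
# Hedged sketch: caption a single image with a pre-trained BLIP model.
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

# Assumed checkpoint; the dataset authors may have used a different one.
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

# "example.jpg" is a placeholder path to any One Piece character image.
image = Image.open("example.jpg").convert("RGB")

# Preprocess the image and generate a short caption.
inputs = processor(images=image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=20)
print(processor.decode(out[0], skip_special_tokens=True))
```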

Each row of the dataset contains `image` and `text` keys: `image` is a variable-sized PIL JPEG, and `text` is the accompanying caption. Only a `train` split is provided.
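
For example, the dataset can be loaded with the standard `datasets` library (the dataset ID is the one on this card; the rest is a usage sketch):

```python
# Load the dataset and inspect one example.
from datasets import load_dataset

ds = load_dataset("YaYaB/onepiece-blip-captions", split="train")

example = ds[0]
print(example["image"])  # PIL image of varying size
print(example["text"])   # BLIP-generated caption string
```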

## Examples

| Image | Caption |
|---|---|
| pk1.jpg | a man in a straw hat |
| pk10.jpg | a man in a green coat holding two swords |
| pk100.jpg | a man with red hair and a black coat |

## Citation

If you use this dataset, please cite it as:

```bibtex
@misc{yayab2022onepiece,
  author = {YaYaB},
  title = {One Piece BLIP captions},
  year = {2022},
  howpublished = {\url{https://huggingface.co/datasets/YaYaB/onepiece-blip-captions/}}
}
```