---
dataset_info:
  features:
    - name: image
      dtype: image
    - name: caption
      dtype: string
  splits:
    - name: train
      num_bytes: 171055893.125
      num_examples: 1087
  download_size: 170841790
  dataset_size: 171055893.125
language:
  - en
task_categories:
  - text-to-image
annotations_creators:
  - machine-generated
size_categories:
  - 1K<n<10K
---

## Disclaimer

This dataset was inspired by https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions.

# Dataset Card for a subset of Vivian Maier's photographs with BLIP captions

The captions were generated with the pre-trained BLIP image-captioning model.
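The card does not state which BLIP checkpoint or settings were used; the sketch below is only an illustration of how such captions can be produced, assuming the `Salesforce/blip-image-captioning-base` checkpoint from the `transformers` library and a hypothetical local file `vv1.jpg`.

```python
# Minimal sketch: caption a single photograph with a pre-trained BLIP checkpoint.
# The exact checkpoint used for this dataset is an assumption, not documented here.
from PIL import Image
from transformers import BlipForConditionalGeneration, BlipProcessor

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

image = Image.open("vv1.jpg").convert("RGB")            # one input photograph
inputs = processor(images=image, return_tensors="pt")   # preprocess to pixel values
out = model.generate(**inputs, max_new_tokens=30)       # generate caption token ids
print(processor.decode(out[0], skip_special_tokens=True))
```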

Each row of the dataset contains `image` and `caption` keys: `image` is a variable-size PIL JPEG, and `caption` is the accompanying text caption. Only a `train` split is provided.
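The dataset can be loaded with the Hugging Face `datasets` library; a minimal sketch, using the repository id from the dataset URL in the citation below:

```python
# Minimal sketch: load the train split and inspect one example.
from datasets import load_dataset

ds = load_dataset("cQueenccc/Vivian-Blip-Captions", split="train")
print(ds)                  # Dataset with 'image' and 'caption' features, 1087 rows

example = ds[0]
example["image"]           # decoded lazily to a PIL image on access
print(example["caption"])  # the BLIP-generated caption string
```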

## Examples

| image | caption |
|---|---|
| vv1.jpg | A group of people |
| vv10.jpg | person floating in the water |
| vv100.jpg | a person standing next to a refrigerator |

## Citation

If you use this dataset, please cite it as:

```bibtex
@misc{cqueenccc2023vivian,
  author       = {cQueenccc},
  title        = {Vivian Maier's photograph split BLIP captions},
  year         = {2023},
  howpublished = {\url{https://huggingface.co/datasets/cQueenccc/Vivian-Blip-Captions/}}
}
```