---
dataset_info:
  features:
  - name: image
    dtype: image
  - name: caption
    dtype: string
  splits:
  - name: train
    num_bytes: 171055893.125
    num_examples: 1087
  download_size: 170841790
  dataset_size: 171055893.125
language:
- en
task_categories:
- text-to-image
annotations_creators:
- machine-generated
size_categories:
- 1K<n<10K
---
# Disclaimer
This dataset was inspired by [lambdalabs/pokemon-blip-captions](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions).
# Dataset Card for a Subset of Vivian Maier's Photographs with BLIP Captions
The captions are generated with the [pre-trained BLIP model](https://github.com/salesforce/BLIP).
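As an illustration, similar captions can be produced with the `transformers` port of BLIP. This is only a sketch of the captioning step, not the exact pipeline used for this dataset; the checkpoint name and input path are assumptions:
```
# A minimal captioning sketch using the transformers port of BLIP.
# Checkpoint and image path are assumptions, not the exact setup used here.
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

image = Image.open("photo.jpg").convert("RGB")  # hypothetical input photograph
inputs = processor(images=image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(out[0], skip_special_tokens=True))
```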
Each row contains an `image` key and a `caption` key: `image` is a variable-sized PIL JPEG image, and `caption` is the accompanying text caption. Only a train split is provided.
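For reference, a minimal loading sketch with the Hugging Face `datasets` library (the repository id is taken from the citation below):
```
from datasets import load_dataset

# Only a train split is provided.
ds = load_dataset("cQueenccc/Vivian-Blip-Captions", split="train")

sample = ds[0]
print(sample["caption"])  # BLIP-generated text caption
sample["image"]           # variable-sized PIL JPEG image
```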
## Examples
![vv1.jpg](https://raw.githubusercontent.com/CQUEEN-lpy/cqueenccc.github.io/main/imgs/vivian_a%20group%20of%20people.jpg)
> a group of people
![vv10.jpg](https://raw.githubusercontent.com/CQUEEN-lpy/cqueenccc.github.io/main/imgs/vivian_a%20person%20floating%20in%20the%20water.jpg)
> a person floating in the water
![vv100.jpg](https://raw.githubusercontent.com/CQUEEN-lpy/cqueenccc.github.io/main/imgs/vivian_a%20person%20standing%20next%20to%20a%20refrigerator.jpg)
> a person standing next to a refrigerator
## Citation
If you use this dataset, please cite it as:
```
@misc{cqueenccc2023vivian,
  author       = {cQueenccc},
  title        = {Vivian Maier's photograph split BLIP captions},
  year         = {2023},
  howpublished = {\url{https://huggingface.co/datasets/cQueenccc/Vivian-Blip-Captions/}}
}
```