---
license: mit
language:
  - en
pretty_name: BioTrove
task_categories:
  - image-classification
  - zero-shot-classification
tags:
  - biology
  - image
  - animals
  - species
  - taxonomy
  - rare species
  - endangered species
  - evolutionary biology
  - balanced
  - CV
  - multimodal
  - CLIP
  - knowledge-guided
size_categories:
  - 100M<n<1B
---

# BioTrove: A Large Curated Image Dataset Enabling AI for Biodiversity

## Description

See the BioTrove dataset card on HuggingFace to access the main BioTrove dataset (161.9M images).

BioTrove comprises well-processed metadata with full taxonomic information and URLs pointing to image files. The metadata can be used to filter specific categories, visualize the data distribution, and manage class imbalance effectively. We provide a collection of software tools that enable users to easily download, access, and manipulate the dataset.
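As a minimal sketch of this kind of metadata filtering, the snippet below selects records belonging to one taxonomic category. The field names (`taxon_class`, `species`, `url`) and the in-memory records are illustrative assumptions, not the actual chunk schema; consult the metadata files for the real column names.

```python
# Sketch: filtering BioTrove-style metadata records by taxonomic category.
# Field names ("taxon_class", "species", "url") are assumptions for
# illustration; check the real chunk schema before use.
records = [
    {"taxon_class": "Aves", "species": "Corvus corax", "url": "https://example.org/1.jpg"},
    {"taxon_class": "Insecta", "species": "Apis mellifera", "url": "https://example.org/2.jpg"},
    {"taxon_class": "Aves", "species": "Passer domesticus", "url": "https://example.org/3.jpg"},
]

def filter_by_class(rows, taxon_class):
    """Return only the metadata rows belonging to one taxonomic category."""
    return [r for r in rows if r["taxon_class"] == taxon_class]

aves = filter_by_class(records, "Aves")
print(len(aves))  # 2
```

The same pattern extends to counting rows per category to inspect the data distribution before downloading any images.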

## BioTrove-Train Dataset (40M)

BioTrove-Train comprises over 40M image samples and 33K species across seven taxonomic categories: Aves, Arachnida, Insecta, Plantae, Fungi, Mollusca, and Reptilia.
These categories were chosen for their significant impact on biodiversity and agricultural ecosystems, as well as their relative underrepresentation in standard image recognition and foundation models.
Overall, this dataset nearly matches the state-of-the-art curated dataset (TREEOFLIFE-10M) in species diversity, while comfortably exceeding it in scale by a factor of nearly 4.

## New Benchmark Datasets

We created three new benchmark datasets for fine-grained image classification, including one for species recognition across various developmental life stages.

### BioTrove-Balanced

To provide a balanced species distribution across the seven categories, we curated BioTrove-Balanced. Each category includes up to 500 species, with 50 images per species, totaling ~112K image samples.
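The capping rule above (at most 500 species per category, 50 images per species) can be sketched as follows. The record layout and the small caps are placeholders chosen so the example runs on toy data; the real curation pipeline is not published in this card.

```python
import random
from collections import defaultdict

# Sketch of the balanced-subset idea: keep at most MAX_SPECIES species per
# category and MAX_IMAGES images per species. Record fields are assumptions.
MAX_SPECIES, MAX_IMAGES = 2, 2  # BioTrove-Balanced uses 500 and 50

# Toy metadata: 12 records over 3 species in one category.
records = [
    {"taxon_class": "Aves", "species": f"bird_{i % 3}", "url": f"u{i}"}
    for i in range(12)
]

# Group image records by (category, species).
by_species = defaultdict(list)
for r in records:
    by_species[(r["taxon_class"], r["species"])].append(r)

rng = random.Random(0)  # seeded for reproducibility
balanced = []
per_class_species = defaultdict(int)
for (cls, sp), rows in sorted(by_species.items()):
    if per_class_species[cls] >= MAX_SPECIES:
        continue  # category already has its quota of species
    per_class_species[cls] += 1
    balanced.extend(rng.sample(rows, min(MAX_IMAGES, len(rows))))

print(len(balanced))  # 2 species kept * 2 images each = 4
```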

### BioTrove-Unseen

To provide a robust benchmark for evaluating the generalization capability of models on unseen species, we curated BioTrove-Unseen. The test set was constructed by identifying species with fewer than 30 instances in BioTrove, ensuring that it contains species never seen by BioTrove-CLIP during training. Each species contains 10 images, totaling ~11.9K image samples.
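The selection rule above (species with fewer than 30 occurrences, up to 10 images each) can be sketched like this. The thresholds are scaled down and the record fields are assumptions so the example is self-contained.

```python
from collections import Counter

# Sketch of the BioTrove-Unseen selection rule: a species qualifies if it
# occurs fewer than RARE_THRESHOLD times in the full metadata; from each
# qualifying species, keep up to IMAGES_PER_SPECIES samples.
RARE_THRESHOLD = 3      # BioTrove-Unseen uses 30
IMAGES_PER_SPECIES = 2  # BioTrove-Unseen uses 10

records = (
    [{"species": "common_sp", "url": f"c{i}"} for i in range(5)]
    + [{"species": "rare_sp", "url": f"r{i}"} for i in range(2)]
)

counts = Counter(r["species"] for r in records)
rare = {sp for sp, n in counts.items() if n < RARE_THRESHOLD}

unseen, taken = [], Counter()
for r in records:
    if r["species"] in rare and taken[r["species"]] < IMAGES_PER_SPECIES:
        unseen.append(r)
        taken[r["species"]] += 1

print(len(unseen))  # only the 2 "rare_sp" records qualify
```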

### BioTrove-LifeStages

To assess a model's ability to recognize species across various developmental stages, we curated BioTrove-LifeStages. The dataset focuses on insects, since these species often exhibit significant visual differences across their lifespan, and has 20 labels in total: five insect species, each filtered by life stage (egg, larva, pupa, or adult). Data were collected using the observation export feature on the iNaturalist platform between Feb 1, 2024 and May 20, 2024 to ensure no overlap with the training dataset.
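The 20 labels arise as the cross product of five species and four life-stage filters; a tiny sketch (the species names here are placeholders, not the actual five species):

```python
from itertools import product

# Sketch of how the 20 BioTrove-LifeStages labels arise: five insect
# species crossed with the four iNaturalist life-stage filters.
# The species names below are placeholders.
species = ["species_a", "species_b", "species_c", "species_d", "species_e"]
stages = ["egg", "larva", "pupa", "adult"]

labels = [f"{sp} ({stage})" for sp, stage in product(species, stages)]
print(len(labels))  # 5 species * 4 stages = 20
```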

## Dataset Information

- **Full Taxa Information:** Detailed metadata, including taxonomic hierarchy and image URLs.
- **Comprehensive Metadata:** Enables filtering, visualization, and effective management of data imbalance.
- **Software Tools:** Collection of tools for easy dataset access, download, and manipulation.
- **Balanced Species Distribution:** Up to 500 species per category with 50 images per species.
- **Unseen Species Benchmark:** Includes species with fewer than 30 instances to evaluate generalization capability.
- **Life Stages Dataset:** Focuses on insects across various developmental stages.

## BioTrove-CLIP Models

See the BioTrove-CLIP model card on HuggingFace to download the trained model checkpoints.

We released three trained model checkpoints in the BioTrove-CLIP model card on HuggingFace. These CLIP-style models were trained on BioTrove-Train for the following configurations:

- **BioTrove-CLIP-O:** ViT-B/16 backbone initialized from OpenCLIP's checkpoint, trained for 40 epochs.
- **BioTrove-CLIP-B:** ViT-B/16 backbone initialized from BioCLIP's checkpoint, trained for 8 epochs.
- **BioTrove-CLIP-M:** ViT-L/14 backbone initialized from MetaCLIP's checkpoint, trained for 12 epochs.

## Usage

To start using the BioTrove dataset, follow the instructions provided in the GitHub repository. Model checkpoints are shared in the BioTrove-CLIP HuggingFace model card. Metadata files are included in the directory listed below; please download the metadata and pre-process the data using the `biotrove_process` PyPI library. Instructions for using the library can be found in its README, which contains a detailed description of the data preparation steps.

## Directory

```
main/
├── BioTrove/
│   ├── chunk_0.csv
│   ├── chunk_0.parquet
│   ├── chunk_1.parquet
│   ├── ...
│   └── chunk_2692.parquet
├── BioTrove-benchmark/
│   ├── BioTrove-Balanced.csv
│   ├── BioTrove-Balanced.parquet
│   ├── BioTrove-Lifestages.csv
│   ├── BioTrove-Lifestages.parquet
│   ├── BioTrove-Unseen.csv
│   └── BioTrove-Unseen.parquet
├── README.md
└── .gitignore
```
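A minimal sketch of iterating over the metadata chunk files laid out as above, using only the standard library. A tiny stand-in chunk is written to a temporary directory so the example is self-contained; the real column names may differ from the `species`/`url` fields assumed here.

```python
import csv
import glob
import os
import tempfile

# Create a stand-in for the BioTrove/ chunk directory so this sketch runs
# without the actual dataset. Column names are illustrative assumptions.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "chunk_0.csv"), "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["species", "url"])
    writer.writeheader()
    writer.writerow({"species": "Apis mellifera", "url": "https://example.org/1.jpg"})

# Collect all rows across every chunk_*.csv file, in sorted order.
rows = []
for path in sorted(glob.glob(os.path.join(tmp, "chunk_*.csv"))):
    with open(path, newline="") as f:
        rows.extend(csv.DictReader(f))

print(len(rows))  # 1 row from the single stand-in chunk
```

For the `.parquet` chunks, a parquet-capable reader (e.g. pandas with pyarrow) would replace the `csv` calls; the iteration pattern is the same.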

## Acknowledgements

This work was supported by the AI Research Institutes program of the NSF and USDA-NIFA under the AI Institute for Resilient Agriculture, Award No. 2021-67021-35329, and in part by the NSF under CPS Frontier grant CNS-1954556. We also gratefully acknowledge the support of NYU IT High Performance Computing resources, services, and staff expertise.

## Citation

If you find this dataset useful in your research, please consider citing our paper:

```bibtex
@misc{yang2024BioTrovelargemultimodaldataset,
  title={BioTrove: A Large Multimodal Dataset Enabling AI for Biodiversity},
  author={Chih-Hsuan Yang and Benjamin Feuer and Zaki Jubery and Zi K. Deng and
          Andre Nakkab and Md Zahid Hasan and Shivani Chiranjeevi and Kelly Marshall and
          Nirmal Baishnab and Asheesh K Singh and Arti Singh and Soumik Sarkar and
          Nirav Merchant and Chinmay Hegde and Baskar Ganapathysubramanian},
  year={2024},
  eprint={2406.17720},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2406.17720},
}
```

For more details and access to the dataset, please visit the Project Page.