---
dataset_info:
  features:
    - name: file_name
      dtype: string
    - name: file_size
      dtype: float64
    - name: file_path
      dtype: string
    - name: animal_name
      dtype: string
    - name: low_level
      dtype: string
    - name: high_level
      dtype: string
    - name: image
      dtype: image
  splits:
    - name: train
      num_bytes: 860741621.468
      num_examples: 1163
    - name: test
      num_bytes: 94474930
      num_examples: 130
    - name: validation
      num_bytes: 103795765
      num_examples: 130
  download_size: 1030692259
  dataset_size: 1059012316.468
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
      - split: validation
        path: data/validation-*
---

# Dataset Card for Benchmarking Large Language Models for Image Classification of Marine Mammals

As Artificial Intelligence (AI) has developed rapidly over the past few decades, the new generation of AI, Large Language Models (LLMs) trained on massive datasets, has achieved ground-breaking performance in many applications. Further progress has been made in multimodal LLMs, with many datasets created to evaluate LLMs with vision abilities. However, none of those datasets focuses solely on marine mammals, which are indispensable for ecological equilibrium. In this work, we build a benchmark dataset with 1,423 images of 65 kinds of marine mammals, where each animal is uniquely classified into different levels of class, ranging from species-level to medium-level to group-level. Moreover, we evaluate several approaches for classifying these marine mammals: (1) machine learning (ML) algorithms using embeddings provided by neural networks, (2) influential pre-trained neural networks, (3) zero-shot models: CLIP and LLMs, and (4) a novel LLM-based multi-agent system (MAS). The results demonstrate the strengths of traditional models and LLMs in different aspects, and the MAS can further improve the classification performance.
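The summary above lists zero-shot CLIP as one of the evaluated approaches. The snippet below is a minimal sketch of that kind of setup, not the authors' exact pipeline: the dataset repository id, the CLIP checkpoint, and the prompt template are placeholder assumptions, and the class names are read from the dataset's `animal_name` column.

```python
# Hedged sketch of zero-shot CLIP classification (approach 3 in the summary above).
# The repo id, checkpoint, and prompt template are illustrative placeholders.
import torch
from datasets import load_dataset
from transformers import CLIPModel, CLIPProcessor

ds = load_dataset("<namespace>/<dataset-name>", split="test")  # hypothetical repo id
labels = sorted(set(ds["animal_name"]))  # species-level class names from the dataset
prompts = [f"a photo of a {name}" for name in labels]  # assumed prompt template

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

example = ds[0]
inputs = processor(text=prompts, images=example["image"], return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image has shape (num_images, num_prompts); softmax gives per-label probabilities.
probs = outputs.logits_per_image.softmax(dim=-1)
print("true label:", example["animal_name"])
print("predicted :", labels[int(probs.argmax())])
```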

## Dataset Details

### Dataset Description

- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]

### Dataset Sources [optional]

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

### Direct Use

The dataset is intended as a benchmark for classifying marine mammal images, including at the hierarchical class levels described in the summary above.
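As one hedged sketch in the spirit of approach (1) from the summary (embeddings from a neural network plus a classical ML classifier), the code below extracts image embeddings with a pretrained vision backbone and fits a scikit-learn logistic regression. The repository id and backbone checkpoint are placeholder assumptions, not values stated on this card.

```python
# Hedged sketch: pretrained-backbone embeddings + a classic ML classifier.
# The repo id and checkpoint are placeholders, not values stated on this card.
import numpy as np
import torch
from datasets import load_dataset
from sklearn.linear_model import LogisticRegression
from transformers import AutoImageProcessor, AutoModel

ds = load_dataset("<namespace>/<dataset-name>")  # hypothetical repo id
processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
backbone = AutoModel.from_pretrained("google/vit-base-patch16-224-in21k")

def embed(batch):
    inputs = processor(images=batch["image"], return_tensors="pt")
    with torch.no_grad():
        out = backbone(**inputs)
    # Mean-pool the token representations into one vector per image.
    batch["embedding"] = out.last_hidden_state.mean(dim=1).numpy()
    return batch

train = ds["train"].map(embed, batched=True, batch_size=16)
test = ds["test"].map(embed, batched=True, batch_size=16)

clf = LogisticRegression(max_iter=1000)
clf.fit(np.stack(train["embedding"]), train["animal_name"])
print("test accuracy:", clf.score(np.stack(test["embedding"]), test["animal_name"]))
```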

### Out-of-Scope Use

[More Information Needed]

## Dataset Structure

Per the metadata above, each example provides an `image` plus string fields `file_name`, `file_path`, `animal_name`, `low_level`, and `high_level` (the latter three reflecting the hierarchical class levels described above), along with a `file_size` float. The data ships in three splits: train (1,163 examples), test (130), and validation (130).
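A minimal loading-and-inspection sketch; the repository id below is a placeholder assumption, since the card does not state one.

```python
# Minimal sketch for loading and inspecting the splits described in the metadata.
from datasets import load_dataset

ds = load_dataset("<namespace>/<dataset-name>")  # hypothetical repo id

print(ds)  # expect train / test / validation splits, per the metadata above

example = ds["train"][0]
print(example["animal_name"], example["low_level"], example["high_level"])
print(example["image"].size)  # the `image` feature decodes to a PIL image
```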

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Data Collection and Processing

[More Information Needed]

#### Who are the source data producers?

[More Information Needed]

### Annotations [optional]

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

#### Personal and Sensitive Information

[More Information Needed]

## Bias, Risks, and Limitations

[More Information Needed]

### Recommendations

Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.

## Citation

**BibTeX:**

```bibtex
@article{qi2024benchmarking,
  title={Benchmarking Large Language Models for Image Classification of Marine Mammals},
  author={Qi, Yijiashun and Cai, Shuzhang and Zhao, Zunduo and Li, Jiaming and Lin, Yanbin and Wang, Zhiqiang},
  journal={arXiv preprint arXiv:2410.19848},
  year={2024}
}
```

## Glossary [optional]

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Dataset Card Authors [optional]

[More Information Needed]

## Dataset Card Contact

[More Information Needed]