---
library_name: transformers
tags:
- biology
- biodiversity
co2_eq_emissions:
  emissions: 240
  source: https://calculator.green-algorithms.org/
  training_type: pre-training
  geographical_location: Switzerland
  hardware_used: 1 v100 GPU
license: apache-2.0
datasets:
- Saving-Willy/Happywhale-kaggle
- Saving-Willy/test-sync
metrics:
- accuracy
pipeline_tag: image-classification
---
# Model Card for CetaceaNet
We provide a model for classifying whale species from images of their tails and fins.
## Model Details
The model takes as input a natural image of a cetacean and returns the three most probable cetacean species identified in this image.
### Model Description
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** HappyWhale
- **Shared by:** The Saving-Willy organization
- **Model type:** EfficientNet
### Model Sources
- **Repository:** https://github.com/knshnb/kaggle-happywhale-1st-place
- **Paper:** https://besjournals.onlinelibrary.wiley.com/doi/10.1111/2041-210X.14167
## Uses
This model is intended for research use. It is designed to be fine-tuned on new data gathered by research institutions around the world.
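As a rough illustration of what such fine-tuning could look like, the sketch below treats the loaded checkpoint as a plain PyTorch module whose forward pass returns class logits for a batch of image tensors. The actual custom model class on the Hub may expose a different interface, and the random placeholder data, input size, class count, and hyperparameters are illustrative assumptions only.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import AutoModelForImageClassification

model = AutoModelForImageClassification.from_pretrained(
    "Saving-Willy/cetacean-classifier", trust_remote_code=True
)
model.train()

# Placeholder dataset: random image tensors and species labels stand in for a
# real collection of newly gathered fin/fluke photographs.
images = torch.randn(8, 3, 224, 224)   # assumed input size
labels = torch.randint(0, 26, (8,))    # assumed number of species classes
loader = DataLoader(TensorDataset(images, labels), batch_size=4)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = torch.nn.CrossEntropyLoss()

for batch_images, batch_labels in loader:
    optimizer.zero_grad()
    logits = model(batch_images)       # assumed to return raw class logits
    loss = criterion(logits, batch_labels)
    loss.backward()
    optimizer.step()
```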
### Downstream Use
We think an interesting downstream use case would be identifying individual whales (whale IDs) based on our model (and future extensions of it).
### Out-of-Scope Use
This model is not intended to facilitate marine tourism or the exploitation of cetaceans or other marine wildlife.
## How to Get Started with the Model
Install the necessary libraries to run our model (`transformers` plus the extra packages listed in `requirements.txt`):

```bash
pip install transformers
pip install -r requirements.txt
```
Use the code below to get started with the model.
```python
import cv2
from transformers import AutoModelForImageClassification

# Load the classifier from the Hugging Face Hub (custom model code is required).
cetacean_classifier = AutoModelForImageClassification.from_pretrained(
    "Saving-Willy/cetacean-classifier", trust_remote_code=True
)

# Read an image of a whale tail or fin and get the top-3 predicted species.
img = cv2.imread("tail.jpg")
predictions = cetacean_classifier(img)
```
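To classify several images at once, the same call pattern can be reused in a simple loop. This is a minimal sketch: the folder name and file pattern are placeholders, and each entry holds whatever the classifier returns for that image, as in the example above.

```python
import glob

import cv2
from transformers import AutoModelForImageClassification

cetacean_classifier = AutoModelForImageClassification.from_pretrained(
    "Saving-Willy/cetacean-classifier", trust_remote_code=True
)

# Hypothetical folder of fluke/fin photographs to classify.
results = {}
for path in glob.glob("photos/*.jpg"):
    img = cv2.imread(path)   # BGR numpy array, as in the example above
    if img is None:          # skip unreadable files
        continue
    results[path] = cetacean_classifier(img)

print(results)
```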
## Training and Evaluation Details
To learn more about how the model was trained and evaluated, see the [1st Place Solution of the Kaggle Happywhale Competition](https://github.com/knshnb/kaggle-happywhale-1st-place).
## Citation
If you use this model in your research, please cite the original model authors:
```bibtex
@article{patton2023deep,
  title={A deep learning approach to photo--identification demonstrates high performance on two dozen cetacean species},
  author={Patton, Philip T and Cheeseman, Ted and Abe, Kenshin and Yamaguchi, Taiki and Reade, Walter and Southerland, Ken and Howard, Addison and Oleson, Erin M and Allen, Jason B and Ashe, Erin and others},
  journal={Methods in Ecology and Evolution},
  volume={14},
  number={10},
  pages={2611--2625},
  year={2023},
  publisher={Wiley Online Library}
}
```
and the HappyWhale project:
```bibtex
@misc{happy-whale-and-dolphin,
  author = {Ted Cheeseman and Ken Southerland and Walter Reade and Addison Howard},
  title = {Happywhale - Whale and Dolphin Identification},
  year = {2022},
  howpublished = {\url{https://kaggle.com/competitions/happy-whale-and-dolphin}},
  note = {Kaggle}
}
```