---
license: cc-by-nc-4.0
library_name: transformers
datasets:
- imagenet-1k
---
# Hiera mae_in1k_ft_in1k
This model is the Transformers-format conversion of the **Hiera** model `mae_in1k_ft_in1k` from https://github.com/facebookresearch/hiera.
[Hiera: A Hierarchical Vision Transformer without the Bells-and-Whistles](https://arxiv.org/abs/2306.00989)
```py
from PIL import Image
import torch
from transformers import AutoModelForImageClassification, AutoImageProcessor
REPO = "p1atdev/hiera_mae_in1k_ft_in1k"
processor = AutoImageProcessor.from_pretrained(REPO)
model = AutoModelForImageClassification.from_pretrained(REPO, trust_remote_code=True)
image = Image.open("image.png")
with torch.no_grad():
    outputs = model(**processor(image, return_tensors="pt"))
print(outputs.logits.argmax(-1).item())
# 207 (golden retriever, ImageNet-1k)
```
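Beyond the top-1 index, the logits can be decoded into a ranked top-5 list with `torch.topk`. A minimal sketch, run here on synthetic logits so it executes without downloading the checkpoint; with the real model, pass `outputs.logits` instead and map each index through `model.config.id2label`:

```python
import torch

# Synthetic logits standing in for outputs.logits: batch of 1, 1000 ImageNet classes.
torch.manual_seed(0)
logits = torch.randn(1, 1000)

# Softmax over the class dimension, then take the 5 most probable classes.
probs = logits.softmax(dim=-1)
top = probs.topk(5, dim=-1)

for p, idx in zip(top.values[0], top.indices[0]):
    # With the real model: label = model.config.id2label[idx.item()]
    print(f"class {idx.item()}: {p.item():.4f}")
```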