---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
- image-classification
metrics:
- accuracy
model-index:
- name: fashion-clothing-decade
results: []
pipeline_tag: image-classification
---
# Fashion Clothing Decade
This model predicts which decade an item of clothing is from. It takes an image as input and outputs one of the following labels:
**1910s, 1920s, 1930s, 1940s, 1950s, 1960s, 1970s, 1980s, 1990s, 2000s**
Try the [demo](https://huggingface.co/spaces/tonyassi/Which-decade-are-you-from)!
### How to use
```python
from transformers import pipeline

# Load the image-classification pipeline backed by this model
pipe = pipeline("image-classification", model="tonyassi/fashion-clothing-decade")

# The input can be a file path, a URL, or a PIL.Image
result = pipe("image.png")
print(result)
```
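The pipeline returns a list of `{label, score}` dictionaries. If you prefer to work with the model directly (for example, to get the raw logits or the label mapping), the snippet below is a minimal sketch using the standard `AutoImageProcessor` / `AutoModelForImageClassification` classes; `image.png` is a placeholder path.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "tonyassi/fashion-clothing-decade"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

# Load and preprocess the image (placeholder path)
image = Image.open("image.png").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring class index back to its decade label
predicted = logits.argmax(-1).item()
print(model.config.id2label[predicted])  # e.g. "1980s"
```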
## Dataset
Trained on a total of 2,500 images, roughly 250 per label.
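The training images themselves are not published with this card. As an illustration only, a dataset organized with one folder per decade label could be loaded and its class balance checked with the `datasets` imagefolder builder; the `fashion_decades/` directory layout below is an assumption, not the actual training data.

```python
from collections import Counter

from datasets import load_dataset

# Assumed layout: fashion_decades/<label>/<image>.jpg, e.g. fashion_decades/1950s/dress_01.jpg
dataset = load_dataset("imagefolder", data_dir="fashion_decades", split="train")

# Check the roughly-250-images-per-label balance described above
label_names = dataset.features["label"].names
counts = Counter(label_names[i] for i in dataset["label"])
print(counts)
```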
### 1910s
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/648a824a8ca6cf9857d1349c/zdb7EyuVxp1ncGrkoAT7h.jpeg)
### 1920s
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/648a824a8ca6cf9857d1349c/GGM1mMwezbsfPg2dKIvvd.jpeg)
### 1930s
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/648a824a8ca6cf9857d1349c/rDcMdiH3q7UHtQcfSLYzn.jpeg)
### 1940s
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/648a824a8ca6cf9857d1349c/TpDsDnXMubqvfu8dn6nNA.jpeg)
### 1950s
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/648a824a8ca6cf9857d1349c/lpMCJ9PfolWjhFqb81D1w.jpeg)
### 1960s
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/648a824a8ca6cf9857d1349c/x0FOiI2IMtHXthCafa76t.jpeg)
### 1970s
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/648a824a8ca6cf9857d1349c/H45UJGv9lzXlxF_Z616Cj.jpeg)
### 1980s
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/648a824a8ca6cf9857d1349c/74d7kg69pRFDrv1QjTt9G.jpeg)
### 1990s
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/648a824a8ca6cf9857d1349c/FZ__rQWiIAZN_1q1eOaNJ.jpeg)
### 2000s
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/648a824a8ca6cf9857d1349c/h81edMfzSYnWBxb7ZVliB.jpeg)
## Model description
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k).
## Evaluation results
The model achieves the following results on the evaluation set:
- Loss: 0.8707
- Accuracy: 0.7505
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
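For reference, these settings correspond to a `transformers` `TrainingArguments` / `Trainer` setup roughly like the sketch below. This is a hedged reconstruction, not the original training script: the dataset splits (`train_dataset`, `eval_dataset`) and the `output_dir` are placeholders, and the Adam betas/epsilon listed above are the `Trainer` defaults.

```python
import evaluate
import numpy as np
from transformers import AutoModelForImageClassification, Trainer, TrainingArguments

labels = ["1910s", "1920s", "1930s", "1940s", "1950s",
          "1960s", "1970s", "1980s", "1990s", "2000s"]

# Start from the base checkpoint named above, with a 10-way classification head
model = AutoModelForImageClassification.from_pretrained(
    "google/vit-base-patch16-224-in21k",
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={label: i for i, label in enumerate(labels)},
)

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, references = eval_pred
    return accuracy.compute(predictions=np.argmax(logits, axis=-1), references=references)

args = TrainingArguments(
    output_dir="fashion-clothing-decade",  # placeholder
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=4,         # effective train batch size: 16 * 4 = 64
    num_train_epochs=10,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
    evaluation_strategy="epoch",           # assumption; evaluation cadence is not stated in the card
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,  # hypothetical preprocessed splits (pixel_values + labels)
    eval_dataset=eval_dataset,
    compute_metrics=compute_metrics,
)
trainer.train()
```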
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1