mandelakori committed on
Commit
787b9a2
1 Parent(s): e01a873

Update README.md

Files changed (1)
  1. README.md +8 -81
README.md CHANGED
@@ -1,88 +1,15 @@
  ---
- license: apache-2.0
- tags:
- - image-classification
- - vision
- - generated_from_trainer
- metrics:
- - accuracy
- model-index:
- - name: outputs
-   results:
-   - task:
-       name: Image Classification
-       type: image-classification
-     metrics:
-     - name: Accuracy
-       type: accuracy
-       value: 0.9107332624867163
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->

- # outputs

- This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on the [PETA dataset](http://mmlab.ie.cuhk.edu.hk/projects/PETA_files/Pedestrian%20Attribute%20Recognition%20At%20Far%20Distance.pdf).
- It achieves the following results on the evaluation set:
- - Loss: 0.2170
- - Accuracy: 0.9107

- ## Model description

- More information needed
-
- #### How to use
-
- You can use this model with the Transformers *pipeline*.
-
- ```python
- from transformers import pipeline
- gender_classifier = pipeline(model="NTQAI/pedestrian_gender_recognition")
- image_path = "abc.jpg"
-
- results = gender_classifier(image_path)
- print(results)
- ```
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 2e-05
- - train_batch_size: 8
- - eval_batch_size: 8
- - seed: 1337
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: linear
- - num_epochs: 5.0
-
- ### Training results
-
- | Training Loss | Epoch | Step  | Validation Loss | Accuracy |
- |:-------------:|:-----:|:-----:|:---------------:|:--------:|
- | 0.5193        | 1.0   | 2000  | 0.3346          | 0.8533   |
- | 0.337         | 2.0   | 4000  | 0.2892          | 0.8778   |
- | 0.3771        | 3.0   | 6000  | 0.2493          | 0.8969   |
- | 0.3819        | 4.0   | 8000  | 0.2275          | 0.9100   |
- | 0.3581        | 5.0   | 10000 | 0.2170          | 0.9107   |
-
- ### Framework versions
-
- - Transformers 4.24.0.dev0
- - Pytorch 1.12.1+cu113
- - Datasets 2.6.1
- - Tokenizers 0.13.1
-
- ### Contact information
- For personal communication related to this project, please contact Nha Nguyen Van (nha282@gmail.com).
 
  ---
+ license: other
+ language:
+ - en
+ pipeline_tag: image-classification
+ library_name: transformers
+ license_name: all-rights-reserved
  ---

+ © 2024 Mandela Logan. All rights reserved.

+ No part of this model may be reproduced, distributed, or transmitted in any form or by any means, including photocopying, recording, or other electronic or mechanical methods, without the prior written permission of the copyright holder. Unauthorized use or reproduction of this model is strictly prohibited by copyright law.
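For reference, the model card removed by this commit demonstrated inference through the Transformers `pipeline` API. A minimal sketch of that usage, assuming the `NTQAI/pedestrian_gender_recognition` checkpoint from the removed card is still available and that `abc.jpg` is a placeholder image path:

```python
from transformers import pipeline


def classify_pedestrian(image_path, model_id="NTQAI/pedestrian_gender_recognition"):
    """Run the image-classification pipeline as in the removed model card.

    Downloads the model weights on first use, so network access is required.
    """
    classifier = pipeline("image-classification", model=model_id)
    # Returns a list of {"label": ..., "score": ...} dicts, highest score first.
    return classifier(image_path)


if __name__ == "__main__":
    # "abc.jpg" is the placeholder path used in the removed card's example.
    print(classify_pedestrian("abc.jpg"))
```

The removed card reported roughly 0.91 evaluation accuracy for this checkpoint; whether the snippet still runs depends on the repository's current license terms.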