---

library_name: transformers
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-ucf101-subset-hand
  results: []
---



# videomae-base-finetuned-ucf101-subset-hand

This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base). The Trainer did not record the dataset; the model name indicates a subset of the UCF101 action-recognition dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5545
- Accuracy: 0.9

## Model description

VideoMAE is a video Transformer pre-trained with masked autoencoding. This checkpoint takes the [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) encoder, adds a video-classification head, and fine-tunes it end-to-end.

## Intended uses & limitations

The model is intended for classifying short video clips into the fixed set of action labels it was fine-tuned on (available at inference time via `model.config.id2label`). Known limitations: the evaluation set appears to be small (accuracy moves in steps of roughly 1/60, suggesting about 60 clips), validation loss fluctuates noticeably between checkpoints, and the cc-by-nc-4.0 license rules out commercial use.
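The snippet below is a sketch rather than an official usage example: the checkpoint id is a placeholder for wherever this model is hosted, and a random 16-frame clip stands in for real video (16 frames at 224x224 is the videomae-base default input):

```python
import numpy as np
import torch
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification

# Placeholder repo id; substitute the actual location of this checkpoint.
ckpt = "videomae-base-finetuned-ucf101-subset-hand"

processor = VideoMAEImageProcessor.from_pretrained(ckpt)
model = VideoMAEForVideoClassification.from_pretrained(ckpt)
model.eval()

# videomae-base consumes 16 RGB frames; random pixels stand in for a real clip.
video = list(np.random.randint(0, 256, (16, 3, 224, 224)))

inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

print(model.config.id2label[logits.argmax(-1).item()])
```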

## Training and evaluation data

Not recorded by the Trainer. The model name points to a subset of the UCF101 action-recognition dataset, and the evaluation-accuracy granularity noted above suggests an evaluation split of roughly 60 clips, but neither detail is documented.
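For anyone reconstructing a similar run, the classification head needs `label2id`/`id2label` mappings built from the data before fine-tuning. A minimal sketch, assuming a hypothetical one-folder-per-class layout (`dataset_root/train/<ClassName>/*.avi`):

```python
import pathlib

# Hypothetical layout; adjust to however the clips are actually organized.
root = pathlib.Path("dataset_root/train")
class_names = sorted(d.name for d in root.iterdir() if d.is_dir())
label2id = {name: i for i, name in enumerate(class_names)}
id2label = {i: name for name, i in label2id.items()}

# Passing these to from_pretrained() gives the new head the right number of
# outputs and human-readable label names.
```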

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1600
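As a sketch of how these values map onto `TrainingArguments`: `output_dir` below is a placeholder, the 100-step evaluation cadence is read off the results table that follows, and every argument not listed keeps its Trainer default (the Adam betas and epsilon above are those defaults):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="videomae-base-finetuned-ucf101-subset-hand",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=3,
    per_device_eval_batch_size=3,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    max_steps=1600,         # "training_steps" above
    eval_strategy="steps",  # the results table evaluates every 100 steps
    eval_steps=100,
    logging_steps=100,
)
```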



### Training results

| Training Loss | Epoch    | Step | Validation Loss | Accuracy |
|:-------------:|:--------:|:----:|:---------------:|:--------:|
| 1.6621        | 11.0006  | 100  | 1.6555          | 0.2      |
| 1.6996        | 22.0012  | 200  | 1.5660          | 0.4167   |
| 0.886         | 33.0019  | 300  | 1.4698          | 0.4167   |
| 0.6728        | 44.0025  | 400  | 0.7168          | 0.7833   |
| 0.1659        | 55.0031  | 500  | 1.3954          | 0.6667   |
| 0.1471        | 66.0037  | 600  | 1.7320          | 0.6      |
| 0.0085        | 77.0044  | 700  | 1.4034          | 0.7333   |
| 0.0613        | 88.005   | 800  | 1.1479          | 0.7      |
| 0.1256        | 99.0056  | 900  | 1.3657          | 0.7      |
| 0.194         | 111.0006 | 1000 | 1.0879          | 0.7      |
| 0.1336        | 122.0012 | 1100 | 0.9304          | 0.8      |
| 0.0605        | 133.0019 | 1200 | 0.6773          | 0.85     |
| 0.0025        | 144.0025 | 1300 | 0.7631          | 0.7833   |
| 0.0016        | 155.0031 | 1400 | 0.4629          | 0.9167   |
| 0.1698        | 166.0037 | 1500 | 0.5422          | 0.9167   |
| 0.0011        | 177.0044 | 1600 | 0.5545          | 0.9      |
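The best validation numbers occur at step 1400 (loss 0.4629, accuracy 0.9167) while training loss is near zero from step 700 onward, a typical overfitting pattern on a dataset this small. The Accuracy column itself comes from a `compute_metrics` callback; the implementation below is the standard one from the Transformers video-classification examples, assumed rather than recovered from this repository:

```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    # Reduce logits to predicted class ids, then score them against the labels.
    predictions = np.argmax(eval_pred.predictions, axis=1)
    return accuracy.compute(predictions=predictions, references=eval_pred.label_ids)
```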





### Framework versions

- Transformers 4.45.0
- Pytorch 2.4.1+cu118
- Datasets 3.0.0
- Tokenizers 0.20.0