AlyGreo committed on
Commit 8f002a2
1 Parent(s): 0ad5599

End of training

README.md ADDED
@@ -0,0 +1,60 @@
+ ---
+ library_name: transformers
+ license: apache-2.0
+ base_model: google/electra-base-generator
+ tags:
+ - generated_from_trainer
+ metrics:
+ - accuracy
+ model-index:
+ - name: electra-base-finetuned-imdb
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # electra-base-finetuned-imdb
+
+ This model is a fine-tuned version of [google/electra-base-generator](https://huggingface.co/google/electra-base-generator) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.0001
+ - Accuracy: 1.0
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 0.0002
+ - train_batch_size: 20
+ - eval_batch_size: 20
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - training_steps: 200
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy |
+ |:-------------:|:-----:|:----:|:---------------:|:--------:|
+ | 0.0001        | 1.0   | 200  | 0.0001          | 1.0      |
+
+
+ ### Framework versions
+
+ - Transformers 4.44.2
+ - Pytorch 2.4.1+cu121
+ - Tokenizers 0.19.1
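
The hyperparameters listed in the model card above map directly onto the standard `transformers` `TrainingArguments`. The sketch below is an illustrative reconstruction from the README, not the actual training script behind this commit; the `output_dir` name is hypothetical, and the optimizer settings are the `Trainer` defaults that the card reports.

```python
# Illustrative sketch only: the hyperparameters reported in the README,
# expressed as TrainingArguments. output_dir is hypothetical.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="electra-base-finetuned-imdb",  # hypothetical directory name
    learning_rate=2e-4,
    per_device_train_batch_size=20,
    per_device_eval_batch_size=20,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=200,  # "training_steps: 200" in the card
)
```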
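
The `id2label` mapping added in the config.json change below makes the checkpoint's predictions human-readable at inference time. A minimal usage sketch, assuming the checkpoint is published as `AlyGreo/electra-base-finetuned-imdb` (a repo id inferred from the model-index name above, not confirmed by this commit):

```python
# Minimal inference sketch. The repo id is an assumption inferred from the
# model-index name in the README; adjust it to the actual checkpoint path.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="AlyGreo/electra-base-finetuned-imdb",
)

# With the id2label mapping from config.json, predictions come back as
# "Negative"/"Positive" rather than the default "LABEL_0"/"LABEL_1".
print(classifier("A surprisingly moving film with terrific performances."))
```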
config.json CHANGED
@@ -9,6 +9,10 @@
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 256,
+ "id2label": {
+   "0": "Negative",
+   "1": "Positive"
+ },
  "initializer_range": 0.02,
  "intermediate_size": 1024,
  "layer_norm_eps": 1e-12,
runs/Oct05_11-10-37_0bb0c781a97e/events.out.tfevents.1728126637.0bb0c781a97e.1116.8 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:d040c14f1b77ec14d6e6d4d5c4ff96ae597f75c014f28e34961a83b22312205b
- size 5982
+ oid sha256:09fbcae650913f783fab34b823b783143005ee87c41ffb36c3a5c3bf06955a94
+ size 6659
runs/Oct05_11-10-37_0bb0c781a97e/events.out.tfevents.1728126740.0bb0c781a97e.1116.9 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bedd2c1c3fd46da10e590103ef455266f3eef4c840a0879168f8aca21c5b1e93
+ size 411