veb committed
Commit f49d071 · 1 Parent(s): 2fcfb60

Training in progress epoch 0
Files changed (6):

1. README.md +12 -13
2. config.json +3 -3
3. special_tokens_map.json +7 -1
4. tf_model.h5 +2 -2
5. tokenizer.json +2 -16
6. tokenizer_config.json +14 -1
README.md CHANGED

```diff
@@ -1,4 +1,5 @@
 ---
+license: apache-2.0
 tags:
 - generated_from_keras_callback
 model-index:
@@ -11,12 +12,10 @@ probably proofread and complete it, then remove this comment. -->
 
 # veb/twitch-bert-base-cased-finetuned
 
-This model was trained from scratch on an unknown dataset.
+This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Train Loss: 0.2929
-- Train Sparse Categorical Accuracy: 0.8768
-- Validation Loss: 0.1927
-- Validation Sparse Categorical Accuracy: 0.9483
+- Train Loss: 3.4267
+- Validation Loss: 2.8382
 - Epoch: 0
 
 ## Model description
@@ -36,19 +35,19 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
-- training_precision: float32
+- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -610, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
+- training_precision: mixed_float16
 
 ### Training results
 
-| Train Loss | Train Sparse Categorical Accuracy | Validation Loss | Validation Sparse Categorical Accuracy | Epoch |
-|:----------:|:---------------------------------:|:---------------:|:--------------------------------------:|:-----:|
-| 0.2929     | 0.8768                            | 0.1927          | 0.9483                                 | 0     |
+| Train Loss | Validation Loss | Epoch |
+|:----------:|:---------------:|:-----:|
+| 3.4267     | 2.8382          | 0     |
 
 
 ### Framework versions
 
-- Transformers 4.19.2
-- TensorFlow 2.7.0
-- Datasets 2.2.2
+- Transformers 4.20.1
+- TensorFlow 2.6.4
+- Datasets 2.3.2
 - Tokenizers 0.12.1
```
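The new optimizer entry describes a linear warmup into a polynomial (here linear) decay, the pair that `transformers.create_optimizer` typically builds for `AdamWeightDecay`. A minimal pure-Python sketch of that schedule, assuming the decay applies to post-warmup steps; the logged `decay_steps` of -610 suggests the total step count was smaller than the 1000 warmup steps, so a positive value is used here for illustration:

```python
def lr_at_step(step, init_lr=2e-5, warmup_steps=1000,
               decay_steps=610, end_lr=0.0, power=1.0):
    """Linear warmup to init_lr, then polynomial decay to end_lr.

    Sketch of the WarmUp + PolynomialDecay config in the card above;
    decay_steps is assumed positive (the logged -610 looks like an
    artifact of num_train_steps < warmup_steps).
    """
    if step < warmup_steps:
        # Warmup phase: learning rate ramps linearly from 0 to init_lr.
        return init_lr * step / warmup_steps
    # Decay phase: interpolate from init_lr down to end_lr over
    # decay_steps post-warmup steps, then hold at end_lr.
    decayed = min(step - warmup_steps, decay_steps)
    frac = 1.0 - decayed / decay_steps
    return (init_lr - end_lr) * frac ** power + end_lr
```

With `power=1.0` this is plain linear decay; the schedule peaks at `init_lr` exactly when warmup ends and reaches `end_lr` after `warmup_steps + decay_steps` total steps.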
config.json CHANGED

```diff
@@ -1,7 +1,7 @@
 {
-  "_name_or_path": "veb/twitch-bert-base-cased-finetuned",
+  "_name_or_path": "bert-base-cased",
   "architectures": [
-    "BertForSequenceClassification"
+    "BertForMaskedLM"
   ],
   "attention_probs_dropout_prob": 0.1,
   "classifier_dropout": null,
@@ -18,7 +18,7 @@
   "num_hidden_layers": 12,
   "pad_token_id": 0,
   "position_embedding_type": "absolute",
-  "transformers_version": "4.19.2",
+  "transformers_version": "4.20.1",
   "type_vocab_size": 2,
   "use_cache": true,
   "vocab_size": 28996
```
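This change retargets the checkpoint from a sequence-classification head (`BertForSequenceClassification`) to a masked-LM head (`BertForMaskedLM`). A quick way to confirm which keys a commit actually touched is to load both revisions of `config.json` and diff them as dicts; a sketch using trimmed copies of the two configs above (unchanged keys such as the hidden sizes are omitted):

```python
import json

# Trimmed old revision of config.json from this commit.
old_cfg = json.loads('''{
  "_name_or_path": "veb/twitch-bert-base-cased-finetuned",
  "architectures": ["BertForSequenceClassification"],
  "transformers_version": "4.19.2",
  "vocab_size": 28996
}''')

# Trimmed new revision.
new_cfg = json.loads('''{
  "_name_or_path": "bert-base-cased",
  "architectures": ["BertForMaskedLM"],
  "transformers_version": "4.20.1",
  "vocab_size": 28996
}''')

# Keys whose values differ between the two revisions.
changed = sorted(k for k in old_cfg if old_cfg[k] != new_cfg[k])
print(changed)  # ['_name_or_path', 'architectures', 'transformers_version']
```

The same dict-diff works on full configs pulled from two commits of any Hub repo, since `config.json` is plain JSON.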
special_tokens_map.json CHANGED

```diff
@@ -1 +1,7 @@
-{"unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]"}
+{
+  "cls_token": "[CLS]",
+  "mask_token": "[MASK]",
+  "pad_token": "[PAD]",
+  "sep_token": "[SEP]",
+  "unk_token": "[UNK]"
+}
```
tf_model.h5 CHANGED

```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:b2f508293dc744f769dc04c1f63cda14895450cf69850a06dc85ff12f489c232
-size 433518320
+oid sha256:99ba8bf9725f9187d817d90d271da844d908e8fecff67ed97a88a79ada208766
+size 524305832
```
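`tf_model.h5` is tracked with Git LFS, so the diff above changes only a small pointer file while the ~500 MB of weights live in LFS storage. A minimal stdlib sketch of parsing such a pointer into its key/value fields:

```python
def parse_lfs_pointer(text):
    """Parse a git-lfs pointer file (the text git actually stores for
    large binaries like tf_model.h5) into a dict of its fields."""
    fields = {}
    for line in text.strip().splitlines():
        # Each pointer line is "<key> <value>", split on the first space.
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The new pointer contents from the diff above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:99ba8bf9725f9187d817d90d271da844d908e8fecff67ed97a88a79ada208766
size 524305832
"""
info = parse_lfs_pointer(pointer)
print(int(info["size"]))  # 524305832
```

The `size` field is the byte count of the real file, which is how the Hub can show file sizes without downloading the LFS object.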
tokenizer.json CHANGED

```diff
@@ -1,21 +1,7 @@
 {
   "version": "1.0",
-  "truncation": {
-    "direction": "Right",
-    "max_length": 512,
-    "strategy": "LongestFirst",
-    "stride": 0
-  },
-  "padding": {
-    "strategy": {
-      "Fixed": 512
-    },
-    "direction": "Right",
-    "pad_to_multiple_of": null,
-    "pad_id": 0,
-    "pad_type_id": 0,
-    "pad_token": "[PAD]"
-  },
+  "truncation": null,
+  "padding": null,
   "added_tokens": [
     {
       "id": 0,
```
tokenizer_config.json CHANGED

```diff
@@ -1 +1,14 @@
-{"do_lower_case": false, "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]", "tokenize_chinese_chars": true, "strip_accents": null, "model_max_length": 512, "special_tokens_map_file": null, "name_or_path": "veb/twitch-bert-base-cased-finetuned", "tokenizer_class": "BertTokenizer"}
+{
+  "cls_token": "[CLS]",
+  "do_lower_case": false,
+  "mask_token": "[MASK]",
+  "model_max_length": 512,
+  "name_or_path": "bert-base-cased",
+  "pad_token": "[PAD]",
+  "sep_token": "[SEP]",
+  "special_tokens_map_file": null,
+  "strip_accents": null,
+  "tokenize_chinese_chars": true,
+  "tokenizer_class": "BertTokenizer",
+  "unk_token": "[UNK]"
+}
```