joshuaspear committed
Commit 6aa94c2 • 1 Parent(s): 188732b

Model save
README.md CHANGED
@@ -12,7 +12,7 @@ model-index:
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/jsphd/week10_tutorial_llm_BERT/runs/
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/jsphd/week10_tutorial_llm_BERT/runs/u4klp2p1)
 # bert-base-cased-finetuned-health-qa
 
 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
@@ -34,15 +34,15 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate:
-- train_batch_size:
-- eval_batch_size:
+- learning_rate: 0.001
+- train_batch_size: 64
+- eval_batch_size: 64
 - seed: 42
-- gradient_accumulation_steps:
+- gradient_accumulation_steps: 16
 - total_train_batch_size: 1024
 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
-- num_epochs:
+- num_epochs: 15
 - mixed_precision_training: Native AMP
 
 ### Framework versions
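For reference, the hyperparameters filled in by this commit map onto a Transformers `TrainingArguments` configuration roughly as follows. This is a minimal sketch, not code from the repository: `output_dir` and `report_to` are assumptions, and the AdamW betas/epsilon listed in the card are simply the `adamw_torch` defaults.

```python
from transformers import TrainingArguments

# Sketch of a TrainingArguments setup matching the values in the diff above.
training_args = TrainingArguments(
    output_dir="bert-base-cased-finetuned-health-qa",  # assumed output path
    learning_rate=1e-3,              # learning_rate: 0.001
    per_device_train_batch_size=64,  # train_batch_size: 64
    per_device_eval_batch_size=64,   # eval_batch_size: 64
    gradient_accumulation_steps=16,  # gradient_accumulation_steps: 16
    num_train_epochs=15,             # num_epochs: 15
    seed=42,                         # seed: 42
    optim="adamw_torch",             # betas=(0.9, 0.999), epsilon=1e-08 are the defaults
    lr_scheduler_type="linear",      # lr_scheduler_type: linear
    fp16=True,                       # mixed_precision_training: Native AMP
    report_to="wandb",               # assumed, given the W&B badge in the card
)
```

With these values, the card's `total_train_batch_size: 1024` is consistent with training on a single device: 64 (per-device train batch) × 16 (gradient accumulation steps) = 1024.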