joshuaspear committed · Commit 0516dd1 · verified · 1 Parent(s): 77b0fad

Model save

Files changed (1)
  1. README.md +6 -2
README.md CHANGED
@@ -13,6 +13,8 @@ model-index:
 should probably proofread and complete it, then remove this comment. -->
 
 [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/jsphd/week10_tutorial_llm/runs/wrln9zk3)
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/jsphd/week10_tutorial_llm/runs/m0jvvs1u)
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/jsphd/week10_tutorial_llm/runs/m0jvvs1u)
 # bloom-560m-finetuned-health-qa
 
 This model is a fine-tuned version of [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m) on an unknown dataset.
@@ -35,9 +37,11 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 1e-05
-- train_batch_size: 1
-- eval_batch_size: 1
+- train_batch_size: 2
+- eval_batch_size: 2
 - seed: 42
+- gradient_accumulation_steps: 2
+- total_train_batch_size: 4
 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
 - num_epochs: 5
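
For reference, the updated hyperparameters correspond roughly to the following Hugging Face `transformers` training configuration. This is a minimal sketch, not the author's actual training script: the `output_dir`, `report_to`, and `Trainer` wiring are assumptions; only the values listed in the card are taken from the diff.

```python
from transformers import TrainingArguments

# Sketch of the post-commit configuration. Values marked "from README" come
# from the model card diff; everything else is an assumption.
training_args = TrainingArguments(
    output_dir="bloom-560m-finetuned-health-qa",  # assumed, matching the repo name
    learning_rate=1e-5,                 # from README: learning_rate: 1e-05
    per_device_train_batch_size=2,      # from README: train_batch_size: 2
    per_device_eval_batch_size=2,       # from README: eval_batch_size: 2
    gradient_accumulation_steps=2,      # from README: gradient_accumulation_steps: 2
    seed=42,                            # from README: seed: 42
    optim="adamw_torch",                # AdamW, betas=(0.9, 0.999), eps=1e-08 (library defaults)
    lr_scheduler_type="linear",         # from README: lr_scheduler_type: linear
    num_train_epochs=5,                 # from README: num_epochs: 5
    report_to="wandb",                  # assumed, consistent with the W&B badges added above
)
```

On a single device, `per_device_train_batch_size=2` with `gradient_accumulation_steps=2` yields the `total_train_batch_size: 4` reported in the card.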