tuanna08go committed
Commit 71b1701 · verified · 1 parent: bd15ba9

End of training

Files changed (2):
  1. README.md +11 -18
  2. adapter_model.bin +1 -1
README.md CHANGED
@@ -47,7 +47,7 @@ flash_attention: true
 fp16: null
 fsdp: null
 fsdp_config: null
-gradient_accumulation_steps: 16
+gradient_accumulation_steps: 4
 gradient_checkpointing: false
 group_by_length: false
 hub_model_id: tuanna08go/9d3d6c1c-01e3-47b5-8f25-e2bdb653d048
@@ -58,7 +58,7 @@ learning_rate: 0.0001
 load_in_4bit: false
 load_in_8bit: false
 local_rank: null
-logging_steps: 10
+logging_steps: 5
 lora_alpha: 16
 lora_dropout: 0.05
 lora_fan_in_fan_out: null
@@ -66,8 +66,8 @@ lora_model_dir: null
 lora_r: 8
 lora_target_linear: true
 lr_scheduler: cosine
-max_steps: 50
-micro_batch_size: 8
+max_steps: 1
+micro_batch_size: 2
 mlflow_experiment_name: /tmp/511bc7520306da8a_train_data.json
 model_type: AutoModelForCausalLM
 num_epochs: 1
@@ -91,7 +91,7 @@ wandb_name: 9d3d6c1c-01e3-47b5-8f25-e2bdb653d048
 wandb_project: Gradients-On-Demand
 wandb_run: your_name
 wandb_runid: 9d3d6c1c-01e3-47b5-8f25-e2bdb653d048
-warmup_steps: 2
+warmup_steps: 1
 weight_decay: 0.0
 xformers_attention: null
 
@@ -102,8 +102,6 @@ xformers_attention: null
 # 9d3d6c1c-01e3-47b5-8f25-e2bdb653d048
 
 This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v0.6](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v0.6) on the None dataset.
-It achieves the following results on the evaluation set:
-- Loss: 0.5400
 
 ## Model description
 
@@ -123,26 +121,21 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 0.0001
-- train_batch_size: 8
-- eval_batch_size: 8
+- train_batch_size: 2
+- eval_batch_size: 2
 - seed: 42
-- gradient_accumulation_steps: 16
-- total_train_batch_size: 128
+- gradient_accumulation_steps: 4
+- total_train_batch_size: 8
 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: cosine
 - lr_scheduler_warmup_steps: 2
-- training_steps: 50
+- training_steps: 1
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:------:|:----:|:---------------:|
-| No log | 0.0051 | 1 | 0.6309 |
-| 0.6061 | 0.0511 | 10 | 0.6027 |
-| 0.5841 | 0.1021 | 20 | 0.5628 |
-| 0.5459 | 0.1532 | 30 | 0.5474 |
-| 0.5444 | 0.2042 | 40 | 0.5410 |
-| 0.5309 | 0.2553 | 50 | 0.5400 |
+| No log | 0.0003 | 1 | 0.6470 |
 
 
 ### Framework versions
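
The headline change in this commit is a much smaller training run: the effective batch size drops from 16 × 8 = 128 to 4 × 2 = 8, and training_steps goes from 50 to 1, which is why the results table shrinks to a single row. A minimal sketch of the arithmetic, assuming a single-device run (the device count is not stated in the diff):

```python
# Effective (total) train batch size implied by the new config values above.
micro_batch_size = 2             # per-device batch size
gradient_accumulation_steps = 4  # micro-batches accumulated per optimizer step
num_devices = 1                  # assumption: not stated in the diff

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)    # -> 8, matching total_train_batch_size in the README
```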
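The LoRA settings themselves are untouched by this commit (lora_r: 8, lora_alpha: 16, lora_dropout: 0.05, lora_target_linear: true). A hedged sketch of how these keys would map onto a peft LoraConfig; target modules are left to peft's defaults here, since lora_target_linear: true is expanded to the model's linear layers at training time rather than listed explicitly:

```python
from peft import LoraConfig

# Sketch of an equivalent peft config (assumed mapping from the YAML keys above).
lora_config = LoraConfig(
    r=8,                # lora_r
    lora_alpha=16,      # lora_alpha
    lora_dropout=0.05,  # lora_dropout
    task_type="CAUSAL_LM",
)
```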
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:ad33962941d36aca7856cc2eaea138b62c367eff8fe3b2e5eb227d967e1eb70e
+oid sha256:7cba69cde24d4551d6b26b2ac2b5f35952116c01e2f2a78747faf149932837a0
 size 25342042
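
The adapter_model.bin entry is a Git LFS pointer, so only the sha256 oid changes here while the ~25 MB adapter payload is stored out of band. A minimal sketch of loading the published adapter on top of the base model, assuming peft and transformers are installed:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model, then apply the LoRA adapter from this repo.
base = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v0.6")
model = PeftModel.from_pretrained(base, "tuanna08go/9d3d6c1c-01e3-47b5-8f25-e2bdb653d048")
```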