lesso committed
Commit 11fd728
1 Parent(s): 90fb795

End of training

Files changed (2):
  1. README.md +10 -15
  2. adapter_model.bin +1 -1
README.md CHANGED
@@ -22,7 +22,6 @@ adapter: lora
 base_model: sethuiyer/Medichat-Llama3-8B
 bf16: false
 chat_template: llama3
-dataset_prepared_path: null
 datasets:
 - data_files:
   - 95cbf511558bbb4b_train_data.json
@@ -42,8 +41,8 @@ deepspeed: null
 early_stopping_patience: null
 eval_max_new_tokens: 128
 eval_table_size: null
-evals_per_epoch: 4
-flash_attention: true
+evals_per_epoch: 1
+flash_attention: false
 fp16: true
 fsdp: null
 fsdp_config: null
@@ -66,7 +65,7 @@ lora_model_dir: null
 lora_r: 8
 lora_target_linear: true
 lr_scheduler: cosine
-max_steps: 10
+max_steps: 1000
 micro_batch_size: 1
 mlflow_experiment_name: /tmp/95cbf511558bbb4b_train_data.json
 model_type: AutoModelForCausalLM
@@ -77,7 +76,7 @@ pad_to_sequence_len: true
 resume_from_checkpoint: null
 s2_attention: null
 sample_packing: false
-saves_per_epoch: 4
+saves_per_epoch: 1
 sequence_len: 1024
 strict: false
 tf32: false
@@ -91,7 +90,7 @@ wandb_name: b9ccccb9-64a3-4207-a2ed-fb5da2aeefd2
 wandb_project: Gradients-On-Demand
 wandb_run: your_name
 wandb_runid: b9ccccb9-64a3-4207-a2ed-fb5da2aeefd2
-warmup_steps: 10
+warmup_steps: 0
 weight_decay: 0.0
 xformers_attention: null
 
@@ -103,7 +102,7 @@ xformers_attention: null
 
 This model is a fine-tuned version of [sethuiyer/Medichat-Llama3-8B](https://huggingface.co/sethuiyer/Medichat-Llama3-8B) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 2.1259
+- Loss: 1.0635
 
 ## Model description
 
@@ -130,18 +129,14 @@ The following hyperparameters were used during training:
 - total_train_batch_size: 4
 - optimizer: Use OptimizerNames.ADAMW_HF with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: cosine
-- lr_scheduler_warmup_steps: 10
-- training_steps: 10
+- training_steps: 333
 - mixed_precision_training: Native AMP
 
 ### Training results
 
-| Training Loss | Epoch  | Step | Validation Loss |
-|:-------------:|:------:|:----:|:---------------:|
-| 2.0249        | 0.0030 | 1    | 2.2815          |
-| 2.3406        | 0.0090 | 3    | 2.2815          |
-| 2.6119        | 0.0180 | 6    | 2.2540          |
-| 1.7983        | 0.0270 | 9    | 2.1259          |
+| Training Loss | Epoch | Step | Validation Loss |
+|:-------------:|:-----:|:----:|:---------------:|
+| 0.8482        | 1.0   | 333  | 1.0635          |
 
 
 ### Framework versions
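Reading the new numbers together (an inference from the diff, not something it states): the run now stops at the epoch boundary, since the results table reports epoch 1.0 at step 333 instead of running out to `max_steps: 1000`. With `total_train_batch_size: 4`, the step count pins down the approximate dataset size:

```latex
\text{steps per epoch}
  = \left\lceil \frac{N_{\text{examples}}}{\text{total\_train\_batch\_size}} \right\rceil
  = \left\lceil \frac{N_{\text{examples}}}{4} \right\rceil
  = 333
\quad\Longrightarrow\quad
N_{\text{examples}} \in [1329,\ 1332].
```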
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:a94317c295e7b8221749385f9f9f7ad18bdcf24c212100fe9488e1bbc1cbb50e
+oid sha256:104c46145be25520cf67693749ef0de394110dde74006158300b60389d152d47
 size 84047370
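The changed `adapter_model.bin` is a LoRA adapter (~84 MB, stored as a Git LFS pointer), not a full set of model weights, so it has to be applied on top of the base model. A minimal usage sketch, assuming the standard `transformers` + `peft` APIs; the adapter repo id below is a placeholder, not something named in this commit:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model the adapter was trained against (fp16, per the config).
base = AutoModelForCausalLM.from_pretrained(
    "sethuiyer/Medichat-Llama3-8B",
    torch_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained("sethuiyer/Medichat-Llama3-8B")

# Attach adapter_model.bin (the file changed in this commit) on top of the base.
# "your-username/your-adapter-repo" is a placeholder repo id.
model = PeftModel.from_pretrained(base, "your-username/your-adapter-repo")
model.eval()
```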