nlparabic committed
Commit 8e4e771
1 Parent(s): a980a85

Model save

Files changed (3)
  1. README.md +0 -9
  2. egy_training_log.txt +144 -0
  3. training_args.bin +1 -1
README.md CHANGED
@@ -3,9 +3,6 @@ license: apache-2.0
  base_model: riotu-lab/ArabianGPT-01B
  tags:
  - generated_from_trainer
- metrics:
- - bleu
- - rouge
  model-index:
  - name: results
    results: []
@@ -17,12 +14,6 @@ should probably proofread and complete it, then remove this comment. -->
  # results

  This model is a fine-tuned version of [riotu-lab/ArabianGPT-01B](https://huggingface.co/riotu-lab/ArabianGPT-01B) on an unknown dataset.
- It achieves the following results on the evaluation set:
- - Loss: 3.4630
- - Bleu: 0.0984
- - Rouge1: 0.3093
- - Rouge2: 0.0718
- - Rougel: 0.2296

  ## Model description

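The metric entries removed above (BLEU, ROUGE-1/2/L) are the kind of eval-set scores the Hugging Face `evaluate` library produces; a minimal sketch, not the repo's actual eval code, with hypothetical `predictions`/`references` placeholders (the ROUGE backend is also what emits the `INFO:absl:Using default tokenizer.` line in the log below):

```python
# Hypothetical sketch: computing eval-set BLEU/ROUGE with `evaluate`.
import evaluate

bleu = evaluate.load("bleu")
rouge = evaluate.load("rouge")  # rouge_score logs "INFO:absl:Using default tokenizer."

predictions = ["model output text"]   # placeholder generations
references = [["reference text"]]     # placeholder gold targets

bleu_score = bleu.compute(predictions=predictions, references=references)["bleu"]
rouge_scores = rouge.compute(predictions=predictions,
                             references=[r[0] for r in references])
print(bleu_score, rouge_scores["rouge1"],
      rouge_scores["rouge2"], rouge_scores["rougeL"])
```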
egy_training_log.txt CHANGED
@@ -143,3 +143,147 @@ WARNING:accelerate.utils.other:Detected kernel version 5.4.0, which is below the
  WARNING:root:Epoch 1.0: No losses recorded yet.
  INFO:__main__:*** Evaluate ***
  INFO:absl:Using default tokenizer.
+ WARNING:root:No losses were recorded, so the loss graph was not generated.
+ WARNING:__main__:Process rank: 0, device: cuda:0, n_gpu: 1, distributed training: False, 16-bits training: False
+ INFO:__main__:Training/evaluation parameters TrainingArguments(
+ _n_gpu=1,
+ accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False},
+ adafactor=False,
+ adam_beta1=0.9,
+ adam_beta2=0.999,
+ adam_epsilon=1e-08,
+ auto_find_batch_size=False,
+ batch_eval_metrics=False,
+ bf16=False,
+ bf16_full_eval=False,
+ data_seed=None,
+ dataloader_drop_last=False,
+ dataloader_num_workers=0,
+ dataloader_persistent_workers=False,
+ dataloader_pin_memory=True,
+ dataloader_prefetch_factor=None,
+ ddp_backend=None,
+ ddp_broadcast_buffers=None,
+ ddp_bucket_cap_mb=None,
+ ddp_find_unused_parameters=None,
+ ddp_timeout=1800,
+ debug=[],
+ deepspeed=None,
+ disable_tqdm=False,
+ dispatch_batches=None,
+ do_eval=True,
+ do_predict=False,
+ do_train=True,
+ eval_accumulation_steps=None,
+ eval_delay=0,
+ eval_do_concat_batches=True,
+ eval_on_start=False,
+ eval_steps=500,
+ eval_strategy=IntervalStrategy.STEPS,
+ eval_use_gather_object=False,
+ evaluation_strategy=steps,
+ fp16=False,
+ fp16_backend=auto,
+ fp16_full_eval=False,
+ fp16_opt_level=O1,
+ fsdp=[],
+ fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False},
+ fsdp_min_num_params=0,
+ fsdp_transformer_layer_cls_to_wrap=None,
+ full_determinism=False,
+ gradient_accumulation_steps=1,
+ gradient_checkpointing=False,
+ gradient_checkpointing_kwargs=None,
+ greater_is_better=False,
+ group_by_length=False,
+ half_precision_backend=auto,
+ hub_always_push=False,
+ hub_model_id=None,
+ hub_private_repo=False,
+ hub_strategy=HubStrategy.EVERY_SAVE,
+ hub_token=<HUB_TOKEN>,
+ ignore_data_skip=False,
+ include_inputs_for_metrics=False,
+ include_num_input_tokens_seen=False,
+ include_tokens_per_second=False,
+ jit_mode_eval=False,
+ label_names=None,
+ label_smoothing_factor=0.0,
+ learning_rate=5e-05,
+ length_column_name=length,
+ load_best_model_at_end=True,
+ local_rank=0,
+ log_level=passive,
+ log_level_replica=warning,
+ log_on_each_node=True,
+ logging_dir=/home/iais_marenpielka/Bouthaina/results/runs/Aug25_12-05-28_lmgpu-node-09,
+ logging_first_step=False,
+ logging_nan_inf_filter=True,
+ logging_steps=500,
+ logging_strategy=IntervalStrategy.STEPS,
+ lr_scheduler_kwargs={},
+ lr_scheduler_type=SchedulerType.LINEAR,
+ max_grad_norm=1.0,
+ max_steps=-1,
+ metric_for_best_model=loss,
+ mp_parameters=,
+ neftune_noise_alpha=None,
+ no_cuda=False,
+ num_train_epochs=1.0,
+ optim=OptimizerNames.ADAMW_TORCH,
+ optim_args=None,
+ optim_target_modules=None,
+ output_dir=/home/iais_marenpielka/Bouthaina/results,
+ overwrite_output_dir=False,
+ past_index=-1,
+ per_device_eval_batch_size=8,
+ per_device_train_batch_size=8,
+ prediction_loss_only=False,
+ push_to_hub=True,
+ push_to_hub_model_id=None,
+ push_to_hub_organization=None,
+ push_to_hub_token=<PUSH_TO_HUB_TOKEN>,
+ ray_scope=last,
+ remove_unused_columns=True,
+ report_to=[],
+ restore_callback_states_from_checkpoint=False,
+ resume_from_checkpoint=None,
+ run_name=/home/iais_marenpielka/Bouthaina/results,
+ save_on_each_node=False,
+ save_only_model=False,
+ save_safetensors=True,
+ save_steps=500,
+ save_strategy=IntervalStrategy.STEPS,
+ save_total_limit=None,
+ seed=42,
+ skip_memory_metrics=True,
+ split_batches=None,
+ tf32=None,
+ torch_compile=False,
+ torch_compile_backend=None,
+ torch_compile_mode=None,
+ torch_empty_cache_steps=None,
+ torchdynamo=None,
+ tpu_metrics_debug=False,
+ tpu_num_cores=None,
+ use_cpu=False,
+ use_ipex=False,
+ use_legacy_prediction_loop=False,
+ use_mps_device=False,
+ warmup_ratio=0.0,
+ warmup_steps=500,
+ weight_decay=0.0,
+ )
+ INFO:__main__:Checkpoint detected, resuming training at /home/iais_marenpielka/Bouthaina/results/checkpoint-319. To avoid this behavior, change the `--output_dir` or add `--overwrite_output_dir` to train from scratch.
+ INFO:datasets.builder:Using custom data configuration default-0637777c38512acf
+ INFO:datasets.info:Loading Dataset Infos from /home/iais_marenpielka/Bouthaina/miniconda3/lib/python3.12/site-packages/datasets/packaged_modules/text
+ INFO:datasets.builder:Overwrite dataset info from restored data version if exists.
+ INFO:datasets.info:Loading Dataset info from /home/iais_marenpielka/.cache/huggingface/datasets/text/default-0637777c38512acf/0.0.0/96636a050ef51804b84abbfd4f4ad440e01153c24b86293eb5c3b300a41f9101
+ INFO:datasets.builder:Found cached dataset text (/home/iais_marenpielka/.cache/huggingface/datasets/text/default-0637777c38512acf/0.0.0/96636a050ef51804b84abbfd4f4ad440e01153c24b86293eb5c3b300a41f9101)
+ INFO:datasets.info:Loading Dataset info from /home/iais_marenpielka/.cache/huggingface/datasets/text/default-0637777c38512acf/0.0.0/96636a050ef51804b84abbfd4f4ad440e01153c24b86293eb5c3b300a41f9101
+ INFO:datasets.arrow_dataset:Loading cached processed dataset at /home/iais_marenpielka/.cache/huggingface/datasets/text/default-0637777c38512acf/0.0.0/96636a050ef51804b84abbfd4f4ad440e01153c24b86293eb5c3b300a41f9101/cache-37ea47da1c3ae86a.arrow
+ INFO:datasets.arrow_dataset:Loading cached processed dataset at /home/iais_marenpielka/.cache/huggingface/datasets/text/default-0637777c38512acf/0.0.0/96636a050ef51804b84abbfd4f4ad440e01153c24b86293eb5c3b300a41f9101/cache-d1d890b04a73f183.arrow
+ WARNING:__main__:The tokenizer picked seems to have a very large `model_max_length` (1000000000000000019884624838656). Using block_size=768 instead. You can change that default value by passing --block_size xxx.
+ INFO:datasets.arrow_dataset:Loading cached processed dataset at /home/iais_marenpielka/.cache/huggingface/datasets/text/default-0637777c38512acf/0.0.0/96636a050ef51804b84abbfd4f4ad440e01153c24b86293eb5c3b300a41f9101/cache-607ae57e4b4160b3.arrow
+ INFO:datasets.arrow_dataset:Loading cached processed dataset at /home/iais_marenpielka/.cache/huggingface/datasets/text/default-0637777c38512acf/0.0.0/96636a050ef51804b84abbfd4f4ad440e01153c24b86293eb5c3b300a41f9101/cache-4ddbb6e08bb37d3f.arrow
+ WARNING:accelerate.utils.other:Detected kernel version 5.4.0, which is below the recommended minimum of 5.5.0; this can cause the process to hang. It is recommended to upgrade the kernel to the minimum version or higher.
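The parameter dump above is the `repr` of a `transformers.TrainingArguments` object. As a rough sketch, its non-default settings reduce to the following construction (values copied from the log, defaults omitted; argument names follow recent `transformers` releases, where `eval_strategy` replaces the deprecated `evaluation_strategy` alias also visible in the dump):

```python
# Sketch: the non-default pieces of the TrainingArguments dump above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="/home/iais_marenpielka/Bouthaina/results",
    do_train=True,
    do_eval=True,
    eval_strategy="steps",        # spelled `evaluation_strategy` in older releases
    eval_steps=500,
    logging_steps=500,
    save_steps=500,
    learning_rate=5e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=1.0,
    warmup_steps=500,
    load_best_model_at_end=True,
    metric_for_best_model="loss",  # implies greater_is_better=False
    push_to_hub=True,
    seed=42,
)
```

The `Checkpoint detected, resuming training at .../checkpoint-319` line corresponds to `Trainer` picking up the last checkpoint found in `output_dir`; passing `resume_from_checkpoint=True` to `Trainer.train()` requests the same behavior explicitly.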
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:733840b82584d112699caee1f2ed0810e15082aa2fee8a5abc68161d0cbd9217
+ oid sha256:714edc226dc208cc3e36ade88f6e2e415e590e9525779410fcc66a6d67118dcd
  size 5240
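`training_args.bin` is stored through Git LFS, so the diff above only swaps the pointer file's `oid` (the SHA-256 of the new binary) while the size stays 5240 bytes. The binary itself is the pickled `TrainingArguments` that `Trainer` writes next to each checkpoint; a hedged sketch of inspecting it (assumes a compatible `torch`/`transformers` install; `weights_only=False` is required on newer PyTorch because the file holds a pickled Python object, not tensors):

```python
# Load the pickled TrainingArguments saved by Trainer alongside checkpoints.
import torch

args = torch.load("training_args.bin", weights_only=False)
print(args.learning_rate, args.warmup_steps, args.push_to_hub)  # 5e-05 500 True
```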