Model save

- README.md +5 -5
- egy_training_log.txt +143 -0
- training_args.bin +1 -1
README.md CHANGED
@@ -18,11 +18,11 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [riotu-lab/ArabianGPT-01B](https://huggingface.co/riotu-lab/ArabianGPT-01B) on an unknown dataset.
 It achieves the following results on the evaluation set:
--
--
-- Rouge1: 0.
-- Rouge2: 0.
-- Rougel: 0.
+- Bleu: 0.3119
+- Loss: 2.0654
+- Rouge1: 0.5862
+- Rouge2: 0.3489
+- Rougel: 0.5479
 
 ## Model description
 
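For context on the metrics added to the README above (Bleu, Rouge1/2/L, eval loss), here is a minimal sketch of how such scores are typically computed for a causal-LM fine-tune with the Hugging Face `evaluate` library. The checkpoint id below is the base model named in the card; the prompt/reference pair and generation settings are placeholders for illustration, not taken from this repository.

```python
import evaluate
from transformers import AutoModelForCausalLM, AutoTokenizer

# Base model id from the card; in practice the fine-tuned weights from this repo would be loaded instead.
checkpoint = "riotu-lab/ArabianGPT-01B"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# Placeholder evaluation pair -- the real evaluation set is not part of this commit.
prompt = "..."
reference = "..."

inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
prediction = tokenizer.decode(output_ids[0], skip_special_tokens=True)

rouge = evaluate.load("rouge")
bleu = evaluate.load("bleu")
print(rouge.compute(predictions=[prediction], references=[reference]))  # rouge1 / rouge2 / rougeL
print(bleu.compute(predictions=[prediction], references=[reference]))   # bleu
```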
egy_training_log.txt CHANGED
@@ -619,3 +619,146 @@ INFO:datasets.arrow_dataset:Loading cached processed dataset at /home/iais_maren
 WARNING:accelerate.utils.other:Detected kernel version 5.4.0, which is below the recommended minimum of 5.5.0; this can cause the process to hang. It is recommended to upgrade the kernel to the minimum version or higher.
 INFO:__main__:*** Evaluate ***
 INFO:absl:Using default tokenizer.
+WARNING:__main__:Process rank: 0, device: cuda:0, n_gpu: 1, distributed training: False, 16-bits training: False
+INFO:__main__:Training/evaluation parameters TrainingArguments(
+_n_gpu=1,
+accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False},
+adafactor=False,
+adam_beta1=0.9,
+adam_beta2=0.999,
+adam_epsilon=1e-08,
+auto_find_batch_size=False,
+batch_eval_metrics=False,
+bf16=False,
+bf16_full_eval=False,
+data_seed=None,
+dataloader_drop_last=False,
+dataloader_num_workers=0,
+dataloader_persistent_workers=False,
+dataloader_pin_memory=True,
+dataloader_prefetch_factor=None,
+ddp_backend=None,
+ddp_broadcast_buffers=None,
+ddp_bucket_cap_mb=None,
+ddp_find_unused_parameters=None,
+ddp_timeout=1800,
+debug=[],
+deepspeed=None,
+disable_tqdm=False,
+dispatch_batches=None,
+do_eval=True,
+do_predict=False,
+do_train=True,
+eval_accumulation_steps=None,
+eval_delay=0,
+eval_do_concat_batches=True,
+eval_on_start=False,
+eval_steps=None,
+eval_strategy=IntervalStrategy.EPOCH,
+eval_use_gather_object=False,
+evaluation_strategy=epoch,
+fp16=False,
+fp16_backend=auto,
+fp16_full_eval=False,
+fp16_opt_level=O1,
+fsdp=[],
+fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False},
+fsdp_min_num_params=0,
+fsdp_transformer_layer_cls_to_wrap=None,
+full_determinism=False,
+gradient_accumulation_steps=1,
+gradient_checkpointing=False,
+gradient_checkpointing_kwargs=None,
+greater_is_better=False,
+group_by_length=False,
+half_precision_backend=auto,
+hub_always_push=False,
+hub_model_id=None,
+hub_private_repo=False,
+hub_strategy=HubStrategy.EVERY_SAVE,
+hub_token=<HUB_TOKEN>,
+ignore_data_skip=False,
+include_inputs_for_metrics=False,
+include_num_input_tokens_seen=False,
+include_tokens_per_second=False,
+jit_mode_eval=False,
+label_names=None,
+label_smoothing_factor=0.0,
+learning_rate=5e-05,
+length_column_name=length,
+load_best_model_at_end=True,
+local_rank=0,
+log_level=passive,
+log_level_replica=warning,
+log_on_each_node=True,
+logging_dir=/home/iais_marenpielka/Bouthaina/results_fixed/runs/Aug25_16-40-59_lmgpu-node-09,
+logging_first_step=False,
+logging_nan_inf_filter=True,
+logging_steps=500,
+logging_strategy=IntervalStrategy.EPOCH,
+lr_scheduler_kwargs={},
+lr_scheduler_type=SchedulerType.LINEAR,
+max_grad_norm=1.0,
+max_steps=-1,
+metric_for_best_model=loss,
+mp_parameters=,
+neftune_noise_alpha=None,
+no_cuda=False,
+num_train_epochs=3.0,
+optim=OptimizerNames.ADAMW_TORCH,
+optim_args=None,
+optim_target_modules=None,
+output_dir=/home/iais_marenpielka/Bouthaina/results_fixed,
+overwrite_output_dir=False,
+past_index=-1,
+per_device_eval_batch_size=8,
+per_device_train_batch_size=8,
+prediction_loss_only=False,
+push_to_hub=True,
+push_to_hub_model_id=None,
+push_to_hub_organization=None,
+push_to_hub_token=<PUSH_TO_HUB_TOKEN>,
+ray_scope=last,
+remove_unused_columns=True,
+report_to=[],
+restore_callback_states_from_checkpoint=False,
+resume_from_checkpoint=None,
+run_name=/home/iais_marenpielka/Bouthaina/results_fixed,
+save_on_each_node=False,
+save_only_model=False,
+save_safetensors=True,
+save_steps=500,
+save_strategy=IntervalStrategy.EPOCH,
+save_total_limit=None,
+seed=42,
+skip_memory_metrics=True,
+split_batches=None,
+tf32=None,
+torch_compile=False,
+torch_compile_backend=None,
+torch_compile_mode=None,
+torch_empty_cache_steps=None,
+torchdynamo=None,
+tpu_metrics_debug=False,
+tpu_num_cores=None,
+use_cpu=False,
+use_ipex=False,
+use_legacy_prediction_loop=False,
+use_mps_device=False,
+warmup_ratio=0.0,
+warmup_steps=500,
+weight_decay=0.0,
+)
+INFO:__main__:Checkpoint detected, resuming training at /home/iais_marenpielka/Bouthaina/results_fixed/checkpoint-8840. To avoid this behavior, change the `--output_dir` or add `--overwrite_output_dir` to train from scratch.
+INFO:datasets.builder:Using custom data configuration default-93ed01be52df6f6e
+INFO:datasets.info:Loading Dataset Infos from /home/iais_marenpielka/Bouthaina/miniconda3/lib/python3.12/site-packages/datasets/packaged_modules/text
+INFO:datasets.builder:Overwrite dataset info from restored data version if exists.
+INFO:datasets.info:Loading Dataset info from /home/iais_marenpielka/.cache/huggingface/datasets/text/default-93ed01be52df6f6e/0.0.0/96636a050ef51804b84abbfd4f4ad440e01153c24b86293eb5c3b300a41f9101
+INFO:datasets.builder:Found cached dataset text (/home/iais_marenpielka/.cache/huggingface/datasets/text/default-93ed01be52df6f6e/0.0.0/96636a050ef51804b84abbfd4f4ad440e01153c24b86293eb5c3b300a41f9101)
+INFO:datasets.info:Loading Dataset info from /home/iais_marenpielka/.cache/huggingface/datasets/text/default-93ed01be52df6f6e/0.0.0/96636a050ef51804b84abbfd4f4ad440e01153c24b86293eb5c3b300a41f9101
+INFO:datasets.arrow_dataset:Loading cached processed dataset at /home/iais_marenpielka/.cache/huggingface/datasets/text/default-93ed01be52df6f6e/0.0.0/96636a050ef51804b84abbfd4f4ad440e01153c24b86293eb5c3b300a41f9101/cache-4cda59a599643701.arrow
+INFO:datasets.arrow_dataset:Loading cached processed dataset at /home/iais_marenpielka/.cache/huggingface/datasets/text/default-93ed01be52df6f6e/0.0.0/96636a050ef51804b84abbfd4f4ad440e01153c24b86293eb5c3b300a41f9101/cache-d82ef9a45800c64f.arrow
+WARNING:__main__:The tokenizer picked seems to have a very large `model_max_length` (1000000000000000019884624838656). Using block_size=768 instead. You can change that default value by passing --block_size xxx.
+INFO:datasets.arrow_dataset:Loading cached processed dataset at /home/iais_marenpielka/.cache/huggingface/datasets/text/default-93ed01be52df6f6e/0.0.0/96636a050ef51804b84abbfd4f4ad440e01153c24b86293eb5c3b300a41f9101/cache-038f8e8385bf6638.arrow
+INFO:datasets.arrow_dataset:Loading cached processed dataset at /home/iais_marenpielka/.cache/huggingface/datasets/text/default-93ed01be52df6f6e/0.0.0/96636a050ef51804b84abbfd4f4ad440e01153c24b86293eb5c3b300a41f9101/cache-51f1e2b6546273ed.arrow
+WARNING:accelerate.utils.other:Detected kernel version 5.4.0, which is below the recommended minimum of 5.5.0; this can cause the process to hang. It is recommended to upgrade the kernel to the minimum version or higher.
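The TrainingArguments dump added to the log corresponds, in code, to roughly the following configuration. This is a sketch with the values copied from the log itself, assuming a transformers release where `eval_strategy` is the current spelling of `evaluation_strategy`; only the non-default settings are spelled out.

```python
from transformers import TrainingArguments

# Reconstructed from the logged TrainingArguments above.
training_args = TrainingArguments(
    output_dir="/home/iais_marenpielka/Bouthaina/results_fixed",
    do_train=True,
    do_eval=True,
    eval_strategy="epoch",
    logging_strategy="epoch",
    save_strategy="epoch",
    learning_rate=5e-05,
    warmup_steps=500,
    num_train_epochs=3.0,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    load_best_model_at_end=True,
    metric_for_best_model="loss",
    seed=42,
    push_to_hub=True,
)
```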
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:03c19b00ba4d71a785dd4b066aba55db40ee99345b86287a50220d84ab6d8903
 size 5240
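training_args.bin is the serialized TrainingArguments object that Trainer writes with torch.save; the diff above only swaps the Git LFS pointer's sha256. A quick way to inspect the stored arguments locally, as a sketch (`weights_only=False` is needed on recent PyTorch, where `torch.load` defaults to weights-only loading):

```python
import torch

# Load the pickled TrainingArguments saved alongside the model checkpoint.
args = torch.load("training_args.bin", weights_only=False)
print(type(args).__name__)   # TrainingArguments
print(args.learning_rate)    # 5e-05 for this run, per the log above
```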