---
title: Hyenadna Sm 32k Mqtl Classifier Space
emoji: π
colorFrom: gray
colorTo: yellow
sdk: docker
pinned: false
license: creativeml-openrail-m
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
This is the log after fine-tuning for 18 hours. I'm not an expert yet, so for now I'm backing up the terminal log in this README file.
huggingface-mqtl-classification-hyena-dna on main [!] via 🐍 v3.10.12 (venv)
❯ python app.py
.env file loaded successfully.
The token has not been saved to the git credentials helper. Pass `add_to_git_credential=True` in this function directly or `--add-to-git-credential` if using via `huggingface-cli` if you want to set the git credential as well.
Token is valid (permission: fineGrained).
Your token has been saved to /home/soumic/.cache/huggingface/token
Login successful
Logged in to Hugging Face Hub successfully.
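The token notice above suggests passing `add_to_git_credential=True`. A minimal sketch of how that login step might look, assuming the token is read from the `.env` file mentioned above under the name `HF_TOKEN` (the variable name is an assumption, not taken from `app.py`):

```python
import os

from dotenv import load_dotenv  # python-dotenv, for the ".env file loaded" step
from huggingface_hub import login

load_dotenv()  # reads HF_TOKEN (assumed name) from the local .env file

login(
    token=os.environ["HF_TOKEN"],
    add_to_git_credential=True,  # also stores the token as a git credential, silencing the notice
)
```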
INFO:root:api_key = '9eb6a2adfb2645afb39332d36aa7d3a80195e476'
wandb: Using wandb-core as the SDK backend. Please refer to https://wandb.me/wandb-core for more information.
wandb: Currently logged in as: fahimfarhan (notredamians). Use `wandb login --relogin` to force relogin
wandb: WARNING If you're specifying your api key in code, ensure this code is not shared publicly.
wandb: WARNING Consider setting the WANDB_API_KEY environment variable, or running `wandb login` from the command line.
wandb: Appending key for api.wandb.ai to your netrc file: /home/soumic/.netrc
Logged in to wandb successfully.
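The wandb warnings above are about the API key being hard-coded. A hedged sketch of the safer pattern they recommend, assuming the key is exported as `WANDB_API_KEY` instead of written into the code:

```python
import os

import wandb

# wandb reads WANDB_API_KEY automatically; the explicit check just gives a clearer error.
if "WANDB_API_KEY" not in os.environ:
    raise RuntimeError("Set WANDB_API_KEY in the environment or run `wandb login`.")

wandb.login()  # no key in source code, so nothing sensitive can leak from the repo
```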
INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).
Some weights of HyenaDNAForSequenceClassification were not initialized from the model checkpoint at LongSafari/hyenadna-small-32k-seqlen-hf and are newly initialized: ['score.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
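For context, the checkpoint being fine-tuned here loads roughly like this; the classification head (`score.weight`) starts randomly initialized, which is exactly what the warning above reports. `num_labels=2` and `device_map="auto"` are assumptions based on the binary task and the accelerate memory message, not values read from `app.py`.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

checkpoint = "LongSafari/hyenadna-small-32k-seqlen-hf"

tokenizer = AutoTokenizer.from_pretrained(checkpoint, trust_remote_code=True)
model = AutoModelForSequenceClassification.from_pretrained(
    checkpoint,
    num_labels=2,            # assumed: mQTL vs. non-mQTL
    trust_remote_code=True,  # HyenaDNA ships its modeling code on the Hub
    device_map="auto",       # assumed from the accelerate "90% of the memory" message
)
```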
/home/soumic/Codes/mqtl-classification/venv/lib/python3.10/site-packages/transformers/training_args.py:1525: FutureWarning: `evaluation_strategy` is deprecated and will be removed in version 4.46 of 🤗 Transformers. Use `eval_strategy` instead
warnings.warn(
max_steps is given, it will override any value given in num_train_epochs
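The two messages above point at the training arguments: `evaluation_strategy` is deprecated in favor of `eval_strategy`, and `max_steps` (20000 here) overrides `num_train_epochs`. A hedged sketch of arguments consistent with the log; everything not visible in the log (output path, exact learning rate, batch sizes) is a placeholder:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="hyenadna-small-32k-mqtl",  # placeholder path
    max_steps=20_000,                      # matches the 0/20000 progress bar below
    eval_strategy="steps",                 # replaces the deprecated evaluation_strategy
    eval_steps=500,                        # evaluation lines appear every ~500 steps
    logging_steps=500,
    learning_rate=1e-3,                    # assumed; the log starts near 0.00097 and decays
    gradient_checkpointing=True,           # implied by the checkpoint warnings below
    report_to=["wandb"],
    run_name="laptop_run_hyena_dna-mqtl_classification",
)
```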
wandb: W&B API key is configured. Use `wandb login --relogin` to force relogin
wandb: Tracking run with wandb version 0.18.1
wandb: Run data is saved locally in /home/soumic/Codes/mqtl-classification/src/huggingface-mqtl-classification-hyena-dna/wandb/run-20241014_204712-09qzuf97
wandb: Run `wandb offline` to turn off syncing.
wandb: Syncing run laptop_run_hyena_dna-mqtl_classification
wandb: ⭐️ View project at https://wandb.ai/notredamians/huggingface
wandb: 🚀 View run at https://wandb.ai/notredamians/huggingface/runs/09qzuf97
  0% | 0/20000 [00:00<?, ?it/s]
/home/soumic/Codes/mqtl-classification/venv/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py:600: UserWarning: torch.utils.checkpoint: the use_reentrant parameter should be passed explicitly. In version 2.4 we will raise an exception if use_reentrant is not passed. use_reentrant=False is recommended, but if you need to preserve the current default behavior, you can pass use_reentrant=True. Refer to docs for more details on the differences between the two variants.
  return fn(*args, **kwargs)
/home/soumic/Codes/mqtl-classification/venv/lib/python3.10/site-packages/torch/utils/checkpoint.py:295: FutureWarning: `torch.cpu.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cpu', args...)` instead.
  with torch.enable_grad(), device_autocast_ctx, torch.cpu.amp.autocast(**ctx.cpu_autocast_kwargs):  # type: ignore[attr-defined]
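This pair of warnings repeats at every refresh for the rest of the run. One way to make the `use_reentrant` choice explicit, sketched under the assumption that gradient checkpointing is enabled on the model object (the real `app.py` may do it through `TrainingArguments` instead):

```python
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "LongSafari/hyenadna-small-32k-seqlen-hf",
    num_labels=2,
    trust_remote_code=True,
)
# Passing use_reentrant explicitly is what the UserWarning asks for;
# use_reentrant=False is the variant PyTorch recommends going forward.
model.gradient_checkpointing_enable(
    gradient_checkpointing_kwargs={"use_reentrant": False}
)
```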
  2% | 312/20000 [16:47<17:46:15, 3.25s/it]
  2% | 500/20000 [26:59<17:37:10, 3.25s/it]
{'loss': 0.6766, 'grad_norm': 1.1875, 'learning_rate': 0.0009719, 'epoch': 0.03}
{'eval_loss': 0.6609452962875366, 'eval_accuracy': 0.5925, 'eval_roc_auc': 0.6810560000000001, 'eval_precision': 0.7218225419664268, 'eval_recall': 0.301, 'eval_f1': 0.42484121383203954, 'eval_runtime': 40.0723, 'eval_samples_per_second': 49.91, 'eval_steps_per_second': 6.239, 'epoch': 0.03}
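The `eval_*` values above come from a `compute_metrics` callback. The actual callback isn't visible in the log; a plausible sketch for this binary classifier would be:

```python
import numpy as np
from sklearn.metrics import (
    accuracy_score,
    f1_score,
    precision_score,
    recall_score,
    roc_auc_score,
)


def compute_metrics(eval_pred):
    """Plausible shape of the callback behind the eval_* keys in this log."""
    logits, labels = eval_pred
    # Softmax over the two logits to get the positive-class probability for ROC AUC.
    shifted = logits - logits.max(axis=-1, keepdims=True)
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=-1, keepdims=True)
    preds = probs.argmax(axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        "roc_auc": roc_auc_score(labels, probs[:, 1]),
        "precision": precision_score(labels, preds),
        "recall": recall_score(labels, preds),
        "f1": f1_score(labels, preds),
    }
```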
  3% | 562/20000 [31:02<17:32:56, 3.25s/it]
  4% | 874/20000 [47:57<17:17:38, 3.26s/it]
  5% | 1000/20000 [54:48<17:07:49, 3.25s/it]
{'loss': 0.6563, 'grad_norm': 1.3359375, 'learning_rate': 0.0009438, 'epoch': 1.03}
{'eval_loss': 0.6419531106948853, 'eval_accuracy': 0.6285, 'eval_roc_auc': 0.6852790000000001, 'eval_precision': 0.6175663311985361, 'eval_recall': 0.675, 'eval_f1': 0.6450071667462972, 'eval_runtime': 40.1185, 'eval_samples_per_second': 49.852, 'eval_steps_per_second': 6.232, 'epoch': 1.03}
  6% | 1124/20000 [1:02:14<17:03:06, 3.25s/it]
  7% | 1437/20000 [1:19:11<16:48:48, 3.26s/it]
  8% | 1500/20000 [1:22:37<16:45:38, 3.26s/it]
{'loss': 0.6453, 'grad_norm': 2.53125, 'learning_rate': 0.00091565, 'epoch': 2.03}
{'eval_loss': 0.6514531373977661, 'eval_accuracy': 0.635, 'eval_roc_auc': 0.6814979999999999, 'eval_precision': 0.6537585421412301, 'eval_recall': 0.574, 'eval_f1': 0.6112886048988285, 'eval_runtime': 40.428, 'eval_samples_per_second': 49.471, 'eval_steps_per_second': 6.184, 'epoch': 2.03}
  8% | 1687/20000 [1:33:29<16:35:35, 3.26s/it]
 10% | 1999/20000 [1:50:26<16:17:17, 3.26s/it]
 10% | 2000/20000 [1:50:30<16:39:42, 3.33s/it]
{'loss': 0.6446, 'grad_norm': 1.78125, 'learning_rate': 0.00088755, 'epoch': 3.03}
{'eval_loss': 0.6583672165870667, 'eval_accuracy': 0.6235, 'eval_roc_auc': 0.657044, 'eval_precision': 0.6063738156761412, 'eval_recall': 0.704, 'eval_f1': 0.6515502082369273, 'eval_runtime': 40.2026, 'eval_samples_per_second': 49.748, 'eval_steps_per_second': 6.218, 'epoch': 3.03}
 11% | 2249/20000 [2:04:43<16:01:21, 3.25s/it]
 12% | 2500/20000 [2:18:17<15:50:05, 3.26s/it]
 13% | 2562/20000 [2:21:39<15:46:56, 3.26s/it]
{'loss': 0.639, 'grad_norm': 2.28125, 'learning_rate': 0.0008594000000000001, 'epoch': 4.03}
{'eval_loss': 0.6490234136581421, 'eval_accuracy': 0.6315, 'eval_roc_auc': 0.6770584999999999, 'eval_precision': 0.6285434995112414, 'eval_recall': 0.643, 'eval_f1': 0.6356895699456253, 'eval_runtime': 40.3791, 'eval_samples_per_second': 49.531, 'eval_steps_per_second': 6.191, 'epoch': 4.03}
 14% | 2812/20000 [2:35:56<15:33:06, 3.26s/it]
 15% | 3000/20000 [2:46:07<15:21:46, 3.25s/it]
 16% | 3124/20000 [2:52:51<15:16:34, 3.26s/it]
{'loss': 0.6364, 'grad_norm': 2.21875, 'learning_rate': 0.0008313000000000001, 'epoch': 5.03}
{'eval_loss': 0.6497969031333923, 'eval_accuracy': 0.645, 'eval_roc_auc': 0.6795335, 'eval_precision': 0.6636568848758465, 'eval_recall': 0.588, 'eval_f1': 0.623541887592789, 'eval_runtime': 40.1064, 'eval_samples_per_second': 49.867, 'eval_steps_per_second': 6.233, 'epoch': 5.03}
 17% | 3374/20000 [3:07:08<15:00:54, 3.25s/it]
 18% | 3500/20000 [3:13:56<14:54:33, 3.25s/it]
 18% | 3687/20000 [3:24:04<14:45:19, 3.26s/it]
{'loss': 0.6345, 'grad_norm': 2.5625, 'learning_rate': 0.0008031500000000001, 'epoch': 6.03}
{'eval_loss': 0.647531270980835, 'eval_accuracy': 0.6375, 'eval_roc_auc': 0.6821235000000001, 'eval_precision': 0.6619552414605419, 'eval_recall': 0.562, 'eval_f1': 0.6078961600865332, 'eval_runtime': 40.5432, 'eval_samples_per_second': 49.33, 'eval_steps_per_second': 6.166, 'epoch': 6.03}
 20% | 3937/20000 [3:38:21<14:34:09, 3.27s/it]
 20% | 4000/20000 [3:41:46<14:29:32, 3.26s/it]
 21% | 4249/20000 [3:55:18<14:15:43, 3.26s/it]
{'loss': 0.6374, 'grad_norm': 1.5625, 'learning_rate': 0.00077505, 'epoch': 7.03}
{'eval_loss': 0.6538984179496765, 'eval_accuracy': 0.632, 'eval_roc_auc': 0.6728335, 'eval_precision': 0.6545667447306791, 'eval_recall': 0.559, 'eval_f1': 0.6030204962243797, 'eval_runtime': 40.4373, 'eval_samples_per_second': 49.459, 'eval_steps_per_second': 6.182, 'epoch': 7.03}
 22% | 4499/20000 [4:09:35<14:01:28, 3.26s/it]
 22% | 4500/20000 [4:09:37<66:39:15, 15.48s/it]
 24% | 4812/20000 [4:26:33<13:44:03, 3.26s/it]
 25% | 5000/20000 [4:36:45<13:33:37, 3.25s/it]
{'loss': 0.6332, 'grad_norm': 1.453125, 'learning_rate': 0.0007469, 'epoch': 8.03}
{'eval_loss': 0.6459375023841858, 'eval_accuracy': 0.6335, 'eval_roc_auc': 0.6818645000000001, 'eval_precision': 0.6606498194945848, 'eval_recall': 0.549, 'eval_f1': 0.5996723102129984, 'eval_runtime': 40.1761, 'eval_samples_per_second': 49.781, 'eval_steps_per_second': 6.223, 'epoch': 8.03}
 25% | 5062/20000 [4:40:48<13:30:16, 3.25s/it]
 27% | 5374/20000 [4:57:43<13:12:38, 3.25s/it]
 28% | 5500/20000 [5:04:33<13:06:11, 3.25s/it]
{'loss': 0.6337, 'grad_norm': 1.296875, 'learning_rate': 0.0007188, 'epoch': 9.03}
{'eval_loss': 0.6479921936988831, 'eval_accuracy': 0.6315, 'eval_roc_auc': 0.6773454999999999, 'eval_precision': 0.6394485683987274, 'eval_recall': 0.603, 'eval_f1': 0.6206896551724138, 'eval_runtime': 40.0385, 'eval_samples_per_second': 49.952, 'eval_steps_per_second': 6.244, 'epoch': 9.03}
 28% | 5624/20000 [5:11:59<12:59:09, 3.25s/it]
 30% | 5937/20000 [5:28:55<12:42:12, 3.25s/it]
 30% | 6000/20000 [5:32:20<12:38:56, 3.25s/it]
{'loss': 0.6316, 'grad_norm': 1.546875, 'learning_rate': 0.00069065, 'epoch': 10.03}
{'eval_loss': 0.6446093916893005, 'eval_accuracy': 0.6365, 'eval_roc_auc': 0.6814725, 'eval_precision': 0.6463022508038585, 'eval_recall': 0.603, 'eval_f1': 0.6239006725297465, 'eval_runtime': 40.1401, 'eval_samples_per_second': 49.825, 'eval_steps_per_second': 6.228, 'epoch': 10.03}
 31% | 6187/20000 [5:43:09<12:28:12, 3.25s/it]
 32% | 6499/20000 [6:00:03<12:11:07, 3.25s/it]
 32% | 6500/20000 [6:00:06<12:27:39, 3.32s/it]
{'loss': 0.6321, 'grad_norm': 1.8828125, 'learning_rate': 0.00066255, 'epoch': 11.03}
{'eval_loss': 0.6490703225135803, 'eval_accuracy': 0.638, 'eval_roc_auc': 0.6788120000000001, 'eval_precision': 0.6682926829268293, 'eval_recall': 0.548, 'eval_f1': 0.6021978021978022, 'eval_runtime': 40.1625, 'eval_samples_per_second': 49.798, 'eval_steps_per_second': 6.225, 'epoch': 11.03}
 34% | 6749/20000 [6:14:18<11:58:43, 3.25s/it]
 35% | 7000/20000 [6:27:53<11:44:05, 3.25s/it]
 35% | 7062/20000 [6:31:15<11:40:39, 3.25s/it]
{'loss': 0.6302, 'grad_norm': 1.4765625, 'learning_rate': 0.0006344, 'epoch': 12.03}
{'eval_loss': 0.6459453105926514, 'eval_accuracy': 0.6415, 'eval_roc_auc': 0.6813699999999999, 'eval_precision': 0.6639629200463499, 'eval_recall': 0.573, 'eval_f1': 0.6151368760064412, 'eval_runtime': 40.2064, 'eval_samples_per_second': 49.743, 'eval_steps_per_second': 6.218, 'epoch': 12.03}
 37% | 7312/20000 [6:45:28<11:28:13, 3.25s/it]
 38% | 7500/20000 [6:55:40<11:18:18, 3.26s/it]
 38% | 7624/20000 [7:02:24<11:11:30, 3.26s/it]
{'loss': 0.6313, 'grad_norm': 2.15625, 'learning_rate': 0.0006062999999999999, 'epoch': 13.03}
{'eval_loss': 0.6467031240463257, 'eval_accuracy': 0.632, 'eval_roc_auc': 0.682105, 'eval_precision': 0.6586538461538461, 'eval_recall': 0.548, 'eval_f1': 0.5982532751091703, 'eval_runtime': 40.1557, 'eval_samples_per_second': 49.806, 'eval_steps_per_second': 6.226, 'epoch': 13.03}
 39% | 7874/20000 [7:16:40<10:57:30, 3.25s/it]
 40% | 8000/20000 [7:23:28<10:51:51, 3.26s/it]
 41% | 8187/20000 [7:33:37<10:42:01, 3.26s/it]
{'loss': 0.6285, 'grad_norm': 1.71875, 'learning_rate': 0.00057815, 'epoch': 14.03}
{'eval_loss': 0.647335946559906, 'eval_accuracy': 0.6335, 'eval_roc_auc': 0.6786475, 'eval_precision': 0.6539792387543253, 'eval_recall': 0.567, 'eval_f1': 0.6073915372254954, 'eval_runtime': 40.1999, 'eval_samples_per_second': 49.751, 'eval_steps_per_second': 6.219, 'epoch': 14.03}
 42% | 8437/20000 [7:47:52<10:27:11, 3.25s/it]
 42% | 8500/20000 [7:51:17<10:23:14, 3.25s/it]
 44% | 8749/20000 [8:04:46<10:09:47, 3.25s/it]
{'loss': 0.6297, 'grad_norm': 1.96875, 'learning_rate': 0.0005500500000000001, 'epoch': 15.03}
{'eval_loss': 0.6463281512260437, 'eval_accuracy': 0.6265, 'eval_roc_auc': 0.6853465000000001, 'eval_precision': 0.6595208070617906, 'eval_recall': 0.523, 'eval_f1': 0.5833798103736754, 'eval_runtime': 40.142, 'eval_samples_per_second': 49.823, 'eval_steps_per_second': 6.228, 'epoch': 15.03}
45%| 8999/20000 [8:19:01<9:56:02, 3.25s/it]
45%| 9000/20000 [8:19:03<46:59:43, 15.38s/it]
47%| 9311/20000 [8:35:54<9:40:01, 3.26s/it]
48%| 9500/20000 [8:46:09<9:29:21, 3.25s/it]
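The two warnings that repeat all through this run are the torch.utils.checkpoint `use_reentrant` UserWarning and the `torch.cpu.amp.autocast` FutureWarning. The second comes from inside torch's own checkpoint code, so there is nothing to do about it here besides eventually upgrading torch; the first is actionable, since the warning only wants `use_reentrant` passed explicitly. A sketch of how that could look (assuming a transformers version that accepts `gradient_checkpointing_kwargs`; the output dir is a placeholder):

```python
from transformers import TrainingArguments

# Pass use_reentrant explicitly, as the UserWarning asks, instead of relying
# on the old default. use_reentrant=False is the recommended variant.
training_args = TrainingArguments(
    output_dir="output-hyena-dna-mqtl",  # placeholder path
    gradient_checkpointing=True,
    gradient_checkpointing_kwargs={"use_reentrant": False},
)

# When calling the checkpoint API directly, the same fix is:
#   torch.utils.checkpoint.checkpoint(block, hidden_states, use_reentrant=False)
# and the FutureWarning's new spelling for CPU autocast is:
#   with torch.amp.autocast("cpu"):
#       ...
```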
{'loss': 0.6293, 'grad_norm': 1.296875, 'learning_rate': 0.0005219500000000001, 'epoch': 16.03}
{'eval_loss': 0.6480234265327454, 'eval_accuracy': 0.6305, 'eval_roc_auc': 0.6804485, 'eval_precision': 0.660122699386503, 'eval_recall': 0.538, 'eval_f1': 0.5928374655647383, 'eval_runtime': 40.1318, 'eval_samples_per_second': 49.836, 'eval_steps_per_second': 6.229, 'epoch': 16.03}
48%| 9561/20000 [8:50:11<9:25:56, 3.25s/it]
49%| 9874/20000 [9:07:07<9:09:02, 3.25s/it]
50%| 10000/20000 [9:13:57<9:02:19, 3.25s/it]
{'loss': 0.627, 'grad_norm': 2.265625, 'learning_rate': 0.0004938000000000001, 'epoch': 17.03}
{'eval_loss': 0.6483984589576721, 'eval_accuracy': 0.6355, 'eval_roc_auc': 0.6856035, 'eval_precision': 0.6833558863328822, 'eval_recall': 0.505, 'eval_f1': 0.5807935595169638, 'eval_runtime': 40.13, 'eval_samples_per_second': 49.838, 'eval_steps_per_second': 6.23, 'epoch': 17.03}
51%| 10124/20000 [9:21:22<8:56:14, 3.26s/it]
52%| 10436/20000 [9:38:17<8:38:39, 3.25s/it]
52%| 10500/20000 [9:41:46<8:34:54, 3.25s/it]
{'loss': 0.6275, 'grad_norm': 1.390625, 'learning_rate': 0.0004657, 'epoch': 18.03}
{'eval_loss': 0.6519452929496765, 'eval_accuracy': 0.6305, 'eval_roc_auc': 0.6805995, 'eval_precision': 0.6905109489051094, 'eval_recall': 0.473, 'eval_f1': 0.5614243323442136, 'eval_runtime': 40.2132, 'eval_samples_per_second': 49.735, 'eval_steps_per_second': 6.217, 'epoch': 18.03}
53%| 10686/20000 [9:52:34<8:25:15, 3.25s/it]
55%| 10999/20000 [10:09:31<8:08:19, 3.26s/it]
55%| 11000/20000 [10:09:34<8:19:01, 3.33s/it]
{'loss': 0.6252, 'grad_norm': 3.34375, 'learning_rate': 0.00043755, 'epoch': 19.03}
{'eval_loss': 0.6541953086853027, 'eval_accuracy': 0.629, 'eval_roc_auc': 0.6794749999999998, 'eval_precision': 0.6827195467422096, 'eval_recall': 0.482, 'eval_f1': 0.5650644783118406, 'eval_runtime': 40.0556, 'eval_samples_per_second': 49.931, 'eval_steps_per_second': 6.241, 'epoch': 19.03}
56%| 11249/20000 [10:23:44<7:53:19, 3.25s/it]
57%| 11500/20000 [10:37:17<7:39:27, 3.24s/it]
58%| 11561/20000 [10:40:36<7:37:39, 3.25s/it]
{'loss': 0.6261, 'grad_norm': 1.109375, 'learning_rate': 0.00040945, 'epoch': 20.03}
{'eval_loss': 0.6494452953338623, 'eval_accuracy': 0.627, 'eval_roc_auc': 0.682218, 'eval_precision': 0.6798866855524079, 'eval_recall': 0.48, 'eval_f1': 0.5627198124267292, 'eval_runtime': 40.2047, 'eval_samples_per_second': 49.745, 'eval_steps_per_second': 6.218, 'epoch': 20.03}
59%| 11811/20000 [10:54:52<7:24:27, 3.26s/it]
60%| 12000/20000 [11:05:04<7:13:33, 3.25s/it]
61%| 12124/20000 [11:11:48<7:07:20, 3.26s/it]
{'loss': 0.6241, 'grad_norm': 3.203125, 'learning_rate': 0.0003813, 'epoch': 21.03}
{'eval_loss': 0.6481562256813049, 'eval_accuracy': 0.634, 'eval_roc_auc': 0.6827625000000002, 'eval_precision': 0.6744791666666666, 'eval_recall': 0.518, 'eval_f1': 0.5859728506787331, 'eval_runtime': 40.2872, 'eval_samples_per_second': 49.644, 'eval_steps_per_second': 6.205, 'epoch': 21.03}
62%| 12374/20000 [11:26:04<6:53:48, 3.26s/it]
62%| 12500/20000 [11:32:54<6:47:35, 3.26s/it]
63%| 12686/20000 [11:43:01<6:38:04, 3.27s/it]
{'loss': 0.6242, 'grad_norm': 2.28125, 'learning_rate': 0.0003532, 'epoch': 22.03}
{'eval_loss': 0.649734377861023, 'eval_accuracy': 0.6275, 'eval_roc_auc': 0.6792649999999999, 'eval_precision': 0.6768377253814147, 'eval_recall': 0.488, 'eval_f1': 0.5671121441022661, 'eval_runtime': 40.3914, 'eval_samples_per_second': 49.516, 'eval_steps_per_second': 6.189, 'epoch': 22.03}
65%| 12936/20000 [11:57:21<6:24:52, 3.27s/it]
65%| 13000/20000 [12:00:48<6:21:27, 3.27s/it]
66%| 13249/20000 [12:14:22<6:07:38, 3.27s/it]
{'loss': 0.6222, 'grad_norm': 2.34375, 'learning_rate': 0.00032505, 'epoch': 23.03}
{'eval_loss': 0.6509531140327454, 'eval_accuracy': 0.632, 'eval_roc_auc': 0.6766355, 'eval_precision': 0.6788617886178862, 'eval_recall': 0.501, 'eval_f1': 0.5765247410817032, 'eval_runtime': 40.5007, 'eval_samples_per_second': 49.382, 'eval_steps_per_second': 6.173, 'epoch': 23.03}
67%| 13499/20000 [12:28:40<5:54:09, 3.27s/it]
68%| 13500/20000 [12:28:43<28:00:10, 15.51s/it]
69%| 13811/20000 [12:45:39<5:37:10, 3.27s/it]
70%| 14000/20000 [12:55:57<5:27:02, 3.27s/it]
{'loss': 0.6225, 'grad_norm': 2.546875, 'learning_rate': 0.00029695, 'epoch': 24.03}
{'eval_loss': 0.6520390510559082, 'eval_accuracy': 0.6295, 'eval_roc_auc': 0.6782395000000001, 'eval_precision': 0.6885007278020379, 'eval_recall': 0.473, 'eval_f1': 0.5607587433313574, 'eval_runtime': 40.4911, 'eval_samples_per_second': 49.394, 'eval_steps_per_second': 6.174, 'epoch': 24.03}
70%| 14061/20000 [13:00:00<5:23:30, 3.27s/it]
72%| 14374/20000 [13:17:02<5:06:57, 3.27s/it]
72%| 14500/20000 [13:23:54<5:00:11, 3.27s/it]
{'loss': 0.6207, 'grad_norm': 2.859375, 'learning_rate': 0.0002688, 'epoch': 25.03}
{'eval_loss': 0.6483515501022339, 'eval_accuracy': 0.6285, 'eval_roc_auc': 0.6745335000000001, 'eval_precision': 0.6604244694132334, 'eval_recall': 0.529, 'eval_f1': 0.5874514158800667, 'eval_runtime': 40.521, 'eval_samples_per_second': 49.357, 'eval_steps_per_second': 6.17, 'epoch': 25.03}
73%| 14624/20000 [13:31:22<4:53:21, 3.27s/it]
75%| 14936/20000 [13:48:24<4:36:21, 3.27s/it]
75%| 15000/20000 [13:51:53<4:32:50, 3.27s/it]
{'loss': 0.6209, 'grad_norm': 2.625, 'learning_rate': 0.0002407, 'epoch': 26.03}
{'eval_loss': 0.6515390872955322, 'eval_accuracy': 0.622, 'eval_roc_auc': 0.677573, 'eval_precision': 0.6713483146067416, 'eval_recall': 0.478, 'eval_f1': 0.5584112149532711, 'eval_runtime': 40.6973, 'eval_samples_per_second': 49.143, 'eval_steps_per_second': 6.143, 'epoch': 26.03}
76%| 15186/20000 [14:02:46<4:23:02, 3.28s/it]
77%| 15499/20000 [14:19:48<4:05:49, 3.28s/it]
78%| 15500/20000 [14:19:52<4:11:34, 3.35s/it]
{'loss': 0.6186, 'grad_norm': 3.5625, 'learning_rate': 0.00021255, 'epoch': 27.03}
{'eval_loss': 0.6510000228881836, 'eval_accuracy': 0.6215, 'eval_roc_auc': 0.6764669999999999, 'eval_precision': 0.6613545816733067, 'eval_recall': 0.498, 'eval_f1': 0.5681688533941814, 'eval_runtime': 40.7851, 'eval_samples_per_second': 49.038, 'eval_steps_per_second': 6.13, 'epoch': 27.03}
79%| 15749/20000 [14:34:10<3:52:32, 3.28s/it]
80%| 16000/20000 [14:47:54<3:39:14, 3.29s/it]
80%| 16061/20000 [14:51:15<3:35:40, 3.29s/it]
{'loss': 0.6187, 'grad_norm': 3.3125, 'learning_rate': 0.00018445, 'epoch': 28.03}
{'eval_loss': 0.6518672108650208, 'eval_accuracy': 0.622, 'eval_roc_auc': 0.6765534999999999, 'eval_precision': 0.6635388739946381, 'eval_recall': 0.495, 'eval_f1': 0.5670103092783505, 'eval_runtime': 40.841, 'eval_samples_per_second': 48.97, 'eval_steps_per_second': 6.121, 'epoch': 28.03}
82%| 16311/20000 [15:05:40<3:22:08, 3.29s/it]
82%| 16500/20000 [15:15:59<3:11:49, 3.29s/it]
83%| 16624/20000 [15:22:47<3:04:59, 3.29s/it]
{'loss': 0.6168, 'grad_norm': 4.75, 'learning_rate': 0.0001563, 'epoch': 29.03}
{'eval_loss': 0.6528280973434448, 'eval_accuracy': 0.6225, 'eval_roc_auc': 0.674847, 'eval_precision': 0.6639892904953146, 'eval_recall': 0.496, 'eval_f1': 0.5678305666857469, 'eval_runtime': 40.8188, 'eval_samples_per_second': 48.997, 'eval_steps_per_second': 6.125, 'epoch': 29.03}
84%| 16874/20000 [15:37:11<2:51:19, 3.29s/it]
85%| 17000/20000 [15:44:05<2:44:29, 3.29s/it]
86%| 17186/20000 [15:54:17<2:34:17, 3.29s/it]
{'loss': 0.617, 'grad_norm': 3.40625, 'learning_rate': 0.0001282, 'epoch': 30.03}
{'eval_loss': 0.6515468955039978, 'eval_accuracy': 0.6255, 'eval_roc_auc': 0.6761704999999999, 'eval_precision': 0.6640522875816993, 'eval_recall': 0.508, 'eval_f1': 0.5756373937677054, 'eval_runtime': 40.8487, 'eval_samples_per_second': 48.961, 'eval_steps_per_second': 6.12, 'epoch': 30.03}
87%| 17436/20000 [16:08:42<2:20:42, 3.29s/it]
88%| 17500/20000 [16:12:11<2:17:02, 3.29s/it]
89%| 17748/20000 [16:25:48<2:03:38, 3.29s/it]
{'loss': 0.6153, 'grad_norm': 3.5, 'learning_rate': 0.00010005, 'epoch': 31.03}
{'eval_loss': 0.6516093611717224, 'eval_accuracy': 0.625, 'eval_roc_auc': 0.675067, 'eval_precision': 0.6619170984455959, 'eval_recall': 0.511, 'eval_f1': 0.5767494356659142, 'eval_runtime': 40.8897, 'eval_samples_per_second': 48.912, 'eval_steps_per_second': 6.114, 'epoch': 31.03}
90%| 17999/20000 [16:40:15<1:49:50, 3.29s/it]
90%| 18000/20000 [16:40:19<8:41:40, 15.65s/it]
92%| 18311/20000 [16:57:23<1:32:40, 3.29s/it]
92%| 18500/20000 [17:07:45<1:22:23, 3.30s/it]
{'loss': 0.6157, 'grad_norm': 5.03125, 'learning_rate': 7.195e-05, 'epoch': 32.03}
{'eval_loss': 0.6506797075271606, 'eval_accuracy': 0.6255, 'eval_roc_auc': 0.67511, 'eval_precision': 0.6590621039290241, 'eval_recall': 0.52, 'eval_f1': 0.5813303521520402, 'eval_runtime': 40.9702, 'eval_samples_per_second': 48.816, 'eval_steps_per_second': 6.102, 'epoch': 32.03}
93%| 18561/20000 [17:11:49<1:18:54, 3.29s/it]
94%| 18873/20000 [17:28:54<1:01:49, 3.29s/it]
95%| 19000/20000 [17:35:52<54:53, 3.29s/it]
{'loss': 0.6152, 'grad_norm': 3.046875, 'learning_rate': 4.385e-05, 'epoch': 33.03}
{'eval_loss': 0.650265634059906, 'eval_accuracy': 0.6235, 'eval_roc_auc': 0.6751385, 'eval_precision': 0.6549560853199499, 'eval_recall': 0.522, 'eval_f1': 0.5809682804674458, 'eval_runtime': 40.9015, 'eval_samples_per_second': 48.898, 'eval_steps_per_second': 6.112, 'epoch': 33.03}
 96%|βββββββββββββββββββββββββββββββββ | 19123/20000 [17:43:21<48:07, 3.29s/it]
 97%|βββββββββββββββββββββββββββββββββ | 19436/20000 [18:00:29<30:56, 3.29s/it]
 98%|ββββββββββββββββββββββββββββββββββ| 19500/20000 [18:04:00<27:26, 3.29s/it]
{'loss': 0.6138, 'grad_norm': 6.0, 'learning_rate': 1.57e-05, 'epoch': 34.03}
{'eval_loss': 0.6503046751022339, 'eval_accuracy': 0.625, 'eval_roc_auc': 0.6752875, 'eval_precision': 0.656641604010025, 'eval_recall': 0.524, 'eval_f1': 0.5828698553948832, 'eval_runtime': 40.8155, 'eval_samples_per_second': 49.001, 'eval_steps_per_second': 6.125, 'epoch': 34.03}
 98%|ββββββββββββββββββββββββββββββββββ| 19686/20000 [18:14:55<17:14, 3.29s/it]
100%|ββββββββββββββββββββββββββββββββββ| 19998/20000 [18:32:01<00:06, 3.29s/it]
{'loss': 0.6132, 'grad_norm': 4.0, 'learning_rate': 0.0, 'epoch': 35.02}
{'eval_loss': 0.6502890586853027, 'eval_accuracy': 0.6245, 'eval_roc_auc': 0.6752564999999999, 'eval_precision': 0.655819774718398, 'eval_recall': 0.524, 'eval_f1': 0.5825458588104503, 'eval_runtime': 40.8202, 'eval_samples_per_second': 48.995, 'eval_steps_per_second': 6.124, 'epoch': 35.02}
{'train_runtime': 66770.9915, 'train_samples_per_second': 9.585, 'train_steps_per_second': 0.3, 'train_loss': 0.628925074005127, 'epoch': 35.02}
100%|ββββββββββββββββββββββββββββββββββ| 20000/20000 [18:32:49<00:00, 3.34s/it]
result = TrainOutput(global_step=20000, training_loss=0.628925074005127, metrics={'train_runtime': 66770.9915, 'train_samples_per_second': 9.585, 'train_steps_per_second': 0.3, 'total_flos': 5.030096633856e+16, 'train_loss': 0.628925074005127, 'epoch': 35.0156796875})
test_results = {'eval_loss': 0.6681622266769409, 'eval_accuracy': 0.6058029014507254, 'eval_roc_auc': 0.6498473473473474, 'eval_precision': 0.6168327796234773, 'eval_recall': 0.5575575575575575, 'eval_f1': 0.5856992639327024, 'eval_runtime': 40.8408, 'eval_samples_per_second': 48.946, 'eval_steps_per_second': 6.121, 'epoch': 35.0156796875}
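The `result = TrainOutput(...)` and `test_results = {...}` lines look like the return values of `Trainer.train()` followed by `Trainer.evaluate()` on the test split (which keeps the default `eval_` key prefix). A rough sketch of that flow; every object passed in is a placeholder, and the arguments shown are trimmed down, not the real configuration:

```python
# Hedged sketch, not the actual app.py.
from transformers import Trainer, TrainingArguments

def train_and_test(model, train_ds, val_ds, test_ds, compute_metrics):
    args = TrainingArguments(
        output_dir="hyena-dna-mqtl-output",  # placeholder
        max_steps=20_000,                    # matches the 20000-step progress bar
    )
    trainer = Trainer(
        model=model,
        args=args,
        train_dataset=train_ds,
        eval_dataset=val_ds,
        compute_metrics=compute_metrics,
    )
    result = trainer.train()                                # -> TrainOutput(global_step, training_loss, metrics)
    test_results = trainer.evaluate(eval_dataset=test_ds)   # dict of eval_* metrics
    print(f"result = {result}")
    print(f"test_results = {test_results}")
    return result, test_results
```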
pytorch_model.bin: 100%|βββββββββββββββββββ| 8.17M/8.17M [00:03<00:00, 2.28MB/s]
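The `pytorch_model.bin` upload right after training is the fine-tuned checkpoint going to the Hugging Face Hub. A hedged sketch of how such a push is usually done; the helper name and repo id below are made up, not the real repo behind this Space:

```python
# Hypothetical helper for the upload step above.
def push_finetuned(model, tokenizer, repo_id="your-username/hyenadna-sm-32k-mqtl-classifier"):
    model.push_to_hub(repo_id)       # uploads the model weights and config
    tokenizer.push_to_hub(repo_id)   # uploads the tokenizer files
    # Alternatively, a Trainer set up with push_to_hub=True can call trainer.push_to_hub().
```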
wandb: π View run laptop_run_hyena_dna-mqtl_classification at: https://wandb.ai/notredamians/huggingface/runs/09qzuf97
wandb: Find logs at: wandb/run-20241014_204712-09qzuf97/logs
WARNING:urllib3.connectionpool:Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7e61f8599900>: Failed to establish a new connection: [Errno 111] Connection refused')': /api/4504800232407040/envelope/
(the same connection-refused retry warning for the /api/4504800232407040/envelope/ endpoint repeats five more times as the retries count down)
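These retries at the very end hit what looks like an error-reporting (Sentry-style) endpoint, so the training result itself is unaffected. If the noise matters, such reporting can usually be switched off with environment variables; which library owns that endpoint is a guess here, so treat this as a sketch rather than a guaranteed fix:

```python
# Sketch: set these before importing wandb / the Hugging Face libraries.
# Assumption: the failing reports come from wandb error reporting and/or HF Hub telemetry.
import os

os.environ["WANDB_ERROR_REPORTING"] = "false"   # wandb's built-in error reporting
os.environ["HF_HUB_DISABLE_TELEMETRY"] = "1"    # Hugging Face Hub telemetry
```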