marcelovidigal committed
Commit 65a461e
1 Parent(s): 078cd08

Training in progress, epoch 3

model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:9ce3f039cf9ababce135fb32db76851cc5e847c2b8f73dcfa5ad01812d16627f
+oid sha256:52cc5a2e9ccba371808b16ca6dc3bdc707463a7ae53872d4af08cec1fda55cf6
 size 267832560
wandb/debug-internal.log CHANGED
The diff for this file is too large to render. See raw diff
 
wandb/run-20240924_172630-x9iddikd/files/output.log CHANGED
@@ -32,3 +32,4 @@ You should probably TRAIN this model on a down-stream task to be able to use it
 Some weights of DistilBertForSequenceClassification were not initialized from the model checkpoint at distilbert/distilbert-base-uncased and are newly initialized: ['classifier.bias', 'classifier.weight', 'pre_classifier.bias', 'pre_classifier.weight']
 You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
 {'eval_loss': 0.17208045721054077, 'eval_accuracy': 0.939, 'eval_runtime': 40.1609, 'eval_samples_per_second': 24.9, 'eval_steps_per_second': 0.797, 'epoch': 1.0}
+{'eval_loss': 0.24476991593837738, 'eval_accuracy': 0.926, 'eval_runtime': 38.171, 'eval_samples_per_second': 26.198, 'eval_steps_per_second': 0.838, 'epoch': 2.0}
wandb/run-20240924_172630-x9iddikd/files/wandb-summary.json CHANGED
@@ -1 +1 @@
-{"eval/loss": 0.24476991593837738, "eval/accuracy": 0.926, "eval/runtime": 38.171, "eval/samples_per_second": 26.198, "eval/steps_per_second": 0.838, "train/epoch": 2.0, "train/global_step": 250, "_timestamp": 1727221963.048966, "_runtime": 12372.176056861877, "_step": 9, "train/loss": 0.0672, "train/grad_norm": 1.1029362678527832, "train/learning_rate": 4.000000000000001e-06, "train_runtime": 8026.8642, "train_samples_per_second": 2.492, "train_steps_per_second": 0.156, "total_flos": 2396475988298112.0, "train_loss": 0.11480112991333008}
+{"eval/loss": 0.6838799715042114, "eval/accuracy": 0.656, "eval/runtime": 214.6826, "eval/samples_per_second": 4.658, "eval/steps_per_second": 0.149, "train/epoch": 3.0, "train/global_step": 375, "_timestamp": 1727224105.7936158, "_runtime": 14514.920706748962, "_step": 10, "train/loss": 0.0672, "train/grad_norm": 1.1029362678527832, "train/learning_rate": 4.000000000000001e-06, "train_runtime": 8026.8642, "train_samples_per_second": 2.492, "train_steps_per_second": 0.156, "total_flos": 2396475988298112.0, "train_loss": 0.11480112991333008}
wandb/run-20240924_172630-x9iddikd/logs/debug-internal.log CHANGED
The diff for this file is too large to render. See raw diff
 
wandb/run-20240924_172630-x9iddikd/run-x9iddikd.wandb CHANGED
Binary files a/wandb/run-20240924_172630-x9iddikd/run-x9iddikd.wandb and b/wandb/run-20240924_172630-x9iddikd/run-x9iddikd.wandb differ