dados_tokenizados:
DatasetDict({
    train: Dataset({
        features: ['rotulo', 'rotulo_simples', 'text', 'label', 'input_ids', 'attention_mask'],
        num_rows: 4000
    })
    validation: Dataset({
        features: ['rotulo', 'rotulo_simples', 'text', 'label', 'input_ids', 'attention_mask'],
        num_rows: 1000
    })
    test: Dataset({
        features: ['rotulo', 'rotulo_simples', 'text', 'label', 'input_ids', 'attention_mask'],
        num_rows: 1000
    })
})
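The preprocessing code that produced `dados_tokenizados` is not shown here; a minimal sketch that yields a `DatasetDict` with these columns might look like the following (the CSV file names are assumptions, and the `rotulo`, `rotulo_simples`, `text`, and `label` columns are taken to already exist in the source data):

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Hypothetical data files; the real source of the 4000/1000/1000 splits
# is not shown in this section.
dados = load_dataset("csv", data_files={"train": "train.csv",
                                        "validation": "validation.csv",
                                        "test": "test.csv"})

tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased")

def tokenize(batch):
    # Truncate to the model's maximum length; padding can be left to a
    # data collator at batch time.
    return tokenizer(batch["text"], truncation=True)

# map() appends 'input_ids' and 'attention_mask' to every split, which is
# exactly the feature list printed above.
dados_tokenizados = dados.map(tokenize, batched=True)
print(dados_tokenizados)
```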
/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/transformers/tokenization_utils_base.py:1601: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be deprecated in transformers v4.45, and will then be set to `False` by default. For more details check this issue: https:
  warnings.warn(
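The FutureWarning is informational and does not affect results; if desired, it can be silenced by pinning the flag explicitly when the tokenizer is loaded, for example:

```python
tokenizer = AutoTokenizer.from_pretrained(
    "distilbert/distilbert-base-uncased",
    clean_up_tokenization_spaces=True,  # pin the pre-v4.45 default explicitly
)
```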
Some weights of DistilBertForSequenceClassification were not initialized from the model checkpoint at distilbert/distilbert-base-uncased and are newly initialized: ['classifier.bias', 'classifier.weight', 'pre_classifier.bias', 'pre_classifier.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
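This warning is expected rather than a problem: the distilbert-base-uncased checkpoint ships only the pre-trained backbone, so the `pre_classifier` and `classifier` head weights are created fresh and only become useful after fine-tuning. A load consistent with the log might be the following, where `num_labels=2` is an assumption and should be set to the number of classes in the `label` column:

```python
from transformers import AutoModelForSequenceClassification

# The classification head ('pre_classifier' and 'classifier') is initialized
# randomly here, which is what triggers the warning above.
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert/distilbert-base-uncased",
    num_labels=2,  # assumption: binary classification
)
```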
{'eval_loss': 0.21834564208984375, 'eval_accuracy': 0.938, 'eval_runtime': 30.3683, 'eval_samples_per_second': 32.929, 'eval_steps_per_second': 2.075, 'epoch': 1.0}
{'loss': 0.2031, 'grad_norm': 1.1480563879013062, 'learning_rate': 1.2e-05, 'epoch': 2.0}
{'eval_loss': 0.19427122175693512, 'eval_accuracy': 0.938, 'eval_runtime': 42.2287, 'eval_samples_per_second': 23.681, 'eval_steps_per_second': 1.492, 'epoch': 2.0}
{'eval_loss': 0.3195326626300812, 'eval_accuracy': 0.921, 'eval_runtime': 26.5577, 'eval_samples_per_second': 37.654, 'eval_steps_per_second': 2.372, 'epoch': 3.0}
{'loss': 0.0672, 'grad_norm': 1.1029362678527832, 'learning_rate': 4.000000000000001e-06, 'epoch': 4.0}
{'eval_loss': 0.36123067140579224, 'eval_accuracy': 0.925, 'eval_runtime': 26.675, 'eval_samples_per_second': 37.488, 'eval_steps_per_second': 2.362, 'epoch': 4.0}
{'eval_loss': 0.3963741362094879, 'eval_accuracy': 0.926, 'eval_runtime': 25.9784, 'eval_samples_per_second': 38.493, 'eval_steps_per_second': 2.425, 'epoch': 5.0}
{'train_runtime': 8026.8642, 'train_samples_per_second': 2.492, 'train_steps_per_second': 0.156, 'train_loss': 0.11480112991333008, 'epoch': 5.0}
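The training code itself is not shown, but the numbers above are consistent with a vanilla `Trainer` run: evaluation every epoch, an initial learning rate of 2e-5 decaying linearly over 5 epochs (hence 1.2e-5 at epoch 2 and 4e-6 at epoch 4), and about 250 optimizer steps per epoch for 4,000 examples, i.e. a batch size of 16. The train loss appearing only at epochs 2 and 4 also matches the `Trainer` default of logging every 500 steps. A sketch under those assumptions:

```python
import numpy as np
from transformers import Trainer, TrainingArguments

def compute_metrics(eval_pred):
    # Plain accuracy, matching the 'eval_accuracy' entries in the log.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {"accuracy": (preds == labels).mean()}

args = TrainingArguments(
    output_dir="distilbert-run1",    # hypothetical path
    learning_rate=2e-5,              # matches the logged linear schedule
    num_train_epochs=5,
    per_device_train_batch_size=16,  # inferred from ~250 steps/epoch
    per_device_eval_batch_size=16,   # inferred from eval steps/second
    eval_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dados_tokenizados["train"],
    eval_dataset=dados_tokenizados["validation"],
    tokenizer=tokenizer,             # enables dynamic padding per batch
    compute_metrics=compute_metrics,
)
trainer.train()
```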
Some weights of DistilBertForSequenceClassification were not initialized from the model checkpoint at distilbert/distilbert-base-uncased and are newly initialized: ['classifier.bias', 'classifier.weight', 'pre_classifier.bias', 'pre_classifier.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Some weights of DistilBertForSequenceClassification were not initialized from the model checkpoint at distilbert/distilbert-base-uncased and are newly initialized: ['classifier.bias', 'classifier.weight', 'pre_classifier.bias', 'pre_classifier.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
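The initialization warning printing twice before this second run is consistent with passing a `model_init` function to `Trainer`, which builds a fresh model once when the `Trainer` is constructed and again when `train()` starts. The learning-rate entries below (9.2e-6 at epoch 4, 8.4e-6 at epoch 8, 7.6e-6 at epoch 12, 6.8e-6 at epoch 16) fit a linear decay from 1e-5 over 50 epochs, and the 500-step logging interval landing on every fourth epoch implies about 125 steps per epoch, i.e. a batch size of 32. A sketch under those assumptions:

```python
def model_init():
    # A fresh head for every run, so repeated experiments start identically.
    return AutoModelForSequenceClassification.from_pretrained(
        "distilbert/distilbert-base-uncased", num_labels=2
    )

args_long = TrainingArguments(
    output_dir="distilbert-run2",    # hypothetical path
    learning_rate=1e-5,              # matches the logged linear schedule
    num_train_epochs=50,
    per_device_train_batch_size=32,  # inferred from ~125 steps/epoch
    per_device_eval_batch_size=32,   # inferred from eval steps/second
    eval_strategy="epoch",
)

trainer = Trainer(
    model_init=model_init,
    args=args_long,
    train_dataset=dados_tokenizados["train"],
    eval_dataset=dados_tokenizados["validation"],
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)
trainer.train()
```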
{'eval_loss': 0.17208045721054077, 'eval_accuracy': 0.939, 'eval_runtime': 40.1609, 'eval_samples_per_second': 24.9, 'eval_steps_per_second': 0.797, 'epoch': 1.0}
{'eval_loss': 0.24476991593837738, 'eval_accuracy': 0.926, 'eval_runtime': 38.171, 'eval_samples_per_second': 26.198, 'eval_steps_per_second': 0.838, 'epoch': 2.0}
{'eval_loss': 0.6838799715042114, 'eval_accuracy': 0.656, 'eval_runtime': 214.6826, 'eval_samples_per_second': 4.658, 'eval_steps_per_second': 0.149, 'epoch': 3.0}
{'loss': 0.2956, 'grad_norm': 2.695140838623047, 'learning_rate': 9.200000000000002e-06, 'epoch': 4.0}
{'eval_loss': 0.31772053241729736, 'eval_accuracy': 0.87, 'eval_runtime': 37.1806, 'eval_samples_per_second': 26.896, 'eval_steps_per_second': 0.861, 'epoch': 4.0}
{'eval_loss': 0.2808445990085602, 'eval_accuracy': 0.932, 'eval_runtime': 37.3397, 'eval_samples_per_second': 26.781, 'eval_steps_per_second': 0.857, 'epoch': 5.0}
{'eval_loss': 0.3926897644996643, 'eval_accuracy': 0.905, 'eval_runtime': 37.6368, 'eval_samples_per_second': 26.57, 'eval_steps_per_second': 0.85, 'epoch': 6.0}
{'eval_loss': 0.37185582518577576, 'eval_accuracy': 0.922, 'eval_runtime': 37.484, 'eval_samples_per_second': 26.678, 'eval_steps_per_second': 0.854, 'epoch': 7.0}
{'loss': 0.1013, 'grad_norm': 0.6478258371353149, 'learning_rate': 8.400000000000001e-06, 'epoch': 8.0}
{'eval_loss': 0.4580109715461731, 'eval_accuracy': 0.91, 'eval_runtime': 38.2702, 'eval_samples_per_second': 26.13, 'eval_steps_per_second': 0.836, 'epoch': 8.0}
{'eval_loss': 0.4977562129497528, 'eval_accuracy': 0.913, 'eval_runtime': 37.7002, 'eval_samples_per_second': 26.525, 'eval_steps_per_second': 0.849, 'epoch': 9.0}
{'eval_loss': 0.4662289023399353, 'eval_accuracy': 0.92, 'eval_runtime': 38.4422, 'eval_samples_per_second': 26.013, 'eval_steps_per_second': 0.832, 'epoch': 10.0}
{'eval_loss': 0.5506279468536377, 'eval_accuracy': 0.901, 'eval_runtime': 37.4907, 'eval_samples_per_second': 26.673, 'eval_steps_per_second': 0.854, 'epoch': 11.0}
{'loss': 0.0442, 'grad_norm': 0.6364777684211731, 'learning_rate': 7.600000000000001e-06, 'epoch': 12.0}
{'eval_loss': 0.578902006149292, 'eval_accuracy': 0.903, 'eval_runtime': 38.2969, 'eval_samples_per_second': 26.112, 'eval_steps_per_second': 0.836, 'epoch': 12.0}
{'eval_loss': 0.47741687297821045, 'eval_accuracy': 0.92, 'eval_runtime': 37.7268, 'eval_samples_per_second': 26.506, 'eval_steps_per_second': 0.848, 'epoch': 13.0}
{'eval_loss': 0.5484298467636108, 'eval_accuracy': 0.894, 'eval_runtime': 38.013, 'eval_samples_per_second': 26.307, 'eval_steps_per_second': 0.842, 'epoch': 14.0}
{'eval_loss': 0.538878321647644, 'eval_accuracy': 0.909, 'eval_runtime': 38.0368, 'eval_samples_per_second': 26.29, 'eval_steps_per_second': 0.841, 'epoch': 15.0}
{'loss': 0.0268, 'grad_norm': 20.58578109741211, 'learning_rate': 6.800000000000001e-06, 'epoch': 16.0}
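The pattern in this second run is classic overfitting: the training loss keeps falling (0.2956 at epoch 4 down to 0.0268 at epoch 16) while the best validation loss is reached almost immediately (0.172 at epoch 1) and eval_accuracy drifts downward afterwards. One standard remedy, not part of the original code but sketched here for completeness, is to keep the best checkpoint and stop early once the validation loss stops improving:

```python
from transformers import EarlyStoppingCallback, Trainer, TrainingArguments

args_es = TrainingArguments(
    output_dir="distilbert-run2-es",  # hypothetical path
    learning_rate=1e-5,
    num_train_epochs=50,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    eval_strategy="epoch",
    save_strategy="epoch",            # must match eval_strategy for best-model tracking
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
    greater_is_better=False,
)

trainer = Trainer(
    model_init=model_init,
    args=args_es,
    train_dataset=dados_tokenizados["train"],
    eval_dataset=dados_tokenizados["validation"],
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
    # Stop once eval_loss fails to improve for 3 consecutive evaluations.
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],
)
trainer.train()
```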