# AdditiveLLM
This model is a fine-tuned version of distilbert/distilbert-base-uncased on an unknown dataset. Its per-epoch results on the evaluation set are shown in the training results table below.
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
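## How to use

If the checkpoint is published on the Hub, it can be loaded for inference roughly as sketched below. This is a minimal sketch: the repository id is a placeholder, and the task is assumed to be sequence classification because the training log reports accuracy.

```python
# Minimal inference sketch. The model id below is a placeholder; replace it
# with the actual repository id of this checkpoint. Sequence classification
# is an assumption based on the accuracy metric reported in this card.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "your-username/additive-llm-distilbert"  # placeholder repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Example input text", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = logits.argmax(dim=-1).item()
print(model.config.id2label.get(predicted_class, predicted_class))
```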
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3534        | 1.0   | 19907 | 0.3347          | 0.8548   |
| 0.2855        | 2.0   | 39814 | 0.3274          | 0.8521   |
| 0.3284        | 3.0   | 59721 | 0.3146          | 0.8588   |
| 0.2881        | 4.0   | 79628 | 0.3177          | 0.8560   |
| 0.2813        | 5.0   | 99535 | 0.3165          | 0.8563   |
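For reference, a fine-tune of this shape is typically run with the `Trainer` API roughly as sketched below. The dataset, batch size, learning rate, label count, and output directory are placeholders, since the original hyperparameters and training data are not listed in this card; only the five-epoch, per-epoch-evaluated recipe reflects the table above.

```python
# Sketch of a DistilBERT sequence-classification fine-tune with the Trainer API.
# All hyperparameter values marked "placeholder" are assumptions, not the
# settings used for this checkpoint.
import numpy as np
import evaluate
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    DataCollatorWithPadding,
    Trainer,
    TrainingArguments,
)

base = "distilbert/distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=2)  # num_labels is an assumption

# Placeholder dataset; the card does not say what data was used.
dataset = load_dataset("imdb")
dataset = dataset.map(lambda batch: tokenizer(batch["text"], truncation=True), batched=True)

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    return accuracy.compute(predictions=np.argmax(logits, axis=-1), references=labels)

args = TrainingArguments(
    output_dir="distilbert-finetuned",  # placeholder
    num_train_epochs=5,                 # matches the five epochs in the table
    per_device_train_batch_size=16,     # placeholder; original value unknown
    learning_rate=2e-5,                 # placeholder; original value unknown
    eval_strategy="epoch",              # evaluate once per epoch, as in the table
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    data_collator=DataCollatorWithPadding(tokenizer),
    compute_metrics=compute_metrics,
)
trainer.train()
```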