---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: extractive-question-answering
results:
- task:
type: question-answering
name: Question Answering
dataset:
name: autoevaluate/squad-sample
type: autoevaluate/squad-sample
config: autoevaluate--squad-sample
split: test
metrics:
- type: exact_match
value: 70.0
name: Exact Match
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYmZjMTg4Y2RmMDFkNmIyNzJjY2ZjNDI4MjNiNWRhNWFhMjA3NzA2Mjc0ZTA4YzlmOGNiYTZhNDJkZTQwMjE5OCIsInZlcnNpb24iOjF9.W_s5Ug2oa7oAVQnHx0hg7KQkiombED_zkp02oGl3-DwfMf_99lbCHNOBhd9cNeXpaNhuu1BZEB8nR1ZB-KVVCg
- type: f1
value: 76.9929
name: F1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDUzNTJiM2ZkMjA2NGIwMTdhZTliMWZlMzZhNGQ0NWNkNjE2YWI3MTUyZDI4OWUwNmQ5ZDRlNzQ0NGI0MWMyMSIsInZlcnNpb24iOjF9.UpR3sRhyOQ7QA928AK6yjj7a6Lrz3fB7OZ1pVmY5iNLT5cWF-BdCoXLZ9TrkaAisTb-0zyen-VZGdmIGKmaVAw
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# extractive-question-answering
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the [SQuAD](https://huggingface.co/datasets/squad) dataset.
It achieves the following results on the evaluation set:
```
{'exact_match': 72.95175023651845,
'f1': 81.85552166092225,
'latency_in_seconds': 0.008616470915042614,
'samples_per_second': 116.05679516125359,
'total_time_in_seconds': 91.07609757200044}
```
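For reference, here is a minimal inference sketch using the 🤗 Transformers `pipeline` API; the repository id below is an assumption and should be replaced with wherever this checkpoint is hosted.
```python
from transformers import pipeline

# Load the fine-tuned checkpoint for extractive question answering.
# "autoevaluate/extractive-question-answering" is an assumed repository id.
qa_pipeline = pipeline(
    "question-answering",
    model="autoevaluate/extractive-question-answering",
)

result = qa_pipeline(
    question="What architecture is the model based on?",
    context="The model is a fine-tuned version of distilbert-base-uncased trained on the SQuAD dataset.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```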
## Model description
This is the [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) checkpoint fine-tuned for extractive question answering: given a question and a context passage, the model predicts the start and end positions of the answer span within the context.
## Intended uses & limitations
The model is intended for extractive question answering over English text: it selects an answer span from a provided context and cannot generate answers that are not present in the passage. Performance may degrade on domains that differ from the Wikipedia-style passages in SQuAD.
## Training and evaluation data
The model was fine-tuned on the training split of [SQuAD v1.1](https://huggingface.co/datasets/squad) and evaluated on its validation split; the sketch below shows how the exact-match and F1 scores are computed.
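As a rough sketch of the metric computation, the `squad` metric bundled with 🤗 Datasets 2.x can be used (newer setups use `evaluate.load("squad")` instead); the ids and answer texts below are made-up placeholders.
```python
from datasets import load_metric  # with Datasets 2.x; newer code uses the `evaluate` library

# SQuAD-style exact-match and F1 over predicted answer strings.
squad_metric = load_metric("squad")

predictions = [{"id": "0", "prediction_text": "distilbert-base-uncased"}]
references = [
    {"id": "0", "answers": {"text": ["distilbert-base-uncased"], "answer_start": [0]}}
]

print(squad_metric.compute(predictions=predictions, references=references))
# {'exact_match': 100.0, 'f1': 100.0}
```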
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
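A minimal sketch of how these settings map onto 🤗 Transformers `TrainingArguments`; the `output_dir` value is an assumption for illustration, not taken from this card.
```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; output_dir is an assumed placeholder.
training_args = TrainingArguments(
    output_dir="extractive-question-answering",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=1,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```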
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.263 | 1.0 | 5533 | 1.2169 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1