asahi417 committed
Commit cbad26c (1 parent: a660fc8)

commit files to HF hub

README.md CHANGED
@@ -31,33 +31,33 @@ model-index:
   metrics:
   - name: BLEU4 (Question Answering)
     type: bleu4_question_answering
-    value: 0.0
+    value: 9.96
   - name: ROUGE-L (Question Answering)
     type: rouge_l_question_answering
-    value: 0.0
+    value: 27.98
   - name: METEOR (Question Answering)
     type: meteor_question_answering
-    value: 0.0
+    value: 23.8
   - name: BERTScore (Question Answering)
     type: bertscore_question_answering
-    value: 77.03
+    value: 87.4
   - name: MoverScore (Question Answering)
     type: moverscore_question_answering
-    value: 56.66
+    value: 68.49
   - name: AnswerF1Score (Question Answering)
     type: answer_f1_score__question_answering
-    value: 0.0
+    value: 43.12
   - name: AnswerExactMatch (Question Answering)
     type: answer_exact_match_question_answering
-    value: 0.0
+    value: 24.06
 ---
 
 # Model Card of `vocabtrimmer/mt5-small-trimmed-fr-90000-frquad-qa`
-This model is fine-tuned version of [vocabtrimmer/mt5-small-trimmed-fr-90000](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-fr-90000) for question answering task on the [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
+This model is fine-tuned version of [ckpts/mt5-small-trimmed-fr-90000](https://huggingface.co/ckpts/mt5-small-trimmed-fr-90000) for question answering task on the [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
 
 
 ### Overview
-- **Language model:** [vocabtrimmer/mt5-small-trimmed-fr-90000](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-fr-90000)
+- **Language model:** [ckpts/mt5-small-trimmed-fr-90000](https://huggingface.co/ckpts/mt5-small-trimmed-fr-90000)
 - **Language:** fr
 - **Training data:** [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) (default)
 - **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
@@ -93,16 +93,16 @@ output = pipe("question: En quelle année a-t-on trouvé trace d'un haut fournea
 
 |                  |   Score | Type    | Dataset                                                          |
 |:-----------------|--------:|:--------|:-----------------------------------------------------------------|
-| AnswerExactMatch |    0    | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| AnswerF1Score    |    0    | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| BERTScore        |   77.03 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| Bleu_1           |    0    | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| Bleu_2           |    0    | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| Bleu_3           |    0    | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| Bleu_4           |    0    | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| METEOR           |    0    | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| MoverScore       |   56.66 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| ROUGE_L          |    0    | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| AnswerExactMatch |   24.06 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| AnswerF1Score    |   43.12 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| BERTScore        |   87.4  | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| Bleu_1           |   17.28 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| Bleu_2           |   14.01 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| Bleu_3           |   11.74 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| Bleu_4           |    9.96 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| METEOR           |   23.8  | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| MoverScore       |   68.49 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| ROUGE_L          |   27.98 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
 
 
 
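Note: the hunk header above quotes the model card's usage snippet (`output = pipe("question: En quelle année ...`). For context, a minimal usage sketch follows, assuming the standard `transformers` text2text pipeline and the `question: ..., context: ...` input format implied by the visible prefix; the French question/context pair is illustrative, not taken from the card.

```python
from transformers import pipeline

# Minimal usage sketch (input format "question: ..., context: ..." is assumed
# from the truncated snippet in the hunk header above).
pipe = pipeline("text2text-generation",
                model="vocabtrimmer/mt5-small-trimmed-fr-90000-frquad-qa")

# Illustrative inputs, not from the model card.
question = "Quelle est la capitale de la France ?"
context = "Paris est la capitale et la plus grande ville de la France."
output = pipe(f"question: {question}, context: {context}")
print(output[0]["generated_text"])  # expected: the answer span, e.g. "Paris"
```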
@@ -114,15 +114,15 @@ The following hyperparameters were used during fine-tuning:
 - input_types: ['paragraph_question']
 - output_types: ['answer']
 - prefix_types: None
-- model: vocabtrimmer/mt5-small-trimmed-fr-90000
+- model: ckpts/mt5-small-trimmed-fr-90000
 - max_length: 512
 - max_length_output: 32
-- epoch: 5
+- epoch: 13
 - batch: 32
-- lr: 0.0001
+- lr: 0.001
 - fp16: False
 - random_seed: 1
-- gradient_accumulation_steps: 4
+- gradient_accumulation_steps: 2
 - label_smoothing: 0.15
 
 The full configuration can be found at [fine-tuning config file](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-fr-90000-frquad-qa/raw/main/trainer_config.json).
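Note: the hyperparameter list in this hunk mirrors the linked `trainer_config.json`. As a rough sketch of that file (not its verbatim contents: the `dataset_path`/`dataset_name` keys are assumptions based on the Overview section, and only the post-commit values are shown):

```python
# Hypothetical reconstruction of trainer_config.json from the list above.
trainer_config = {
    "dataset_path": "lmqg/qg_frquad",  # assumed from the Overview section
    "dataset_name": "default",         # assumed from the Overview section
    "input_types": ["paragraph_question"],
    "output_types": ["answer"],
    "prefix_types": None,
    "model": "ckpts/mt5-small-trimmed-fr-90000",
    "max_length": 512,
    "max_length_output": 32,
    "epoch": 13,
    "batch": 32,
    "lr": 0.001,
    "fp16": False,
    "random_seed": 1,
    "gradient_accumulation_steps": 2,
    "label_smoothing": 0.15,
}
```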
eval/metric.first.answer.paragraph_question.answer.lmqg_qg_frquad.default.json CHANGED
@@ -1 +1 @@
- {"validation": {"Bleu_1": 1.148764872165851e-20, "Bleu_2": 2.051116381898933e-14, "Bleu_3": 2.4883417255587024e-12, "Bleu_4": 2.7407528945568e-11, "METEOR": 0.0, "ROUGE_L": 0.0, "BERTScore": 0.7678841189615699, "MoverScore": 0.5553836859189898, "AnswerF1Score": 0.0, "AnswerExactMatch": 0.0}, "test": {"Bleu_1": 2.232847769559236e-20, "Bleu_2": 3.985492540479623e-14, "Bleu_3": 4.8345526682602444e-12, "Bleu_4": 5.324679149106555e-11, "METEOR": 2.1914039137865177e-05, "ROUGE_L": 0.0, "BERTScore": 0.7703234967279016, "MoverScore": 0.5665574813142406, "AnswerF1Score": 0.0, "AnswerExactMatch": 0.0}}
+ {"validation": {"Bleu_1": 0.1932725401837919, "Bleu_2": 0.16007914602896, "Bleu_3": 0.13578319438973033, "Bleu_4": 0.116296643168242, "METEOR": 0.2336239495477666, "ROUGE_L": 0.2869036816616231, "BERTScore": 0.8729169056511284, "MoverScore": 0.6741014345784797, "AnswerF1Score": 42.11315829622192, "AnswerExactMatch": 19.69887076537014}, "test": {"Bleu_1": 0.17281853864733646, "Bleu_2": 0.14009403051623157, "Bleu_3": 0.11740903841504663, "Bleu_4": 0.09957940417233394, "METEOR": 0.23798691012437684, "ROUGE_L": 0.27980331642769446, "BERTScore": 0.8740178610442424, "MoverScore": 0.6849181722997052, "AnswerF1Score": 43.11839268944033, "AnswerExactMatch": 24.058971141781683}}
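Note: the README scores in the diff above follow from this raw eval file: BLEU, METEOR, ROUGE-L, BERTScore, and MoverScore are stored on a 0-1 scale and reported x100 in the card, while AnswerF1Score and AnswerExactMatch are already on a 0-100 scale. A small sketch of that conversion (the `value <= 1` check is a heuristic that happens to hold for these particular numbers):

```python
import json

# Sketch: reproduce the README table scores from the raw test-split metrics.
# 0-1 scaled metrics are reported x100; F1/ExactMatch are already 0-100.
with open("eval/metric.first.answer.paragraph_question.answer.lmqg_qg_frquad.default.json") as f:
    test_scores = json.load(f)["test"]

for name, value in test_scores.items():
    score = value * 100 if value <= 1 else value
    print(f"{name}: {score:.2f}")  # e.g. Bleu_4: 9.96, AnswerExactMatch: 24.06
```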
eval/samples.test.hyp.paragraph_question.answer.lmqg_qg_frquad.default.txt CHANGED
The diff for this file is too large to render. See raw diff
 
eval/samples.validation.hyp.paragraph_question.answer.lmqg_qg_frquad.default.txt CHANGED
The diff for this file is too large to render. See raw diff