Update README.md
README.md
# Model Card for Model ID

ModernBERT fine-tuned on tasksource NLI tasks, including MNLI, ANLI, SICK, WANLI, doc-nli, LingNLI, FOLIO, FOL-NLI, LogicNLI, Label-NLI, and more.
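The snippet below is a minimal inference sketch, not an official usage guide: it assumes the checkpoint is hosted on the Hugging Face Hub with standard three-way NLI labels (entailment / neutral / contradiction), and the repo id is a placeholder to replace with this model's actual id.

```python
# Minimal NLI inference sketch (assumptions: Hub-hosted checkpoint with 3-way NLI labels;
# the repo id below is a placeholder, not this model's real id).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "<this-repo-id>"  # placeholder: substitute the actual Hub repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

# Encode the premise/hypothesis pair and score the NLI classes.
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1)[0]

# Label order follows the model config; id2label maps class indices to names.
for i, p in enumerate(probs.tolist()):
    print(model.config.id2label[i], round(p, 3))
```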
Test accuracy at 10k training steps (current version; a 100k-step version is incoming at the end of the week).

| test_name | test_accuracy |
|:--------------------------------------|----------------:|
| glue/mnli | 0.82 |
| glue/qnli | 0.84 |
| glue/rte | 0.78 |
| super_glue/cb | 0.75 |
| anli/a1 | 0.51 |
| anli/a2 | 0.39 |
| anli/a3 | 0.38 |
| sick/label | 0.91 |
| sick/entailment_AB | 0.81 |
| snli | 0.82 |
| scitail/snli_format | 0.94 |
| hans | 0.99 |
| WANLI | 0.7 |
| recast/recast_ner | 0.84 |
| recast/recast_kg_relations | 0.89 |
| recast/recast_puns | 0.78 |
| recast/recast_verbcorner | 0.87 |
| recast/recast_sentiment | 0.97 |
| recast/recast_verbnet | 0.74 |
| recast/recast_factuality | 0.88 |
| recast/recast_megaveridicality | 0.86 |
| probability_words_nli/reasoning_2hop | 0.76 |
| probability_words_nli/reasoning_1hop | 0.84 |
| probability_words_nli/usnli | 0.7 |
| nan-nli | 0.62 |
| nli_fever | 0.71 |
| breaking_nli | 0.98 |
| conj_nli | 0.66 |
| fracas | 0 |
| dialogue_nli | 0.84 |
| mpe | 0.69 |
| dnc | 0.81 |
| recast_white/fnplus | 0.6 |
| recast_white/sprl | 0.83 |
| recast_white/dpr | 0.57 |
| robust_nli/IS_CS | 0.45 |
| robust_nli/LI_LI | 0.92 |
| robust_nli/ST_WO | 0.66 |
| robust_nli/PI_SP | 0.53 |
| robust_nli/PI_CD | 0.54 |
| robust_nli/ST_SE | 0.58 |
| robust_nli/ST_NE | 0.52 |
| robust_nli/ST_LM | 0.47 |
| robust_nli_is_sd | 0.99 |
| robust_nli_li_ts | 0.81 |
| add_one_rte | 0.87 |
| cycic_classification | 0.62 |
| lingnli | 0.73 |
| monotonicity-entailment | 0.84 |
| scinli | 0.65 |
| naturallogic | 0.77 |
| syntactic-augmentation-nli | 0.87 |
| autotnli | 0.83 |
| defeasible-nli/atomic | 0.72 |
| defeasible-nli/snli | 0.67 |
| help-nli | 0.72 |
| nli-veridicality-transitivity | 0.92 |
| lonli | 0.88 |
| dadc-limit-nli | 0.59 |
| folio | 0.44 |
| tomi-nli | 0.52 |
| temporal-nli | 0.62 |
| counterfactually-augmented-snli | 0.69 |
| cnli | 0.71 |
| chaos-mnli-ambiguity | nan |
| logiqa-2.0-nli | 0.51 |
| mindgames | 0.83 |
| ConTRoL-nli | 0.49 |
| logical-fallacy | 0.13 |
| conceptrules_v2 | 0.97 |
| zero-shot-label-nli | 0.67 |
| scone | 0.79 |
| monli | 0.76 |
| SpaceNLI | 0.89 |
| propsegment/nli | 0.82 |
| SDOH-NLI | 0.98 |
| scifact_entailment | 0.52 |
| AdjectiveScaleProbe-nli | 0.91 |
| resnli | 0.97 |
| semantic_fragments_nli | 0.91 |
| dataset_train_nli | 0.81 |
| ruletaker | 0.69 |
| PARARULE-Plus | 1 |
| logical-entailment | 0.53 |
| nope | 0.36 |
| LogicNLI | 0.34 |
| contract-nli/contractnli_a/seg | 0.79 |
| contract-nli/contractnli_b/full | 0.67 |
| nli4ct_semeval2024 | 0.53 |
| biosift-nli | 0.85 |
| SIGA-nli | 0.46 |
| FOL-nli | 0.49 |
| doc-nli | 0.81 |
| mctest-nli | 0.84 |
| idioms-nli | 0.77 |
| lifecycle-entailment | 0.57 |
| MSciNLI | 0.65 |
| babi_nli | 0.77 |
| gen_debiased_nli | 0.82 |
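Since the training mixture includes label-NLI-style data (see the zero-shot-label-nli row above), the model can plausibly be used for zero-shot classification via the standard NLI trick. The sketch below is a hedged illustration, assuming the config's id2label exposes an "entailment" class (which the zero-shot pipeline relies on) and again using a placeholder repo id.

```python
# Hedged zero-shot classification sketch (assumes an "entailment" class in id2label;
# the repo id is a placeholder).
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="<this-repo-id>")
result = classifier(
    "The new GPU driver update fixes the rendering crash.",
    candidate_labels=["software", "sports", "politics"],
)
# The pipeline returns candidate labels sorted by entailment probability.
print(result["labels"][0], round(result["scores"][0], 3))
```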