Update README.md
README.md CHANGED
```diff
@@ -302,13 +302,10 @@ license: apache-2.0
 deberta-v3-base with context length of 1280 fine-tuned on tasksource for 250k steps. I oversampled long NLI tasks (ConTRoL, doc-nli).
 Training data include helpsteer v1/v2, logical reasoning tasks (FOLIO, FOL-nli, LogicNLI...), OASST, hh/rlhf, linguistics oriented NLI tasks, tasksource-dpo, fact verification tasks.
 
-This model is suitable for long context NLI or as a backbone for reward models or classifiers fine-tuning.
-
 This checkpoint has strong zero-shot validation performance on many tasks (e.g. 70% on WNLI), and can be used for:
 - Zero-shot entailment-based classification for arbitrary labels [ZS].
 - Natural language inference [NLI]
-
-- Further fine-tuning on a new task or tasksource task (classification, token classification or multiple-choice) [FT].
+- Further fine-tuning on a new task or tasksource task (classification, token classification, reward modeling or multiple-choice) [FT].
 
 | dataset | accuracy |
 |:----------------------------|----------------:|
```
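The two usage modes the README lists ([ZS] and [NLI]) map directly onto standard transformers pipelines. A minimal sketch, assuming the checkpoint is published under the hypothetical Hub ID `tasksource/deberta-base-long-nli` (substitute the actual repository name for this model card):

```python
from transformers import pipeline

# Hypothetical Hub ID; replace with the actual checkpoint repository.
MODEL_ID = "tasksource/deberta-base-long-nli"

# [ZS] Zero-shot entailment-based classification for arbitrary labels:
# each candidate label is rewritten as a hypothesis and scored by the NLI head.
classifier = pipeline("zero-shot-classification", model=MODEL_ID)
print(classifier(
    "The module parses the config file and opens a database connection.",
    candidate_labels=["software", "cooking", "travel"],
))

# [NLI] Plain premise/hypothesis inference with the sequence-classification head.
nli = pipeline("text-classification", model=MODEL_ID)
print(nli({
    "text": "All contracts were signed before the deadline.",  # premise
    "text_pair": "At least one contract was signed on time.",  # hypothesis
}))
```

Because the backbone was fine-tuned with a 1280-token context (with long NLI tasks like ConTRoL and doc-nli oversampled), both calls should remain usable on document-length premises, not just single sentences.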