Update README.md
README.md CHANGED
@@ -296,4 +296,4 @@ datasets:
 deberta-v3-base with context length of 1280 fine-tuned on tasksource for 150k steps. I oversampled long NLI tasks (ConTRoL, doc-nli).
 Training data include helpsteer v1/v2, logical reasoning tasks (FOLIO, FOL-nli, LogicNLI...), OASST, hh/rlhf, linguistics oriented NLI tasks, tasksource-dpo, fact verification tasks.
 
-This model is suitable for long context NLI or and as a backbone for
+This model is suitable for long-context NLI and as a backbone for fine-tuning reward models or classifiers.
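For context, a minimal sketch of the long-context NLI usage the new line describes. The repo id below is a placeholder assumption (the diff does not name the published checkpoint); substitute the actual model id. The `max_length=1280` matches the context length stated in the card.

```python
# Minimal usage sketch (not part of the diff): long-context NLI inference.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "tasksource/deberta-v3-base-long-nli"  # assumption: placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

premise = "A long document goes here ..."
hypothesis = "The document supports this claim."

# The card states a context length of 1280 tokens.
inputs = tokenizer(premise, hypothesis, truncation=True,
                   max_length=1280, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Label order depends on the checkpoint's id2label config.
probs = logits.softmax(dim=-1).squeeze()
for label_id, p in enumerate(probs.tolist()):
    print(model.config.id2label[label_id], round(p, 3))
```

For the reward-model or classifier use case, the same checkpoint would be loaded as a backbone and a fresh classification head fine-tuned on task-specific data.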