deberta-v3-base with a context length of 1280, fine-tuned on tasksource for 150k steps. I oversampled long NLI tasks (ConTRoL, doc-nli).
Training data includes helpsteer v1/v2, logical reasoning tasks (FOLIO, FOL-nli, LogicNLI...), OASST, hh/rlhf, linguistics-oriented NLI tasks, tasksource-dpo, and fact verification tasks.

This model is suitable for long-context NLI and as a backbone for fine-tuning reward models or classifiers.

This checkpoint has strong zero-shot validation performance on many tasks (e.g. 70% on WNLI) and can be used for:
- Zero-shot entailment-based classification for arbitrary labels [ZS].
- Natural language inference [NLI].
- Hundreds of previous tasks with tasksource-adapters [TA].
- Further fine-tuning on a new task or a tasksource task (classification, token classification, or multiple-choice) [FT].

# [ZS] Zero-shot classification pipeline
```python
from transformers import pipeline

# Candidate labels are scored via entailment between the text and a hypothesis built from each label
classifier = pipeline("zero-shot-classification", model="tasksource/deberta-base-long-nli")

text = "one day I will see the world"
candidate_labels = ['travel', 'cooking', 'dancing']
classifier(text, candidate_labels)
```
NLI training data of this model includes [label-nli](https://huggingface.co/datasets/tasksource/zero-shot-label-nli), an NLI dataset specifically constructed to improve this kind of zero-shot classification.
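
To make the mechanics concrete, here is a minimal sketch of what entailment-based zero-shot classification does under the hood: each candidate label is turned into a hypothesis and scored against the text with the NLI head. The hypothesis template below is an assumption (the pipeline uses its own default), and the `entailment` label name is read from the model config rather than hard-coded.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "tasksource/deberta-base-long-nli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

text = "one day I will see the world"
labels = ['travel', 'cooking', 'dancing']
# Assumed template; the actual pipeline default may differ
hypotheses = [f"This example is about {label}." for label in labels]

# Score every (text, hypothesis) pair in one batch
inputs = tokenizer([text] * len(labels), hypotheses, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (num_labels, num_classes)

# Keep each pair's entailment logit, then normalize across candidate labels
entailment_id = model.config.label2id["entailment"]
scores = logits[:, entailment_id].softmax(dim=0)
print(dict(zip(labels, scores.tolist())))
```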

# [NLI] Natural language inference pipeline

```python
from transformers import pipeline

pipe = pipeline("text-classification", model="tasksource/deberta-base-long-nli")
pipe([dict(text='there is a cat',
           text_pair='there is a black cat')])  # list of (premise, hypothesis) pairs
# [{'label': 'neutral', 'score': 0.9952911138534546}]
```
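
If you want the full distribution over entailment/neutral/contradiction rather than only the top label, recent transformers versions accept `top_k=None` on the text-classification pipeline (the successor of `return_all_scores=True`); treat the scores below as illustrative:

```python
from transformers import pipeline

pipe = pipeline("text-classification", model="tasksource/deberta-base-long-nli")
# top_k=None returns a score for every class instead of just the argmax
pipe(dict(text='there is a cat', text_pair='there is a black cat'), top_k=None)
# [{'label': 'neutral', 'score': ...}, {'label': 'entailment', 'score': ...}, {'label': 'contradiction', 'score': ...}]
```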

# [TA] Tasksource-adapters: 1 line access to hundreds of tasks

```python
# !pip install tasknet
import tasknet as tn

pipe = tn.load_pipeline('tasksource/deberta-base-long-nli', 'glue/sst2')  # works for 500+ tasksource tasks
pipe(['That movie was great !', 'Awful movie.'])
# [{'label': 'positive', 'score': 0.9956}, {'label': 'negative', 'score': 0.9967}]
```
The list of tasks is available in the model's config.json.
This is more efficient than [ZS] since it requires only one forward pass per example, but it is less flexible.
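
To browse that task list without loading the model, one option is to download config.json from the Hub and inspect it. This is a sketch: the field name that holds the task list is an assumption, so print the keys first to locate it.

```python
import json
from huggingface_hub import hf_hub_download

# Fetch the raw config.json of the model repo
path = hf_hub_download(repo_id="tasksource/deberta-base-long-nli", filename="config.json")
with open(path) as f:
    config = json.load(f)

print(list(config.keys()))           # locate the field holding task names
print(config.get("tasks", [])[:10])  # "tasks" is a guess; adjust to the actual key
```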

# [FT] Tasknet: 3 lines fine-tuning

```python
# !pip install tasknet
import tasknet as tn

hparams = dict(model_name='tasksource/deberta-base-long-nli', learning_rate=2e-5)
model, trainer = tn.Model_Trainer([tn.AutoTask("glue/rte")], hparams)
trainer.train()
```
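
If you would rather not depend on tasknet, a plain transformers `Trainer` works as well. This is a minimal sketch, not the recipe above: it assumes you want a freshly initialized 2-way head for RTE, which `ignore_mismatched_sizes=True` obtains by discarding the 3-way NLI head.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "tasksource/deberta-base-long-nli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Swap the NLI head for a fresh binary head (RTE has 2 classes)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=2, ignore_mismatched_sizes=True)

dataset = load_dataset("glue", "rte")

def tokenize(batch):
    return tokenizer(batch["sentence1"], batch["sentence2"], truncation=True)

dataset = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="rte-finetune", learning_rate=2e-5),
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
    tokenizer=tokenizer,  # enables dynamic padding via the default collator
)
trainer.train()
```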

# Citation

More details in this [article](https://aclanthology.org/2024.lrec-main.1361/):
```
@inproceedings{sileo-2024-tasksource,
    title = "tasksource: A Large Collection of {NLP} tasks with a Structured Dataset Preprocessing Framework",
    author = "Sileo, Damien",
    editor = "Calzolari, Nicoletta and
      Kan, Min-Yen and
      Hoste, Veronique and
      Lenci, Alessandro and
      Sakti, Sakriani and
      Xue, Nianwen",
    booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
    month = may,
    year = "2024",
    address = "Torino, Italia",
    publisher = "ELRA and ICCL",
    url = "https://aclanthology.org/2024.lrec-main.1361",
    pages = "15655--15684",
}
```