---
widget:
- text: "question: which description describes the word \" java \" best in the following\
\ context? descriptions: [ \" A drink consisting of an infusion of ground coffee\
\ beans \" , \" a platform-independent programming language \" , or \" an island\
\ in Indonesia to the south of Borneo \" ] context: I like to drink ' java '\
\ in the morning ."
---
# T5-large for Word Sense Disambiguation
This is the checkpoint for T5-large after being trained on the [SemCor 3.0 dataset](http://lcl.uniroma1.it/wsdeval/).
Additional information about this model:
* [The t5-large model page](https://huggingface.co/t5-large)
* [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf)
* [Official implementation by Google](https://github.com/google-research/text-to-text-transfer-transformer)
The model can be loaded and used for word sense disambiguation like so:
```py
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model = AutoModelForSeq2SeqLM.from_pretrained("jpelhaw/t5-word-sense-disambiguation")
tokenizer = AutoTokenizer.from_pretrained("jpelhaw/t5-word-sense-disambiguation")

input = 'question: which description describes the word " java " best in the following context? \
descriptions: [ " A drink consisting of an infusion of ground coffee beans " , \
" a platform-independent programming language " , \
or " an island in Indonesia to the south of Borneo " ] \
context: I like to drink " java " in the morning .'

# tokenize the prompt and return PyTorch tensors for generate()
example = tokenizer(input, add_special_tokens=True, return_tensors="pt")

answer = model.generate(input_ids=example["input_ids"],
                        attention_mask=example["attention_mask"],
                        max_length=135)

# decode the generated token IDs back into the chosen description
answer = tokenizer.decode(answer[0], skip_special_tokens=True)
```
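The prompt above follows a fixed `question:` / `descriptions:` / `context:` layout, with the target word and each candidate description wrapped in space-padded double quotes. For other words and sense inventories, that layout can be assembled with a small helper; `build_wsd_prompt` is a hypothetical name, not part of this model's API — a minimal sketch assuming the spacing conventions shown in the example input:

```py
def build_wsd_prompt(word, descriptions, context):
    """Assemble a WSD prompt in the question/descriptions/context
    format shown above. The target word and each description are
    wrapped in space-padded double quotes, and the last candidate
    is introduced with "or", matching the example input."""
    quoted = [f'" {d} "' for d in descriptions]
    options = " , ".join(quoted[:-1]) + " , or " + quoted[-1]
    return (f'question: which description describes the word " {word} " '
            f'best in the following context? '
            f'descriptions: [ {options} ] '
            f'context: {context}')

prompt = build_wsd_prompt(
    "java",
    ["A drink consisting of an infusion of ground coffee beans",
     "a platform-independent programming language",
     "an island in Indonesia to the south of Borneo"],
    "I like to drink ' java ' in the morning .",
)
```

The resulting string can be passed to the tokenizer exactly as in the example above.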