---
language:
- en
tags:
- pytorch
- ner
- text generation
- seq2seq
inference: false
license: mit
datasets:
- conll2003
metrics:
- f1
---
# t5-base-qa-ner-conll

Unofficial implementation of InstructionNER: a t5-base model fine-tuned on the CoNLL-2003 dataset.
https://github.com/ovbystrova/InstructionNER
## Inference
```bash
git clone https://github.com/ovbystrova/InstructionNER
cd InstructionNER
```
```python
from instruction_ner.model import Model

model = Model(
    model_path_or_name="olgaduchovny/t5-base-ner-mit-movie",
    tokenizer_path_or_name="olgaduchovny/t5-base-ner-mit-movie"
)

options = [
    "ACTOR",
    "AWARD",
    "CHARACTER",
    "DIRECTOR",
    "GENRE",
    "OPINION",
    "ORIGIN",
    "PLOT",
    "QUOTE",
    "RELATIONSHIP",
    "SOUNDTRACK",
    "YEAR"
]

instruction = "please extract entities and their types from the input sentence, " \
              "all entity types are in options"

text = "are there any good romantic comedies out right now"

generation_kwargs = {
    "num_beams": 2,
    "max_length": 128
}

pred_spans = model.predict(
    text=text,
    generation_kwargs=generation_kwargs,
    instruction=instruction,
    options=options
)

>>> [(19, 36, 'GENRE'), (41, 50, 'YEAR')]
```
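The checkpoint itself is a standard T5 seq2seq model, so it can also be loaded directly with `transformers` if you prefer not to use the helper package. The sketch below is only an illustration: the concatenated prompt shown here is an assumption, since the actual prompt template and the conversion of the generated text into character spans are handled inside `Model.predict`.

```python
# Minimal sketch using plain transformers.
# NOTE: the prompt layout below is an assumption for illustration only;
# the real template and the span post-processing live inside instruction_ner.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "olgaduchovny/t5-base-ner-mit-movie"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

prompt = (
    "are there any good romantic comedies out right now"
    " please extract entities and their types from the input sentence,"
    " all entity types are in options."
    " Options: ACTOR, AWARD, CHARACTER, DIRECTOR, GENRE, OPINION, ORIGIN,"
    " PLOT, QUOTE, RELATIONSHIP, SOUNDTRACK, YEAR"
)

inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, num_beams=2, max_length=128)

# Prints the raw generated answer; instruction_ner's post-processing is what
# turns this text into (start, end, entity_type) character spans.
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```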