
SLIMER-PARALLEL-LLaMA3

SLIMER is an LLM specifically instruction-tuned for zero-shot NER on English text.

This LLaMA-3-based SLIMER scores +17% over the paper's original LLaMA-2-based SLIMER, while allowing up to 16 NE types to be extracted in parallel per prompt.

GitHub repository: https://github.com/andrewzamai/SLIMER/tree/v2.0

SLIMER for Italian language can be found at: https://huggingface.co/expertai/LLaMAntino-3-SLIMER-IT

Instruction-tuned on a reduced number of samples, it is designed to tackle never-before-seen Named Entity tags by leveraging a prompt enriched with a DEFINITION and GUIDELINES for each NE type to be extracted.
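Since a single prompt carries at most 16 entity tags, longer tag inventories need to be split across several prompts. A minimal sketch of such batching (the helper name `batch_ne_types` is illustrative, not part of the SLIMER codebase):

```python
def batch_ne_types(ne_types, max_per_prompt=16):
    """Split a list of NE tags into chunks of at most `max_per_prompt`,
    one chunk per SLIMER-PARALLEL prompt."""
    return [ne_types[i:i + max_per_prompt]
            for i in range(0, len(ne_types), max_per_prompt)]

# e.g. 40 tags -> 3 prompts covering 16, 16 and 8 tags
batches = batch_ne_types([f"NE_{i}" for i in range(40)])
```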

Instruction Tuning Prompt
SLIMER-3-PARALLEL prompt

```
<|start_header_id|>user<|end_header_id|>

You are given a text chunk (delimited by triple quotes) and an instruction.
Read the text and answer to the instruction in the end.

"""
{input text}
"""

Instruction: Extract the entities of type [NEs_list] from the text chunk you have read. Be aware that not all of these entities are necessarily present. Do not extract entities that do not exist in the text, return an empty list for that tag. Ensure each entity is assigned to only one appropriate class.

To help you, here are dedicated Definition and Guidelines for each entity tag.

{
  "NE_type_1": {"Definition": "", "Guidelines": ""},
  ...
  "NE_type_N": {"Definition": "", "Guidelines": ""}
}

Return only a JSON object. The JSON should strictly follow this format: {"NE_type_1": [], ..., "NE_type_N":[]}. DO NOT output anything else, just the JSON itself.

<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```

Currently existing approaches fine-tune on an extensive number of entity classes (around 13K) and assess zero-shot NER capabilities on Out-Of-Distribution input domains. SLIMER performs comparably to these state-of-the-art models on OOD input domains, while being trained on only a reduced number of samples and on a set of NE tags that overlaps to a lesser degree with the test sets.

We extend the standard zero-shot evaluations (CrossNER and MIT) with BUSTER, which is characterized by financial entities that are rather far from the more traditional tags observed by all models during training. An inverse trend can be observed, with SLIMER emerging as the most effective in dealing with these unseen labels, thanks to its lighter instruction tuning methodology and the use of definition and guidelines.

| Model | Backbone | #Params | MIT Movie | MIT Restaurant | CrossNER AI | CrossNER Literature | CrossNER Music | CrossNER Politics | CrossNER Science | BUSTER | AVG |
|---|---|---|---|---|---|---|---|---|---|---|---|
| ChatGPT | gpt-3.5-turbo | - | 5.3 | 32.8 | 52.4 | 39.8 | 66.6 | 68.5 | 67.0 | - | - |
| InstructUIE | Flan-T5-xxl | 11B | 63.0 | 21.0 | 49.0 | 47.2 | 53.2 | 48.2 | 49.3 | - | - |
| UniNER-type | LLaMA-1 | 7B | 42.4 | 31.7 | 53.5 | 59.4 | 65.0 | 60.8 | 61.1 | 34.8 | 51.1 |
| GoLLIE | Code-LLaMA | 7B | 63.0 | 43.4 | 59.1 | 62.7 | 67.8 | 57.2 | 55.5 | 27.7 | 54.6 |
| GLiNER-L | DeBERTa-v3 | 0.3B | 57.2 | 42.9 | 57.2 | 64.4 | 69.6 | 72.6 | 62.6 | 26.6 | 56.6 |
| GNER-T5 | Flan-T5-xxl | 11B | 62.5 | 51.0 | 68.2 | 68.7 | 81.2 | 75.1 | 76.7 | 27.9 | 63.9 |
| GNER-LLaMA | LLaMA-1 | 7B | 68.6 | 47.5 | 63.1 | 68.2 | 75.7 | 69.4 | 69.9 | 23.6 | 60.8 |
| SLIMER | LLaMA-3.1-Instruct | 8B | 58.4 | 45.3 | 58.0 | 65.0 | 77.0 | 71.2 | 67.3 | 39.32 | 60.2 |
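Assuming the AVG column is the unweighted mean of the eight benchmark F1 scores (two MIT, five CrossNER, plus BUSTER), it can be reproduced directly; a quick check on the SLIMER row:

```python
# F1 scores for the SLIMER row: Movie, Restaurant, AI, Literature,
# Music, Politics, Science, BUSTER
slimer_scores = [58.4, 45.3, 58.0, 65.0, 77.0, 71.2, 67.3, 39.32]

avg = sum(slimer_scores) / len(slimer_scores)
print(round(avg, 1))  # 60.2, matching the AVG column
```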
JSON Template

```json
{
  "description": "SLIMER PARALLEL 3 prompt",
  "prompt_input": "<|start_header_id|>system<|end_header_id|>\n\nYou are a helpful NER assistant designed to output JSON.<|eot_id|>\n<|start_header_id|>user<|end_header_id|>\n\nYou are given a text chunk (delimited by triple quotes) and an instruction.\nRead the text and answer to the instruction in the end.\n\"\"\"\n{input}\n\"\"\"\nInstruction: Extract the entities of type {ne_tags} from the text chunk you have read. Be aware that not all of these entities are necessarily present. Do not extract entities that do not exist in the text, return an empty list for that tag. Ensure each entity is assigned to only one appropriate class.\nTo help you, here are dedicated Definition and Guidelines for each entity tag.\n{Def_and_Guidelines}\nReturn only a JSON object. The JSON should strictly follow this format:\n{expected_json_format}.\nDO NOT output anything else, just the JSON itself."
}
```
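The placeholders in `prompt_input` ({input}, {ne_tags}, {Def_and_Guidelines}, {expected_json_format}) can be filled with plain `str.format`; this is roughly what the repo's prompter classes do. A hedged sketch (abridged template, not the actual implementation):

```python
import json

# Abridged version of the prompt_input template above, keeping the same placeholders
template = (
    'Read the text:\n"""\n{input}\n"""\n'
    "Instruction: Extract the entities of type {ne_tags}.\n"
    "{Def_and_Guidelines}\n"
    "Return only a JSON object in this format: {expected_json_format}."
)

prompt = template.format(
    input="Barack Obama visited Rome.",
    ne_tags="[PERSON, LOCATION]",
    Def_and_Guidelines=json.dumps(
        {"PERSON": {"Definition": "...", "Guidelines": "..."},
         "LOCATION": {"Definition": "...", "Guidelines": "..."}}, indent=2),
    expected_json_format=json.dumps({"PERSON": [], "LOCATION": []}),
)
```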
Usage with vLLM

```python
import json

from vllm import LLM, SamplingParams

# Prompter helpers are defined in the SLIMER GitHub repository (link above);
# the import path below is illustrative -- adjust it to your checkout
from src.SFT_finetuning.commons.prompter import Prompter, SLIMER_PARALLEL_instruction_prompter

vllm_model = LLM(model="expertai/SLIMER-PARALLEL-LLaMA3")
tokenizer = vllm_model.get_tokenizer()

# suggested temperature 0; max_tokens caps the length of the generated JSON
cutoff_len = 4096
sampling_params = SamplingParams(temperature=0, max_tokens=1000, stop=tokenizer.eos_token)

# given a list of NE types and a dictionary of Definition and Guidelines for each --> returns the instruction
slimer_prompter = SLIMER_PARALLEL_instruction_prompter("SLIMER_PARALLEL_instruction_template", './src/SFT_finetuning/templates')

# create a dictionary of dictionaries: each NE_type key maps to a {"Definition": str, "Guidelines": str} value
ne_types_list = ['ORGANIZATION', 'UNIVERSITY', 'LOCATION', 'PERSON', 'CONFERENCE']
def_guidelines_per_NE_dict = {'ORGANIZATION': {'Definition': "'organization' refers to structured groups, institutions, companies, or associations.", 'Guidelines': "Avoid labeling generic terms like 'team' or 'group'. Exercise caution with ambiguous entities like 'Apple' (company vs. fruit) and 'Manchester United' (sports team vs. fan club)."}, 'UNIVERSITY': {'Definition': 'UNIVERSITY represents educational institutions that offer higher education and academic research programs.', 'Guidelines': "Avoid labeling general concepts such as 'education' or 'academia' as UNIVERSITY. Exercise caution with ambiguous terms like 'Cambridge' (can refer to different institutions) and 'Harvard' (can refer to a person)."}, 'LOCATION': {'Definition': 'LOCATION refers to specific geographic entities such as venues, facilities, and institutions that represent physical places with distinct addresses or functions.', 'Guidelines': "Exercise caution with ambiguous terms, e.g., 'Amazon' (company, river, and region) and 'Cambridge' (U.S. city, UK city, and university). Consider the context and specificity to accurately classify locations."}, 'PERSON': {'Definition': 'PERSON refers to individuals, including public figures, celebrities, and notable personalities.', 'Guidelines': 'If a person is working on research (including professor, Ph.D. student, researcher in companies, and etc) avoid labeling it as PERSON entity.'}, 'CONFERENCE': {'Definition': 'CONFERENCE refers to specific events or gatherings where experts, researchers, and professionals convene to present and discuss their work in a particular field or discipline.', 'Guidelines': "Exercise caution when labeling entities that could refer to institutions, organizations, or associations rather than specific events. Take care with ambiguous terms like 'International Journal of Computer Vision', which may refer to a publication rather than a conference."}}

instruction = slimer_prompter.generate_prompt(
  ne_tags=", ".join(ne_types_list),
  def_and_guidelines=json.dumps(def_guidelines_per_NE_dict, indent=2),
  expected_json_format=json.dumps({k: [] for k in def_guidelines_per_NE_dict.keys()}, indent=2)
)

input_text = 'Typical generative model approaches include naive Bayes classifier s , Gaussian mixture model s , variational autoencoders and others .'

# this prompter formats the input text to analyze together with the SLIMER instruction
input_instruction_prompter = Prompter('LLaMA3-chat-NOheaders', template_path='./src/SFT_finetuning/templates')

system_message = "You are a helpful NER assistant designed to output JSON."
conversation = [
    {"role": "system", "content": system_message},
    {"role": "user", "content": input_instruction_prompter.generate_prompt(input=input_text, instruction=instruction)},  # the input_text + instruction
]
prompt = tokenizer.apply_chat_template(conversation, tokenize=False, truncation=True, max_length=cutoff_len, add_generation_prompt=True)

responses = vllm_model.generate(prompt, sampling_params)
print(responses[0].outputs[0].text)
```
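Since the model is instructed to return only a JSON object mapping each tag to a list of extracted mentions, the response can be parsed with `json.loads`. A minimal sketch (the response string below illustrates the expected format; it is not an actual model output):

```python
import json

# Illustrative response in the format the prompt requests:
# one key per requested NE type, each mapping to a list of extracted mentions
response_text = '{"ORGANIZATION": [], "UNIVERSITY": [], "LOCATION": [], "PERSON": [], "CONFERENCE": []}'

try:
    extractions = json.loads(response_text)
except json.JSONDecodeError:
    extractions = {}  # fall back if the model emitted malformed JSON

for ne_type, mentions in extractions.items():
    print(ne_type, mentions)
```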

Citation

If you find SLIMER useful in your research or work, please cite the following paper:

@misc{zamai2024lessinstructmoreenriching,
      title={Show Less, Instruct More: Enriching Prompts with Definitions and Guidelines for Zero-Shot NER}, 
      author={Andrew Zamai and Andrea Zugarini and Leonardo Rigutini and Marco Ernandes and Marco Maggini},
      year={2024},
      eprint={2407.01272},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2407.01272}, 
}