
PoLemma Large

PoLemma models are intended for lemmatization of named entities and multi-word expressions in the Polish language.

They were fine-tuned from the allegro/plT5 models; this checkpoint was fine-tuned from allegro/plt5-large.

Usage

Sample usage:

from transformers import pipeline

# Load the PoLemma text-to-text generation pipeline.
pipe = pipeline(
    task="text2text-generation",
    model="amu-cai/polemma-large",
    tokenizer="amu-cai/polemma-large",
)

# Lemmatize a phrase using beam search.
results = pipe(
    ["federalnego urzędu statystycznego"],
    clean_up_tokenization_spaces=True,
    num_beams=5,
)
hyp = results[0]["generated_text"]
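For batches of phrases, the same pipeline call can be wrapped in a small helper. This is a sketch, not part of the model card: the `lemmatize` function and its defaults are assumptions, and it simply forwards to whatever pipeline callable you pass in.

```python
from typing import Callable, List

def lemmatize(pipe: Callable, phrases: List[str], num_beams: int = 5) -> List[str]:
    """Run a text2text pipeline over a batch of phrases and collect the lemmas.

    `pipe` is expected to behave like a transformers text2text-generation
    pipeline: it takes a list of strings and returns a list of dicts with a
    "generated_text" key.
    """
    results = pipe(
        phrases,
        clean_up_tokenization_spaces=True,
        num_beams=num_beams,
    )
    return [res["generated_text"] for res in results]
```

With the pipeline loaded as above, `lemmatize(pipe, ["federalnego urzędu statystycznego", "Unii Europejskiej"])` would return one lemma string per input phrase.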

Evaluation results

Lemmatization Exact Match was computed on the SlavNER 2021 test set.

Model           Exact Match
polemma-large   92.61
polemma-base    91.34
polemma-small   88.46
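Exact Match here is the percentage of predictions that are string-identical to the gold lemma. A minimal sketch of the metric follows; the card does not show the actual evaluation script, so the whitespace-stripping comparison is an assumption.

```python
from typing import Sequence

def exact_match(predictions: Sequence[str], references: Sequence[str]) -> float:
    """Percentage of predictions that exactly match the reference lemma.

    Comparison is done after stripping surrounding whitespace; any further
    normalization used in the official evaluation is not reproduced here.
    """
    if len(predictions) != len(references):
        raise ValueError("predictions and references must have the same length")
    hits = sum(p.strip() == r.strip() for p, r in zip(predictions, references))
    return 100.0 * hits / len(references)
```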

Citation

If you use the model, please cite the following paper:

@inproceedings{palka-nowakowski-2023-exploring,
    title = "Exploring the Use of Foundation Models for Named Entity Recognition and Lemmatization Tasks in {S}lavic Languages",
    author = "Pa{\l}ka, Gabriela  and
      Nowakowski, Artur",
    booktitle = "Proceedings of the 9th Workshop on Slavic Natural Language Processing 2023 (SlavicNLP 2023)",
    month = may,
    year = "2023",
    address = "Dubrovnik, Croatia",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.bsnlp-1.19",
    pages = "165--171",
    abstract = "This paper describes Adam Mickiewicz University{'}s (AMU) solution for the 4th Shared Task on SlavNER. The task involves the identification, categorization, and lemmatization of named entities in Slavic languages. Our approach involved exploring the use of foundation models for these tasks. In particular, we used models based on the popular BERT and T5 model architectures. Additionally, we used external datasets to further improve the quality of our models. Our solution obtained promising results, achieving high metrics scores in both tasks. We describe our approach and the results of our experiments in detail, showing that the method is effective for NER and lemmatization in Slavic languages. Additionally, our models for lemmatization will be available at: https://huggingface.co/amu-cai.",
}

Framework versions

  • Transformers 4.26.0
  • Pytorch 1.13.1.post200
  • Datasets 2.9.0
  • Tokenizers 0.13.2
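To reproduce this environment, the versions above can be pinned with pip. This is a sketch: the card lists PyTorch 1.13.1.post200, which is a conda build tag, so the plain 1.13.1 wheel is assumed to be the pip equivalent.

```shell
# Pin the framework versions listed on the model card (PyTorch build
# approximated; 1.13.1.post200 is a conda-specific build string).
pip install "transformers==4.26.0" "torch==1.13.1" "datasets==2.9.0" "tokenizers==0.13.2"
```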