---
dataset_info:
  - config_name: LeNER-Br
    features:
      - name: idx
        dtype: int32
      - name: tokens
        sequence: string
      - name: ner_tags
        sequence:
          class_label:
            names:
              '0': O
              '1': B-ORGANIZACAO
              '2': I-ORGANIZACAO
              '3': B-PESSOA
              '4': I-PESSOA
              '5': B-TEMPO
              '6': I-TEMPO
              '7': B-LOCAL
              '8': I-LOCAL
              '9': B-LEGISLACAO
              '10': I-LEGISLACAO
              '11': B-JURISPRUDENCIA
              '12': I-JURISPRUDENCIA
    splits:
      - name: train
        num_bytes: 3953896
        num_examples: 7825
      - name: validation
        num_bytes: 715819
        num_examples: 1177
      - name: test
        num_bytes: 819242
        num_examples: 1390
    download_size: 1049906
    dataset_size: 5488957
  - config_name: UlyssesNER-Br-PL-coarse
    features:
      - name: idx
        dtype: int32
      - name: tokens
        sequence: string
      - name: ner_tags
        sequence:
          class_label:
            names:
              '0': O
              '1': B-DATA
              '2': I-DATA
              '3': B-EVENTO
              '4': I-EVENTO
              '5': B-FUNDAMENTO
              '6': I-FUNDAMENTO
              '7': B-LOCAL
              '8': I-LOCAL
              '9': B-ORGANIZACAO
              '10': I-ORGANIZACAO
              '11': B-PESSOA
              '12': I-PESSOA
              '13': B-PRODUTODELEI
              '14': I-PRODUTODELEI
    splits:
      - name: train
        num_bytes: 1511905
        num_examples: 2271
      - name: validation
        num_bytes: 305472
        num_examples: 489
      - name: test
        num_bytes: 363207
        num_examples: 524
    download_size: 431964
    dataset_size: 2180584
  - config_name: UlyssesNER-Br-PL-fine
    features:
      - name: idx
        dtype: int32
      - name: tokens
        sequence: string
      - name: ner_tags
        sequence:
          class_label:
            names:
              '0': O
              '1': B-DATA
              '2': I-DATA
              '3': B-EVENTO
              '4': I-EVENTO
              '5': B-FUNDapelido
              '6': I-FUNDapelido
              '7': B-FUNDlei
              '8': I-FUNDlei
              '9': B-FUNDprojetodelei
              '10': I-FUNDprojetodelei
              '11': B-LOCALconcreto
              '12': I-LOCALconcreto
              '13': B-LOCALvirtual
              '14': I-LOCALvirtual
              '15': B-ORGgovernamental
              '16': I-ORGgovernamental
              '17': B-ORGnaogovernamental
              '18': I-ORGnaogovernamental
              '19': B-ORGpartido
              '20': I-ORGpartido
              '21': B-PESSOAcargo
              '22': I-PESSOAcargo
              '23': B-PESSOAgrupocargo
              '24': I-PESSOAgrupocargo
              '25': B-PESSOAindividual
              '26': I-PESSOAindividual
              '27': B-PRODUTOoutros
              '28': I-PRODUTOoutros
              '29': B-PRODUTOprograma
              '30': I-PRODUTOprograma
              '31': B-PRODUTOsistema
              '32': I-PRODUTOsistema
    splits:
      - name: train
        num_bytes: 1511905
        num_examples: 2271
      - name: validation
        num_bytes: 305472
        num_examples: 489
      - name: test
        num_bytes: 363207
        num_examples: 524
    download_size: 437232
    dataset_size: 2180584
  - config_name: fgv-coarse
    features:
      - name: idx
        dtype: int32
      - name: tokens
        sequence: string
      - name: ner_tags
        sequence:
          class_label:
            names:
              '0': O
              '1': B-Academic_Citation
              '2': I-Academic_Citation
              '3': B-Legislative_Reference
              '4': I-Legislative_Reference
              '5': B-Person
              '6': I-Person
              '7': B-Precedent
              '8': I-Precedent
    splits:
      - name: train
        num_bytes: 19490545
        num_examples: 415
      - name: validation
        num_bytes: 3934464
        num_examples: 60
      - name: test
        num_bytes: 6080343
        num_examples: 119
    download_size: 3917469
    dataset_size: 29505352
  - config_name: rrip
    features:
      - name: idx
        dtype: int32
      - name: sentence
        dtype: string
      - name: label
        dtype:
          class_label:
            names:
              '0': '1'
              '1': '2'
              '2': '3'
              '3': '4'
              '4': '5'
              '5': '6'
              '6': '7'
              '7': '8'
    splits:
      - name: train
        num_bytes: 1174840
        num_examples: 8257
      - name: validation
        num_bytes: 184668
        num_examples: 1053
      - name: test
        num_bytes: 235217
        num_examples: 1474
    download_size: 929466
    dataset_size: 1594725
configs:
  - config_name: LeNER-Br
    data_files:
      - split: train
        path: LeNER-Br/train-*
      - split: validation
        path: LeNER-Br/validation-*
      - split: test
        path: LeNER-Br/test-*
  - config_name: UlyssesNER-Br-PL-coarse
    data_files:
      - split: train
        path: UlyssesNER-Br-PL-coarse/train-*
      - split: validation
        path: UlyssesNER-Br-PL-coarse/validation-*
      - split: test
        path: UlyssesNER-Br-PL-coarse/test-*
  - config_name: UlyssesNER-Br-PL-fine
    data_files:
      - split: train
        path: UlyssesNER-Br-PL-fine/train-*
      - split: validation
        path: UlyssesNER-Br-PL-fine/validation-*
      - split: test
        path: UlyssesNER-Br-PL-fine/test-*
  - config_name: fgv-coarse
    data_files:
      - split: train
        path: fgv-coarse/train-*
      - split: validation
        path: fgv-coarse/validation-*
      - split: test
        path: fgv-coarse/test-*
  - config_name: rrip
    data_files:
      - split: train
        path: rrip/train-*
      - split: validation
        path: rrip/validation-*
      - split: test
        path: rrip/test-*
task_categories:
  - token-classification
  - text-classification
language:
  - pt
tags:
  - legal
pretty_name: PortuLex benchmark
size_categories:
  - 10K<n<100K
extra_gated_heading: Access PortuLex on Hugging Face
extra_gated_prompt: >-
  The PortuLex benchmark includes datasets with specific access requirements:

  1. The RRI dataset requires acceptance of these terms:
  https://bit.ly/rhetoricalrole.

  2. For the FGV-STF corpus, you must request it directly from the original
  authors:
  https://www.sciencedirect.com/science/article/abs/pii/S0306457321002727. 
extra_gated_fields:
  Full Name: text
  Official Email Address: text
  Affiliation: text
  Country: text
  I accepted the RRIP Terms of Commitment: checkbox
  I have obtained permission to access the FGV-STF benchmark directly from the original authors: checkbox

---

# PortuLex_benchmark

"PortuLex" benchmark is a four-task benchmark designed to evaluate the quality and performance of language models in the Portuguese legal domain.

| Dataset       | Task | Train | Dev   | Test  |
|---------------|------|-------|-------|-------|
| RRI           | CLS  | 8.26k | 1.05k | 1.47k |
| LeNER-Br      | NER  | 7.83k | 1.18k | 1.39k |
| UlyssesNER-Br | NER  | 3.28k | 489   | 524   |
| FGV-STF       | NER  | 415   | 60    | 119   |
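
Each sub-dataset is exposed as a separate configuration of this repository. The sketch below shows how a configuration can be loaded with the 🤗 `datasets` library; the repository id `eduagarcia/PortuLex_benchmark` is assumed from this card's location, and the gated configurations require that the access terms above have been accepted.

```python
# Minimal loading sketch (assumes access has been granted and that you are
# authenticated, e.g. via `huggingface-cli login`).
from datasets import load_dataset

# Load one configuration of the benchmark; the other config names are
# "UlyssesNER-Br-PL-coarse", "UlyssesNER-Br-PL-fine", "fgv-coarse" and "rrip".
lener = load_dataset("eduagarcia/PortuLex_benchmark", "LeNER-Br")

print(lener)                             # DatasetDict with train/validation/test splits
print(lener["train"][0]["tokens"][:10])  # first tokens of the first example
```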

## Dataset Details

PortuLex is composed of four corpora: LeNER-Br, Rhetorical Role Identification (RRI), FGV-STF, and UlyssesNER-Br.

- LeNER-Br: the first Named Entity Recognition (NER) corpus for the Brazilian Portuguese legal domain, built from documents of higher and state-level courts.
- RRI: rhetorical role annotations of judicial decisions from the Court of Justice of Mato Grosso do Sul (Brazil).
- FGV-STF: decisions of the Brazilian Supreme Federal Court (STF) annotated for entity extraction.
- UlyssesNER-Br: a NER corpus of bills and legislative queries from the Brazilian Chamber of Deputies.
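
For the NER configurations, `ner_tags` are stored as integer class labels. Below is a minimal sketch of decoding them back to the BIO tag names listed in the metadata (again assuming the repository id `eduagarcia/PortuLex_benchmark`):

```python
# Sketch: map integer ner_tags back to their BIO tag strings for LeNER-Br.
from datasets import load_dataset

ds = load_dataset("eduagarcia/PortuLex_benchmark", "LeNER-Br", split="train")

# The Sequence(ClassLabel) feature exposes the tag vocabulary.
tag_names = ds.features["ner_tags"].feature.names   # ['O', 'B-ORGANIZACAO', ...]

example = ds[0]
for token, tag_id in zip(example["tokens"], example["ner_tags"]):
    print(f"{token}\t{tag_names[tag_id]}")
```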

## Dataset Description

### Dataset Evaluation

Macro F1-Score (%) for multiple models evaluated on the PortuLex benchmark test splits:

| Model | LeNER | UlyNER-PL (Coarse/Fine) | FGV-STF (Coarse) | RRIP | Average (%) |
|---|---|---|---|---|---|
| BERTimbau-base  | 88.34 | 86.39/83.83 | 79.34 | 82.34 | 83.78 |
| BERTimbau-large | 88.64 | 87.77/84.74 | 79.71 | 83.79 | 84.60 |
| Albertina-PT-BR-base   | 89.26 | 86.35/84.63 | 79.30 | 81.16 | 83.80 |
| Albertina-PT-BR-xlarge | 90.09 | 88.36/86.62 | 79.94 | 82.79 | 85.08 |
| BERTikal-base   | 83.68 | 79.21/75.70 | 77.73 | 81.11 | 79.99 |
| JurisBERT-base  | 81.74 | 81.67/77.97 | 76.04 | 80.85 | 79.61 |
| BERTimbauLAW-base | 84.90 | 87.11/84.42 | 79.78 | 82.35 | 83.20 |
| Legal-XLM-R-base  | 87.48 | 83.49/83.16 | 79.79 | 82.35 | 83.24 |
| Legal-XLM-R-large | 88.39 | 84.65/84.55 | 79.36 | 81.66 | 83.50 |
| Legal-RoBERTa-PT-large | 87.96 | 88.32/84.83 | 79.57 | 81.98 | 84.02 |
| **Ours** | | | | | |
| RoBERTaTimbau-base (Reproduction of BERTimbau) | 89.68 | 87.53/85.74 | 78.82 | 82.03 | 84.29 |
| RoBERTaLegalPT-base (Trained on LegalPT)  | 90.59 | 85.45/84.40 | 79.92 | 82.84 | 84.57 |
| RoBERTaCrawlPT-base (Trained on CrawlPT)  | 89.24 | 88.22/86.58 | 79.88 | 82.80 | 84.83 |
| RoBERTaLexPT-base (Trained on CrawlPT + LegalPT) | 90.73 | 88.56/86.03 | 80.40 | 83.22 | 85.41 |
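
The scores above are taken from the paper. As a rough illustration only (not necessarily the authors' exact evaluation protocol), an entity-level macro F1 for the NER configurations could be computed with `seqeval`, and the sentence-level RRIP task with `sklearn.metrics.f1_score(..., average="macro")`:

```python
# Illustrative macro F1 computation for a NER config with seqeval;
# the gold and predicted BIO tag sequences below are toy examples.
from seqeval.metrics import f1_score

y_true = [["B-PESSOA", "I-PESSOA", "O", "B-LEGISLACAO", "I-LEGISLACAO"]]
y_pred = [["B-PESSOA", "I-PESSOA", "O", "B-LEGISLACAO", "O"]]

print(f1_score(y_true, y_pred, average="macro"))
```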

## Citation

```bibtex
@InProceedings{garcia2024_roberlexpt,
    author="Garcia, Eduardo A. S.
    and Silva, N{\'a}dia F. F.
    and Siqueira, Felipe
    and Gomes, Juliana R. S.
    and Albuquerque, Hidelberg O.
    and Souza, Ellen
    and Lima, Eliomar
    and De Carvalho, Andr{\'e}",
    title="RoBERTaLexPT: A Legal RoBERTa Model pretrained with deduplication for Portuguese",
    booktitle="Computational Processing of the Portuguese Language",
    year="2024",
    publisher="Association for Computational Linguistics"
}
```

## Acknowledgment

This work has been supported by the AI Center of Excellence (Centro de Excelência em Inteligência Artificial – CEIA) of the Institute of Informatics at the Federal University of Goiás (INF-UFG).