|
--- |
|
datasets: |
|
- eduagarcia/LegalPT |
|
- eduagarcia/cc100-pt |
|
- eduagarcia/OSCAR-2301-pt_dedup |
|
- eduagarcia/brwac_dedup |
|
language: |
|
- pt |
|
pipeline_tag: fill-mask |
|
tags: |
|
- legal |
|
model-index: |
|
- name: RoBERTaLexPT-base |
|
results: |
|
- task: |
|
type: token-classification |
|
dataset: |
|
type: eduagarcia/portuguese_benchmark |
|
name: LeNER |
|
config: LeNER-Br |
|
split: test |
|
metrics: |
|
- type: seqeval |
|
value: 90.73 |
|
name: Mean F1 |
|
args: |
|
scheme: IOB2 |
|
- task: |
|
type: token-classification |
|
dataset: |
|
type: eduagarcia/portuguese_benchmark |
|
name: UlyNER-PL Coarse |
|
config: UlyssesNER-Br-PL-coarse |
|
split: test |
|
metrics: |
|
- type: seqeval |
|
value: 88.56 |
|
name: Mean F1 |
|
args: |
|
scheme: IOB2 |
|
- task: |
|
type: token-classification |
|
dataset: |
|
type: eduagarcia/portuguese_benchmark |
|
name: UlyNER-PL Fine |
|
config: UlyssesNER-Br-PL-fine |
|
split: test |
|
metrics: |
|
- type: seqeval |
|
value: 86.03 |
|
name: Mean F1 |
|
args: |
|
scheme: IOB2 |
|
license: cc-by-4.0 |
|
metrics: |
|
- seqeval |
|
--- |
|
# RoBERTaLexPT-base |
|
|
|
RoBERTaLexPT-base is a Portuguese masked language model pretrained on the LegalPT and CrawlPT corpora, following the [RoBERTa-base](https://huggingface.co/FacebookAI/roberta-base) architecture introduced by [Liu et al. (2019)](https://arxiv.org/abs/1907.11692).
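
A minimal fill-mask usage sketch with the Transformers pipeline is shown below; the Hub repo id `eduagarcia/RoBERTaLexPT-base` is an assumption and may differ from the actual identifier.

```python
from transformers import pipeline

# Assumed Hub repo id; adjust if the model is hosted under a different name.
fill_mask = pipeline("fill-mask", model="eduagarcia/RoBERTaLexPT-base")

# RoBERTa-style models use the <mask> token.
for pred in fill_mask("O juiz determinou a <mask> do processo."):
    print(f"{pred['token_str']!r}: {pred['score']:.3f}")
```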
|
|
|
|
|
## Model Details |
|
|
|
### Model Description |
|
|
|
RoBERTaLexPT-base is a RoBERTa-base masked language model for Brazilian Portuguese, pretrained on legal (LegalPT) and general (CrawlPT) corpora.
|
|
|
- **Funded by:** [More Information Needed] |
|
- **Language(s) (NLP):** Brazilian Portuguese (pt-BR) |
|
- **License:** [Creative Commons Attribution 4.0 International Public License](https://creativecommons.org/licenses/by/4.0/deed.en) |
|
|
|
### Model Sources |
|
|
|
- **Repository:** https://github.com/eduagarcia/roberta-legal-portuguese |
|
- **Paper:** [More Information Needed] |
|
|
|
## Training Details |
|
|
|
### Training Data |
|
RoBERTaLexPT-base is pretrained on two corpora (see the loading sketch after the list):

- [LegalPT](https://huggingface.co/datasets/eduagarcia/LegalPT) is a Portuguese legal corpus built by aggregating diverse sources, totaling up to 125 GiB of data.

- CrawlPT is a deduplicated composition of three general Portuguese corpora: [brWaC](https://huggingface.co/datasets/eduagarcia/brwac_dedup), [CC100-PT](https://huggingface.co/datasets/eduagarcia/cc100-pt), and [OSCAR-2301](https://huggingface.co/datasets/eduagarcia/OSCAR-2301-pt_dedup).
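
A minimal loading sketch with the `datasets` library follows; it assumes the default configuration of each Hub dataset exposes a `train` split, which may not match the actual configuration names.

```python
from datasets import load_dataset

# Assumed default configurations and split names; pass an explicit config if the repo requires one.
legalpt = load_dataset("eduagarcia/LegalPT", split="train", streaming=True)
cc100_pt = load_dataset("eduagarcia/cc100-pt", split="train", streaming=True)

# Inspect one document from each corpus.
print(next(iter(legalpt)))
print(next(iter(cc100_pt)))
```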
|
|
|
### Training Procedure |
|
|
|
Pretraining was run with the [Fairseq library](https://arxiv.org/abs/1904.01038) on a DGX-A100 cluster using 2 Nvidia A100 80 GB GPUs.

Training a single configuration took approximately three days.
|
|
|
|
|
This computational budget is similar to that of [BERTimbau](https://dl.acm.org/doi/abs/10.1007/978-3-030-61377-8_28), exposing the model to approximately 65 billion tokens during training.
|
|
|
#### Preprocessing |
|
|
|
Following the approach of [Lee et al. (2022)](http://arxiv.org/abs/2107.06499), we deduplicated all subsets of the LegalPT Corpus using the [MinHash algorithm](https://dl.acm.org/doi/abs/10.5555/647819.736184) and [Locality Sensitive Hashing](https://dspace.mit.edu/bitstream/handle/1721.1/134231/v008a014.pdf?sequence=2&isAllowed=y) to find clusters of duplicate documents. |
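
The sketch below illustrates the general MinHash + LSH deduplication idea using the `datasketch` library; the shingle size, number of permutations, and similarity threshold are illustrative assumptions, not the exact settings used for LegalPT.

```python
from datasketch import MinHash, MinHashLSH

def signature(text: str, num_perm: int = 128, shingle: int = 5) -> MinHash:
    """Build a MinHash signature over character shingles of a document."""
    m = MinHash(num_perm=num_perm)
    for piece in {text[i:i + shingle] for i in range(max(len(text) - shingle + 1, 1))}:
        m.update(piece.encode("utf-8"))
    return m

docs = {
    "doc-1": "Acórdão proferido pelo tribunal em sede de apelação.",
    "doc-2": "Acórdão proferido pelo tribunal em sede de apelação!",  # near-duplicate
    "doc-3": "Contrato de prestação de serviços entre as partes.",
}

lsh = MinHashLSH(threshold=0.8, num_perm=128)  # Jaccard threshold is an assumption
sigs = {key: signature(text) for key, text in docs.items()}
for key, sig in sigs.items():
    lsh.insert(key, sig)

# Query each signature; documents returned together form a candidate duplicate cluster.
for key, sig in sigs.items():
    print(key, "->", sorted(lsh.query(sig)))
```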
|
|
|
To ensure that the domain model is not constrained by a generic vocabulary, we used the [HuggingFace Tokenizers](https://github.com/huggingface/tokenizers) library to train a BPE vocabulary for each pretraining corpus.
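
A minimal tokenizer-training sketch with the HuggingFace Tokenizers library is shown below; byte-level BPE, the vocabulary size, and the special tokens follow common RoBERTa conventions and are assumptions rather than confirmed settings, and the corpus file paths are hypothetical.

```python
from tokenizers import ByteLevelBPETokenizer

# Plain-text shards of the pretraining corpus (hypothetical paths).
corpus_files = ["legalpt_shard_000.txt", "legalpt_shard_001.txt"]

tokenizer = ByteLevelBPETokenizer()
tokenizer.train(
    files=corpus_files,
    vocab_size=50_265,  # RoBERTa-style size, assumed
    min_frequency=2,
    special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>"],
)
tokenizer.save_model("tokenizer-legalpt")  # writes vocab.json and merges.txt
```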
|
|
|
|
|
#### Training Hyperparameters |
|
|
|
The pretraining process involved training the model for 62,500 steps, with a batch size of 2048 sequences, each containing a maximum of 512 tokens. |
|
We employed the masked language modeling objective, where 15% of the input tokens were randomly masked.
|
The optimization was performed using the AdamW optimizer with a linear warmup and a linear decay learning rate schedule. |
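
Pretraining itself was done with Fairseq, but the masking objective can be illustrated with the Transformers data collator below; the repo id is assumed, and this snippet is an illustration rather than the actual training setup.

```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

# Assumed Hub repo id; any RoBERTa-style tokenizer works for this illustration.
tok = AutoTokenizer.from_pretrained("eduagarcia/RoBERTaLexPT-base")

# 15% of input tokens are selected for masking, as in RoBERTa.
collator = DataCollatorForLanguageModeling(tokenizer=tok, mlm=True, mlm_probability=0.15)

encoded = tok("O réu foi condenado ao pagamento das custas.", truncation=True, max_length=512)
batch = collator([encoded])

# Positions with label != -100 are the tokens selected for prediction.
print(batch["input_ids"].shape, int((batch["labels"] != -100).sum()))
```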
|
|
|
We adopted the standard [RoBERTa hyperparameters](https://arxiv.org/abs/1907.11692): |
|
|
|
|
|
| **Hyperparameter** | **RoBERTa-base** | |
|
|------------------------|-----------------:| |
|
| Number of layers | 12 | |
|
| Hidden size | 768 | |
|
| FFN inner hidden size | 3072 | |
|
| Attention heads | 12 | |
|
| Attention head size | 64 | |
|
| Dropout | 0.1 | |
|
| Attention dropout | 0.1 | |
|
| Warmup steps | 6k | |
|
| Peak learning rate | 4e-4 | |
|
| Batch size | 2048 | |
|
| Weight decay | 0.01 | |
|
| Maximum training steps | 62.5k | |
|
| Learning rate decay | Linear | |
|
| AdamW $$\epsilon$$ | 1e-6 | |
|
| AdamW $$\beta_1$$ | 0.9 | |
|
| AdamW $$\beta_2$$ | 0.98 | |
|
| Gradient clipping | 0.0 | |
|
|
|
## Evaluation |
|
|
|
<!-- This section describes the evaluation protocols and provides the results. --> |
|
|
|
### Testing Data, Factors & Metrics |
|
|
|
#### Testing Data |
|
|
|
The model was evaluated on the [PortuLex benchmark](https://huggingface.co/datasets/eduagarcia/portuguese_benchmark), a four-task benchmark designed to evaluate the quality and performance of language models in the Portuguese legal domain.
|
|
|
#### Metrics |
|
|
|
<!-- These are the evaluation metrics being used, ideally with a description of why. --> |
|
|
|
Entity-level F1-score (Mean F1), computed with [seqeval](https://github.com/chakki-works/seqeval) under the IOB2 tagging scheme, is reported for the token-classification tasks.
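
A minimal sketch of the entity-level F1 computation with seqeval under the IOB2 scheme; the tag sequences are illustrative toy data, not actual benchmark outputs.

```python
from seqeval.metrics import f1_score
from seqeval.scheme import IOB2

# Toy gold and predicted IOB2 tag sequences for two sentences.
y_true = [["B-ORGANIZACAO", "I-ORGANIZACAO", "O"], ["B-PESSOA", "O", "O"]]
y_pred = [["B-ORGANIZACAO", "I-ORGANIZACAO", "O"], ["O", "O", "O"]]

# Strict entity-level F1, macro-averaged over entity types.
print(f1_score(y_true, y_pred, average="macro", mode="strict", scheme=IOB2))
```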
|
|
|
### Results |
|
|
|
On the PortuLex NER tasks, RoBERTaLexPT-base obtains the following test-set scores (Mean F1, seqeval, IOB2 scheme):

| Task | Dataset (config) | Mean F1 |
|------|------------------|--------:|
| Token classification | LeNER (LeNER-Br) | 90.73 |
| Token classification | UlyNER-PL Coarse (UlyssesNER-Br-PL-coarse) | 88.56 |
| Token classification | UlyNER-PL Fine (UlyssesNER-Br-PL-fine) | 86.03 |
|
|
|
#### Summary |
|
|
|
|
|
## Citation |
|
|
|
|
|
[More Information Needed] |
|
|