🇪🇸 spanish-mmBERT-small
This model is a 61.0% smaller version of jhu-clsp/mmBERT-small for the Spanish language, created using vocabulary pruning on the fineweb-2-trimming dataset.
- Vocabulary size: 32768 tokens (reduced from 256000)
- Tokenizer type: BPE
- Training samples: 200000 texts
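The size figures above imply a much larger cut to the vocabulary than to the model as a whole. A quick check (pure arithmetic on the numbers listed above, nothing assumed beyond them) shows roughly 87% of the embedding rows were removed, while the model is 61% smaller overall because vocabulary pruning leaves the transformer layers untouched:

```python
# Arithmetic check on the figures above: fraction of vocabulary rows removed.
original_vocab = 256_000
pruned_vocab = 32_768

removed_fraction = 1 - pruned_vocab / original_vocab
print(f"{removed_fraction:.1%} of vocabulary rows removed")  # → 87.2%
```

The gap between 87.2% and 61.0% reflects how heavily the embedding table dominates the parameter count of a small multilingual model.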
This pruned model should perform comparably to the original on Spanish-language tasks while having a much smaller memory footprint. However, it may perform poorly on the other languages covered by the original multilingual model, because tokens rarely used in Spanish were removed from the vocabulary.
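The pruning procedure is not detailed on this card, but the general technique can be sketched as: count token frequencies over a corpus in the target language, keep the most frequent ids, and slice the embedding matrix down to the kept rows. The function below is a hypothetical illustration (names like `prune_vocabulary` are not from the original repo), using plain Python lists in place of real embedding tensors:

```python
from collections import Counter

def prune_vocabulary(token_id_corpus, embedding_matrix, keep):
    """Keep the `keep` most frequent token ids and remap them to a dense range."""
    # Count how often each token id appears in the tokenized corpus.
    freq = Counter(tid for text in token_id_corpus for tid in text)
    # Kept ids, sorted so the new vocabulary has a stable ordering.
    kept = sorted(tid for tid, _ in freq.most_common(keep))
    # Map old ids to new, contiguous ids (0..keep-1).
    old_to_new = {old: new for new, old in enumerate(kept)}
    # Retain only the embedding rows of the kept tokens.
    pruned_embeddings = [embedding_matrix[old] for old in kept]
    return old_to_new, pruned_embeddings

# Toy example: a 6-token vocabulary pruned down to its 3 most frequent tokens.
corpus = [[0, 1, 1, 2], [1, 2, 2, 5]]
embeddings = [[float(i)] for i in range(6)]  # one "row" per token
mapping, pruned = prune_vocabulary(corpus, embeddings, keep=3)
print(mapping)  # {0: 0, 1: 1, 2: 2}
print(pruned)   # [[0.0], [1.0], [2.0]]
```

A real pruning pipeline would additionally keep special tokens regardless of frequency and rewrite the tokenizer's vocab/merge files so they agree with the remapped ids.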
Usage
You can use this model with the Transformers library:
```python
from transformers import AutoModel, AutoTokenizer

model_name = "mrm8488/spanish-mmBERT-small"
model = AutoModel.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
Model tree for mrm8488/spanish-mmBERT-small
- Base model: jhu-clsp/mmBERT-small