🇪🇸 spanish-mmBERT-small

This model is a 61.0% smaller version of jhu-clsp/mmBERT-small for the Spanish language, created using vocabulary pruning on the fineweb-2-trimming dataset.

- Vocabulary size: 32,768 tokens (reduced from 256,000)
- Tokenizer type: BPE
- Training samples: 200,000 texts
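
Most of the reduction comes from the embedding matrix. Assuming mmBERT-small's hidden size of 384, dropping 256,000 - 32,768 = 223,232 embedding rows removes roughly 223,232 × 384 ≈ 85.7M parameters, which accounts for the shrink from about 140M down to the 54.8M parameters reported below.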

This pruned model should perform comparably to the original on Spanish-language tasks while having a much smaller memory footprint. However, it may perform poorly on the other languages covered by the original multilingual model, since tokens that are uncommon in Spanish were removed from its vocabulary.
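
For intuition, here is a minimal sketch of the vocabulary-pruning idea, a hypothetical reconstruction rather than the actual script used to build this model: count token-id frequencies over a Spanish corpus, keep the most frequent ids (plus special tokens), and slice the embedding matrix down to the surviving rows.

```python
from collections import Counter

from transformers import AutoModel, AutoTokenizer

base = "jhu-clsp/mmBERT-small"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModel.from_pretrained(base)

# Tiny stand-in corpus; the real pruning ran over 200,000 fineweb-2 texts.
corpus = ["El español es una lengua romance.", "Otra frase de ejemplo."]

# Count how often each token id appears when tokenizing Spanish text.
counts = Counter()
for text in corpus:
    counts.update(tokenizer(text)["input_ids"])

# Keep the special tokens, then fill up to the target size by frequency.
keep = set(tokenizer.all_special_ids)
for token_id, _ in counts.most_common():
    if len(keep) >= 32768:
        break
    keep.add(token_id)

# Slice the embedding matrix to the surviving rows and remap old ids to new ones.
kept_ids = sorted(keep)
old_to_new = {old: new for new, old in enumerate(kept_ids)}
pruned_embeddings = model.get_input_embeddings().weight[kept_ids].detach().clone()
print(pruned_embeddings.shape)  # (len(kept_ids), hidden_size)
```

With a real corpus the loop fills the full 32,768-token budget; here it stops early because the toy corpus only exercises a handful of token ids.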

Usage

You can use this model with the Transformers library:

```python
from transformers import AutoModel, AutoTokenizer

model_name = "mrm8488/spanish-mmBERT-small"
model = AutoModel.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
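
For example, you can encode a sentence and mean-pool the token states into a single embedding (mean pooling is a common default here, not something the model card prescribes):

```python
import torch

inputs = tokenizer("El español es una lengua romance.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool the last hidden states, ignoring padding positions.
mask = inputs["attention_mask"].unsqueeze(-1)
embedding = (outputs.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
print(embedding.shape)  # (1, hidden_size)
```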

Model size: 54.8M parameters (F32, Safetensors)
