Filiberto 124M is a small, specialized foundation model trained on Spanish Golden Age dramas.

Filiberto 124M OCR has only 124 million parameters. It runs easily on CPU, or can provide OCR correction at scale on GPUs (>10k tokens/second).
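A minimal usage sketch with transformers, assuming the model id from this repository and a plain prompt format (noisy OCR text in, corrected continuation out — the exact prompt convention is an assumption, not documented above):

```python
# Sketch: post-OCR correction with Filiberto 124M via transformers.
# MODEL_ID comes from this repository; the prompt format is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "bertin-project/filiberto-124M"

def correct_ocr(text: str, max_new_tokens: int = 256) -> str:
    """Greedy-decode a corrected continuation for a noisy OCR string."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    inputs = tokenizer(text, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=max_new_tokens,
                         do_sample=False)
    # Drop the prompt tokens; keep only the generated correction.
    return tokenizer.decode(out[0][inputs["input_ids"].shape[1]:],
                            skip_special_tokens=True)
```

At 124M parameters the model loads in a few hundred MB of RAM, which is why CPU inference is practical.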

Training

The pre-training material comprised a collection of works from the TEXORO corpus, obtained through a collaboration with ETSO, totalling ~5 million tokens.

Pre-training ran for 5 epochs with levanter (500 steps total, each processing 1024 sequences of 512 tokens) on a TPUv4-32 for 15 minutes.
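The numbers above imply the following token budget (a quick sanity check, not from the card itself):

```python
# Token budget implied by the run configuration above.
steps = 500                 # optimizer steps
sequences_per_step = 1024   # batch size in sequences
tokens_per_sequence = 512   # context length

tokens_per_step = sequences_per_step * tokens_per_sequence
total_tokens = steps * tokens_per_step

print(tokens_per_step)  # 524288 tokens per step
print(total_tokens)     # 262144000 tokens over the whole run
```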

Tokenization is currently done with the GPT-2 tokenizer.
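Since the stock GPT-2 tokenizer is used, it can be loaded independently of the model (a sketch; note GPT-2's BPE was fitted on English text, so Spanish, especially accented characters, tends to split into more, shorter subword pieces):

```python
# Load the GPT-2 BPE tokenizer (50,257-token vocabulary) used by Filiberto.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
ids = tok("¿Qué es la vida? Un frenesí.")["input_ids"]
```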
