XCOMET-lite

Links: EMNLP 2024 | arXiv | GitHub repository

XCOMET-lite is a distilled version of Unbabel/XCOMET-XXL — a machine translation evaluation model trained to provide an overall quality score between 0 and 1, where 1 represents a perfect translation.

This model uses microsoft/mdeberta-v3-base as its backbone and has 278 million parameters, making it approximately 38 times smaller than the 10.7 billion-parameter XCOMET-XXL.

Quick Start

  1. Clone the GitHub repository.
  2. Create a conda environment as instructed in the README.

Then, run the following code:

from xcomet.deberta_encoder import XCOMETLite

# Load the distilled model from the Hugging Face Hub
model = XCOMETLite().from_pretrained("myyycroft/XCOMET-lite")

# Each example needs a source ("src"), a machine translation ("mt"),
# and a reference translation ("ref")
data = [
    {
        "src": "Elon Musk has acquired Twitter and plans significant changes.",
        # Flawed translation: "planned significant distortions"
        "mt": "Илон Маск приобрел Twitter и планировал значительные искажения.",
        "ref": "Илон Маск приобрел Twitter и планирует значительные изменения."
    },
    {
        "src": "Elon Musk has acquired Twitter and plans significant changes.",
        # Incomplete translation: omits the second half of the sentence
        "mt": "Илон Маск приобрел Twitter.",
        "ref": "Илон Маск приобрел Twitter и планирует значительные изменения."
    }
]

# Score the examples; set gpus=0 to run on CPU
model_output = model.predict(data, batch_size=2, gpus=1)

print("Segment-level scores:", model_output.scores)
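Beyond segment-level scores, COMET-style models conventionally also report a corpus-level aggregate (often exposed as `model_output.system_score`); if that attribute is unavailable, a plain mean of the segment scores serves the same purpose. A minimal sketch, using hypothetical score values in place of a real `model_output.scores` list:

```python
# Hypothetical segment-level scores for illustration only;
# real values come from model_output.scores
scores = [0.9, 0.3]

# Corpus-level quality as the mean of segment-level scores,
# mirroring the COMET system-score convention
system_score = sum(scores) / len(scores)
print(f"System-level score: {system_score:.2f}")
```

Since scores lie in [0, 1] with 1 meaning a perfect translation, the mean gives a quick single-number summary for comparing MT systems on the same test set.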

Model tree for myyycroft/XCOMET-lite

Base model: Unbabel/XCOMET-XXL (this model is distilled from it)