---
license: apache-2.0
language:
- en
- eu
metrics:
- bleu
- ter
base_model:
- HiTZ/mt-hitz-en-eu
pipeline_tag: translation
tags:
- ctranslate2
- translation
- marian
---
# HiTZ Center’s English-Basque machine translation model converted to CTranslate2

## Model description

### What is CTranslate2?
CTranslate2 is a C++ and Python library for efficient inference with Transformer models.
CTranslate2 implements a custom runtime that applies many performance optimization techniques, such as weight quantization, layer fusion, and batch reordering, to accelerate Transformer models and reduce their memory usage on CPU and GPU.

CTranslate2 is one of the most performant ways of hosting translation models at scale. Currently supported models include:
- Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper
- Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon
- Encoder-only models: BERT, DistilBERT, XLM-RoBERTa
The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.
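The weight quantization mentioned above is not fixed at conversion time: CTranslate2 can also re-quantize an already-converted model when it is loaded. A minimal sketch, assuming a local directory containing a converted model (the path is illustrative):

```python
import ctranslate2

# Sketch: load a converted model and re-quantize it to int8 on the fly,
# regardless of the quantization used when the model was converted.
# "./mt-hitz-en-eu-ct2" is an assumed local path, not part of this package.
translator = ctranslate2.Translator(
    "./mt-hitz-en-eu-ct2",
    device="cpu",
    compute_type="int8",
)
```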
## CTranslate2 Installation
```bash
pip install "hf-hub-ctranslate2>=1.0.0" "ctranslate2>=3.13.0"
```
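To verify the installation, importing the package and printing its version is enough (nothing here is model-specific):

```python
import ctranslate2

# Quick sanity check that the CTranslate2 runtime is importable.
print(ctranslate2.__version__)
```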
### ct2-transformers-converter Command Used

```bash
ct2-transformers-converter --model HiTZ/mt-hitz-en-eu --output_dir ./ctranslate2/mt-hitz-en-eu-ct2 \
  --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm \
  --quantization float16
```
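The same conversion can also be driven from Python. A minimal sketch using CTranslate2's `TransformersConverter`; the copied file list is shortened here, and the output directory simply mirrors the command above:

```python
import ctranslate2

# Sketch: programmatic equivalent of the ct2-transformers-converter command.
converter = ctranslate2.converters.TransformersConverter(
    "HiTZ/mt-hitz-en-eu",
    copy_files=["tokenizer_config.json", "vocab.json", "source.spm", "target.spm"],
)
converter.convert(
    "./ctranslate2/mt-hitz-en-eu-ct2",
    quantization="float16",
    force=True,
)
```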
### CTranslate2 Converted Checkpoint Information

**Compatible With:**

- [ctranslate2](https://github.com/OpenNMT/CTranslate2)
- [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)

**Compute Type:**

- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
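A minimal sketch of choosing between the two compute types at runtime; the helper function is hypothetical, and the model path is an assumed local directory:

```python
import ctranslate2

# Hypothetical helper mapping each device to the compute type listed above.
def pick_compute_type(device: str) -> str:
    return "int8_float16" if device == "cuda" else "int8"

# get_cuda_device_count() returns 0 when no CUDA device is visible.
device = "cuda" if ctranslate2.get_cuda_device_count() > 0 else "cpu"
translator = ctranslate2.Translator(
    "./mt-hitz-en-eu-ct2",  # assumed local path to the converted model
    device=device,
    compute_type=pick_compute_type(device),
)
```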
## Sample Code - ctranslate2

Clone the repository to the working directory or wherever you wish to store the model artifacts:

```bash
git clone https://huggingface.co/xezpeleta/mt-hitz-en-eu-ct2
```

Take the Python code below and update the `model_dir` variable to the location of the cloned repository:
```python
from ctranslate2 import Translator
import transformers

model_dir = "./mt-hitz-en-eu-ct2"  # Path to the model directory.
translator = Translator(
    model_path=model_dir,
    device="cuda",                # cpu, cuda, or auto.
    inter_threads=1,              # Maximum number of parallel translations.
    intra_threads=4,              # Number of OpenMP threads per translator.
    compute_type="int8_float16",  # int8 for cpu or int8_float16 for cuda.
)

tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("Hello, world!"))
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```
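Alternatively, since the installation step above also pulls in `hf-hub-ctranslate2`, the converted checkpoint can be downloaded and wrapped directly from the Hub. A minimal sketch, assuming the repository id from the git clone URL above and a CUDA device:

```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub

# Sketch: fetch the converted checkpoint from the Hub and wrap it in one step.
# The repository id is taken from the clone URL above.
model = TranslatorCT2fromHfHub(
    model_name_or_path="xezpeleta/mt-hitz-en-eu-ct2",
    device="cuda",
    compute_type="int8_float16",
)
outputs = model.generate(text=["Hello, world!"])
print(outputs)
```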
## Licensing information

This work is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).