
Quantization of madlad400-7b-mt-bt using CTranslate2, for running on a CPU or NVIDIA GPU.
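For reference, a quantized model like this one can be produced with CTranslate2's Transformers converter. The sketch below is an assumption about how this repository was built: it takes google/madlad400-7b-mt-bt as the source checkpoint and uses the int8_float16 quantization named in the repository id.

from ctranslate2.converters import TransformersConverter

# Convert the original Transformers checkpoint (assumed source repo) to a
# CTranslate2 model directory with int8_float16 quantized weights.
converter = TransformersConverter("google/madlad400-7b-mt-bt")
converter.convert("madlad400-7b-mt-bt-ct2-int8-float16", quantization="int8_float16")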

Example usage:

import ctranslate2, transformers
from huggingface_hub import snapshot_download

# Download the quantized model files from the Hugging Face Hub.
model_path = snapshot_download("zenoverflow/madlad400-7b-mt-bt-ct2-int8-float16")
print("\n", end="")  # separate the download progress output from the translation output

# Load the CTranslate2 model (GPU if available, otherwise CPU) and the T5 tokenizer.
translator = ctranslate2.Translator(model_path, device="auto")
tokenizer = transformers.T5Tokenizer.from_pretrained(model_path)

# MADLAD-400 selects the target language with a <2xx> prefix token, e.g. <2ja> for Japanese.
target_lang_code = "ja"

source_text = "This sentence has no meaning."

# Prepend the target-language token, tokenize, translate, then detokenize the best hypothesis.
input_text = f"<2{target_lang_code}> {source_text}"
input_tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(input_text))
results = translator.translate_batch([input_tokens])
output_tokens = results[0].hypotheses[0]
output_text = tokenizer.decode(tokenizer.convert_tokens_to_ids(output_tokens))

print(output_text)
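Reusing the translator and tokenizer from the example above, several sentences can be translated in one call, each with its own target-language prefix. This is only a sketch of batch usage; the beam_size value is an illustrative setting, not something the model requires.

# Translate a batch of sentences, each with its own target language.
sources = [
    ("ja", "This sentence has no meaning."),
    ("de", "The weather is nice today."),
]
batch_tokens = [
    tokenizer.convert_ids_to_tokens(tokenizer.encode(f"<2{lang}> {text}"))
    for lang, text in sources
]
# beam_size=4 is an example value; the CTranslate2 default also works.
results = translator.translate_batch(batch_tokens, beam_size=4)
for result in results:
    output_tokens = result.hypotheses[0]
    print(tokenizer.decode(tokenizer.convert_tokens_to_ids(output_tokens)))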