A lightweight machine translation model for Ukrainian→English, based on the recently published LiquidAI/LFM2-350M. Use the demo to test it.

There is also a companion model for the reverse direction: kulyk-en-uk

Run with Docker (CPU):

docker run -p 3000:3000 --rm ghcr.io/egorsmkv/kulyk-rust:latest
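
Once the container is up, you can send a test request from another terminal. A minimal sketch; the /translate route and the JSON payload shape are assumptions, so check the kulyk-rust repository for the actual API:

# hypothetical endpoint and payload; verify against the kulyk-rust docs
curl -X POST http://localhost:3000/translate \
  -H 'Content-Type: application/json' \
  -d '{"direction": "uk-en", "text": "Привіт, світе!"}'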

Run using Apptainer (CUDA):

  1. Run it using a shell:
apptainer shell --nv ./kulyk.sif

Apptainer> /opt/entrypoints/kulyk --verbose --n-len 1024 --model-path-ue /project/models/kulyk-uk-en.gguf --model-path-eu /project/models/kulyk-en-uk.gguf
  2. Run it as a web service:
apptainer instance start --nv ./kulyk.sif kulyk-ws

# go to http://localhost:3000
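
The instance keeps running in the background; it can be inspected and stopped with the standard Apptainer instance commands:

apptainer instance list          # confirm kulyk-ws is running
apptainer instance stop kulyk-ws # shut the web service down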

Facts:

  • Fine-tuned on 40M samples (filtered by a quality metric from ~53.5M) for 1.4 epochs
  • 354M parameters
  • Requires 1 GB of RAM to run in bf16
  • BLEU on FLORES-200: 36.27
  • Tokens per second: 229.93 (bs=1), 1664.40 (bs=10), 8392.48 (bs=64)
  • License: lfm1.0
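
The GGUF weights referenced in the Apptainer command above can also be served directly with llama.cpp. A minimal sketch, assuming your llama.cpp build supports the lfm2 architecture and the file is in the current directory:

# serve the UK→EN GGUF with stock llama.cpp
llama-server -m kulyk-uk-en.gguf --port 8080
# then POST prompts to the server's /completion endpoint; the exact prompt
# format the model expects is not documented here, so verify it first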

Info:

  • The model is named after Sergiy Kulyk, who served as chargé d'affaires of Ukraine in the United States

Training Info:

  • Learning rate: 3e-5
  • LR scheduler type: cosine
  • Warmup ratio: 0.05
  • Max sequence length: 2048
  • Batch size: 10
  • Packing enabled (packed=True)
  • Training sentences limited to ≤1000 characters
  • Gradient accumulation steps: 4
  • Flash Attention 2
  • Time per epoch: 32 hours
  • 2× NVIDIA RTX 3090 Ti (24 GB)
  • accelerate with DeepSpeed
  • Memory usage: 22.212–22.458 GB
  • torch 2.7.1
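
The settings above translate into an accelerate + DeepSpeed launch roughly like the following sketch. The script name, config file, and flag names are hypothetical; only the hyperparameter values come from the list above, and bf16 training is an assumption:

# hypothetical training script and flags; hyperparameters from the list above
accelerate launch --config_file deepspeed_config.yaml train.py \
  --learning_rate 3e-5 \
  --lr_scheduler_type cosine \
  --warmup_ratio 0.05 \
  --max_length 2048 \
  --per_device_train_batch_size 10 \
  --gradient_accumulation_steps 4 \
  --packing \
  --bf16 \
  --attn_implementation flash_attention_2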

Acknowledgements:

  • Dmytro Chaplynskyi for providing compute to train this model
  • lang-uk members for their compilation of different MT datasets