- LiquidAI/LFM2.5-VL-450M
  Image-Text-to-Text • 0.4B • Updated • 30.2k • 158
- LFM2.5-VL-450M WebGPU
  📹 49 • Live video captioning and object tracking in your browser
- LiquidAI/LFM2.5-VL-1.6B
  Image-Text-to-Text • 2B • Updated • 131k • 276
- LFM2.5-VL-1.6B WebGPU
  🧠 87 • In-browser vision-language inference with LFM2.5-VL-1.6B
AI & ML interests
A new generation of foundation models from first principles.
Library of task-specific models: https://www.liquid.ai/blog/introducing-liquid-nanos-frontier-grade-performance-on-everyday-devices
- LiquidAI/LFM2-1.2B-Extract
  Text Generation • 1B • Updated • 1.43k • 107
- LiquidAI/LFM2-350M-Extract
  Text Generation • 0.4B • Updated • 1.4k • 78
- LiquidAI/LFM2-350M-ENJP-MT
  Translation • 0.4B • Updated • 1.32k • 88
- LiquidAI/LFM2-1.2B-RAG
  Text Generation • 1B • Updated • 863 • 118
End-to-end audio foundation model, designed for low latency and real-time conversations.
Collection of post-trained and base LFM2.5 models.
- LiquidAI/LFM2.5-350M
  Text Generation • 0.4B • Updated • 53.5k • 281
- LiquidAI/LFM2.5-1.2B-Thinking
  Text Generation • 1B • Updated • 30.8k • 335
- LiquidAI/LFM2.5-1.2B-Instruct
  Text Generation • 1B • Updated • 412k • 571
- LiquidAI/LFM2.5-1.2B-JP
  Text Generation • 1B • Updated • 2.35k • 144
LFM2 is a new generation of hybrid models, designed for on-device deployment.
LFM2-VL is our first series of vision-language models, designed for on-device deployment.
- LiquidAI/LFM2-VL-3B
  Image-Text-to-Text • 3B • Updated • 11.8k • 133
- LiquidAI/LFM2-VL-1.6B
  Image-Text-to-Text • 2B • Updated • 4.22k • 226
- LiquidAI/LFM2-VL-450M
  Image-Text-to-Text • 0.5B • Updated • 48.9k • 146
- LiquidAI/LFM2-VL-3B-GGUF
  Image-Text-to-Text • 3B • Updated • 53.5k • 36