Small but mighty 🔥 You can fine-tune SmolVLM on an L4 with a batch size of 4 and it only takes 16.4 GB of VRAM 🫰🏻 With gradient accumulation, the simulated batch size is 16 ✨ I made a notebook that includes all the goodies: QLoRA, gradient accumulation, and gradient checkpointing, with explanations of how they work https://github.com/huggingface/smollm/blob/main/finetuning/Smol_VLM_FT.ipynb
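Roughly, the key pieces look like this; a minimal sketch, assuming the HuggingFaceTB/SmolVLM-Instruct checkpoint and illustrative LoRA/training hyperparameters (the notebook has the full data collator and Trainer setup):

```python
import torch
from transformers import (AutoModelForVision2Seq, AutoProcessor,
                          BitsAndBytesConfig, TrainingArguments)
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "HuggingFaceTB/SmolVLM-Instruct"  # assumed checkpoint name

# QLoRA: load the base model in 4-bit NF4 so only the adapters are trained in higher precision
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# LoRA adapters on the attention projections (illustrative rank and target modules)
lora_config = LoraConfig(
    r=8, lora_alpha=8, lora_dropout=0.1,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
model = get_peft_model(model, lora_config)

# batch size 4 x 4 accumulation steps = simulated batch size 16;
# gradient checkpointing trades recomputation for activation memory
training_args = TrainingArguments(
    output_dir="smolvlm-ft",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    gradient_checkpointing=True,
    bf16=True,
    num_train_epochs=1,
)
```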
🖼️ Multimodal
> At Hugging Face we released SmolVLM, a performant and efficient smol vision language model
> Show Lab released ShowUI-2B: a new vision-language-action model to build GUI/web automation agents 🤖
> Rhymes AI released the base models of Aria: Aria-Base-64K and Aria-Base-8K, with their respective context lengths
> ViDoRe team released ColSmolVLM: a new ColPali-like retrieval model based on SmolVLM
> Dataset: Llava-CoT-o1-Instruct, a new dataset labelled using the Llava-CoT multimodal reasoning model
> Dataset: LLaVA-CoT-100k, the dataset used to train Llava-CoT, released by the creators of Llava-CoT
💬 LLMs
> Qwen team released QwQ-32B-Preview, a state-of-the-art open-source reasoning model that broke the internet 🔥
> Alibaba released Marco-o1, a new open-source reasoning model 🔥
> NVIDIA released Hymba 1.5B Base and Instruct, new state-of-the-art SLMs with a hybrid architecture (Mamba + transformer)
⏯️ Image/Video Generation
> Qwen2VL-Flux: a new image generation model based on the Qwen2VL image encoder, T5, and Flux for generation
> Lightricks released LTX-Video, a new DiT-based video generation model that can generate 24 FPS videos at 768x512 resolution ⏯️
> Dataset: Image Preferences, a new image generation preference dataset made with the DIBT community effort by Argilla 🏷️
Audio
> OuteAI released OuteTTS-0.2-500M, a new multilingual text-to-speech model based on Qwen-2.5-0.5B, trained on 5B audio prompt tokens
What a week! A recap of everything you missed: merve/nov-22-releases-673fbbcfc1c97c4f411def07
Multimodal ✨
> Mistral AI released Pixtral 124B, a gigantic open vision language model
> Llava-CoT (formerly known as Llava-o1) was released: a multimodal reproduction of the o1 model by PKU
> OpenGVLab released MMPR: a new multimodal reasoning dataset
> Jina released Jina-CLIP-v2: 0.98B multilingual multimodal embeddings
> Apple released AIMv2, new SotA vision encoders
LLMs 🦙
> AllenAI dropped a huge release of models, datasets, and scripts for Tülu, a family of models based on Llama 3.1 aligned with SFT, DPO, and a new technique they developed called RLVR
> Jina released jina-embeddings-v3: new multilingual embeddings with a longer context
> Hugging Face released SmolTalk: the synthetic dataset used to align SmolLM2 with supervised fine-tuning
> Microsoft released orca-agentinstruct-1M-v1: a gigantic instruction dataset of 1M synthetic instruction pairs
Image Generation 🖼️
> Black Forest Labs released FLUX.1 Tools: four new models for different image modifications and two LoRAs for image conditioning and better steering of generations
Lastly, Hugging Face released a new library, Observers: a lightweight SDK for monitoring interactions with AI APIs and easily storing and browsing them. $ pip install observers
Apple released AIMv2, a family of state-of-the-art open vision encoders apple/aimv2-6720fe1558d94c7805f7688c
> like CLIP, but with a decoder added and trained autoregressively 🤯
> 19 open models in 300M, 600M, 1.2B, and 2.7B sizes with resolutions of 224, 336, and 448
> Load and use with 🤗 transformers
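A minimal sketch of loading one of the checkpoints, assuming the apple/aimv2-large-patch14-224 repo and that it loads through AutoModel with remote code:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModel

ckpt = "apple/aimv2-large-patch14-224"  # assumed checkpoint id (the 300M / 224-res variant)
processor = AutoImageProcessor.from_pretrained(ckpt)
model = AutoModel.from_pretrained(ckpt, trust_remote_code=True)

image = Image.open("cat.png")  # any RGB image you have locally
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# patch-level features from the vision encoder
features = outputs.last_hidden_state
print(features.shape)
```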
It's been a while since we shipped native quantization support in diffusers 🧨
We currently support bitsandbytes as the official backend, but using others like torchao is already very simple.
This post is just a reminder of what's possible:
1. Loading a model with a quantization config
2. Saving a model with a quantization config
3. Loading a pre-quantized model
4. enable_model_cpu_offload()
5. Training and loading LoRAs into quantized checkpoints
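As a rough sketch of points 1–4, assuming the FLUX.1-dev transformer as the example component (the checkpoint choice and exact kwargs are illustrative, not the only supported path):

```python
import torch
from diffusers import BitsAndBytesConfig, FluxPipeline, FluxTransformer2DModel

# 1. load a model component with a quantization config (4-bit NF4 via bitsandbytes)
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev", subfolder="transformer",
    quantization_config=quant_config, torch_dtype=torch.bfloat16,
)

# 2. save it; the quantization config is stored alongside the weights
transformer.save_pretrained("flux-transformer-nf4")

# 3. load the pre-quantized checkpoint back without re-specifying the config
transformer = FluxTransformer2DModel.from_pretrained(
    "flux-transformer-nf4", torch_dtype=torch.bfloat16
)

# 4. plug it into the pipeline and offload to save VRAM
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", transformer=transformer, torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()
```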
For anyone who struggles with NER or information extraction with LLMs:
We showed an efficient workflow for token classification, including zero-shot suggestions and model fine-tuning, with Argilla, GliNER, the NuMind NuExtract LLM, and SpanMarker. @argilla
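The zero-shot suggestion step with GliNER on its own looks roughly like this; a minimal sketch with an illustrative checkpoint, text, and label set (the Argilla logging and fine-tuning steps are separate):

```python
from gliner import GLiNER

# load a pretrained GliNER checkpoint (illustrative choice)
model = GLiNER.from_pretrained("urchade/gliner_multi-v2.1")

text = "Hugging Face released SmolVLM, a small vision language model, in November 2024."
labels = ["organization", "model", "date"]

# zero-shot span predictions for an arbitrary label set; these can be pushed
# to Argilla as suggestions for human review before fine-tuning
for ent in model.predict_entities(text, labels, threshold=0.5):
    print(ent["text"], "->", ent["label"])
```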
✨ Unified 3D generation & text understanding.
✨ 3D meshes as plain text for seamless LLM integration.
✨ High-quality 3D outputs rivaling specialized models.
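To make the "3D meshes as plain text" point concrete, here is a rough sketch (my own illustration, not the model's exact tokenization) of serializing a tiny mesh into OBJ-style text that an LLM can read or emit:

```python
# A toy tetrahedron: vertices as (x, y, z) triples, faces as 1-indexed vertex ids,
# written in OBJ-style plain text so a language model can consume or generate it.
vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
faces = [(1, 2, 3), (1, 2, 4), (1, 3, 4), (2, 3, 4)]

lines = [f"v {x:.2f} {y:.2f} {z:.2f}" for x, y, z in vertices]
lines += [f"f {a} {b} {c}" for a, b, c in faces]
mesh_as_text = "\n".join(lines)
print(mesh_as_text)  # this string can go straight into an LLM prompt or be produced as output
```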
OmniVision-968M: a new local VLM for edge devices, fast & small but performant 🚨
a new vision language model with 9x fewer image tokens, super efficient
aligned with DPO to reduce hallucinations ⚡️
Apache 2.0 license 🔥