# Model Card for Ultravox

Ultravox is a multimodal Speech LLM built around a pretrained Llama3.1-70B-Instruct backbone and a Whisper-medium encoder.
See https://ultravox.ai for the GitHub repo and more information.
## Model Details

### Model Description
Ultravox is a multimodal model that can consume both speech and text as input (e.g., a text system prompt and voice user message).
The input to the model is given as a text prompt with a special `<|audio|>` pseudo-token, and the model processor replaces this magic token with embeddings derived from the input audio. Using the merged embeddings as input, the model then generates output text as usual.
In a future revision of Ultravox, we plan to expand the token vocabulary to support generation of semantic and acoustic audio tokens, which can then be fed to a vocoder to produce voice output. No preference tuning has been applied to this revision of the model.
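For illustration, the snippet below shows how a conversation containing the `<|audio|>` pseudo-token might be laid out before the processor expands it. The exact chat template is defined by the model's processor, so treat this as a sketch rather than the canonical format.

```python
# Sketch only: the exact chat template is applied by the Ultravox processor.
turns = [
    {"role": "system", "content": "You are a friendly and helpful character."},
    # The <|audio|> pseudo-token marks where the audio embeddings are spliced in.
    {"role": "user", "content": "<|audio|>"},
]
```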
- Developed by: Fixie.ai
- License: MIT
### Model Sources
- Repository: https://ultravox.ai
- Demo: See repo
## Usage
Think of the model as an LLM that can also hear and understand speech. As such, it can be used as a voice agent, and also to do speech-to-speech translation, analysis of spoken audio, etc.
To use the model, try the following:
```python
# pip install transformers peft librosa
import transformers
import numpy as np
import librosa

# Load the model as a Transformers pipeline (trust_remote_code is needed for the custom Ultravox code).
pipe = transformers.pipeline(model='fixie-ai/ultravox-v0_4', trust_remote_code=True)

path = "<path-to-input-audio>"  # TODO: pass the audio here
# Load and resample the audio to 16 kHz, the rate expected by the Whisper encoder.
audio, sr = librosa.load(path, sr=16000)

turns = [
    {
        "role": "system",
        "content": "You are a friendly and helpful character. You love to answer questions for people."
    },
]
pipe({'audio': audio, 'turns': turns, 'sampling_rate': sr}, max_new_tokens=30)
```
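The pipeline returns the model's text reply. For a multi-turn conversation you can append that reply to `turns` and call the pipeline again with the next audio clip; the following is a minimal sketch of that pattern (it assumes the pipeline output is a plain string, and `<path-to-next-audio>` is a placeholder):

```python
# Minimal multi-turn sketch; assumes the pipeline returns the assistant reply as a string.
reply = pipe({'audio': audio, 'turns': turns, 'sampling_rate': sr}, max_new_tokens=30)
turns.append({"role": "assistant", "content": reply})

# Load the next user utterance (placeholder path) and continue the conversation.
next_audio, sr = librosa.load("<path-to-next-audio>", sr=16000)
pipe({'audio': next_audio, 'turns': turns, 'sampling_rate': sr}, max_new_tokens=30)
```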
## Training Details

The model uses a pretrained Llama3.1-70B-Instruct backbone as well as the encoder part of Whisper-medium.
Only the multi-modal adapter is trained, while the Whisper encoder and Llama are kept frozen.
We use a knowledge-distillation loss, in which Ultravox tries to match the logits of the text-based Llama backbone.
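As a rough illustration of that objective, the sketch below computes a KL-divergence distillation loss between the speech-conditioned student logits and the frozen text-only teacher logits. The actual loss, masking, and weighting used for Ultravox live in the training code in the repo; the names and temperature handling here are assumptions.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=1.0):
    """KL(teacher || student) over response tokens (illustrative sketch only)."""
    # student_logits: Ultravox logits conditioned on the audio input.
    # teacher_logits: frozen text-only Llama logits conditioned on the transcript.
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # Only the multi-modal adapter receives gradients; Whisper and Llama stay frozen.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature**2
```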
### Training Data

The training dataset is a mix of ASR datasets, extended by adding a "continuation" generated by Llama 3.1 70B.
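As a purely illustrative example (the field names and text below are hypothetical, not the actual dataset schema), an ASR sample extended this way might pair the original audio and transcript with an LLM-generated continuation:

```python
# Hypothetical illustration of an extended ASR sample; the real schema lives in the Ultravox repo.
sample = {
    "audio": "<path-to-utterance.wav>",                      # original ASR audio
    "transcript": "what is the tallest mountain on earth",   # original ASR transcript
    # Continuation generated by Llama 3.1 70B from the transcript, used as the training target.
    "continuation": "The tallest mountain on Earth is Mount Everest, at about 8,849 meters.",
}
```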
### Training Procedure
Supervised speech-to-text finetuning. For more information, see the training code in the Ultravox repo.
#### Training Hyperparameters

- Training regime: BF16 mixed precision training
- Hardware used: 8x H100 GPUs
#### Speeds, Sizes, Times

When invoked with audio content, the current version of Ultravox has a time-to-first-token (TTFT) of approximately 400 ms and a generation rate of roughly 50-100 tokens per second on 4x H100 SXM GPUs, using the Llama 3.1 70B backbone.
Check out the audio tab on TheFastest.ai for daily benchmarks and a comparison with other existing models.
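To get rough numbers on your own hardware, you can time generations through the pipeline as sketched below. This reuses `pipe`, `audio`, `turns`, and `sr` from the usage example above and is only an approximation: the decode-rate estimate includes prefill time and assumes generation runs the full token budget.

```python
import time

inputs = {'audio': audio, 'turns': turns, 'sampling_rate': sr}

# Rough TTFT proxy: time a generation capped at a single new token.
start = time.perf_counter()
pipe(inputs, max_new_tokens=1)
ttft_ms = (time.perf_counter() - start) * 1000

# Rough decode-rate proxy: time a longer generation and divide by the token budget.
n_tokens = 128
start = time.perf_counter()
pipe(inputs, max_new_tokens=n_tokens)
tok_per_s = n_tokens / (time.perf_counter() - start)

print(f"TTFT ~ {ttft_ms:.0f} ms, decode rate ~ {tok_per_s:.0f} tokens/s")
```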
## Evaluation
| | en_de (BLEU) | es_en (BLEU) | LibriSpeech clean.test (WER) |
| --- | --- | --- | --- |
| Ultravox v0.3 | 22.66 | 24.74 | 6.67 |
| Ultravox v0.4 8B | 25.47 | 37.11 | 4.45 |
| Ultravox v0.4 70B | 30.30 | 39.55 | 4.49 |
| Llama3.1 8B (text-only) | 32.59 | 44.62 | - |
| Llama3.1 70B (text-only) | 38.76 | 46.39 | - |