
Automatic Speech Recognition


Automatic Speech Recognition (ASR), also known as Speech to Text (STT), is the task of transcribing a given audio input into text.

Example applications:

  • Transcribing a podcast
  • Building a voice assistant
  • Generating subtitles for a video

For more details about the automatic-speech-recognition task, check out its dedicated page! You will find examples and related materials.

Recommended models

This is only a subset of the supported models. Find the model that suits you best here.

Using the API

Python
import requests

# Serverless Inference API endpoint for the openai/whisper-large-v3 model
API_URL = "https://api-inference.huggingface.co/models/openai/whisper-large-v3"
headers = {"Authorization": "Bearer hf_***"}

def query(filename):
    # Read the audio file as raw bytes and send it as the request body
    with open(filename, "rb") as f:
        data = f.read()
    response = requests.post(API_URL, headers=headers, data=data)
    return response.json()

output = query("sample1.flac")

To use the Python client, see huggingface_hub’s package reference.
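As a hedged sketch, the same call through huggingface_hub's InferenceClient could look roughly like this (assuming a recent version of the package is installed; see the package reference for the exact method signature and return type):

from huggingface_hub import InferenceClient

# Assumes huggingface_hub is installed and hf_*** is replaced with a valid token
client = InferenceClient(model="openai/whisper-large-v3", token="hf_***")

# Accepts a local path, raw bytes, or a URL; the returned object exposes the transcription
result = client.automatic_speech_recognition("sample1.flac")
print(result.text)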

API specification

Request

Payload
inputs* (string): The input audio data as a base64-encoded string. If no parameters are provided, you can also provide the audio data as a raw bytes payload.
parameters (object): Additional inference parameters for Automatic Speech Recognition.
        return_timestamps (boolean): Whether to output corresponding timestamps with the generated text.
        generate (object): Ad-hoc parametrization of the text generation process.
                temperature (number): The value used to modulate the next token probabilities.
                top_k (integer): The number of highest-probability vocabulary tokens to keep for top-k filtering.
                top_p (number): If set to a float < 1, only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation.
                typical_p (number): Local typicality measures how similar the conditional probability of predicting a target token next is to the expected conditional probability of predicting a random token next, given the partial text already generated. If set to a float < 1, the smallest set of the most locally typical tokens with probabilities that add up to typical_p or higher are kept for generation. See this paper for more details.
                epsilon_cutoff (number): If set to a float strictly between 0 and 1, only tokens with a conditional probability greater than epsilon_cutoff will be sampled. In the paper, suggested values range from 3e-4 to 9e-4, depending on the size of the model. See Truncation Sampling as Language Model Desmoothing for more details.
                eta_cutoff (number): Eta sampling is a hybrid of locally typical sampling and epsilon sampling. If set to a float strictly between 0 and 1, a token is only considered if it is greater than either eta_cutoff or sqrt(eta_cutoff) * exp(-entropy(softmax(next_token_logits))). The latter term is intuitively the expected next-token probability, scaled by sqrt(eta_cutoff). In the paper, suggested values range from 3e-4 to 2e-3, depending on the size of the model. See Truncation Sampling as Language Model Desmoothing for more details.
                max_length (integer): The maximum length (in tokens) of the generated text, including the input.
                max_new_tokens (integer): The maximum number of tokens to generate. Takes precedence over max_length.
                min_length (integer): The minimum length (in tokens) of the generated text, including the input.
                min_new_tokens (integer): The minimum number of tokens to generate. Takes precedence over min_length.
                do_sample (boolean): Whether to use sampling instead of greedy decoding when generating new tokens.
                early_stopping (enum): Possible values: never, true, false.
                num_beams (integer): Number of beams to use for beam search.
                num_beam_groups (integer): Number of groups to divide num_beams into in order to ensure diversity among different groups of beams. See this paper for more details.
                penalty_alpha (number): The value that balances model confidence and the degeneration penalty in contrastive search decoding.
                use_cache (boolean): Whether the model should use the past key/values attentions to speed up decoding.
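
As an illustration, the sketch below sends a parameterized request: because parameters are supplied, the audio is base64-encoded inside a JSON body. Field names follow the specification above; the specific generation values (temperature, max_new_tokens) are placeholder assumptions, not recommendations.

import base64
import requests

API_URL = "https://api-inference.huggingface.co/models/openai/whisper-large-v3"
headers = {"Authorization": "Bearer hf_***"}

# With parameters present, the audio must be sent as a base64-encoded string in a JSON payload
with open("sample1.flac", "rb") as f:
    audio_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "inputs": audio_b64,
    "parameters": {
        "return_timestamps": True,
        "generate": {
            "temperature": 0.2,      # placeholder value
            "max_new_tokens": 256,   # placeholder value
        },
    },
}

response = requests.post(API_URL, headers=headers, json=payload)
print(response.json())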

Some options can be configured by passing headers to the Inference API. Here are the available headers:

Headers
authorization (string): Authentication header in the form 'Bearer hf_****', where hf_**** is a personal user access token with Inference API permission. You can generate one from your settings page.
x-use-cache (boolean, defaults to true): There is a cache layer on the Inference API to speed up requests we have already seen. Most models can use those results as they are deterministic (meaning the outputs will be the same anyway). However, if you use a nondeterministic model, you can set this header to prevent the caching mechanism from being used, resulting in a genuinely new query. Read more about caching here.
x-wait-for-model (boolean, defaults to false): If the model is not ready, wait for it instead of receiving a 503 error. This limits the number of requests required to get your inference done. It is advised to set this flag to true only after receiving a 503 error, as it keeps waiting in your application confined to known places. Read more about model availability here.
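
For example, a request that bypasses the cache and waits for the model to load could look like this sketch (the header names come from the table above; values are passed as strings):

import requests

API_URL = "https://api-inference.huggingface.co/models/openai/whisper-large-v3"
headers = {
    "Authorization": "Bearer hf_***",
    "x-use-cache": "false",       # force a fresh query instead of a cached result
    "x-wait-for-model": "true",   # wait for the model to load rather than receive a 503
}

with open("sample1.flac", "rb") as f:
    data = f.read()

response = requests.post(API_URL, headers=headers, data=data)
print(response.json())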

For more information about Inference API headers, check out the parameters guide.

Response

Body
text (string): The recognized text.
chunks (object[]): When return_timestamps is enabled, chunks contains a list of audio chunks identified by the model.
        text (string): A chunk of text identified by the model.
        timestamps (number[]): The start and end timestamps corresponding to the text.
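
Assuming a request made with return_timestamps enabled (for instance, the parameterized payload sketch above), the response could be consumed along these lines; the field names follow the body specification, while the exact chunking depends on the model:

# `response` is the parameterized request from the payload sketch above
output = response.json()

print(output["text"])  # the full transcription

# Each chunk pairs a piece of text with its [start, end] timestamps
for chunk in output.get("chunks", []):
    start, end = chunk["timestamps"]
    print(f"[{start} - {end}] {chunk['text']}")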