Create transcription

POST https://api.openai.com/v1/audio/transcriptions

Transcribes audio into the input language.

Request body

  • file
file
    Required
    The audio file object (not file name) to transcribe, in one of these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.
  • model
    string
    Required

    ID of the model to use. The options are gpt-4o-transcribe, gpt-4o-mini-transcribe, and whisper-1 (which is powered by our open source Whisper V2 model).
  • language
    string

    The language of the input audio. Supplying the input language in ISO-639-1 (e.g. en) format will improve accuracy and latency.

  • prompt
    string

An optional text to guide the model's style or continue a previous audio segment. The prompt should match the audio language (see the sketch after this list).

  • response_format
    string
    Defaults: json

    The format of the output, in one of these options: json, text, srt, verbose_json, or vtt. For gpt-4o-transcribe and gpt-4o-mini-transcribe, the only supported format is json.

  • temperature
    number
    Defaults: 0

    The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use log probability to automatically increase the temperature until certain thresholds are hit.

  • include[]
    array

Additional information to include in the transcription response. logprobs will return the log probabilities of the tokens in the response to understand the model's confidence in the transcription. logprobs only works with response_format set to json and only with the models gpt-4o-transcribe and gpt-4o-mini-transcribe (see the sketch after this list).

  • timestamp_granularities[]
    array
    Defaults: segment

The timestamp granularities to populate for this transcription. response_format must be set to verbose_json to use timestamp granularities. Either or both of these options are supported: word or segment (see the sketch after this list). Note: There is no additional latency for segment timestamps, but generating word timestamps incurs additional latency.

  • stream
    boolean or null
    Defaults: false

If set to true, the model response data will be streamed to the client as it is generated using server-sent events. See the Streaming section of the Speech-to-Text guide for more information; a minimal request sketch follows this list.

    Note: Streaming is not supported for the whisper-1 model and will be ignored.
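As a sketch combining several of the optional parameters above: a whisper-1 request that declares the input language, supplies a guiding prompt, and asks for SRT output (whisper-1 is used here because the gpt-4o models only support json). The file path and prompt text are placeholders.

curl https://api.openai.com/v1/audio/transcriptions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: multipart/form-data" \
  -F file="@/path/to/file/audio.mp3" \
  -F model="whisper-1" \
  -F language="en" \
  -F prompt="A discussion about large language models." \
  -F response_format="srt"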
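A minimal sketch of a request that asks for token log probabilities; per the constraints above, it uses gpt-4o-transcribe with the json format. The file path is a placeholder.

curl https://api.openai.com/v1/audio/transcriptions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: multipart/form-data" \
  -F file="@/path/to/file/audio.mp3" \
  -F model="gpt-4o-transcribe" \
  -F response_format="json" \
  -F "include[]=logprobs"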
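A minimal sketch of a request for word- and segment-level timestamps; verbose_json is required, so this uses whisper-1. Repeat the array field once per granularity. The file path is a placeholder.

curl https://api.openai.com/v1/audio/transcriptions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: multipart/form-data" \
  -F file="@/path/to/file/audio.mp3" \
  -F model="whisper-1" \
  -F response_format="verbose_json" \
  -F "timestamp_granularities[]=word" \
  -F "timestamp_granularities[]=segment"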
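A minimal streaming sketch; whisper-1 ignores stream, so this uses gpt-4o-transcribe. curl's -N flag disables output buffering so the server-sent events print as they arrive. The file path is a placeholder.

curl -N https://api.openai.com/v1/audio/transcriptions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: multipart/form-data" \
  -F file="@/path/to/file/audio.mp3" \
  -F model="gpt-4o-transcribe" \
  -F stream="true"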

Example request

curl https://api.openai.com/v1/audio/transcriptions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: multipart/form-data" \
  -F file="@/path/to/file/audio.mp3" \
  -F model="gpt-4o-transcribe"
Example response

{
  "text": "Imagine the wildest idea that you've ever had, and you're curious about how it might scale to something that's a 100, a 1,000 times bigger. This is a place where you can get to do that."
}