V2 API Reference

The Speechmatics Automatic Speech Recognition REST API is used to submit ASR jobs and receive the results. The supported job type is transcription of audio files.

Version: 2.7.0

Terms of service

https://www.speechmatics.com/terms-and-conditions/

Contact information

support@speechmatics.com

URI scheme

BasePath: /v2/jobs/

Schemes: HTTPS, HTTP

Paths

The base URL https://${APPLIANCE_HOST}/v2/jobs/ is used for REST Speech API requests. If you are using HTTP, the base URL is: http://${APPLIANCE_HOST}:8082/v2/jobs/.

/jobs

Requests without a job ID component are used to create a new job or to return a list of all submitted jobs.

POST

Summary: Create a new job.

Parameters

| Name | Located in | Description | Required | Schema |
|------|------------|-------------|----------|--------|
| config | formData | JSON containing a JobConfig model indicating the type and parameters for the recognition job. | Yes | string |
| data_file | formData | The data file to be processed. Alternatively, the data file can be fetched from a URL specified in JobConfig. | No | file |

Responses

| Code | Description | Schema |
|------|-------------|--------|
| 201 | Created | CreateJobResponse |
| 400 | Bad request | ErrorResponse |
| 401 | Unauthorized | ErrorResponse |
| 403 | Forbidden | ErrorResponse |
| 500 | Internal Server Error | ErrorResponse |
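As an illustrative sketch (not part of the reference itself), the `config` form field can be built with Python's standard library; the language code and file name are placeholders:

```python
import json

# Minimal JobConfig for the `config` form field (field names from the
# JobConfig and TranscriptionConfig models; "en" is a placeholder).
job_config = {
    "type": "transcription",
    "transcription_config": {"language": "en"},
}
config_field = json.dumps(job_config)

# The job is then created with a multipart/form-data POST, e.g. with curl:
#   curl -F "config=<the JSON above>" -F "data_file=@example.wav" \
#        "https://${APPLIANCE_HOST}/v2/jobs/"
print(config_field)
```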

GET

Summary: List all jobs.

Responses

| Code | Description | Schema |
|------|-------------|--------|
| 200 | OK | RetrieveJobsResponse |
| 401 | Unauthorized | ErrorResponse |
| 500 | Internal Server Error | ErrorResponse |

/jobs/{jobid}

Requests with a job ID component are used to view the status, transcript or audio data for a job, or to remove a given job from the system.

GET

Summary: Get job details, including progress and any error reports.

Parameters

| Name | Located in | Description | Required | Schema |
|------|------------|-------------|----------|--------|
| jobid | path | ID of the job. | Yes | string |

Responses

| Code | Description | Schema |
|------|-------------|--------|
| 200 | OK | RetrieveJobResponse |
| 401 | Unauthorized | ErrorResponse |
| 404 | Not found | ErrorResponse |
| 500 | Internal Server Error | ErrorResponse |

DELETE

Summary: Delete a job and remove all associated resources.

Parameters

| Name | Located in | Description | Required | Schema |
|------|------------|-------------|----------|--------|
| jobid | path | ID of the job to delete. | Yes | string |
| force | query | When set, a running job will be force terminated. When unset (default), a running job will not be terminated and the request will return HTTP 423 Locked. | No | boolean |

Responses

| Code | Description | Schema |
|------|-------------|--------|
| 200 | The job that was deleted. | DeleteJobResponse |
| 401 | Unauthorized | ErrorResponse |
| 404 | Not found | ErrorResponse |
| 423 | Locked | ErrorResponse |
| 500 | Internal Server Error | ErrorResponse |
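A hedged sketch of building the delete URL with the force flag; the host `example.appliance` and job id `abc123` are placeholders, not values from the reference:

```python
from urllib.parse import urlencode

# Hypothetical job id; a real id comes from CreateJobResponse.
jobid = "abc123"
base = f"https://example.appliance/v2/jobs/{jobid}"

# Without force=true a running job is not terminated and the API
# responds 423 Locked; with force=true the running job is terminated.
url = base + "?" + urlencode({"force": "true"})
print(url)  # send an HTTP DELETE to this URL
```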
/jobs/{jobid}/transcript

GET

Summary: Get the transcript for a transcription job.

Parameters

| Name | Located in | Description | Required | Schema |
|------|------------|-------------|----------|--------|
| jobid | path | ID of the job. | Yes | string |
| format | query | The transcription format (by default the json-v2 format is returned). txt and srt are also supported. | No | string |

Responses

| Code | Description | Schema |
|------|-------------|--------|
| 200 | OK | RetrieveTranscriptResponse |
| 401 | Unauthorized | ErrorResponse |
| 404 | Not found | ErrorResponse |
| 410 | Gone | ErrorResponse |
| 500 | Internal Server Error | ErrorResponse |
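An illustrative sketch of requesting the transcript in SRT rather than the default json-v2 format; the host and job id are placeholders:

```python
from urllib.parse import urlencode

jobid = "abc123"  # placeholder job id
# Request the transcript as SRT instead of the default json-v2.
url = f"https://example.appliance/v2/jobs/{jobid}/transcript?" + urlencode(
    {"format": "srt"}
)
print(url)  # send an HTTP GET to this URL
```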
/jobs/{jobid}/log

GET

Summary: Get the log file for a transcription job.

Parameters

| Name | Located in | Description | Required | Schema |
|------|------------|-------------|----------|--------|
| jobid | path | ID of the job. | Yes | string |

Responses

| Code | Description | Schema |
|------|-------------|--------|
| 200 | OK | file |
| 401 | Unauthorized | ErrorResponse |
| 404 | Not found | ErrorResponse |
| 410 | Gone | ErrorResponse |
| 500 | Internal Server Error | ErrorResponse |
| 501 | Not Implemented | ErrorResponse |

Models

ErrorResponse

| Name | Type | Description | Required |
|------|------|-------------|----------|
| code | integer | The HTTP status code. | Yes |
| error | string | The error message. | Yes |
| detail | string | The details of the error. | No |

TrackingData

| Name | Type | Description | Required |
|------|------|-------------|----------|
| title | string | The title of the job. | No |
| reference | string | External system reference. | No |
| tags | [string] | | No |
| details | object | Customer-defined JSON structure. | No |

DataFetchConfig

| Name | Type | Description | Required |
|------|------|-------------|----------|
| url | string | A URL where a file is stored. | Yes |
| auth_headers | [string] | A list of additional headers to be added to the input fetch request when using http or https. This is intended to support authentication or authorization, for example by supplying an OAuth2 bearer token. | No |

TranscriptionConfig

| Name | Type | Description | Required |
|------|------|-------------|----------|
| language | string | Language model to process the audio input, normally specified as an ISO language code. | Yes |
| output_locale | string | Language locale to be used when generating the transcription output, normally specified as an ISO language code. | No |
| additional_vocab | [object] | List of custom words or phrases that should be recognized. Alternative pronunciations can be specified to aid recognition. | No |
| punctuation_overrides | | Control punctuation settings. | No |
| diarization | string | Specify whether speaker or channel labels are added to the transcript. The default is none; the valid values are listed below. | No |
| speaker_diarization_config | SpeakerDiarizationConfig | Configuration for speaker diarization (see the SpeakerDiarizationConfig model). | No |
| speaker_change_sensitivity | float | Ranges between zero and one. Controls how responsive the system is to potential speaker changes. A high value indicates high sensitivity. Defaults to 0.4. | No |
| channel_diarization_labels | [string] | Transcript labels to use when collating separate input channels. | No |
| speaker_diarization_params | | (Deprecated, ignored) Configuration for speaker diarization. | No |
| operating_point | string | Specify whether to use a standard or enhanced model for transcription. By default the standard model is used. | No |
| enable_entities | boolean | Specify whether to enable entity types within JSON output, as well as additional spoken_form and written_form metadata. Defaults to false. | No |

For the diarization parameter, the following values are valid:

| Value | Description |
|-------|-------------|
| none | No speaker or channel labels are added. |
| speaker | Speaker attribution is performed based on acoustic matching; all input channels are mixed into a single stream for processing. |
| channel | Multiple input channels are processed individually and collated into a single transcript. |
| speaker_change | The output indicates when the speaker in the audio changes. No speaker attribution is performed. This is a faster method than speaker. The reported speaker changes may not agree with speaker. |
| channel_and_speaker_change | Both channel and speaker_change are switched on. A speaker change is indicated if more than one speaker is recorded in one channel. |
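A sketch of a TranscriptionConfig requesting speaker diarization; the language code is a placeholder, and 0.7 is simply an illustrative sensitivity above the 0.5 default:

```python
import json

# TranscriptionConfig with diarization set to "speaker" and the optional
# SpeakerDiarizationConfig sensitivity raised above the 0.5 default.
transcription_config = {
    "language": "en",  # placeholder language code
    "diarization": "speaker",
    "speaker_diarization_config": {"speaker_sensitivity": 0.7},
}
print(json.dumps(transcription_config))
```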

SpeakerDiarizationConfig

Additional configuration for the Speaker Diarization feature.

| Name | Type | Description | Required |
|------|------|-------------|----------|
| speaker_sensitivity | float | Range between 0 and 1. A higher sensitivity increases the likelihood of more unique speakers being returned. For example, if you see fewer speakers returned than expected, try increasing the sensitivity value; if too many speakers are returned, try reducing it. The default is 0.5. | No |

NotificationConfig

| Name | Type | Description | Required |
|------|------|-------------|----------|
| url | string | The URL to which a notification message will be sent upon completion of the job. The job id and status are added as query parameters, and any combination of the job inputs and outputs can be included by listing them in contents. If contents is empty, the body of the request will be empty. If only one item is listed, it will be sent as the body of the request with Content-Type set to an appropriate value such as application/octet-stream or application/json. If multiple items are listed, they will be sent as named file attachments using the multipart content type. If contents is not specified, the transcript item will be sent within the body of the POST request in json-v2 format. If the job was rejected or failed during processing, that will be indicated by the status, and any output items that are unavailable as a result will be omitted. The body formatting rules will still be followed as if all items were available. The user-agent header is set to Speechmatics-API/2.0, or Speechmatics API V2 in older API versions. | Yes |
| contents | [string] | Specifies a list of items to be attached to the notification message. When multiple items are requested, they are included as named file attachments. | No |
| method | string | The method to be used with http and https urls. The default is post. | No |
| auth_headers | [string] | A list of additional headers to be added to the notification request when using http or https. This is intended to support authentication or authorization, for example by supplying an OAuth2 bearer token. | No |

OutputConfig

If you want the transcription output to be in the SubRip Title (SRT) format and you want to alter the default parameters Speechmatics provides, you must provide the output_config within the config object.

| Name | Type | Description | Required |
|------|------|-------------|----------|
| srt_overrides | object | Parameters that override default values of SRT conversion. max_line_length: sets the maximum count of characters per subtitle line, including white space. max_lines: sets the maximum count of lines in a subtitle section. | No |
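An illustrative output_config fragment; 37 characters and 2 lines are example values, not documented defaults:

```python
import json

# OutputConfig with srt_overrides controlling SRT subtitle layout.
output_config = {
    "srt_overrides": {"max_line_length": 37, "max_lines": 2}
}
print(json.dumps(output_config))
```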

JobConfig

JSON object that contains various groups of job configuration parameters. Based on the value of type, a type-specific object such as transcription_config is required to be present to specify all configuration settings or parameters needed to process the job inputs as expected.

If the results of the job are to be forwarded on completion, notification_config can be provided with a list of callbacks to be made; no assumptions should be made about the order in which they will occur.

Customer specific job details or metadata can be supplied in tracking, and this information will be available where possible in the job results and in callbacks.

| Name | Type | Description | Required |
|------|------|-------------|----------|
| type | string | | Yes |
| fetch_data | DataFetchConfig | | No |
| fetch_text | DataFetchConfig | | No |
| transcription_config | TranscriptionConfig | | No |
| notification_config | [NotificationConfig] | | No |
| tracking | TrackingData | | No |
| output_config | OutputConfig | | No |
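A fuller JobConfig sketch combining fetch_data, notification_config and tracking; all URLs, the bearer token, the language code and the tracking values are placeholders:

```python
import json

# Illustrative JobConfig: fetch the audio from a URL, request a callback
# with the transcript on completion, and attach customer tracking data.
job_config = {
    "type": "transcription",
    "fetch_data": {
        "url": "https://example.com/audio/meeting.wav",
        "auth_headers": ["Authorization: Bearer <token>"],
    },
    "transcription_config": {"language": "en"},
    "notification_config": [
        {"url": "https://example.com/callback", "contents": ["transcript"]}
    ],
    "tracking": {"title": "Weekly meeting", "tags": ["meeting"]},
}
print(json.dumps(job_config))
```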

CreateJobResponse

In the job response you will see balance and cost values returned, but these are not used by the appliance; they are only maintained for backwards compatibility with the legacy V1 Cloud Offering, and should be ignored by clients.

| Name | Type | Description | Required |
|------|------|-------------|----------|
| id | string | The unique ID assigned to the job. Keep a record of this for later retrieval of your completed job. | Yes |

JobDetails

Document describing a job. JobConfig will be present in the JobDetails returned for a GET jobs/ request in the Cloud Offering and on the Batch Appliance, but it will not be present in the JobDetails items returned in a RetrieveJobsResponse on the Batch Appliance.

| Name | Type | Description | Required |
|------|------|-------------|----------|
| created_at | dateTime | The UTC date time the job was created. | Yes |
| data_name | string | Name of the data file submitted for the job. | Yes |
| duration | integer | The file duration (in seconds). May be missing for fetch URL jobs. | No |
| id | string | The unique id assigned to the job. | Yes |
| status | string | The status of the job. running: the job is actively running. done: the job completed successfully. rejected: the job was accepted at first, but later could not be processed by the transcriber. deleted: the user deleted the job. expired: the system deleted the job, usually because the job was in the done state for a very long time. | Yes |
| config | JobConfig | | No |

RetrieveJobsResponse

| Name | Type | Description | Required |
|------|------|-------------|----------|
| jobs | [JobDetails] | | Yes |

RetrieveJobResponse

| Name | Type | Description | Required |
|------|------|-------------|----------|
| job | JobDetails | | Yes |

DeleteJobResponse

| Name | Type | Description | Required |
|------|------|-------------|----------|
| job | JobDetails | | Yes |

JobInfo

Summary information about an ASR job, to support identification and tracking.

| Name | Type | Description | Required |
|------|------|-------------|----------|
| created_at | dateTime | The UTC date time the job was created. | Yes |
| data_name | string | Name of the data file submitted for the job. | Yes |
| duration | integer | The data file audio duration (in seconds). | Yes |
| id | string | The unique id assigned to the job. | Yes |
| tracking | TrackingData | Customer-supplied data. | No |

RecognitionMetadata

Summary information about the output from an ASR job, comprising the job type and configuration parameters used when generating the output.

| Name | Type | Description | Required |
|------|------|-------------|----------|
| created_at | dateTime | The UTC date time the transcription output was created. | Yes |
| type | string | | Yes |
| transcription_config | TranscriptionConfig | | No |
| output_config | OutputConfig | | No |

RecognitionDisplay

| Name | Type | Description | Required |
|------|------|-------------|----------|
| direction | string | | Yes |

RecognitionAlternative

List of possible job output item values, ordered by likelihood.

| Name | Type | Description | Required |
|------|------|-------------|----------|
| content | string | | Yes |
| confidence | float | | Yes |
| language | string | | Yes |
| display | RecognitionDisplay | | No |
| speaker | string | | No |
| tags | [string] | | No |

RecognitionResult

An ASR job output item. The primary item types are word and punctuation. Other item types may be present, for example to provide semantic information of different forms.

| Name | Type | Description | Required |
|------|------|-------------|----------|
| channel | string | | No |
| start_time | float | | Yes |
| end_time | float | | Yes |
| entity_class | string | If an entity has been recognised, what type of entity it is. Displayed even if enable_entities is false. | Yes |
| spoken_form | array | For entity results only, the spoken_form is the transcript of the words directly spoken. Only valid if enable_entities is true. | No |
| written_form | array | For entity results only, the written_form is a standardized form of the spoken words. Only valid if enable_entities is true. | No |
| is_eos | boolean | Whether the punctuation mark is an end of sentence character. Only applies to punctuation marks. | No |
| type | string | New types of items may appear without being requested; unrecognized item types can be ignored. Current types are word, punctuation, speaker_change, and entity. | Yes |
| alternatives | [RecognitionAlternative] | | No |

RetrieveTranscriptResponse

| Name | Type | Description | Required |
|------|------|-------------|----------|
| format | string | Speechmatics JSON transcript format version number. | Yes |
| job | JobInfo | | Yes |
| metadata | RecognitionMetadata | | Yes |
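As a sketch of consuming the json-v2 output: each RecognitionResult carries its alternatives ordered by likelihood, so taking the first alternative's content of word and punctuation items yields a plain-text transcript. The fragment below is hand-made for illustration (the field values are not real appliance output), and it assumes the RecognitionResult items arrive in a top-level results array:

```python
# Hand-made illustrative fragment of a json-v2 transcript response.
response = {
    "format": "2.7",  # placeholder format version
    "results": [
        {"type": "word", "start_time": 0.1, "end_time": 0.4,
         "alternatives": [{"content": "Hello", "confidence": 0.99, "language": "en"}]},
        {"type": "punctuation", "start_time": 0.4, "end_time": 0.4, "is_eos": True,
         "alternatives": [{"content": ".", "confidence": 0.9, "language": "en"}]},
    ],
}

# Take the top (first) alternative of each word/punctuation item.
tokens = []
for result in response["results"]:
    if result["type"] in ("word", "punctuation") and result.get("alternatives"):
        tokens.append(result["alternatives"][0]["content"])

# Join words with spaces but attach punctuation to the preceding word.
text = ""
for tok in tokens:
    text += tok if tok in ".,!?;:" else ((" " if text else "") + tok)
print(text)
```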