Word timestamps
Word timestamps provide precise timing information for each word in the transcription. Use these offsets to generate captions and subtitle tracks, or to align transcripts with audio playback and downstream analytics.
Enabling Word Timestamps
Pre-Recorded API
Add word_timestamps=true to your Pulse STT query parameters. This works for both raw-byte uploads (Content-Type: audio/wav) and JSON requests with hosted audio URLs.
Sample request
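A minimal sketch of a JSON request for a hosted audio URL, using Python's standard-library urllib. The endpoint URL, the Authorization header format, and the audio_url body field are placeholders for illustration, not the documented API; only the word_timestamps=true query parameter comes from this page.

```python
import json
import urllib.request

# Placeholder endpoint -- substitute the real Pulse STT URL from your dashboard.
PULSE_STT_URL = "https://api.example.com/v1/transcribe"

def build_request(audio_url: str, api_key: str) -> urllib.request.Request:
    """Build a JSON request with word timestamps enabled.

    For raw-byte uploads, send the audio bytes as the body instead and set
    Content-Type: audio/wav; the query parameter is the same in both cases.
    """
    endpoint = f"{PULSE_STT_URL}?word_timestamps=true"
    body = json.dumps({"audio_url": audio_url}).encode("utf-8")
    return urllib.request.Request(
        endpoint,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # assumed auth scheme
        },
        method="POST",
    )

req = build_request("https://example.com/meeting.wav", "YOUR_API_KEY")
# Send with urllib.request.urlopen(req) when ready.
```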
Real-Time WebSocket API
Add word_timestamps=true to your WebSocket connection query parameters when connecting to the Pulse STT WebSocket API.
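The connection URL can be assembled with standard query-string encoding. This sketch assumes a placeholder WebSocket host and path, and the encoding/sample_rate parameters are illustrative extras; only word_timestamps (and the optional diarize) come from this page.

```python
from urllib.parse import urlencode

# Placeholder endpoint -- check the Pulse STT docs for the real host and path.
WS_BASE = "wss://api.example.com/v1/stt/stream"

params = {
    "word_timestamps": "true",
    "diarize": "true",        # optional: adds speaker fields to each word
    "encoding": "linear16",   # assumed audio parameters for illustration
    "sample_rate": 16000,
}
ws_url = f"{WS_BASE}?{urlencode(params)}"
# Connect with your WebSocket client of choice, e.g. the third-party
# `websockets` package: async with websockets.connect(ws_url) as ws: ...
```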
Output Format
Responses include a words array with word, start, end, and confidence fields. When diarization is enabled, the array also includes speaker (integer ID for realtime, string label for pre-recorded) and speaker_confidence (0.0 to 1.0, realtime only) fields.
Pre-Recorded API Response
The Pre-Recorded API response also includes an utterances field, which provides sentence-level timestamps.
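To make the shape concrete, here is a sketch of reading the words and utterances fields from a parsed response. The sample payload is illustrative, built only from the fields described above; the real response may nest these differently or include additional fields.

```python
# Illustrative pre-recorded response (string speaker labels per diarization docs).
response = {
    "words": [
        {"word": "hello", "start": 0.12, "end": 0.48, "confidence": 0.98, "speaker": "speaker_0"},
        {"word": "world", "start": 0.55, "end": 0.91, "confidence": 0.95, "speaker": "speaker_0"},
    ],
    "utterances": [
        {"text": "hello world", "start": 0.12, "end": 0.91},
    ],
}

# Word-level offsets for fine-grained alignment:
for w in response["words"]:
    print(f'{w["start"]:.2f}-{w["end"]:.2f}  {w["word"]}  (conf {w["confidence"]:.2f})')

# Sentence-level spans from the utterances field:
for u in response["utterances"]:
    print(f'{u["start"]:.2f}-{u["end"]:.2f}  {u["text"]}')
```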
Real-Time WebSocket API Response
When diarize=true is enabled, the words array also includes speaker (integer ID) and speaker_confidence (0.0 to 1.0) fields.
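With integer speaker IDs on each word, consecutive words from the same speaker can be grouped into turns. The sample words array below is illustrative, using only the fields described above.

```python
from itertools import groupby

# Illustrative realtime words with integer speaker IDs.
words = [
    {"word": "hi",    "start": 0.10, "end": 0.30, "confidence": 0.97, "speaker": 0, "speaker_confidence": 0.91},
    {"word": "there", "start": 0.35, "end": 0.60, "confidence": 0.96, "speaker": 0, "speaker_confidence": 0.90},
    {"word": "hello", "start": 0.80, "end": 1.05, "confidence": 0.98, "speaker": 1, "speaker_confidence": 0.88},
]

# Collapse consecutive same-speaker words into (speaker, text) turns.
turns = [
    (speaker, " ".join(w["word"] for w in group))
    for speaker, group in groupby(words, key=lambda w: w["speaker"])
]
# turns == [(0, "hi there"), (1, "hello")]
```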
Response Fields
- word: The transcribed word
- start: Start offset of the word within the audio
- end: End offset of the word within the audio
- confidence: Confidence score for the word
- speaker: Speaker identity when diarization is enabled (integer ID for realtime, string label for pre-recorded)
- speaker_confidence: Speaker confidence from 0.0 to 1.0 (realtime only)
Use Cases
- Caption generation: Create synchronized captions for video or live streams
- Subtitle tracks: Generate SRT or VTT subtitle files
- Analytics: Align transcripts with audio playback for detailed analysis
- Search: Enable time-based search within audio content
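As one example of the subtitle use case, the words array can be chunked into SRT cues. This is a minimal sketch assuming start/end are in seconds; the chunking policy (seven words per cue) is arbitrary.

```python
def to_srt_time(seconds: float) -> str:
    """Format seconds as an SRT timestamp: HH:MM:SS,mmm."""
    ms = round(seconds * 1000)
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def words_to_srt(words, max_words=7):
    """Chunk a words array into numbered SRT cues of up to max_words each."""
    cues = []
    for i in range(0, len(words), max_words):
        chunk = words[i:i + max_words]
        text = " ".join(w["word"] for w in chunk)
        cues.append(
            f"{len(cues) + 1}\n"
            f"{to_srt_time(chunk[0]['start'])} --> {to_srt_time(chunk[-1]['end'])}\n"
            f"{text}\n"
        )
    return "\n".join(cues)
```

Each cue spans from the first word's start to the last word's end, so caption timing stays anchored to the actual audio rather than estimated reading speed.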

