Pulse


Pulse is a high-accuracy, low-latency speech-to-text model built for real-time transcription across 38 languages, with streaming and non-streaming support.

  • 64ms TTFT at 1 concurrency
  • 300ms TTFT at 100 concurrency
  • 38 languages (streaming + non-streaming)
  • 2 modes: streaming + non-streaming

Model Overview

  • Developed by: Smallest AI
  • Model type: Speech-to-Text
  • Languages: 38 supported (plus multi, multi-eu, multi-indic, multi-asian aggregators)
  • License: Proprietary
  • Model format (non-streaming): pulse_offline_<lang>_<version>.smlst
  • Model format (streaming): pulse_streaming_<lang>_<version>.smlst
  • Documentation: docs.smallest.ai/waves
  • Console: console.smallest.ai
  • Support: support@smallest.ai

Key Capabilities

Real-Time Optimized

Ultra-low latency architecture delivering 64ms TTFT at 1 concurrency and 300ms at 100 concurrent requests — designed for live transcription and conversational AI.

Multi-Language

38 languages supported across streaming and non-streaming modes, with automatic language detection and code-switching within a single session.

PII / PCI Redaction

Built-in redaction of personal and payment card data across both streaming and non-streaming use cases.

Speaker Diarization

Automatic multi-speaker identification across both streaming and non-streaming modes, with per-word and per-utterance speaker labels.

Noise Reduction

Background noise handling built into the model.

Code-Switching

Supports multi-language audio within a single session. Works best when the known primary language is set explicitly; for example, setting es (Spanish) handles mixed English and Spanish audio automatically.


Performance & Benchmarks

Pulse STT is evaluated against three open-source datasets (FLEURS, ESB, and WildASR) and one internal English perturbation suite. Results are reported as Word Error Rate (WER); lower is better. NA means the figure is not available or the language is not supported by that provider.
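For reference, WER is the word-level edit distance between the reference transcript and the model output (substitutions + deletions + insertions), divided by the number of reference words. A minimal sketch of the standard computation:

```python
# Word Error Rate: word-level edit distance between reference and hypothesis,
# divided by the number of reference words. Lower is better.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("the cat sat on the mat", "the cat sit on mat"))  # 2 edits / 6 words ≈ 0.33
```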

For the full benchmark comparison across every dataset, see the Performance page.

FLEURS — Streaming

| Language | Smallest Pulse | Deepgram Nova 2 | Deepgram Nova 3 |
| --- | --- | --- | --- |
| Italian | 4.41% | 11.05% | 6.99% |
| English | 4.55% | 15.59% | 11.21% |
| Spanish | 5.99% | 10.67% | 7.52% |
| Portuguese | 8.32% | 14.15% | 11.46% |
| German | 9.5% | 11.1% | 10.15% |
| French | 10.71% | 14.3% | 12.07% |
| Russian | 14.35% | NA | NA |
| Dutch | 11.90% | NA | NA |

FLEURS — Pre-recorded

| Language | Smallest Pulse | Deepgram Nova 2 | Deepgram Nova 3 |
| --- | --- | --- | --- |
| English | 4.55% | 7.9% | 6.7% |
| Italian | 3.0% | 10.7% | 6.2% |
| Spanish | 3.2% | 8.6% | 4.1% |
| Portuguese | 5.0% | 9.9% | 7.5% |
| German | 6.4% | 8.2% | 8.5% |
| French | 7.1% | 13.3% | 10.7% |
| Russian | 9.6% | 7.9% | 11.8% |
| Ukrainian | 7.5% | 12.4% | NA |
| Polish | 10.3% | 12.2% | NA |
| Dutch | 15.0% | 16.3% | 12.5% |
| Czech | 12.4% | 22.9% | 19.2% |
| Slovak | 13.5% | 31.2% | NA |
| Swedish | 18.7% | 17.7% | 14.3% |
| Finnish | 18.3% | 14.1% | 13.2% |
| Latvian | 16.5% | 48.7% | NA |
| Romanian | 17.8% | 36.0% | NA |
| Estonian | 17.8% | 49.0% | NA |
| Bulgarian | 24.1% | 32.7% | NA |
| Danish | 19.8% | 21.1% | 16.1% |
| Hungarian | 22.5% | 31.8% | 28.6% |
| Maltese | 25.5% | NA | NA |
| Lithuanian | 25.1% | 44.9% | NA |

Hindi — multi-dataset (Streaming)

WER across seven Hindi datasets covering read speech, conversational speech, telephony / contact-center audio, and noise-augmented variants. Compared against IndicWhisper, Sarvam Saaras v3, and Deepgram Nova-3. Lower is better.

| Dataset | Smallest Pulse | IndicWhisper | Sarvam Saaras v3 | Deepgram Nova-3 |
| --- | --- | --- | --- | --- |
| FLEURS | 9.55 | 15.00 | 8.31 | 14.09 |
| Kathbath | 9.71 | 10.30 | 8.15 | 16.22 |
| Kathbath (noisy) | 10.94 | 12.00 | 10.81 | 17.06 |
| Common Voice | 11.20 | 11.40 | 11.36 | 23.55 |
| Indic-TTS | 6.39 | 7.60 | 6.49 | 10.72 |
| MUCS | 9.19 | 12.00 | 8.96 | 16.20 |
| Gramvaani | 21.43 | 26.80 | 21.80 | 31.44 |

For the full breakdown including training-data and evaluation-protocol notes, see the Performance page.

English STT — ESB Dataset (Streaming)

A Hugging Face benchmark suite aggregating 8 English speech datasets across diverse domains (audiobooks, parliament, meetings, finance, etc.) to test STT generalization.

Evaluated on the open-source Hugging Face ESB datasets. Smallest Pulse numbers from internal evaluation.

| Dataset | Smallest Pulse | Deepgram Nova 2 | Deepgram Nova 3 |
| --- | --- | --- | --- |
| LibriSpeech Clean | 1.80 | 4.35 | 3.71 |
| LibriSpeech Other | 3.94 | 9.36 | 7.72 |
| Common Voice | 9.20 | 17.79 | 14.59 |
| VoxPopuli | 3.17 | 9.95 | 9.38 |
| TED-LIUM | 2.36 | 4.35 | 3.57 |
| GigaSpeech | 4.74 | 11.63 | 10.05 |
| SPGISpeech | 2.67 | 5.26 | 3.28 |
| Earnings22 | 8.73 | 18.98 | 15.34 |
| AMI | 11.93 | 19.86 | 16.06 |
| Overall | 5.39 | 11.28 | 9.30 |

ASR Robustness — WildASR Dataset (Streaming)

An open-source robustness benchmark designed to stress-test STT under real-world degraded conditions: clipping, far-field capture, background noise, phone codec compression, reverberation, and accented speech.

Evaluated on the open-source WildASR dataset. Smallest Pulse numbers from internal evaluation.

| Condition | Smallest Pulse | Deepgram Nova 2 | Deepgram Nova 3 |
| --- | --- | --- | --- |
| Clean | 4.41 | 15.28 | 10.76 |
| Clipping | 12.93 | 70.41 | 43.15 |
| Far Field | 12.09 | 74.52 | 58.72 |
| Noise Gap | 9.03 | 21.91 | 14.19 |
| Phone Codec | 5.71 | 12.22 | 9.27 |
| Reverberation | 7.91 | 40.71 | 27.21 |
| Accent | 5.35 | 9.17 | 7.23 |
| Overall | 8.76 | 34.89 | 24.36 |

Internal English Perturbation Benchmark

Not a public dataset. The English audio is sliced by perturbation type (Emotion, Entity, Disfluency, Noise, Accent, Silence, Speaker Diversity, Speed, Boundary, Pitch, Audio Quality, Volume) to isolate model weaknesses.

| Category | Pulse English (Streaming) | Deepgram Nova 3 (en) |
| --- | --- | --- |
| Emotion | 15.43% | 19.42% |
| Entity | 12.14% | 11.80% |
| Disfluency | 11.91% | 8.64% |
| Noise | 11.57% | 14.61% |
| Accent | 9.13% | 10.43% |
| Silence | 8.99% | 13.17% |
| Speaker Diversity | 7.77% | 9.91% |
| Speed | 3.54% | 6.85% |
| Boundary | 3.02% | 6.30% |
| Pitch | 2.60% | 4.04% |
| Audio Quality | 2.45% | 4.05% |
| Volume | 2.11% | 3.59% |

Features — Non-streaming

| Feature | Available | Notes |
| --- | --- | --- |
| Speaker diarization | Yes | Multi-speaker identification |
| PII redaction | Yes | Personal info redaction |
| PCI redaction | Yes | Payment card data redaction |
| Word-level timestamps | Yes | Per-word timing |
| Sentence-level timestamps | Yes | Requires word_timestamps=true |
| Punctuation | Yes | Auto punctuation |
| Profanity filter | Yes | Explicit content filtering |
| Language detection | Yes | Auto language ID |
| Code-switching | Yes | Multi-language in the same audio |
| Noise reduction | Yes | Background noise handling |
| Emotion and gender detection | Yes | Returns a percentage score for the detected emotion and gender |
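As a rough illustration of how these options combine in a non-streaming request, the sketch below uses a hypothetical endpoint and field names (diarization, pii_redaction, pci_redaction); only the language and word_timestamps parameters are named in this document, and the actual request format is defined at docs.smallest.ai/waves.

```python
# Illustrative sketch only. The endpoint URL and most field names below are
# assumptions for demonstration (only `language` and `word_timestamps` appear
# in this document); see docs.smallest.ai/waves for the real request format.
import requests

API_KEY = "YOUR_API_KEY"  # issued via console.smallest.ai

with open("support_call.wav", "rb") as audio:
    resp = requests.post(
        "https://api.smallest.ai/pulse/transcribe",  # hypothetical endpoint
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"audio": audio},
        data={
            "language": "en",           # set explicitly when known (see Best Practices)
            "diarization": "true",      # per-word / per-utterance speaker labels
            "pii_redaction": "true",    # redact personal information
            "pci_redaction": "true",    # redact payment card data
            "word_timestamps": "true",  # also needed for sentence-level timestamps
        },
        timeout=120,
    )

resp.raise_for_status()
print(resp.json())
```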

Features — Streaming

| Feature | Available | Notes |
| --- | --- | --- |
| Speaker diarization | Yes | Multi-speaker identification |
| Keyword boosting | Yes | Custom vocabulary enhancement |
| PII redaction | Yes | Personal info redaction |
| PCI redaction | Yes | Payment card data redaction |
| Word-level timestamps | Yes | Per-word timing |
| Sentence-level timestamps | Yes | Per-sentence timing |
| Punctuation | Yes | Auto punctuation |
| Profanity filter | No | |
| Language detection | Yes | Auto language ID |
| Code-switching | Yes | Multi-language in same audio |
| Custom vocabulary | No | |
| Noise reduction | Yes | Background noise handling |
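Streaming is typically consumed over a persistent connection: audio is sent in small chunks and partial transcripts are returned as they are produced. The sketch below is purely illustrative; the WebSocket URL, authentication scheme, and message shapes are assumptions, not the documented Pulse streaming protocol (see docs.smallest.ai/waves).

```python
# Purely illustrative: the URL, query parameters, and message format are
# assumptions, not the documented Pulse streaming protocol.
import asyncio
import json

import websockets  # pip install websockets

async def stream_transcribe(path: str, api_key: str) -> None:
    # Hypothetical endpoint; auth via query string only for brevity.
    url = f"wss://api.smallest.ai/pulse/stream?language=en&api_key={api_key}"
    async with websockets.connect(url) as ws:

        async def send_audio() -> None:
            with open(path, "rb") as f:
                while chunk := f.read(3200):   # ~100 ms of 16 kHz 16-bit mono PCM
                    await ws.send(chunk)
                    await asyncio.sleep(0.1)   # pace roughly in real time
            await ws.send(json.dumps({"type": "end"}))  # hypothetical end-of-audio marker

        async def read_transcripts() -> None:
            async for message in ws:           # partial and final transcripts
                print(json.loads(message))

        await asyncio.gather(send_audio(), read_transcripts())

asyncio.run(stream_transcribe("call_16khz_mono.raw", "YOUR_API_KEY"))
```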

Supported Languages — Non-streaming

| Language | Code | Available |
| --- | --- | --- |
| English | en | Yes |
| Italian | it | Yes |
| Spanish | es | Yes |
| Portuguese | pt | Yes |
| Hindi | hi | Yes |
| German | de | Yes |
| French | fr | Yes |
| Ukrainian | uk | Yes |
| Russian | ru | Yes |
| Kannada | kn | Yes |
| Malayalam | ml | Yes |
| Polish | pl | Yes |
| Marathi | mr | Yes |
| Gujarati | gu | Yes |
| Czech | cs | Yes |
| Slovak | sk | Yes |
| Telugu | te | Yes |
| Oriya (Odia) | or | Yes |
| Dutch | nl | Yes |
| Bengali | bn | Yes |
| Latvian | lv | Yes |
| Estonian | et | Yes |
| Romanian | ro | Yes |
| Punjabi | pa | Yes |
| Finnish | fi | Yes |
| Swedish | sv | Yes |
| Bulgarian | bg | Yes |
| Tamil | ta | Yes |
| Hungarian | hu | Yes |
| Danish | da | Yes |
| Lithuanian | lt | Yes |
| Maltese | mt | Yes |
| Japanese | ja | Yes |
| Cantonese | yue | Yes |
| Mandarin | zh | Yes |
| Korean | ko | Yes |
| Tagalog | tl | Yes |
| Indonesian | id | Yes |
| Malay | ms | Yes |

Supported Languages — Streaming

| Language | Code | Available |
| --- | --- | --- |
| English | en | Yes |
| Italian | it | Yes |
| Spanish | es | Yes |
| Portuguese | pt | Yes |
| Hindi | hi | Yes |
| German | de | Yes |
| French | fr | Yes |
| Ukrainian | uk | Yes |
| Russian | ru | Yes |
| Kannada | kn | Yes |
| Malayalam | ml | Yes |
| Polish | pl | Yes |
| Marathi | mr | Yes |
| Gujarati | gu | Yes |
| Czech | cs | Yes |
| Slovak | sk | Yes |
| Telugu | te | Yes |
| Oriya (Odia) | or | Yes |
| Dutch | nl | Yes |
| Bengali | bn | Yes |
| Latvian | lv | Yes |
| Estonian | et | Yes |
| Romanian | ro | Yes |
| Punjabi | pa | Yes |
| Finnish | fi | Yes |
| Swedish | sv | Yes |
| Bulgarian | bg | Yes |
| Tamil | ta | Yes |
| Hungarian | hu | Yes |
| Danish | da | Yes |
| Lithuanian | lt | Yes |
| Maltese | mt | Yes |
| Japanese | ja | Yes |
| Cantonese | yue | Yes |
| Mandarin | zh | Yes |
| Korean | ko | Yes |
| Tagalog | tl | Yes |
| Indonesian | id | Yes |
| Malay | ms | Yes |

Best Practices

Specify the language parameter when known

When the language of the audio is known in advance, always set it explicitly rather than relying on automatic detection. This yields better transcription accuracy because the model can optimize directly for that language without needing to first identify it.

For example, setting the language parameter to es (Spanish) tells the model to expect Spanish audio, which also covers English+Spanish code-switching. This produces more accurate output than using multi-eu or multi.

| Parameter | Use case |
| --- | --- |
| en | English |
| es | Spanish (handles English+Spanish) |
| hi | Hindi (handles English+Hindi) |
| multi-eu | Unknown European-language audio (auto-detects across the European set) |
| multi | Truly unknown or mixed-language audio (full multilingual auto-detection) |

When to use multi-eu or multi:

  • When the language is truly unknown beforehand
  • When processing audio from varied or unpredictable sources
  • Prefer multi-eu for European-language input; use multi only for truly mixed multilingual audio
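
The guidance above can be condensed into a small decision rule. A minimal sketch: the parameter values mirror the table above, while the helper function itself is illustrative and not part of any SDK.

```python
# Illustrative helper mirroring the language-selection guidance; not part of any SDK.
def pick_language_param(known_language: str | None = None,
                        european_only: bool = False) -> str:
    """Choose the value to pass as the `language` parameter."""
    if known_language:          # known primary language: always prefer it,
        return known_language   # e.g. "es" also covers English+Spanish code-switching
    if european_only:           # unknown, but known to be a European language
        return "multi-eu"
    return "multi"              # truly unknown or mixed multilingual audio

print(pick_language_param("es"))                # -> "es"
print(pick_language_param(european_only=True))  # -> "multi-eu"
print(pick_language_param())                    # -> "multi"
```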

Use Cases

Direct use

  • Real-time call transcription
  • Voice assistant input
  • Meeting transcription
  • Accessibility and captioning
  • Customer support recording analysis

Downstream use

  • Multi-turn conversational agents
  • Voice-to-text pipelines
  • Telephony and IVR systems
  • Content indexing and search
  • Compliance and audit logging

Safety & Compliance

Pulse must not be used for:

  • Recording or transcribing individuals without their explicit consent
  • Surveillance, stalking, or any form of unauthorized monitoring
  • Any illegal or unethical purposes

Additionally:

  • Usage is monitored for policy compliance
  • For compliance documentation (GDPR, SOC2, HIPAA), contact support@smallest.ai

Contact