Lightning v3.1

Latest Release

Lightning v3.1 is a high-fidelity, low-latency text-to-speech model that produces natural, expressive, realistic speech at 44.1 kHz. Optimized for real-time applications with ultra-low latency and voice cloning support, it delivers broadcast-quality audio with genuinely conversational characteristics. Supports 15+ languages, plus "auto" for automatic detection and code-switching.

44.1 kHz

Native sample rate

200ms

TTFB at 40 concurrent requests

15+ Languages

Auto-detection + code-switching

3.3x

Real-time factor (faster than playback)

Model Overview

| Attribute | Value |
|---|---|
| Developed by | Smallest AI |
| Model type | Text-to-Speech / Speech Synthesis |
| Languages | 15+ (auto-detection + code-switching) |
| License | Proprietary |
| Version | v3.1 |
| Native sample rate | 44,100 Hz |

Key Capabilities

Real-Time Optimized

Ultra-low latency architecture designed for conversational AI and live streaming.

Voice Cloning

Instant voice cloning with just 5-15 seconds of audio via API and console.

Streaming

HTTP, SSE, and WebSocket support for real-time applications.

Multi-Language

15+ languages plus auto-detect, with automatic language identification and code-switching. No restarts or reconnections needed.

High Fidelity

Broadcast-quality 44.1 kHz audio with natural prosody, intonation, and conversational rhythm.

Pronunciation Control

Custom pronunciation dictionaries for specialized vocabulary, brand names, and domain-specific terms.


Performance & Benchmarks

Head-to-head evaluation against eight production TTS systems on the EmergentTTS benchmark: 1,088 samples scored with an LLM-as-a-judge framework. The first table is the win-rate breakdown per competitor; the per-metric scores are split by category below it.

Win, tie, loss against each competitor

Direct head-to-head preference ratings. Lightning Wins % is the share of samples where Lightning v3.1 was preferred, Ties % the share where both were scored equally, and Competitor Wins % the remainder. Each row sums to 100%.

| Competitor (provider) | Lightning Wins % (higher better) | Ties % | Competitor Wins % (lower better) |
|---|---|---|---|
| GPT-4o-mini (OpenAI) | 40.26% | 24.17% | 35.57% |
| Turbo v2.5 (ElevenLabs) | 50.28% | 25.00% | 24.72% |
| Multilingual v2 (ElevenLabs) | 54.41% | 23.81% | 21.78% |
| Sonic-3 (Cartesia) | 68.29% | 17.00% | 14.71% |
| Gemini 2.5 Pro (Google) | 58.43% | 8.29% | 33.27% |
| MAI-Voice-1 (Microsoft) | 57.17% | 17.00% | 25.83% |
| Inworld 1.5 (Inworld) | 54.41% | 18.11% | 27.48% |
| S2 Pro (Fish Audio) | 64.25% | 13.60% | 22.15% |

Per-metric scores

Mean listener score per metric across the same 1,088-sample test set. Tables are split by category — open the accordion under each one to see what each metric measures.

Naturalness — higher is better

| Metric | Lightning v3.1 | GPT-4o-mini | ElevenLabs Turbo v2.5 | ElevenLabs Multilingual v2 | Sonic-3 | Gemini 2.5 Pro | MAI-Voice-1 | Inworld 1.5 | S2 Pro |
|---|---|---|---|---|---|---|---|---|---|
| Overall | 3.25 | 3.13 | 3.16 | 3.17 | 3.20 | 3.07 | 3.17 | 3.06 | 3.02 |
| Naturalness | 2.61 | 2.41 | 2.52 | 2.55 | 2.57 | 2.42 | 2.57 | 2.41 | 2.37 |
| Intonation | 3.22 | 3.06 | 3.07 | 3.06 | 3.12 | 2.90 | 3.04 | 2.91 | 2.86 |
| Prosody | 3.01 | 2.73 | 2.82 | 2.86 | 2.83 | 2.65 | 2.76 | 2.61 | 2.58 |
| Pronunciation* | 3.63 | 3.67 | 3.64 | 3.65 | 3.67 | 3.67 | 3.68 | 3.68 | 3.57 |
| Audio Quality | 3.76 | 3.78 | 3.77 | 3.75 | 3.81 | 3.73 | 3.79 | 3.70 | 3.75 |
  • Overall — Holistic listener rating of how natural the voice sounds end-to-end.
  • Naturalness — How human-like the voice sounds; penalizes robotic or synthetic quality.
  • Intonation — Whether pitch rises and falls appropriately for the sentence type (question, statement, exclamation).
  • Prosody — The broader umbrella of rhythm, stress, and melody; how well the voice “reads” the sentence as a human would.
  • Pronunciation — Whether individual words are phonetically correct, especially names, loanwords, and domain-specific terms.
  • Audio Quality — Technical cleanliness of the output; absence of artifacts, distortion, clipping, or background noise.

Expressiveness — higher is better

| Metric | Lightning v3.1 | GPT-4o-mini | ElevenLabs Turbo v2.5 | ElevenLabs Multilingual v2 | Sonic-3 | Gemini 2.5 Pro | MAI-Voice-1 | Inworld 1.5 | S2 Pro |
|---|---|---|---|---|---|---|---|---|---|
| Overall | 3.45 | 3.45 | 3.44 | 3.46 | 3.38 | 3.49 | 3.50 | 3.37 | 3.41 |
| Paralinguistics | 3.61 | 3.60 | 3.59 | 3.61 | 3.56 | 3.60 | 3.58 | 3.55 | 3.58 |
| Emotions | 3.29 | 3.30 | 3.28 | 3.31 | 3.19 | 3.38 | 3.41 | 3.19 | 3.23 |
  • Overall — Holistic listener rating of how expressive the voice sounds given the context of the sentence.
  • Paralinguistics — Non-verbal vocal elements like laughter, sighs, or filler sounds (“um”, “uh”) and whether they’re rendered appropriately.
  • Emotions — How accurately the voice conveys the intended emotional tone (neutral, warm, urgent, etc.).

Delivery — higher is better

| Metric | Lightning v3.1 | GPT-4o-mini | ElevenLabs Turbo v2.5 | ElevenLabs Multilingual v2 | Sonic-3 | Gemini 2.5 Pro | MAI-Voice-1 | Inworld 1.5 | S2 Pro |
|---|---|---|---|---|---|---|---|---|---|
| Boundary Consistency | 4.94 | 4.94 | 4.93 | 4.95 | 4.93 | 4.88 | 4.77 | 4.90 | 4.88 |
| Pronunciation Style | 4.94 | 4.96 | 4.95 | 4.96 | 4.96 | 4.93 | 4.91 | 4.94 | 4.89 |
| Natural Pace | 4.47 | 4.57 | 4.51 | 4.51 | 4.01 | 4.23 | 4.47 | 4.33 | 3.74 |
| Pause Placement | 4.46 | 4.54 | 4.49 | 4.51 | 4.28 | 4.34 | 4.41 | 4.38 | 4.09 |
| Breathing Naturalness | 3.82 | 3.06 | 3.14 | 3.14 | 2.79 | 2.88 | 3.28 | 2.77 | 2.42 |
  • Boundary Consistency — Whether phrase and sentence boundaries are marked consistently with pauses or pitch shifts, without arbitrary breaks mid-phrase.
  • Pronunciation Style — Not just correctness but stylistic choices, e.g. formal vs. casual register, regional accent consistency, honorific handling.
  • Natural Pace — Whether the speaking rate feels comfortable and appropriate for the content type, neither rushed nor dragging.
  • Pause Placement — Whether silences appear at semantically correct points (after commas, between clauses) rather than mid-word or mid-phrase.
  • Breathing Naturalness — Whether breath sounds occur at realistic points and with realistic frequency, not absent entirely or inserted randomly.

Accuracy

Mixed directions — for most metrics lower is better; for the Whisper-judged Pronunciation %, higher is better.

| Metric | Direction | Lightning v3.1 | GPT-4o-mini | ElevenLabs Turbo v2.5 | ElevenLabs Multilingual v2 | Sonic-3 | Gemini 2.5 Pro | MAI-Voice-1 | Inworld 1.5 | S2 Pro |
|---|---|---|---|---|---|---|---|---|---|---|
| WER* | lower | 1.57% | 1.26% | 1.35% | 1.33% | 1.43% | 1.26% | 1.25% | 1.10% | 2.83% |
| CER | lower | 0.67% | 0.52% | 0.60% | 0.54% | 0.59% | 0.62% | 0.50% | 0.47% | 1.16% |
| Hallucination | lower | 0.03% | 0.07% | 0.08% | 0.01% | 0.06% | 0.04% | 0.06% | 0.00% | 0.22% |
| Pronunciation % (Whisper jiwer) | higher | 98.61% | 98.94% | 98.90% | 98.87% | 98.79% | 99.02% | 98.95% | 99.02% | 97.72% |
  • WER (Word Error Rate) — Percentage of words in the transcript that differ from the reference; measures how faithfully the TTS renders the input text.
  • CER (Character Error Rate) — Like WER but at the character level.
  • Hallucination — Words or sounds the TTS generates that have no basis in the input text. Insertions, substitutions, or fabricated content.
  • Pronunciation % (Whisper jiwer) — The proportion of words pronounced correctly out of total words.
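The benchmark computes WER and CER with jiwer. For reference, here is a self-contained sketch of the same word- and character-level edit-distance definitions (standard Levenshtein distance normalized by reference length); it is an illustration of the metrics, not the benchmark's exact harness:

```python
def _levenshtein(ref, hyp):
    """Edit distance between two token sequences (insert/delete/substitute)."""
    m, n = len(ref), len(hyp)
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                            # deletion
                        dp[j - 1] + 1,                        # insertion
                        prev + (ref[i - 1] != hyp[j - 1]))    # substitution
            prev = cur
    return dp[n]

def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level edits divided by reference word count."""
    ref = reference.split()
    return _levenshtein(ref, hypothesis.split()) / max(len(ref), 1)

def cer(reference: str, hypothesis: str) -> float:
    """Character Error Rate: the same distance at the character level."""
    return _levenshtein(list(reference), list(hypothesis)) / max(len(reference), 1)
```

In practice you would transcribe the synthesized audio with an ASR model (the benchmark uses Whisper) and score the transcript against the input text.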

MOS v2 — higher is better

| Metric | Lightning v3.1 | GPT-4o-mini | ElevenLabs Turbo v2.5 | ElevenLabs Multilingual v2 | Sonic-3 | Gemini 2.5 Pro | MAI-Voice-1 | Inworld 1.5 | S2 Pro |
|---|---|---|---|---|---|---|---|---|---|
| WV-MOS | 4.71 | 4.55 | 4.60 | 4.63 | 4.76 | 4.65 | 4.62 | 4.91 | 4.48 |
  • WV-MOS — An automatic MOS estimate on a 1–5 scale produced by a wav2vec2-based quality predictor; a standard aggregate quality metric in TTS evaluation.

*For Pronunciation and WER, the residual gap on Lightning v3.1 is concentrated in proper-noun rendering. Use a pronunciation dictionary to pin names, brands, and acronyms; with the dictionary applied, both metrics close to parity.

Want to reproduce these results? See the TTS evaluation script to measure TTFB and synthesis quality in your own environment.


Supported Languages

Automatic Language Detection & Code-Switching: Set language to "auto" (default) and Lightning v3.1 will automatically detect the language from input text. The model also supports code-switching within a single session without requiring a restart or reconnection.

| Language | Code | Voice count |
|---|---|---|
| Auto-detect | auto | |
| English | en | 176 |
| Hindi | hi | 115 |
| Tamil | ta | 13 |
| Spanish | es | 11 |
| Kannada | kn | 10 |
| Marathi | mr | 9 |
| Telugu | te | 8 |
| Odia | or | 8 |
| Punjabi | pa | 8 |
| Malayalam | ml | 6 |
| Gujarati | gu | 5 |
| Bengali | bn | 4 |

The list above reflects the voice catalog as the source of truth — i.e. languages for which Lightning v3.1 has at least one trained voice. Pass language="auto" to let the model code-switch across this set.


Top Voices

Curated short-list of the voices we’d recommend for production. Use these voice_id values directly in the voice_id parameter — no setup required. The full Voice Catalog below has the complete list across additional languages.

English (American)

| Voice ID | Name | Gender |
|---|---|---|
| jordan | Jordan | Male |
| robert | Robert | Male |
| johnny | Johnny | Male |
| lucas | Lucas | Male |
| magnus | Magnus | Male |
| ronald | Ronald | Male |
| blofeld | Blofeld | Male |
| zorin | Zorin | Male |
| felix | Felix | Male |
| malcolm | Malcolm | Male |
| lauren | Lauren | Female |
| hannah | Hannah | Female |
| vanessa | Vanessa | Female |
| brooke | Brooke | Female |
| olivia | Olivia | Female |
| rachel | Rachel | Female |
| nicole | Nicole | Female |
| elizabeth | Elizabeth | Female |
| ilsa | Ilsa | Female |
| christine | Christine | Female |

English (Other accents)

| Voice ID | Name | Gender | Accent |
|---|---|---|---|
| william | William | Male | Canadian |
| erica | Erica | Female | Canadian |
| chloe | Chloe | Female | Australian |

Indic (Hindi + English, Indian accent)

| Voice ID | Name | Gender |
|---|---|---|
| sunidhi | Sunidhi | Female |
| chinmayi | Chinmayi | Female |
| aanya | Aanya | Female |
| siya | Siya | Female |
| anuja | Anuja | Female |
| avni | Avni | Female |
| ishani | Ishani | Female |
| yuvika | Yuvika | Female |
| advika | Advika | Female |
| sana | Sana | Female |
| sameera | Sameera | Female |
| srishti | Srishti | Female |
| sakshi | Sakshi | Female |
| maya | Maya | Female |
| wasim | Wasim | Male |
| rehan | Rehan | Male |
| parth | Parth | Male |
| atharv | Atharv | Male |
| vivaan | Vivaan | Male |
| devansh | Devansh | Male |
| aarush | Aarush | Male |

Need something not in this short-list? Call GET /waves/v1/lightning-v3.1/get_voices (217 voices total) or browse the full catalog below. Each voice in the API response includes tags.language, tags.accent, tags.age, and tags.gender so you can filter programmatically.
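Programmatic filtering over the get_voices response might look like the sketch below. The exact response shape is assumed from the tag fields listed above; the sample entries are hypothetical illustrations built from voices on this page:

```python
def filter_voices(voices, language=None, gender=None, accent=None):
    """Filter a get_voices response by its tag fields.

    Assumes each voice dict carries a "tags" object with
    language / accent / age / gender keys, as described above.
    """
    def match(voice):
        tags = voice.get("tags", {})
        return ((language is None or tags.get("language") == language)
                and (gender is None or tags.get("gender") == gender)
                and (accent is None or tags.get("accent") == accent))
    return [v for v in voices if match(v)]

# Hypothetical response excerpt for illustration:
voices = [
    {"voice_id": "chloe",  "tags": {"language": "en", "gender": "female", "accent": "australian"}},
    {"voice_id": "jordan", "tags": {"language": "en", "gender": "male",   "accent": "american"}},
    {"voice_id": "sakshi", "tags": {"language": "hi", "gender": "female", "accent": "indian"}},
]
print([v["voice_id"] for v in filter_voices(voices, language="en", gender="female")])
# -> ['chloe']
```

Verify the tag value casing ("Female" vs "female") against a live response before relying on exact-match filters.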


Voice Catalog

English (US) — Best Voices

| Voice ID | Name | Gender |
|---|---|---|
| quinn | Quinn | Female |
| mia | Mia | Female |
| magnus | Magnus | Male |
| olivia | Olivia | Female |
| daniel | Daniel | Male |
| rachel | Rachel | Female |
| nicole | Nicole | Female |
| elizabeth | Elizabeth | Female |

Hindi / English — Best Voices

| Voice ID | Name | Gender |
|---|---|---|
| neel | Neel | Male |
| maithili | Maithili | Female |
| devansh | Devansh | Male |
| sameera | Sameera | Female |
| mihir | Mihir | Male |
| aarush | Aarush | Male |
| sakshi | Sakshi | Female |
| vivaan | Vivaan | Male |
| srishti | Srishti | Female |

Spanish — Best Voices

| Voice ID | Name | Gender |
|---|---|---|
| daniella | Daniella | Female |
| sandra | Sandra | Female |
| carlos | Carlos | Male |
| jose | Jose | Male |
| luis | Luis | Male |
| mariana | Mariana | Female |
| miguel | Miguel | Male |

Other Indian Languages — Best Voices

| Language | Voice ID | Name | Gender |
|---|---|---|---|
| Tamil | jeevan | Jeevan | Male |
| Tamil | rajeshwari | Rajeshwari | Female |
| Malayalam | vaisakh | Vaisakh | Male |
| Malayalam | shibi | Shibi | Female |
| Telugu | srihari | Srihari | Male |
| Telugu | padmaja | Padmaja | Female |
| Marathi | rupali | Rupali | Female |
| Marathi | nilesh | Nilesh | Male |
| Gujarati | niharika | Niharika | Female |
| Gujarati | dhruvit | Dhruvit | Male |
| Kannada | deepashri | Deepashri | Female |
| Kannada | pranav | Pranav | Male |

Voice Cloning

Instant Voice Cloning

Audio required: 5-15 seconds

Self-serve voice cloning available via API and console. Captures core voice characteristics for quick replication.


API Reference

Endpoints

| Endpoint | Method | Use case |
|---|---|---|
| `https://api.smallest.ai/waves/v1/lightning-v3.1/get_speech` | POST | Synchronous synthesis |
| `https://api.smallest.ai/waves/v1/lightning-v3.1/stream` | POST (SSE) | Server-sent events streaming |
| `wss://api.smallest.ai/waves/v1/lightning-v3.1/get_speech/stream` | WebSocket | Real-time streaming |

Request Parameters

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| `text` | string | Yes | | Text to synthesize |
| `voice_id` | string | Yes | | Voice identifier |
| `sample_rate` | integer | No | 44100 | Output sample rate (Hz) |
| `speed` | float | No | 1.0 | Speech speed (0.5-2.0) |
| `language` | string | No | "auto" | Language code or "auto" for automatic detection |
| `output_format` | string | No | "pcm" | Audio format |
| `pronunciation_dicts` | array | No | | Custom pronunciation IDs (WebSocket only) |
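As a concrete illustration of these parameters, here is a minimal Python sketch that assembles a request and posts it to the synchronous endpoint. The Bearer-token header is an assumption; check your account's authentication scheme before relying on it.

```python
import json
from urllib import request

API_URL = "https://api.smallest.ai/waves/v1/lightning-v3.1/get_speech"

def build_payload(text, voice_id, **options):
    """Assemble a synthesis request from the documented parameters.

    Defaults mirror the parameter table above; anything passed in
    `options` (e.g. speed=1.2, output_format="mp3") overrides them.
    """
    payload = {
        "text": text,
        "voice_id": voice_id,
        "sample_rate": 44100,
        "speed": 1.0,
        "language": "auto",
        "output_format": "pcm",
    }
    payload.update(options)
    return payload

def synthesize(api_key, text, voice_id, **options):
    # Bearer auth is an assumption, not confirmed by this page.
    req = request.Request(
        API_URL,
        data=json.dumps(build_payload(text, voice_id, **options)).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return resp.read()  # raw audio bytes in the requested format
```

For streaming, the SSE and WebSocket endpoints accept the same core parameters; see the endpoint table above.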

Technical Specifications

Audio Output

| Specification | Details |
|---|---|
| Native sample rate | 44,100 Hz |
| Supported sample rates | 8,000 / 16,000 / 24,000 / 44,100 Hz |
| Output formats | PCM, MP3, WAV, ulaw, alaw |
| Audio channels | Mono |
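Raw "pcm" output has no container, so most players cannot open it directly. A minimal sketch using Python's standard `wave` module, assuming 16-bit little-endian mono samples (verify the sample width of your actual response):

```python
import io
import wave

def pcm_to_wav(pcm: bytes, sample_rate: int = 44100) -> bytes:
    """Wrap raw mono PCM bytes in a WAV container.

    Assumes 16-bit little-endian samples; this is a common default
    for "pcm" output but is not confirmed by this page.
    """
    buf = io.BytesIO()
    with wave.open(buf, "wb") as wav:
        wav.setnchannels(1)        # Lightning output is mono
        wav.setsampwidth(2)        # 16-bit samples (assumption)
        wav.setframerate(sample_rate)
        wav.writeframes(pcm)
    return buf.getvalue()
```

Pass the same `sample_rate` you requested from the API, since the WAV header, not the bytes, determines playback speed.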

Text Formatting Guidelines

| Aspect | Recommendation |
|---|---|
| Language scripts | Use the native script for each language: Latin for English/Spanish/French/Italian/Dutch/Swedish/Portuguese/German, Devanagari for Hindi/Marathi, Gujarati script for Gujarati, and the native scripts for Tamil/Kannada/Telugu/Malayalam |
| Break points | Natural punctuation (. ! ? ,) |
| Mixed language | Avoid transliteration; use the native script for each language |

Number & Date Handling

| Type | Format |
|---|---|
| Phone numbers | Default 3-4-3 grouping |
| Dates | DD/MM/YYYY or DD-MM-YYYY |
| Time | HH:MM or HH:MM:SS |
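To make the default phone reading explicit in your input text, you can pre-group the digits yourself. A small sketch of the documented 3-4-3 grouping (an illustrative client-side helper, not part of the API):

```python
def group_phone(number: str) -> str:
    """Apply the default 3-4-3 grouping to a 10-digit phone number."""
    digits = "".join(ch for ch in number if ch.isdigit())
    if len(digits) != 10:
        return number  # only 10-digit numbers get the default grouping
    return f"{digits[:3]} {digits[3:7]} {digits[7:]}"

print(group_phone("9876543210"))  # -> 987 6543 210
```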

Hardware

  • Recommended GPU: NVIDIA L40S
  • Recommended VRAM: 48 GB

Software

  • Server regions (AWS): India (Hyderabad), USA (Oregon)
  • Automatic geo-location based routing for lowest latency

Best Practices

Code-Switching

Lightning v3.1 supports real-time intra-session language switching via two mutually exclusive language groups. Each group shares a unified phoneme space, enabling seamless mid-utterance transitions between member languages without session re-initialization. Cross-group switching is not supported within a single session.

Language Groups

Indic Group. Optimized for South Asian language pairs with English as the bridging language.

| Language | Code |
|---|---|
| English | en |
| Hindi | hi |
| Tamil | ta |
| Telugu | te |
| Malayalam | ml |
| Kannada | kn |
| Marathi | mr |
| Gujarati | gu |

Global Group. Optimized for European language pairs with English and Hindi as bridging languages.

| Language | Code |
|---|---|
| English | en |
| Hindi | hi |
| Spanish | es |
| French | fr |
| Italian | it |
| Portuguese | pt |
| German | de |
| Dutch | nl |
| Swedish | sv |

Intra-group switching is unrestricted. Any language within the same group can be interleaved at the token level. Cross-group switching (e.g., Tamil from Indic + French from Global) is architecturally unsupported and will produce undefined behavior.

en and hi exist in both groups. All other languages are exclusive to one group. The group is determined at session initialization based on the first non-shared language encountered. Design your session’s language set accordingly.

Routing Examples

  • Hindi ↔ Tamil interleaving (both Indic): valid, as are all combinations within the Indic group
  • Spanish ↔ French interleaving (both Global): valid, as are all combinations within the Global group
  • Tamil (Indic) + French (Global): invalid, cross-group switching is unsupported
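The two groups and the routing constraint above can be sketched as a small pre-flight validator. The group tables are taken from this page; the "either" case for sessions using only the shared languages (en/hi) is an inference from the shared-language note above:

```python
INDIC = {"en", "hi", "ta", "te", "ml", "kn", "mr", "gu"}
GLOBAL = {"en", "hi", "es", "fr", "it", "pt", "de", "nl", "sv"}

def resolve_group(languages):
    """Return the single group that can serve every requested language.

    Raises ValueError for cross-group sets, mirroring the constraint
    above. A set containing only shared languages fits either group.
    """
    langs = set(languages)
    in_indic = langs <= INDIC
    in_global = langs <= GLOBAL
    if in_indic and in_global:
        return "either"   # only en/hi requested
    if in_indic:
        return "indic"
    if in_global:
        return "global"
    raise ValueError(f"cross-group language set: {sorted(langs)}")

print(resolve_group(["hi", "ta"]))  # -> indic
print(resolve_group(["es", "fr"]))  # -> global
```

Running this check before opening a session catches invalid language sets client-side instead of hitting undefined behavior at synthesis time.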

Voice Cloning

Reference Audio

  • Environment. Record in a quiet room with no background noise, hiss, or rumble. Ambient sound is captured in the clone and cannot be removed after the fact.
  • Speaking style. Speak naturally in your normal conversational voice. The model captures timbre, accent, emotional tone, rhythm, and pacing automatically. Do not exaggerate unless a specific tone is intended.
  • Audio length. Provide 5 to 15 seconds of clean, continuous speech.

Multi-Lingual Cloning

  • Language matching. For best results, record reference audio in the same language as your intended output. Cross-lingual cloning is supported (e.g., English reference used for Spanish output), but a language-matched reference produces higher fidelity.
  • Accent retention. When synthesizing in a different language than the reference, the original accent is preserved. A clone from a South Indian English speaker will retain that accent in Hindi or Tamil output. This is by design: the clone reproduces your voice, including accent characteristics. For accent-neutral output in a specific language, provide reference audio from a native speaker of that language.
  • Script encoding. Input text must use the native script for each language (Devanagari for Hindi and Marathi, Gujarati script for Gujarati, the respective Brahmic scripts for the Dravidian languages, Latin for European languages). Transliterated input degrades synthesis quality.
  • Group constraint. Cloned voices follow the same language group routing rules. A session initialized in the Indic group cannot switch to Global-exclusive languages, regardless of the voice’s source language.

For detailed recording examples and expressive cloning techniques, see Voice Cloning Best Practices.

Text Formatting

  • Chunk boundaries. Segment input at natural prosodic boundaries (. ! ? ,). Maximum chunk size is 250 characters; optimal throughput at 140 characters per request.
  • Script integrity. Avoid transliteration. Use native script for each language. Mixed-script input within a single language token produces unpredictable phoneme mappings.
  • Numeric normalization. Use standard formats (DD/MM/YYYY, HH:MM). Phone numbers default to 3-4-3 digit grouping.
  • Lexicon overrides. Use pronunciation dictionaries for domain-specific terms, brand names, and acronyms where default grapheme-to-phoneme conversion is insufficient.

For comprehensive text formatting rules (numeric handling, date/time, symbols, chunking logic), see TTS Best Practices.
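The chunking rules above (split at natural punctuation, stay under the 250-character limit) can be sketched as a greedy packer. This is an illustrative client-side helper, not an official SDK function:

```python
import re

MAX_CHUNK = 250   # hard per-request limit
TARGET = 140      # throughput sweet spot noted above

def chunk_text(text: str, limit: int = MAX_CHUNK) -> list[str]:
    """Greedily pack clauses into chunks no longer than `limit`.

    Splits at the natural break points (. ! ? ,), keeping each
    punctuation mark attached to its clause. A single clause longer
    than `limit` is emitted as-is rather than split mid-phrase.
    """
    pieces = [p.strip() for p in re.split(r"(?<=[.!?,])\s+", text) if p.strip()]
    chunks, current = [], ""
    for piece in pieces:
        candidate = f"{current} {piece}".strip()
        if current and len(candidate) > limit:
            chunks.append(current)
            current = piece
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks
```

For latency-sensitive streaming, passing `limit=TARGET` trades fewer characters per request for faster first-chunk synthesis.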


Use Cases

Direct Use

  • Voice assistants and conversational AI
  • Interactive chatbots with voice output
  • Real-time narration and live streaming
  • Accessibility tools and screen readers
  • Gaming (dynamic character voices)
  • Customer service automation

Downstream Use

  • Multi-turn conversational agents
  • Audio content generation pipelines
  • Telephony and IVR systems
  • Podcast and audiobook generation

Limitations & Safety

Known Limitations

  • Mixed-language text (transliteration) may produce suboptimal results. Hindi text should be in Devanagari script (e.g., “namaste” in Devanagari), not Latin. English text should be in Latin script, not Devanagari. Each language should use its native script.

Recommendations: Use proper script for each language. Break long text at natural punctuation points. Use pronunciation dictionaries for specialized vocabulary. Test voice selection for your specific use case.

Lightning v3.1 must not be used for impersonation or fraud, generating deceptive audio content (deepfakes), creating content that violates consent or privacy, harassment or abuse, or any illegal or unethical purposes.

Safety & Compliance

  • Voice cloning requires explicit consent
  • No retention of synthesized audio
  • No storage of personal voice data beyond cloning scope
  • Usage monitoring for policy compliance

For compliance documentation (GDPR, SOC2, HIPAA), contact support@smallest.ai.


| Channel | Details |
|---|---|
| Support | support@smallest.ai |
| Documentation | docs.smallest.ai/waves |
| Console | app.smallest.ai |
| Community | Discord |