Lightning v3.1
Lightning v3.1 is a high-fidelity, low-latency text-to-speech model that delivers natural, expressive, realistic speech at 44.1 kHz. Optimized for real-time applications, it pairs ultra-low latency and voice cloning support with broadcast-quality, genuinely conversational audio. Now with 15 languages, automatic language detection, and code-switching.
At a glance:
- Native sample rate: 44.1 kHz
- Low latency at 20 concurrent requests
- Automatic language detection and code-switching
- Real-time factor faster than playback
Model Overview
Key Capabilities
- Ultra-low latency architecture designed for conversational AI and live streaming.
- Instant voice cloning from just 5-15 seconds of audio, via API and console.
- HTTP, SSE, and WebSocket support for real-time applications.
- 15 languages with automatic detection and code-switching; no restarts or reconnections needed.
- Broadcast-quality 44.1 kHz audio with natural prosody, intonation, and conversational rhythm.
- Custom pronunciation dictionaries for specialized vocabulary, brand names, and domain-specific terms.
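To illustrate how a synthesis request might be assembled over HTTP, here is a minimal sketch. The endpoint URL and field names below are illustrative assumptions, not the documented API surface; consult the API Reference section for the actual endpoint and parameter names.

```python
import json

# NOTE: this URL is a placeholder assumption, not the real endpoint.
API_URL = "https://api.example.com/v1/tts"

def build_tts_payload(text, voice_id, language="auto", sample_rate=44100):
    """Assemble a request body for a Lightning v3.1-style synthesis call.

    Field names are hypothetical; "auto" reflects the documented default
    that enables language detection and code-switching.
    """
    return {
        "text": text,
        "voice_id": voice_id,
        "language": language,        # "auto" = detect language from text
        "sample_rate": sample_rate,  # native output is 44.1 kHz
    }

payload = build_tts_payload("Hello, world!", voice_id="emily")
body = json.dumps(payload)
# body would then be POSTed to API_URL with an auth header.
```

The same payload shape would apply over SSE or WebSocket transports, with text streamed chunk by chunk instead of sent in one request.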
Performance & Benchmarks
Audio Generation Evaluation
Full-sentence audio generation. The entire text is synthesized in a single pass and then evaluated.
Evaluation: Seed TTS dataset, 1,088 English samples. LLM-as-a-Judge framework.
Models compared: Lightning v3.1, ElevenLabs Turbo 2.5.
Agent Call Evaluation
Chunk-by-chunk audio generation. Simulates real-world voice agent behavior where text is streamed and synthesized incrementally, as it happens during live calls.
Evaluation: Seed TTS dataset, 1,088 English samples. LLM-as-a-Judge framework.
Models compared: Lightning v3.1, OpenAI, Cartesia Sonic 3, ElevenLabs Turbo 2.5.
Want to reproduce these results? See the TTS evaluation script to measure TTFB and synthesis quality in your own environment.
Supported Languages
Automatic Language Detection & Code-Switching: Set language to "auto" (default) and Lightning v3.1 will automatically detect the language from input text. The model also supports code-switching within a single session without requiring a restart or reconnection.
Voice Catalog
Curated "best voices" lists are available for:
- English (US)
- Hindi / English
- Spanish
- Other Indian languages
Voice Cloning
Audio required: 5-15 seconds
Self-serve voice cloning available via API and console. Captures core voice characteristics for quick replication.
API Reference
Endpoints
Request Parameters
Technical Specifications
Audio Output
Text Formatting Guidelines
Number & Date Handling
Compute Infrastructure
Hardware
- Recommended GPU: NVIDIA L40S
- Recommended VRAM: 48 GB
Deployment
- Server regions (AWS): India (Hyderabad), USA (Oregon)
- Automatic geo-location based routing for lowest latency
Best Practices
Code-Switching
Lightning v3.1 supports real-time intra-session language switching via two mutually exclusive language groups. Each group shares a unified phoneme space, enabling seamless mid-utterance transitions between member languages without session re-initialization. Cross-group switching is not supported within a single session.
Language Groups
Indic Group. Optimized for South Asian language pairs with English as the bridging language.
Global Group. Optimized for European language pairs with English and Hindi as bridging languages.
Intra-group switching is unrestricted. Any language within the same group can be interleaved at the token level. Cross-group switching (e.g., Tamil from Indic + French from Global) is architecturally unsupported and will produce undefined behavior.
en and hi exist in both groups. All other languages are exclusive to one group. The group is determined at session initialization based on the first non-shared language encountered. Design your session’s language set accordingly.
Routing Examples
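A minimal validator for the routing rules above. Note the group memberships here are partly illustrative: the document confirms only that en and hi are shared between groups, that Tamil is Indic-exclusive, and that French is Global-exclusive; the remaining members are assumptions for the sketch.

```python
# Membership sets are illustrative assumptions beyond en/hi (shared),
# ta (Indic-exclusive), and fr (Global-exclusive).
INDIC_GROUP = {"en", "hi", "ta", "te", "kn", "ml", "mr", "gu"}
GLOBAL_GROUP = {"en", "hi", "es", "fr", "de", "it", "pt"}

def resolve_group(languages):
    """Return which group(s) can host a session using `languages`.

    Mirrors the documented rule: the group is fixed by the first
    non-shared language, and cross-group sets are unsupported.
    """
    langs = set(languages)
    fits_indic = langs <= INDIC_GROUP
    fits_global = langs <= GLOBAL_GROUP
    if fits_indic and fits_global:
        return "either"   # only shared languages (en/hi) so far
    if fits_indic:
        return "indic"
    if fits_global:
        return "global"
    raise ValueError("cross-group language set: unsupported in one session")

resolve_group(["en", "hi", "ta"])  # → "indic"
resolve_group(["en", "es", "fr"])  # → "global"
```

A real client could run this check before session initialization to fail fast instead of hitting undefined behavior at synthesis time.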
Voice Cloning
Reference Audio
- Environment. Record in a quiet room with no background noise, hiss, or rumble. Ambient sound is captured in the clone and cannot be removed after the fact.
- Speaking style. Speak naturally in your normal conversational voice. The model captures timbre, accent, emotional tone, rhythm, and pacing automatically. Do not exaggerate unless a specific tone is intended.
- Audio length. Provide 5 to 15 seconds of clean, continuous speech.
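The 5-15 second length requirement is easy to verify client-side before upload. A minimal sketch using only the standard library (the WAV container is assumed here; other formats would need a different parser):

```python
import io
import wave

def reference_audio_ok(wav_bytes, min_s=5.0, max_s=15.0):
    """Check that a WAV clip's duration falls in the recommended window."""
    with wave.open(io.BytesIO(wav_bytes), "rb") as wf:
        duration = wf.getnframes() / wf.getframerate()
    return min_s <= duration <= max_s

# Build a synthetic 10-second silent mono clip to demonstrate the check.
buf = io.BytesIO()
with wave.open(buf, "wb") as wf:
    wf.setnchannels(1)
    wf.setsampwidth(2)        # 16-bit samples
    wf.setframerate(44100)
    wf.writeframes(b"\x00\x00" * 44100 * 10)

ok = reference_audio_ok(buf.getvalue())  # True for a 10 s clip
```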
Multi-Lingual Cloning
- Language matching. For best results, record reference audio in the same language as your intended output. Cross-lingual cloning is supported (e.g., English reference used for Spanish output), but a language-matched reference produces higher fidelity.
- Accent retention. When synthesizing in a different language than the reference, the original accent is preserved. A clone from a South Indian English speaker will retain that accent in Hindi or Tamil output. This is by design: the clone reproduces your voice, including accent characteristics. For accent-neutral output in a specific language, provide reference audio from a native speaker of that language.
- Script encoding. Input text must use native script for each language (Devanagari for Hindi/Marathi/Gujarati, respective Brahmic scripts for Dravidian languages, Latin for European languages). Transliterated input degrades synthesis quality.
- Group constraint. Cloned voices follow the same language group routing rules. A session initialized in the Indic group cannot switch to Global-exclusive languages, regardless of the voice’s source language.
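The script-encoding rule can be checked programmatically before sending text. A minimal sketch that infers the dominant script from Unicode character names (the heuristic and function names are ours, not part of the API):

```python
import unicodedata

def dominant_script(text):
    """Classify letters by Unicode script prefix, e.g. DEVANAGARI, LATIN, TAMIL."""
    counts = {}
    for ch in text:
        if ch.isalpha():
            script = unicodedata.name(ch).split(" ")[0]
            counts[script] = counts.get(script, 0) + 1
    return max(counts, key=counts.get) if counts else None

def looks_transliterated(text, language):
    """Flag Hindi text written in Latin script (e.g. 'namaste' vs 'नमस्ते')."""
    return language == "hi" and dominant_script(text) == "LATIN"
```

Running such a check client-side catches transliterated input, which the model handles poorly, before it degrades synthesis quality.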
For detailed recording examples and expressive cloning techniques, see Voice Cloning Best Practices.
Text Formatting
- Chunk boundaries. Segment input at natural prosodic boundaries (. ! ? ,). Maximum chunk size is 250 characters; optimal throughput at 140 characters per request.
- Script integrity. Avoid transliteration. Use native script for each language. Mixed-script input within a single language token produces unpredictable phoneme mappings.
- Numeric normalization. Use standard formats (DD/MM/YYYY, HH:MM). Phone numbers default to 3-4-3 digit grouping.
- Lexicon overrides. Use pronunciation dictionaries for domain-specific terms, brand names, and acronyms where default grapheme-to-phoneme conversion is insufficient.
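The chunking rule (segment at prosodic boundaries, cap at 250 characters, target around 140) can be sketched as a simple greedy splitter. This is our illustration, not the model's internal chunker; a segment with no punctuation longer than the cap would need further handling.

```python
import re

def chunk_text(text, max_len=250, target_len=140):
    """Split text at prosodic boundaries (. ! ? ,), keeping chunks near
    target_len and never merging past max_len."""
    # Split into boundary-terminated segments, keeping the punctuation.
    segments = [s.strip() for s in re.split(r"(?<=[.!?,])\s+", text.strip()) if s]
    chunks, current = [], ""
    for seg in segments:
        candidate = f"{current} {seg}".strip()
        # Flush when merging would exceed the cap, or we've hit the target.
        if current and (len(candidate) > max_len or len(current) >= target_len):
            chunks.append(current)
            current = seg
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks
```

Each returned chunk can then be sent as one streaming request, which keeps synthesis incremental without breaking mid-phrase.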
For comprehensive text formatting rules (numeric handling, date/time, symbols, chunking logic), see TTS Best Practices.
Use Cases
Direct Use
- Voice assistants and conversational AI
- Interactive chatbots with voice output
- Real-time narration and live streaming
- Accessibility tools and screen readers
- Gaming (dynamic character voices)
- Customer service automation
Downstream Use
- Multi-turn conversational agents
- Audio content generation pipelines
- Telephony and IVR systems
- Podcast and audiobook generation
Limitations & Safety
Known Limitations
- Mixed-language text (transliteration) may produce suboptimal results. Hindi text should be in Devanagari script (e.g., नमस्ते, not “namaste”); English text should be in Latin script, not Devanagari. Each language should use its native script.
Recommendations:
- Use the proper script for each language.
- Break long text at natural punctuation points.
- Use pronunciation dictionaries for specialized vocabulary.
- Test voice selection for your specific use case.
Lightning v3.1 must not be used for impersonation or fraud, generating deceptive audio content (deepfakes), creating content that violates consent or privacy, harassment or abuse, or any illegal or unethical purposes.
Safety & Compliance
- Voice cloning requires explicit consent
- No retention of synthesized audio
- No storage of personal voice data beyond cloning scope
- Usage monitoring for policy compliance
For compliance documentation (GDPR, SOC2, HIPAA), contact support@smallest.ai.

