Sync & Async Synthesis
Generate speech via the REST API — synchronously (one request, complete audio) or asynchronously (multiple requests in parallel).
Sample output (sync, voice: magnus): [audio clip omitted]
Requirements
- An API key from the Smallest AI Console
- For Python: `requests`
- For JavaScript: Node.js 18+ (built-in `fetch`)
Synchronous Text to Speech
Send text, receive complete audio in the response:
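A minimal sketch with `requests` is below. The endpoint URL and payload field names (`voice_id`, plus any overrides) are assumptions for illustration; confirm them against the API Reference before relying on them.

```python
import requests

# Assumed endpoint -- verify against the API Reference.
API_URL = "https://waves-api.smallest.ai/api/v1/lightning/get_speech"

def build_payload(text: str, voice_id: str = "magnus", **overrides) -> dict:
    """JSON body for one synthesis request; extra kwargs override defaults."""
    return {"text": text, "voice_id": voice_id, **overrides}

def synthesize(text: str, api_key: str, **overrides) -> bytes:
    """One request in, complete audio bytes out."""
    resp = requests.post(
        API_URL,
        json=build_payload(text, **overrides),
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=30,
    )
    resp.raise_for_status()  # surface 4xx/5xx errors
    return resp.content      # the full audio arrives in this one response
```

Usage, assuming the key is in an environment variable: `audio = synthesize("Hello!", os.environ["SMALLEST_API_KEY"])`, then write `audio` to a `.wav` file.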
Asynchronous Text to Speech
For concurrent requests (e.g., generating multiple audio files in parallel):
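One way to fan out requests is `aiohttp` with `asyncio.gather`, sketched below. As above, the endpoint URL and payload field names are assumptions; check the API Reference.

```python
import asyncio
import os

# Assumed endpoint -- verify against the API Reference.
API_URL = "https://waves-api.smallest.ai/api/v1/lightning/get_speech"

def build_payload(text: str, voice_id: str = "magnus", **overrides) -> dict:
    """JSON body for one synthesis request; extra kwargs override defaults."""
    return {"text": text, "voice_id": voice_id, **overrides}

async def synthesize(session, text: str, **overrides) -> bytes:
    """One request on a shared session; returns complete audio bytes."""
    async with session.post(API_URL, json=build_payload(text, **overrides)) as resp:
        resp.raise_for_status()
        return await resp.read()

async def synthesize_many(texts: list[str]) -> list[bytes]:
    """Run all requests concurrently on one connection pool."""
    import aiohttp  # imported lazily so the pure helpers above have no hard dependency
    headers = {"Authorization": f"Bearer {os.environ['SMALLEST_API_KEY']}"}
    async with aiohttp.ClientSession(headers=headers) as session:
        return await asyncio.gather(*(synthesize(session, t) for t in texts))
```

Run it with `asyncio.run(synthesize_many(["Chapter one.", "Chapter two."]))`; sharing one `ClientSession` reuses connections instead of opening one per request.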
The smallestai Python SDK’s WavesClient and AsyncWavesClient synthesis methods are being updated. Use the requests / aiohttp examples above until the next SDK release. (Streaming synthesis via WavesStreamingTTS works — see Streaming TTS.)
Parameters
You can override any parameter per request:
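For example, overrides go straight into the request body. The parameter names below (`sample_rate`, `speed`) are illustrative assumptions; the full list lives in the API Reference.

```python
# Hypothetical per-request overrides merged into the JSON payload.
payload = {
    "text": "Chapter one.",
    "voice_id": "magnus",
    "sample_rate": 24000,  # assumed name: output sample rate override
    "speed": 1.1,          # assumed name: speaking-speed override
}
```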
When to Use Each Mode
- Synchronous: Real-time voice assistants, chatbot responses, single audio generation
- Asynchronous: Batch processing, generating multiple audio files, audiobook chapters, concurrent API calls
For real-time streaming where audio starts playing before generation completes, see Streaming TTS.
Full runnable source: quickstart-python.py
Need Help?
Check out the API Reference for the full endpoint specification, or ask on Discord.

