How to use Text to Speech (TTS)

In this tutorial, you will learn how to use the Smallest AI platform to synthesize text to speech both synchronously and asynchronously. By the end of this tutorial, you will be able to convert text into speech using our API.

You can access the source code for the Python SDK on our GitHub repository.

Requirements

Before you begin, ensure you have the following:

  • Python (3.9 or higher) installed on your machine.
  • An API key from the Smallest AI platform.

Setup

Install our SDK

$ pip install smallestai

Set your API key as an environment variable

$ export SMALLEST_API_KEY=YOUR_API_KEY
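Before making requests, you can confirm that the variable is actually visible to your Python process. The helper below is illustrative, not part of the SDK:

```python
import os

def api_key_configured() -> bool:
    # True only if SMALLEST_API_KEY is set to a non-empty value
    # in the current process environment.
    return bool(os.environ.get("SMALLEST_API_KEY"))

print("SMALLEST_API_KEY configured:", api_key_configured())
```

If this prints `False`, make sure you exported the variable in the same shell session that launches your script.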

Synchronous Text to Speech

Here is an example of how to synthesize text to speech synchronously:

```python
import os

from smallestai.waves import WavesClient

def main():
    # Read the API key from the SMALLEST_API_KEY environment variable
    # rather than passing the variable's name as a literal string.
    client = WavesClient(api_key=os.environ["SMALLEST_API_KEY"])
    audio = client.synthesize(
        "Hello, this is a test for sync synthesis function.",
    )
    with open("sync_synthesize.wav", "wb") as f:
        f.write(audio)

if __name__ == "__main__":
    main()
```

Asynchronous Text to Speech

Here is an example of how to synthesize text to speech asynchronously:

```python
import asyncio
import os

import aiofiles
from smallestai.waves import AsyncWavesClient

async def main():
    # Read the API key from the SMALLEST_API_KEY environment variable
    # rather than passing the variable's name as a literal string.
    client = AsyncWavesClient(api_key=os.environ["SMALLEST_API_KEY"])
    async with client as tts:
        audio_bytes = await tts.synthesize("Hello, this is a test of the async synthesis function.")
        async with aiofiles.open("async_synthesize.wav", "wb") as f:
            await f.write(audio_bytes)

if __name__ == "__main__":
    asyncio.run(main())
```

Parameters

  • api_key (str): Your API key (can be set via SMALLEST_API_KEY environment variable)
  • model (str): TTS model to use (default: lightning-v3.1, available: lightning-v2, lightning-v3.1)
  • sample_rate (int): Audio sample rate (default: 24000)
  • voice_id (str): Voice ID (default: “emily”)
  • speed (float): Speech speed multiplier (default: 1.0)
  • language (str): Language code, available languages can be found here (default: “en”)
  • output_format (str): The format of the output audio. Available options: pcm, mp3, wav, mulaw (default: “wav”)

These parameters belong to the WavesClient and AsyncWavesClient instances and can be set when creating the instance. The synthesize function also accepts them as keyword arguments, allowing you to override any of these parameters on a per-request basis.
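The override behaviour can be pictured as a simple dictionary overlay: per-request keyword arguments take precedence over the instance defaults. The sketch below is illustrative plain Python, not the SDK's actual implementation; the default values come from the parameter list above:

```python
# Instance defaults, as listed in the Parameters section above.
DEFAULTS = {
    "model": "lightning-v3.1",
    "sample_rate": 24000,
    "voice_id": "emily",
    "speed": 1.0,
    "language": "en",
    "output_format": "wav",
}

def resolve_options(**overrides):
    # Reject parameters the client does not know about,
    # then overlay the per-request values on the defaults.
    unknown = set(overrides) - set(DEFAULTS)
    if unknown:
        raise ValueError(f"Unknown parameters: {sorted(unknown)}")
    return {**DEFAULTS, **overrides}

print(resolve_options(speed=1.5, sample_rate=16000))
```

Any parameter you do not override keeps its instance-level value, so a per-request change to `speed` does not affect `voice_id` or `language`.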

For example, you can modify the speech speed and sample rate just for a particular synthesis request:

```python
audio_bytes = client.synthesize(
    "Modern problems don't always require modern solutions.",
    speed=1.5,          # overrides the default speed
    sample_rate=16000,  # overrides the default sample rate
)
```

Conclusion

The Smallest AI Text-to-Speech SDK offers both synchronous and asynchronous options, catering to a variety of use cases:

  • Synchronous TTS: Ideal for applications where immediate responses are needed, such as real-time voice assistants, chatbot integrations, or interactive voice systems. It ensures that the audio is generated and available instantly for use within the same execution flow.

  • Asynchronous TTS: Designed for handling multiple requests or large-scale processing. If you need to convert many text inputs into speech concurrently, such as generating audio files for an audiobook or processing a batch of text-based announcements, asynchronous TTS lets you run these tasks without blocking other operations, which improves scalability and resource utilization in performance-critical environments.
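The concurrent batch pattern described above can be sketched with `asyncio.gather`. Here `synthesize_one` is a stand-in for a real `await tts.synthesize(...)` call on an AsyncWavesClient, so the pattern is clear without network access or an API key:

```python
import asyncio

async def synthesize_one(text: str) -> bytes:
    # Stand-in for `await tts.synthesize(text)`; a real call
    # would return the synthesized audio bytes from the API.
    await asyncio.sleep(0)
    return f"audio:{text}".encode()

async def synthesize_batch(texts: list[str]) -> list[bytes]:
    # gather() starts all requests concurrently and returns the
    # results in the same order as the input texts.
    return await asyncio.gather(*(synthesize_one(t) for t in texts))

clips = asyncio.run(synthesize_batch(["First announcement.", "Second announcement."]))
print(len(clips))
```

Because `gather` preserves input order, you can zip the results back to their source texts when writing the audio files out.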

By understanding these modes and tailoring their usage to specific requirements, you can build highly responsive, scalable, and efficient solutions using the Smallest AI platform.

If you have any questions or suggestions, please open an issue on the smallest-python-sdk GitHub repository.