How to Stream LLM Output to TTS in Real Time


Synthesize streaming Text to Speech

The TextToAudioStream class provides real-time text-to-speech (TTS) conversion by streaming text directly into audio output. This feature is particularly useful in applications that require instant feedback, such as voice assistants, live captioning systems, or interactive chatbots, where text is continuously generated and needs to be converted into speech on the fly.

This example demonstrates how to stream text from a large language model (LLM) and process it into speech, utilizing the TextToAudioStream class with both synchronous and asynchronous TTS engines.

Example Overview

In this example, text is generated using an LLM (Groq in this case, though any LLM will work), and the generated text is passed to a TTS system (Smallest API) for real-time audio synthesis. The audio is then either streamed over a WebSocket or saved as a .wav file. The entire process runs asynchronously to ensure smooth performance, especially when dealing with large or continuous streams of text.

Code Walkthrough

Stream through a WebSocket

If you are using a voice_id corresponding to a voice clone, you should explicitly set the model parameter to "lightning-large" in the Smallest client or payload.

```python
import asyncio
import websockets
from groq import Groq
from smallestai.waves import WavesClient, TextToAudioStream

# Initialize Groq (LLM) and Smallest (TTS) instances
llm = Groq(api_key="GROQ_API_KEY")
tts = WavesClient(api_key="SMALLEST_API_KEY")
WEBSOCKET_URL = "wss://echo.websocket.events"  # Mock WebSocket server

# Async function to stream text generation from the LLM
async def generate_text(prompt):
    completion = llm.chat.completions.create(
        messages=[{"role": "user", "content": prompt}],
        model="llama3-8b-8192",
        stream=True,
    )

    # Yield text as it is generated
    for chunk in completion:
        text = chunk.choices[0].delta.content
        if text:
            yield text

# Main function to run the process
async def main():
    # Initialize the TTS processor
    processor = TextToAudioStream(tts_instance=tts)

    # Generate text from the LLM
    llm_output = generate_text("Explain text to speech like I am five in 5 sentences.")

    # Stream the generated speech through a WebSocket
    async with websockets.connect(WEBSOCKET_URL) as ws:
        print("Connected to WebSocket server.")

        # Stream the generated speech
        async for audio_chunk in processor.process(llm_output):
            await ws.send(audio_chunk)     # Send audio chunk
            echoed_data = await ws.recv()  # Receive the echoed message
            print("Received from server:", echoed_data[:20], "...")  # Print first 20 bytes

    print("WebSocket connection closed.")

if __name__ == "__main__":
    asyncio.run(main())
```
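The send-a-chunk, read-the-echo round trip above can be tried without any API keys or external server. The following sketch substitutes a plain asyncio TCP echo server for the WebSocket endpoint and fake byte strings for the audio chunks; the names (`echo_server`, `stream_chunks`, `FAKE_AUDIO`) are illustrative, not part of the Smallest SDK:

```python
import asyncio

# Fake "audio" chunks standing in for TTS output
FAKE_AUDIO = [b"\x00\x01" * 100, b"\x02\x03" * 100]

async def echo_server(reader, writer):
    # Echo every received chunk straight back to the client
    while data := await reader.read(1024):
        writer.write(data)
        await writer.drain()
    writer.close()

async def stream_chunks(chunks, port):
    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    echoed = []
    for chunk in chunks:
        writer.write(chunk)  # send the "audio" chunk
        await writer.drain()
        echoed.append(await reader.readexactly(len(chunk)))  # read the echo back
    writer.close()
    await writer.wait_closed()
    return echoed

async def main():
    server = await asyncio.start_server(echo_server, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]  # OS-assigned free port
    echoed = await stream_chunks(FAKE_AUDIO, port)
    server.close()
    await server.wait_closed()
    return echoed

echoed = asyncio.run(main())
```

The lockstep pattern (send one chunk, await its echo before sending the next) mirrors the WebSocket loop in the example above.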

Saving to a file

If you are using a voice_id corresponding to a voice clone, you should explicitly set the model parameter to "lightning-large" in the Smallest client or payload.

```python
import wave
import asyncio
from groq import Groq
from smallestai.waves import WavesClient, TextToAudioStream

# Initialize Groq (LLM) and Smallest (TTS) instances
llm = Groq(api_key="GROQ_API_KEY")
tts = WavesClient(api_key="SMALLEST_API_KEY")

# Async function to stream text generation from the LLM
async def generate_text(prompt):
    completion = llm.chat.completions.create(
        messages=[{"role": "user", "content": prompt}],
        model="llama3-8b-8192",
        stream=True,
    )

    # Yield text as it is generated
    for chunk in completion:
        text = chunk.choices[0].delta.content
        if text:
            yield text

# Async function to save generated audio as a WAV file
async def save_audio_to_wav(file_path, processor, llm_output):
    with wave.open(file_path, "wb") as wav_file:
        wav_file.setnchannels(1)      # Mono audio
        wav_file.setsampwidth(2)      # 16-bit samples
        wav_file.setframerate(24000)  # 24 kHz sample rate

        # Process audio chunks and write them to the WAV file
        async for audio_chunk in processor.process(llm_output):
            wav_file.writeframes(audio_chunk)

# Main asynchronous function to run the process
async def main():
    # Initialize the TTS processor
    processor = TextToAudioStream(tts_instance=tts)

    # Generate text asynchronously
    llm_output = generate_text("Explain text to speech like I am five in 5 sentences.")

    # Save the generated speech to a WAV file
    await save_audio_to_wav("llm_to_speech.wav", processor, llm_output)

if __name__ == "__main__":
    asyncio.run(main())
```

Parameters

  • tts_instance: The TTS client instance (synchronous or asynchronous) used to generate speech from the text.
  • queue_timeout: The wait time (in seconds) for new text to be received before attempting to generate speech. Default is 5.0 seconds.
  • max_retries: The maximum number of retries for failed synthesis attempts. Default is 3.

Output Format

The TextToAudioStream processor streams raw audio data without WAV headers for better streaming efficiency. These raw audio chunks can be:

  • Played directly through an audio device for real-time feedback.
  • Saved to a file (e.g., .wav or .mp3) for later use.
  • Streamed over a network to a client device or service.
  • Further processed for additional applications, such as speech analytics or audio effects.
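Because the chunks are headerless PCM, writing them to a playable file means adding a container header yourself. Assuming 16-bit mono PCM at 24 kHz (the format used in the file-saving example above), the standard-library wave module can wrap the chunks in memory; `pcm_chunks_to_wav_bytes` is an illustrative helper name, not part of the SDK:

```python
import io
import wave

def pcm_chunks_to_wav_bytes(chunks, sample_rate=24000, channels=1, sample_width=2):
    """Wrap raw headerless PCM chunks in a WAV container, entirely in memory."""
    buf = io.BytesIO()
    with wave.open(buf, "wb") as wav_file:
        wav_file.setnchannels(channels)
        wav_file.setsampwidth(sample_width)  # 2 bytes per sample = 16-bit
        wav_file.setframerate(sample_rate)
        for chunk in chunks:
            wav_file.writeframes(chunk)      # append raw PCM frames
    return buf.getvalue()

# Two fake chunks of 100 silent 16-bit samples each
wav_bytes = pcm_chunks_to_wav_bytes([b"\x00\x00" * 100, b"\x00\x00" * 100])
```

The same helper works with real chunks from processor.process(); collect them into a list (or write frames incrementally, as the file-saving example does) and the result is a valid .wav payload ready to send or store.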

This approach allows you to handle continuous streams of text and convert them into real-time speech, making it ideal for interactive applications where immediate audio feedback is crucial.