Configuration
Overview
This guide covers advanced configuration options for customizing your TTS Docker deployment. Learn how to optimize resources, configure external services, and tune performance.
Environment Variables
All configuration is managed through environment variables in the .env file.
Core Configuration
Your Smallest.ai license key for validation and usage reporting
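A minimal sketch of this setting in `.env` (the variable name `SMALLEST_LICENSE_KEY` is an assumption; use the name from your distribution's sample `.env`):

```shell
# Assumed variable name -- substitute the key name from your sample .env
SMALLEST_LICENSE_KEY=your-license-key-here
```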
API Server Configuration
Port for the API server to listen on
Internal URL for license proxy communication
Internal URL for Lightning TTS communication
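The three API-server settings above might look like this in `.env` (variable names, ports, and service hostnames are assumptions based on the service names in this guide):

```shell
# Assumed names and defaults -- adjust to your distribution's .env
API_PORT=8080
LICENSE_PROXY_URL=http://license-proxy:8001
LIGHTNING_TTS_URL=http://lightning-tts:8002
```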
Lightning TTS Configuration
Port for Lightning TTS to listen on
Redis connection URL for caching and state management
For external Redis:
With password:
GPU device ID to use (for multi-GPU systems)
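A sketch of the Lightning TTS settings, including the embedded, external, and password-protected Redis URL forms (variable names and values are assumptions; `CUDA_VISIBLE_DEVICES` is the standard NVIDIA mechanism for GPU selection, but confirm which variable your images honor):

```shell
# Assumed names and defaults
LIGHTNING_TTS_PORT=8002

# Embedded Redis (default, resolves via the compose service name)
REDIS_URL=redis://redis:6379
# For external Redis:
# REDIS_URL=redis://redis.example.com:6379
# With password:
# REDIS_URL=redis://:mypassword@redis.example.com:6379

# GPU device ID (standard NVIDIA variable; 0 = first GPU)
CUDA_VISIBLE_DEVICES=0
```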
Resource Configuration
GPU Allocation
For systems with multiple GPUs, you can specify which GPU to use:
For multiple GPUs per container:
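Both cases can be expressed with the Compose `deploy.resources` GPU syntax (the service name `lightning-tts` is an assumption):

```yaml
services:
  lightning-tts:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ["0"]          # single GPU
              # device_ids: ["0", "1"]   # multiple GPUs per container
              capabilities: [gpu]
```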
Memory Limits
Set memory limits to prevent resource exhaustion:
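For example, capping the TTS container's memory in `docker-compose.yml` (the service name and the 8 GB figure are illustrative; size the limit to your model's actual footprint):

```yaml
services:
  lightning-tts:
    deploy:
      resources:
        limits:
          memory: 8G
```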
CPU Limits
Control CPU allocation:
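A sketch using the Compose `cpus` limit (service name and the two-core value are illustrative):

```yaml
services:
  api-server:
    deploy:
      resources:
        limits:
          cpus: "2.0"
```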
External Services
External Redis
Use an external Redis instance instead of the embedded one:
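Assuming the connection string is read from a `REDIS_URL` variable (an assumed name, as above), pointing it at the external host is enough:

```shell
# External Redis host -- hostname and port are illustrative
REDIS_URL=redis://redis.example.com:6379
```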
Then remove the Redis service definition from docker-compose.yml so the embedded instance no longer starts.
Custom Network
Use a custom Docker network:
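A sketch of a dedicated bridge network shared by the services (network and service names are illustrative):

```yaml
networks:
  tts-net:
    name: tts-net
    driver: bridge

services:
  api-server:
    networks: [tts-net]
  lightning-tts:
    networks: [tts-net]
```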
Performance Tuning
Voice Configuration
Configure voice parameters:
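These knobs are typically exposed as environment variables; a hypothetical sketch (every name and value here is an assumption — consult your distribution for the real parameter names):

```shell
# Hypothetical voice tuning variables
DEFAULT_VOICE=emily
DEFAULT_SAMPLE_RATE=24000
DEFAULT_SPEED=1.0
```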
Batch Processing
Optimize for batch processing:
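A hypothetical sketch of batch tuning variables (names and values are assumptions; larger batches raise throughput at the cost of per-request latency):

```shell
# Hypothetical batching variables
MAX_BATCH_SIZE=16
BATCH_TIMEOUT_MS=50
```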
Model Precision
Control model precision for performance:
Options: `fp32`, `fp16`, `int8`
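For example (the variable name `MODEL_PRECISION` is an assumption; `fp16` generally trades a small amount of accuracy for lower memory use and faster inference on modern GPUs):

```shell
MODEL_PRECISION=fp16
```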
Volume Mounts
Persistent Model Cache
Cache models to avoid re-downloading:
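A sketch using a named volume so model downloads survive container recreation (the container path `/models` is an assumption):

```yaml
services:
  lightning-tts:
    volumes:
      - model-cache:/models   # assumed in-container model directory

volumes:
  model-cache:
```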
Log Persistence
Persist logs for debugging:
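A sketch using a bind mount so logs remain on the host (the in-container path `/app/logs` is an assumption):

```yaml
services:
  api-server:
    volumes:
      - ./logs:/app/logs   # assumed in-container log directory
```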
Health Checks
Add health checks for better monitoring:
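A sketch of a Compose health check (the `/health` endpoint and port are assumptions; use whatever liveness endpoint your API server exposes, and note that `curl` must exist inside the image):

```yaml
services:
  api-server:
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 30s
```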
Security Configuration
Read-Only Root Filesystem
Enhance security with read-only root filesystem:
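A sketch: mark the root filesystem read-only and mount `tmpfs` for paths the service must still write (which paths those are depends on the image):

```yaml
services:
  api-server:
    read_only: true
    tmpfs:
      - /tmp   # writable scratch space; add other writable paths as needed
```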
Non-Root User
Run containers as non-root:
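A sketch overriding the container user (the `1000:1000` UID:GID is illustrative; mounted volumes must be writable by this user):

```yaml
services:
  api-server:
    user: "1000:1000"
```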
Environment File Example
Complete .env file example:
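A sketch assembling the settings from this guide (all variable names and values are illustrative assumptions; reconcile them with your distribution's sample `.env`):

```shell
# License
SMALLEST_LICENSE_KEY=your-license-key-here

# API server
API_PORT=8080
LICENSE_PROXY_URL=http://license-proxy:8001
LIGHTNING_TTS_URL=http://lightning-tts:8002

# Lightning TTS
LIGHTNING_TTS_PORT=8002
REDIS_URL=redis://redis:6379
CUDA_VISIBLE_DEVICES=0

# Performance
MODEL_PRECISION=fp16
MAX_BATCH_SIZE=16
```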

