---
title: Introduction
description: Deploy high-performance speech-to-text and text-to-speech models in your own infrastructure
---


Smallest Self-Host gives you the same powerful TTS and STT capabilities as our cloud service while keeping your data entirely under your control.

## Deployment Options

<CardGroup cols={2}>
  <Card title="Docker STT" icon="server" href="/waves/self-host/docker-setup/stt-deployment/quick-start">
    Deploy speech-to-text with Docker. Best for development, testing, and small-scale production.
  </Card>

  <Card title="Docker TTS" icon="server" href="/waves/self-host/docker-setup/tts-deployment/quick-start">
    Deploy text-to-speech with Docker. Quick setup for voice synthesis workloads.
  </Card>

  <Card title="Kubernetes STT" icon="dharmachakra" href="/waves/self-host/kubernetes-setup/quick-start">
    Production-grade STT with autoscaling and high availability on Kubernetes.
  </Card>
</CardGroup>

<Note>
  Kubernetes deployment is currently available for **STT only**. TTS Kubernetes support is coming soon.
</Note>
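For a feel of what the Docker path looks like, here is a minimal sketch of launching an STT container. The image name, tag, port, and environment variable below are illustrative placeholders, not the actual values — follow the Docker STT quick-start guide for the real image and credentials.

```shell
# Hypothetical example — image name, port, and env var are placeholders.
# See the Docker STT quick-start for the actual registry path and settings.
docker run -d \
  --name smallest-stt \
  --gpus all \
  -p 8080:8080 \
  -e LICENSE_KEY="<your-license-key>" \
  example.registry/smallest/stt:latest
```

Once the container is running, the service listens on the mapped port (8080 in this sketch) and can be health-checked before routing traffic to it.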

## Resources

<CardGroup cols={3}>
  <Card title="Architecture" icon="sitemap" href="/waves/self-host/getting-started/architecture">
    System components and data flow
  </Card>

  <Card title="Why Self-Host?" icon="server" href="/waves/self-host/getting-started/why-self-host">
    Benefits of self-hosting
  </Card>

  <Card title="Prerequisites" icon="list-check" href="/waves/self-host/getting-started/prerequisites">
    Requirements and credentials
  </Card>
</CardGroup>

## Support

* **Email**: [support@smallest.ai](mailto:support@smallest.ai)
* **Discord**: [Join our community](https://discord.gg/9WtSXv26WE)