---
title: Quick Start
description: Deploy Smallest Self-Host Speech-to-Text with Docker Compose in under 15 minutes
---

## Overview

This guide walks you through deploying Smallest Self-Host using Docker Compose. You'll have a fully functional speech-to-text service running in under 15 minutes.

<Note>
  Ensure you've completed all [prerequisites](/waves/self-host/docker-setup/stt-deployment/prerequisites/hardware-requirements) before
  starting this guide.
</Note>

## Step 1: Create Project Directory

Create a directory for your deployment:

```bash
mkdir -p ~/smallest-self-host
cd ~/smallest-self-host
```

## Step 2: Login to Container Registry

Authenticate with the Smallest container registry using credentials provided by support:

```bash
docker login quay.io
```

Enter your username and password when prompted.

<Tip>
  Store your registry credentials securely. Docker caches the login locally,
  but you'll need the credentials again when deploying on a new host or after
  running `docker logout`.
</Tip>

## Step 3: Create Environment File

Create a `.env` file with your license key:

```bash
cat > .env << 'EOF'
LICENSE_KEY=your-license-key-here
EOF
```

Replace `your-license-key-here` with the actual license key provided by Smallest.ai.

<Warning>
  Never commit your `.env` file to version control. Add it to `.gitignore` if
  using git.
</Warning>
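Before starting the stack, a quick sanity check can catch a missing or still-placeholder license key. A minimal sketch; the `check_env` helper is our own, not part of the product:

```bash
# check_env FILE — succeed only if FILE defines LICENSE_KEY and the
# placeholder value has been replaced.
check_env() {
  if ! grep -q '^LICENSE_KEY=' "$1"; then
    echo "LICENSE_KEY missing from $1" >&2
    return 1
  fi
  if grep -q 'your-license-key-here' "$1"; then
    echo "placeholder license key still present in $1" >&2
    return 1
  fi
}
```

Run `check_env .env && echo "env OK"` from the project directory before bringing services up.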

## Step 4: Create Docker Compose File

<Tabs>
  <Tab title="Lightning ASR (Standard)">
    **Best for:** Fast inference, real-time applications

    Create a `docker-compose.yml` file:

    ```yaml docker-compose.yml
    services:
      lightning-asr:
        image: quay.io/smallestinc/lightning-asr:latest
        ports:
          - "2233:2233"
        environment:
          - MODEL_URL=${MODEL_URL}
          - LICENSE_KEY=${LICENSE_KEY}
          - REDIS_URL=redis://redis:6379
          - PORT=2233
        deploy:
          resources:
            reservations:
              devices:
                - driver: nvidia
                  count: 1
                  capabilities: [gpu]
        restart: unless-stopped
        networks:
          - smallest-network

      api-server:
        image: quay.io/smallestinc/self-hosted-api-server:latest
        container_name: api-server
        environment:
          - LICENSE_KEY=${LICENSE_KEY}
          - LIGHTNING_ASR_BASE_URL=http://lightning-asr:2233
          - API_BASE_URL=http://license-proxy:3369
        ports:
          - "7100:7100"
        networks:
          - smallest-network
        restart: unless-stopped
        depends_on:
          - lightning-asr
          - license-proxy

      license-proxy:
        image: quay.io/smallestinc/license-proxy:latest
        container_name: license-proxy
        environment:
          - LICENSE_KEY=${LICENSE_KEY}
        networks:
          - smallest-network
        restart: unless-stopped

      redis:
        image: redis:7-alpine
        ports:
          - "6379:6379"
        networks:
          - smallest-network
        restart: unless-stopped
        command: redis-server --appendonly yes
        healthcheck:
          test: ["CMD", "redis-cli", "ping"]
          interval: 5s
          timeout: 3s
          retries: 5

    networks:
      smallest-network:
        driver: bridge
        name: smallest-network
    ```
  </Tab>
</Tabs>

## Step 5: Additional Configuration for Lightning ASR

<Tabs>
  <Tab title="Lightning ASR">
    Add the model URL to your `.env` file (required for Lightning ASR):

    ```bash
    echo "MODEL_URL=your-model-url-here" >> .env
    ```

    The `MODEL_URL` value is provided by Smallest.ai support.
  </Tab>
</Tabs>

## Step 6: Start Services

Launch all services with Docker Compose:

```bash
docker compose up -d
```

## Step 7: Monitor Startup

Watch the logs to monitor startup progress:

```bash
docker compose logs -f
```

Look for these success indicators:

<Steps>
  <Step title="Redis Ready">
    ```
    redis-1  | Ready to accept connections
    ```
  </Step>

  <Step title="License Proxy Ready">
    ```
    license-proxy  | License validated successfully
    license-proxy  | Server listening on port 3369
    ```
  </Step>

  <Step title="Model Service Ready">
    **Lightning ASR:**

    ```
    lightning-asr-1  | Model loaded successfully
    lightning-asr-1  | Server ready on port 2233
    ```
  </Step>

  <Step title="API Server Ready">
    ```
    api-server  | Connected to Lightning ASR
    api-server  | API server listening on port 7100
    ```
  </Step>
</Steps>
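Rather than watching logs by hand, you can poll the API server until it answers. A rough sketch, assuming the api-server responds to HTTP on port 7100 once startup completes; the `wait_for_url` helper is our own:

```bash
# wait_for_url URL TIMEOUT_SECONDS — poll URL once per second until it
# responds or the deadline passes.
wait_for_url() {
  url=$1
  deadline=$(( $(date +%s) + $2 ))
  while [ "$(date +%s)" -lt "$deadline" ]; do
    if curl -fs -o /dev/null "$url"; then
      echo "up: $url"
      return 0
    fi
    sleep 1
  done
  echo "timed out waiting for $url" >&2
  return 1
}
```

For example, `wait_for_url http://localhost:7100 120` blocks for up to two minutes while the model loads.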

## Common Startup Issues

<AccordionGroup>
  <Accordion title="GPU Not Found">
    **Error:** `could not select device driver "nvidia"`

    **Solution:**

    ```bash
    sudo systemctl restart docker
    docker run --rm --gpus all nvidia/cuda:11.8.0-base-ubuntu22.04 nvidia-smi
    ```

    If this fails, reinstall NVIDIA Container Toolkit.
  </Accordion>

  <Accordion title="License Validation Failed">
    **Error:** `License validation failed`

    **Solution:**

    * Verify LICENSE\_KEY in `.env` is correct
    * Check internet connectivity
    * Ensure firewall allows HTTPS to api.smallest.ai
  </Accordion>

  <Accordion title="Model Download Failed">
    **Error:** `Failed to download model`

    **Solution:**

    * Verify MODEL\_URL in `.env` is correct
    * Check disk space: `df -h`
    * Check internet connectivity
  </Accordion>

  <Accordion title="Port Already in Use">
    **Error:** `port is already allocated`

    **Solution:**
    Check what's using the port:

    ```bash
    sudo lsof -i :7100
    ```

    Either stop the conflicting service or change the host port mapping in `docker-compose.yml`.
  </Accordion>
</AccordionGroup>

## Managing Your Deployment

### Stop Services

```bash
docker compose stop
```

### Restart Services

```bash
docker compose restart
```

### View Logs

```bash
docker compose logs -f [service-name]
```

Examples:

```bash
docker compose logs -f api-server
docker compose logs -f lightning-asr
```

### Update Images

Pull latest images and restart:

```bash
docker compose pull
docker compose up -d
```
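With `latest` tags, `docker compose pull` fetches whatever the registry currently publishes. If you prefer updates to be deliberate, you can pin each service to a specific tag; the tag below is a placeholder, use whichever versions Smallest.ai publishes:

```yaml
services:
  lightning-asr:
    # Replace `latest` with a pinned tag so pulls only change when you do.
    image: quay.io/smallestinc/lightning-asr:<pinned-tag>
```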

### Remove Deployment

Stop and remove all containers:

```bash
docker compose down
```

Remove containers and volumes (including downloaded models):

```bash
docker compose down -v
```

<Warning>
  The `-v` flag deletes all volume data, including downloaded models, which
  will have to be re-downloaded on the next startup.
</Warning>
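If you might redeploy later, consider copying your configuration aside before a destructive `down -v`. A small sketch; `backup_config` is our own helper, and it assumes the files created in Steps 3 and 4:

```bash
# backup_config DEST FILE... — copy deployment files to DEST before a
# destructive teardown.
backup_config() {
  dest=$1
  shift
  mkdir -p "$dest" && cp -p "$@" "$dest"/
}
```

For example, `backup_config ~/smallest-backup .env docker-compose.yml` preserves your license key and service definitions.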

## What's Next?

<CardGroup cols={2}>
  <Card title="STT Configuration" href="/waves/self-host/docker-setup/stt-deployment/configuration">
    Customize your deployment with advanced configuration options
  </Card>

  <Card title="STT Services Overview" href="/waves/self-host/docker-setup/stt-deployment/services-overview">
    Learn about each service component in detail
  </Card>

  <Card title="STT Troubleshooting" href="/waves/self-host/docker-setup/stt-deployment/troubleshooting">
    Debug common issues and optimize performance
  </Card>

  <Card title="API Reference" href="/waves/self-host/api-reference/authentication">
    Integrate with your applications using the API
  </Card>
</CardGroup>