---
title: Prerequisites
description: What you need before deploying Smallest Self-Host
---

## Overview

Before deploying Smallest Self-Host, you'll need credentials from Smallest.ai and infrastructure with GPU support.

## Credentials from Smallest.ai

Contact **[support@smallest.ai](mailto:support@smallest.ai)** to obtain the following:

<AccordionGroup>
  <Accordion title="License Key">
    Your unique license key for validation. This is required for all deployments.

    You'll add this to your configuration:

    ```yaml
    global:
      licenseKey: "your-license-key-here"
    ```

    Or as an environment variable:

    ```bash
    LICENSE_KEY=your-license-key-here
    ```
  </Accordion>

  <Accordion title="Container Registry Credentials">
    Credentials to pull Docker images from `quay.io`:

    * **Username**
    * **Password**
    * **Email**

Log in to the registry:

    ```bash
    docker login quay.io
    ```

    For Kubernetes, you'll add these to your `values.yaml`:

    ```yaml
    global:
      imageCredentials:
        create: true
        registry: quay.io
        username: "your-username"
        password: "your-password"
        email: "your-email@example.com"
    ```
  </Accordion>

  <Accordion title="Model Download URLs">
    Download URLs for the AI models (STT and/or TTS).

    For Docker deployments, add to your `.env`:

    ```bash
    MODEL_URL=your-model-url-here
    ```

    For Kubernetes, add to `values.yaml`:

    ```yaml
    models:
      asrModelUrl: "your-asr-model-url"
      ttsModelUrl: "your-tts-model-url"
    ```
  </Accordion>
</AccordionGroup>
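
Before starting a deployment, it can be worth sanity-checking that the credentials above are actually set. A minimal pre-flight sketch: `LICENSE_KEY` and `MODEL_URL` match the `.env` examples above, while the exact set of required keys for your deployment is an assumption and may differ.

```python
# Pre-flight check: verify the required Smallest Self-Host settings exist.
# LICENSE_KEY and MODEL_URL match the .env examples above; adjust the set
# for your deployment (e.g. separate ASR/TTS model URLs on Kubernetes).
REQUIRED_KEYS = ("LICENSE_KEY", "MODEL_URL")

def missing_settings(env: dict) -> list:
    """Return the required keys that are absent or empty in `env`."""
    return [key for key in REQUIRED_KEYS if not env.get(key)]

if __name__ == "__main__":
    import os
    missing = missing_settings(dict(os.environ))
    if missing:
        raise SystemExit(f"Missing required settings: {', '.join(missing)}")
    print("All required settings present.")
```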

## Infrastructure Requirements

<CardGroup cols={2}>
  <Card title="GPU Requirements">
    * **NVIDIA GPU** with 16+ GB VRAM
    * Recommended: A10, L4, L40S, T4, or A100
    * NVIDIA Driver 525+ (for A10, A100, L4)
    * NVIDIA Driver 470+ (for T4, V100)
  </Card>

  <Card title="Container Runtime">
    * Docker 20.10+ or Podman 4.0+
    * NVIDIA Container Toolkit
    * For Kubernetes: GPU Operator or Device Plugin
  </Card>
</CardGroup>

### Minimum Resources

<table>
  <thead>
    <tr>
      <th>
        Component
      </th>

      <th>
        CPU
      </th>

      <th>
        Memory
      </th>

      <th>
        GPU
      </th>

      <th>
        Storage
      </th>
    </tr>
  </thead>

  <tbody>
    <tr>
      <td>
        <strong>Lightning ASR</strong>
      </td>

      <td>
        4-8 cores
      </td>

      <td>
        12-16 GB
      </td>

      <td>
        1x NVIDIA (16+ GB VRAM)
      </td>

      <td>
        50+ GB
      </td>
    </tr>

    <tr>
      <td>
        <strong>Lightning TTS</strong>
      </td>

      <td>
        4-8 cores
      </td>

      <td>
        12-16 GB
      </td>

      <td>
        1x NVIDIA (16+ GB VRAM)
      </td>

      <td>
        20+ GB
      </td>
    </tr>

    <tr>
      <td>
<strong>API Server</strong>
      </td>

      <td>
        0.5-2 cores
      </td>

      <td>
        512 MB - 2 GB
      </td>

      <td>
        None
      </td>

      <td>
        1 GB
      </td>
    </tr>

    <tr>
      <td>
<strong>License Proxy</strong>
      </td>

      <td>
        0.25-1 core
      </td>

      <td>
        256-512 MB
      </td>

      <td>
        None
      </td>

      <td>
        100 MB
      </td>
    </tr>

    <tr>
      <td>
<strong>Redis</strong>
      </td>

      <td>
        0.5-1 core
      </td>

      <td>
        512 MB - 2 GB
      </td>

      <td>
        None
      </td>

      <td>
        1 GB
      </td>
    </tr>
  </tbody>
</table>
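
Summing the lower bounds in the table, a full deployment running both models on separate GPUs needs roughly 9.25 CPU cores, about 25 GB of memory, 2 GPUs, and a little over 72 GB of storage. A quick check of that arithmetic, with the per-component figures copied from the table above:

```python
# Minimum resources per component, copied from the table above:
# (cpu cores, memory GB, gpus, storage GB)
MINIMUMS = {
    "lightning-asr": (4.0, 12.0, 1, 50.0),
    "lightning-tts": (4.0, 12.0, 1, 20.0),
    "api-server":    (0.5, 0.5, 0, 1.0),
    "license-proxy": (0.25, 0.25, 0, 0.1),
    "redis":         (0.5, 0.5, 0, 1.0),
}

def totals(minimums):
    """Sum each resource column across all components."""
    cpu, mem, gpu, disk = (round(sum(col), 2) for col in zip(*minimums.values()))
    return {"cpu_cores": cpu, "memory_gb": mem, "gpus": gpu, "storage_gb": disk}

print(totals(MINIMUMS))
# → 9.25 cores, 25.25 GB memory, 2 GPUs, 72.1 GB storage
```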

## Network Requirements

The License Proxy requires outbound HTTPS access to validate licenses:

<table>
  <thead>
    <tr>
      <th>
        Endpoint
      </th>

      <th>
        Port
      </th>

      <th>
        Purpose
      </th>
    </tr>
  </thead>

  <tbody>
    <tr>
      <td>
        <code>api.smallest.ai</code>
      </td>

      <td>
        443
      </td>

      <td>
        License validation and usage reporting
      </td>
    </tr>
  </tbody>
</table>

<Warning>
  Ensure your firewall and network policies allow outbound HTTPS traffic to `api.smallest.ai`.
</Warning>
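
The egress rule can be verified from inside the deployment environment before installing anything. A small sketch using only the Python standard library; the host and port come from the table above, and the timeout value is an arbitrary choice:

```python
import socket

def can_reach(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within `timeout`."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Endpoint from the network requirements table above.
    ok = can_reach("api.smallest.ai", 443)
    print("license endpoint reachable" if ok
          else "blocked: check firewall egress rules")
```

Note this only confirms a TCP connection can be opened; it does not validate the license itself, which the License Proxy handles once deployed.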

## Next Steps

Choose your deployment method and follow the specific prerequisites:

<CardGroup cols={2}>
  <Card title="Docker Prerequisites" href="/waves/self-host/docker-setup/stt-deployment/prerequisites/hardware-requirements">
    Setup requirements for Docker deployments including NVIDIA Container Toolkit installation.
  </Card>

  <Card title="Kubernetes Prerequisites" href="/waves/self-host/kubernetes-setup/prerequisites/hardware-requirements">
    Cluster requirements, GPU node setup, and Helm configuration for Kubernetes deployments.
  </Card>
</CardGroup>