---
title: Software Requirements
description: Tools and software for Kubernetes STT deployment
---


## Required Tools

Install the following tools on your local machine.

### Helm

Helm 3.0 or higher is required.

<Tabs>
  <Tab title="macOS">
    ```bash
    brew install helm
    ```
  </Tab>

  <Tab title="Linux">
    ```bash
    curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
    ```
  </Tab>

  <Tab title="Windows">
    ```powershell
    choco install kubernetes-helm
    ```
  </Tab>
</Tabs>

Verify installation:

```bash
helm version
```

### kubectl

kubectl is the Kubernetes command-line tool for managing your cluster. Use a version within one minor version of your cluster's control plane.

<Tabs>
  <Tab title="macOS">
    ```bash
    brew install kubectl
    ```
  </Tab>

  <Tab title="Linux">
    ```bash
    curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
    chmod +x kubectl
    sudo mv kubectl /usr/local/bin/
    ```
  </Tab>

  <Tab title="Windows">
    ```powershell
    choco install kubernetes-cli
    ```
  </Tab>
</Tabs>

Verify installation:

```bash
kubectl version --client
```

## Cluster Access

### Configure kubectl

Ensure kubectl is configured to access your cluster:

```bash
kubectl cluster-info
kubectl get nodes
```

The output should list your cluster's nodes, each in `Ready` status.

### Test Cluster Access

Verify you have sufficient permissions:

```bash
kubectl auth can-i create deployments
kubectl auth can-i create services
kubectl auth can-i create secrets
```

All should return `yes`.
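
These checks can be scripted so a missing permission fails fast. The helper below is a sketch, not part of the chart; the `check_perms` function name is an illustration, and it takes the checker command as a parameter so it can be exercised without a live cluster:

```shell
# Sketch: verify "create" permission for each required resource.
# check_perms takes a checker command (e.g. "kubectl auth can-i create")
# followed by resource names, and stops at the first "no".
check_perms() {
  local checker="$1"
  shift
  local res
  for res in "$@"; do
    if [ "$($checker "$res")" != "yes" ]; then
      echo "Missing permission: create $res"
      return 1
    fi
  done
  echo "All permissions granted"
}

# Against a live cluster (uncomment to run):
# check_perms "kubectl auth can-i create" deployments services secrets
```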

## GPU Support

### NVIDIA GPU Operator

Install the NVIDIA GPU Operator to manage GPU drivers, the device plugin, and GPU scheduling on your cluster's GPU nodes.

<Note>
  The Smallest Self-Host Helm chart includes the GPU Operator as an optional dependency. You can enable it during installation or install it separately.
</Note>
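
If you install the operator separately rather than through the chart dependency, NVIDIA publishes it as a Helm chart. A minimal sketch (the release name and namespace `gpu-operator` are arbitrary choices, not values from this guide):

```bash
# Add NVIDIA's Helm repository and install the GPU Operator
# into its own namespace.
helm repo add nvidia https://helm.ngc.nvidia.com/nvidia
helm repo update
helm install gpu-operator nvidia/gpu-operator \
  --namespace gpu-operator \
  --create-namespace
```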

#### Verify GPU Nodes

Check that GPU nodes carry the labels applied by the GPU Operator's node feature discovery:

```bash
kubectl get nodes -l nvidia.com/gpu.present=true
```

Verify GPU resources are available:

```bash
kubectl get nodes -o json | jq '.items[].status.capacity'
```

Look for `nvidia.com/gpu` in the capacity.
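
To narrow that output to GPU nodes only, the same data can be filtered with `jq`. A convenience sketch (assumes `jq` is installed; the `GPU_FILTER` variable name is an illustration):

```shell
# jq filter: keep only nodes whose capacity lists nvidia.com/gpu,
# printing "<node-name> <gpu-count>" for each.
GPU_FILTER='.items[]
  | select(.status.capacity["nvidia.com/gpu"] != null)
  | "\(.metadata.name) \(.status.capacity["nvidia.com/gpu"])"'

# Against a live cluster (uncomment to run):
# kubectl get nodes -o json | jq -r "$GPU_FILTER"
```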

## Optional Components

### Prometheus & Grafana

For monitoring and autoscaling based on custom metrics:

* **Prometheus Operator** (included in chart)
* **Grafana** (included in chart)
* **Prometheus Adapter** (included in chart)

These are required for:

* Custom metrics-based autoscaling
* Advanced monitoring dashboards
* Performance visualization
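
If you enable these components through the chart rather than installing them separately, the toggle might look like the values fragment below. The key names are assumptions for illustration; check the chart's `values.yaml` for the actual structure:

```yaml
# Hypothetical values.yaml fragment -- key names are assumptions,
# not taken from the actual chart.
prometheus:
  enabled: true
grafana:
  enabled: true
prometheus-adapter:
  enabled: true
```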

### Cluster Autoscaler

For automatic node scaling on AWS EKS:

* IAM role with autoscaling permissions
* IRSA (IAM Roles for Service Accounts) configured

<Tip>
  See the [Cluster Autoscaler](/waves/self-host/kubernetes-setup/autoscaling/cluster-autoscaler) guide for setup.
</Tip>