---
title: Getting Started
description: From zero to a running AI agent.
---
This guide walks you through installing the SDK, writing your first intelligent agent, and running it.
## Prerequisites
**OpenAI API Key required.** Set it as an environment variable before running your agent:
```bash
export OPENAI_API_KEY="your-key-here"
```
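You can also verify the key is present before the agent starts. A minimal fail-fast sketch (`require_api_key` is a helper introduced here for illustration, not part of the SDK):

```python
import os

def require_api_key(name: str = "OPENAI_API_KEY") -> str:
    """Return the API key from the environment, or fail early with a clear error."""
    key = os.getenv(name)
    if not key:
        raise RuntimeError(f"{name} is not set; export it before starting the agent.")
    return key
```

Calling this at startup turns a confusing mid-session auth failure into an immediate, actionable error.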
## Installation
```bash
pip install smallestai
```
## Write Your First Agent
Create two files: one for the agent logic, and one to run the application.
Subclass `OutputAgentNode` and implement `generate_response()` to stream LLM output.
```python my_agent.py
import os

from smallestai.atoms.agent.nodes import OutputAgentNode
from smallestai.atoms.agent.clients.openai import OpenAIClient


class MyAgent(OutputAgentNode):
    def __init__(self):
        super().__init__(name="my-agent")
        self.llm = OpenAIClient(
            model="gpt-4o-mini",
            api_key=os.getenv("OPENAI_API_KEY"),
        )

    async def generate_response(self):
        # Stream the LLM response and yield text chunks as they arrive.
        response = await self.llm.chat(
            messages=self.context.messages,
            stream=True,
        )
        async for chunk in response:
            if chunk.content:
                yield chunk.content
```
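`generate_response()` is an async generator: each yielded chunk is forwarded downstream as soon as it arrives, rather than waiting for the full reply. The same pattern can be demonstrated standalone (`fake_chat` here is a stand-in for `self.llm.chat`, not an SDK function):

```python
import asyncio

async def fake_chat(messages, stream=True):
    """Stand-in for an LLM client: an awaitable that returns an async chunk stream."""
    async def chunks():
        for c in ["Hel", "lo, ", "world", ""]:
            yield c
    return chunks()

async def generate_response():
    # Same shape as the agent method above: await the stream, then yield each chunk.
    response = await fake_chat([{"role": "user", "content": "hi"}], stream=True)
    async for chunk in response:
        if chunk:  # skip empty chunks, as the agent skips empty content
            yield chunk

async def main():
    # Consumers collect chunks incrementally; joining them reconstructs the reply.
    return "".join([c async for c in generate_response()])

print(asyncio.run(main()))  # prints "Hello, world"
```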
Wire up `AtomsApp` with a `setup_handler` that adds your agent to the session.
```python main.py
from smallestai.atoms.agent.server import AtomsApp
from smallestai.atoms.agent.session import AgentSession

from my_agent import MyAgent


async def on_start(session: AgentSession):
    session.add_node(MyAgent())
    await session.start()
    await session.wait_until_complete()


if __name__ == "__main__":
    app = AtomsApp(setup_handler=on_start)
    app.run()
```
Your entry point can be named anything (`app.py`, `run.py`, etc.). When deploying, specify it with `--entry-point your_file.py`.
**Want a greeting?** Use `@session.on_event` inside your setup handler (where `session` and your agent instance are in scope) to speak when the user joins:
```python
@session.on_event("on_event_received")
async def on_event(_, event):
    # SDKSystemUserJoinedEvent is one of the SDK's system event types.
    if isinstance(event, SDKSystemUserJoinedEvent):
        agent.context.add_message({"role": "assistant", "content": "Hello!"})
        await agent.speak("Hello! How can I help?")
```
Adding the greeting to context ensures the LLM knows the conversation has started.
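The `@session.on_event(...)` decorator registers a coroutine to be called whenever the named event fires. A simplified illustration of that registration-and-dispatch pattern (`Session` here is a toy class, not the SDK's actual implementation):

```python
import asyncio

class Session:
    """Toy stand-in showing how decorator-based event registration works."""
    def __init__(self):
        self._handlers = {}

    def on_event(self, name):
        # Returns a decorator that records the coroutine under the event name.
        def register(fn):
            self._handlers.setdefault(name, []).append(fn)
            return fn
        return register

    async def emit(self, name, event):
        # Invoke every handler registered for this event name.
        for fn in self._handlers.get(name, []):
            await fn(self, event)

session = Session()
log = []

@session.on_event("on_event_received")
async def greet(_, event):
    if event.get("type") == "user_joined":
        log.append("Hello! How can I help?")

asyncio.run(session.emit("on_event_received", {"type": "user_joined"}))
print(log)  # prints ['Hello! How can I help?']
```

The handler only runs for events it was registered for, which is why the real snippet also type-checks the event before speaking.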
## Run Your Agent
Once your files are ready, you have two options.

**Option 1: Run locally.** For development and testing, run the file directly:
```bash
python main.py
```
This starts a WebSocket server on `localhost:8080`. In a separate terminal, connect to it:
```bash
smallestai agent chat
```
No account or deployment needed.
**Option 2: Deploy to the cloud.** To have Smallest AI host your agent (for production, API access, or phone calls), follow these steps.
**Prerequisite:** You must first create an agent on the [Atoms platform](https://atoms.smallest.ai). The `agent init` command links your local code to that agent.
Log in to your Smallest AI account.
```bash
smallestai auth login
```
Link your directory to an existing agent on the platform.
```bash
smallestai agent init
```
Push your code to the cloud.
```bash
smallestai agent deploy --entry-point main.py
```
Finally, activate the deployment: run `smallestai agent builds`, select your build, and choose **Make Live**.
## What's Next?
- **Add tools:** Give your agent calculators, search, and APIs.
- **Orchestrate agents:** Connect multiple agents for complex workflows.