Quick Start
Install the SDK, write your first agent, and test it — locally or deployed to the cloud.
Prerequisites
An OpenAI API key is required. Set it as an environment variable before running your agent:
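For example, in a POSIX shell (replace the placeholder with your actual key):

```shell
export OPENAI_API_KEY="sk-your-key-here"
```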
Installation
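A typical install, assuming the SDK is distributed on PyPI. The package name below is a placeholder guess, so substitute the real name from the SDK's distribution docs:

```shell
pip install atoms-sdk  # placeholder package name; check the SDK docs
```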
Write Your First Agent
Create two files: one for the agent logic, and one to run the application.
Create my_agent.py
Subclass OutputAgentNode and implement generate_response() to stream LLM output.
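A minimal sketch of what my_agent.py might look like. Only OutputAgentNode and generate_response() come from this guide; the atoms import path, the stream_text helper, and all signatures are assumptions, so check the SDK reference for the real API. A small stand-in class is included so the sketch runs even without the SDK installed.

```python
# my_agent.py -- hypothetical sketch. The `atoms` import path, the
# stream_text helper, and the signatures below are assumptions; only
# OutputAgentNode and generate_response() are named by this guide.
try:
    from atoms import OutputAgentNode  # assumed import path
except ImportError:
    # Minimal stand-in so the sketch can run without the SDK installed.
    class OutputAgentNode:
        async def stream_text(self, chunk: str) -> None:
            print(chunk, end="", flush=True)


class MyAgent(OutputAgentNode):
    async def generate_response(self, context) -> None:
        # Call your LLM here and forward tokens to the caller as they
        # arrive. `context` is assumed to carry the conversation history.
        async for token in self._placeholder_llm(context):
            await self.stream_text(token)

    async def _placeholder_llm(self, context):
        # Stands in for a real streaming OpenAI call.
        for token in ("Hello", ", ", "world", "!"):
            yield token
```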
Create main.py
Wire up AtomsApp with a setup_handler that adds your agent to the session.
Your entry point can be named anything (app.py, run.py, etc.). When deploying, specify it with --entry-point your_file.py.
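A sketch of main.py under the same caveats: AtomsApp and the setup_handler idea come from this guide, but the import path, the session.add_node method name, and app.run() are assumptions. Stand-in classes let the sketch run without the SDK installed.

```python
# main.py -- hypothetical sketch; only AtomsApp and the setup_handler
# concept come from this guide. Import paths and method names are guesses.
try:
    from atoms import AtomsApp       # assumed import path
    from my_agent import MyAgent     # the agent from my_agent.py
except ImportError:
    # Minimal stand-ins so the sketch runs without the SDK installed.
    class AtomsApp:
        def __init__(self, setup_handler):
            self.setup_handler = setup_handler

        def run(self):
            pass  # the real app presumably starts the WebSocket server here

    class MyAgent:
        pass


async def setup_handler(session):
    # Add your agent to the session; `add_node` is an assumed method name.
    session.add_node(MyAgent())


app = AtomsApp(setup_handler=setup_handler)

if __name__ == "__main__":
    app.run()
```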
Want a greeting? Use @session.on_event to speak when the user joins:
Adding the greeting to context ensures the LLM knows the conversation has started.
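A hedged sketch of how that might look inside your setup_handler. Only the @session.on_event decorator is named by this guide; the "user_joined" event name and the session.speak and session.context members are assumptions, so check the SDK reference for the real ones.

```python
# Hypothetical sketch of a join greeting. The event name "user_joined"
# and the session.speak / session.context members are assumptions.
async def setup_handler(session):
    @session.on_event("user_joined")
    async def greet(event):
        greeting = "Hi! How can I help you today?"
        await session.speak(greeting)  # assumed speak/TTS helper
        # Append the greeting to context so the LLM knows the
        # conversation has already started.
        session.context.append({"role": "assistant", "content": greeting})
```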
Run Your Agent
Once your files are ready, you have two options:
Run Locally
Deploy to Platform
For development and testing, run the file directly:
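Assuming your entry point is main.py:

```shell
python main.py
```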
This starts a WebSocket server on localhost:8080. In a separate terminal, connect to it:
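This guide doesn't name a specific client, so any WebSocket client works; wscat is shown here as one generic option:

```shell
npx wscat -c ws://localhost:8080
```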
No account or deployment needed.
What’s Next?
Give your agent calculators, search, and APIs.
Connect multiple agents for complex workflows.

