---
title: Agents
sidebarTitle: Overview
description: The brain of your AI application.
---
An **Agent** is the core component that powers conversational AI in the Atoms SDK. It's the "brain" that listens to what users say, thinks about how to respond, and speaks back—all in real-time.
## What is an Agent?
In Atoms, an agent is implemented as an `OutputAgentNode`—a specialized node that handles the complete conversation loop:
1. **Listen** — Receives transcribed speech from the user
2. **Think** — Processes the input with an LLM to generate a response
3. **Speak** — Streams the response as audio back to the user
This happens continuously, creating a natural back-and-forth conversation.
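The listen–think–speak loop can be sketched in plain Python. This is a conceptual illustration only: none of the function names below are Atoms SDK APIs, and the transcription, LLM, and speech steps are stubbed out with placeholders.

```python
# Conceptual sketch of the listen-think-speak loop.
# All names here are illustrative placeholders, NOT the Atoms SDK API.

def transcribe(audio: str) -> str:
    # Stand-in for speech-to-text; a real agent receives user audio here.
    return audio

def generate_reply(history: list[dict]) -> str:
    # Stand-in for the LLM call; simply echoes the last user turn.
    return f"You said: {history[-1]['content']}"

def speak(text: str) -> str:
    # Stand-in for text-to-speech streamed back to the user.
    return text

def conversation_turn(history: list[dict], user_audio: str) -> str:
    text = transcribe(user_audio)                              # 1. Listen
    history.append({"role": "user", "content": text})
    reply = generate_reply(history)                            # 2. Think
    history.append({"role": "assistant", "content": reply})
    return speak(reply)                                        # 3. Speak

history: list[dict] = []
print(conversation_turn(history, "Hello"))  # → You said: Hello
```

Note how the shared `history` list carries context across turns; in the real SDK this bookkeeping is handled for you (see Context Management below).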
## Why Atoms Agents?
| Feature | Description |
| ------------------------- | ---------------------------------------------------------------------- |
| **Real-Time Streaming** | Responses start playing while the LLM is still generating. No waiting. |
| **Interruption Handling** | When users speak mid-response, the agent stops and listens. |
| **Context Management** | Conversation history maintained automatically. |
| **Tool Calling** | Execute functions mid-conversation—check databases, call APIs. |
| **Multi-Provider LLM** | Use OpenAI, Anthropic, or bring your own model. |
| **Production Ready** | Deploy with one command. Handle thousands of concurrent calls. |
## Node Types
The SDK provides three node types for building agents:
| Node | Purpose |
| --------------------- | ----------------------------------------------------- |
| `Node` | Base primitive for routing, logging, and custom logic |
| `OutputAgentNode` | Conversational agent that speaks to users |
| `BackgroundAgentNode` | Silent observer for analytics and monitoring |
For a deep dive into node architecture, when to use each type, and how to build custom nodes, see the dedicated Nodes guide.
***
## What's Next
- Set up prompts and LLM settings.
- Give your agent tools for actions and data access.
- Design conversation flows, handle interruptions, and coordinate multiple agents.
- Test locally and debug issues.