Nodes


A Node is the basic unit of computation in the Atoms graph. Every “agent” or functional component you build is ultimately a Node.

What is a Node?

In the conceptual graph, a Node is a vertex that performs three key actions:

  • Receive: Accept incoming events like user audio, text, or system triggers.
  • Process: Execute custom Python code, business logic, or AI inference.
  • Send: Emit new events to pass control to the rest of the graph.
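This receive-process-send cycle can be sketched with a minimal stand-in class. `StubNode` and `EchoNode` below are illustrative only; the real base class is `smallestai.atoms.agent.nodes.Node` and its graph wiring differs:

```python
import asyncio

class StubNode:
    """Illustrative stand-in for the real Node base class."""
    def __init__(self, name):
        self.name = name
        self.children = []

    async def send_event(self, event):
        # Send: emit the event to every child node
        for child in self.children:
            await child.process_event(event)

    async def process_event(self, event):
        # Receive + Process: subclasses override this
        raise NotImplementedError

received = []

class EchoNode(StubNode):
    async def process_event(self, event):
        received.append((self.name, event))  # Process: record what we saw
        await self.send_event(event)         # Send: pass control onward

parent = EchoNode("parent")
child = EchoNode("child")
parent.children.append(child)
asyncio.run(parent.process_event("hello"))
```

Each node that receives the event runs its own logic, then explicitly hands the event to its children.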

Abstracted Nodes

To help you get started quickly, we have abstracted three common node patterns for you. You can use these out of the box or build your own custom nodes from scratch.

1. The Base Node (Node)

The Node class is the raw primitive. It gives you full control but assumes nothing. It is perfect for deterministic logic, API calls, or routing decisions.

Key Features:

  • Raw Event Access: You get the raw event and decide exactly what to do with it.
  • No Overhead: No LLM context or streaming logic unless you build it.

Use Case: Router, API Fetcher, Database Logger, Analytics Tracker.

Override process_event() to handle incoming events.

```python
from smallestai.atoms.agent.nodes import Node

class RouterNode(Node):
    async def process_event(self, event):
        # Deterministic logic
        if "sales" in event.content:
            # Broadcast to children (routing logic handles filtering)
            await self.send_event(event)
        else:
            # Non-sales events are broadcast too; downstream filtering decides
            await self.send_event(event)
```

2. The Output Agent (OutputAgentNode)

This is the most common node type. It is a full-featured conversational agent designed to interact with Large Language Models (LLMs).

Key Features:

  • Auto-Interruption: Automatically handles the user interrupting the agent mid-playback.
  • Streaming: Manages the complexity of streaming LLM tokens to the user in real-time.
  • Context Management: Maintains conversation history automatically.

Use Case: The “brain” of your agent—Sales Agent, Support Agent, Triage Agent.

Implement generate_response() as an async generator that yields text chunks.

```python
from smallestai.atoms.agent.nodes import OutputAgentNode
from smallestai.atoms.agent.clients.openai import OpenAIClient

class MyAgent(OutputAgentNode):
    def __init__(self):
        super().__init__(name="my_agent")
        # Initialize your own LLM client
        self.llm = OpenAIClient(model="gpt-4o-mini")

    async def generate_response(self):
        # 1. Call your LLM
        # 2. Yield text chunks (the framework handles buffering and events)
        response = await self.llm.chat(
            messages=self.context.messages,
            stream=True
        )
        async for chunk in response:
            if chunk.content:
                yield chunk.content
```

3. The Background Agent (BackgroundAgentNode)

A silent observer node that processes events without producing audio output.

Key Features:

  • Silent Processing: Receives all events but doesn’t speak.
  • Parallel Execution: Runs alongside your main agent.
  • State Sharing: Main agent can query its state.

Use Case: Sentiment analysis, call quality monitoring, analytics, webhooks.

```python
from smallestai.atoms.agent.nodes import BackgroundAgentNode
from smallestai.atoms.agent.events import SDKEvent, SDKAgentTranscriptUpdateEvent

class SentimentAnalyzer(BackgroundAgentNode):
    def __init__(self):
        super().__init__(name="sentiment-analyzer")
        self.current_sentiment = "neutral"

    async def process_event(self, event: SDKEvent):
        if isinstance(event, SDKAgentTranscriptUpdateEvent):
            if event.role == "user":
                self.current_sentiment = await self._analyze(event.content)
```

See Background Agent for a complete guide.
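State sharing in practice amounts to the main agent holding a reference to the background node and reading its attributes. A minimal, self-contained sketch (`StubBackgroundNode` stands in for the real `BackgroundAgentNode`; the refund check and the tone decision are invented for illustration):

```python
import asyncio

class StubBackgroundNode:
    """Stand-in for BackgroundAgentNode (illustration only)."""
    def __init__(self):
        self.current_sentiment = "neutral"

    async def process_event(self, event):
        # Pretend analysis: flag obviously unhappy users
        if "refund" in event:
            self.current_sentiment = "negative"

analyzer = StubBackgroundNode()
asyncio.run(analyzer.process_event("I want a refund"))

# Elsewhere, the main agent queries the shared state directly:
tone = "apologetic" if analyzer.current_sentiment == "negative" else "friendly"
```

The background node only writes its own state; the main agent reads it when deciding how to respond.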


How to Write a Custom Node

1. Inherit from Node

Create a new class that inherits from Node (or OutputAgentNode).

```python
class LoggerNode(Node):
```

2. Override process_event

Implement the process_event async method. This is your logic handler.

```python
async def process_event(self, event):
    print(f"LOG: Received event type {event.type}")
```

3. Propagate Events

Crucial: you must manually send events if you want the flow to continue.

```python
await self.send_event(event)
```

Manual Event Propagation: In a custom Node, the chain of events stops with you unless you explicitly move it forward. You MUST call await self.send_event(...) if you want the event to keep flowing through the graph.


Custom Node Examples

```python
"""Logs every event for debugging."""
from loguru import logger
from smallestai.atoms.agent.nodes import Node

class LoggerNode(Node):
    async def process_event(self, event):
        # Log the event
        logger.info(f"[{event.type}] {event}")

        # Pass it on
        await self.send_event(event)
```

Best Practices

Use clear, unique names for debugging. This name shows up in your logs.

```python
# Good
super().__init__(name="sales-router")

# Bad
super().__init__(name="node1")
```

One node, one responsibility. If you need to filter AND log AND route, chain three small nodes together instead of building one complex node. This makes testing much easier.

Unless you are intentionally building a filter that drops events, always remember to call await self.send_event(event) at the end of your logic.
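A drop-filter is the one case where not calling send_event is the point. A self-contained sketch (`StubNode` stands in for the real base class, and `BLOCKED_TOPICS` is an invented example set):

```python
import asyncio

BLOCKED_TOPICS = {"spam"}

class StubNode:
    """Stand-in for smallestai.atoms.agent.nodes.Node (illustration only)."""
    def __init__(self):
        self.sent = []

    async def send_event(self, event):
        self.sent.append(event)

class DropFilterNode(StubNode):
    async def process_event(self, event):
        if event in BLOCKED_TOPICS:
            return  # intentionally drop: the event chain stops here
        await self.send_event(event)  # otherwise, propagate as usual

node = DropFilterNode()
asyncio.run(node.process_event("hello"))
asyncio.run(node.process_event("spam"))
```

Returning without calling send_event is all it takes; nothing downstream ever sees the dropped event.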

Don’t let exceptions break the event chain.

```python
async def process_event(self, event):
    try:
        await self.risky_operation()
    except Exception as e:
        logger.error(f"Failed: {e}")

    # Still propagate so the call continues
    await self.send_event(event)
```