Common Issues


This page covers common issues and how to fix them.

Agent Not Responding

Symptoms: Agent connects but does not respond to messages.

Possible causes and fixes:

| Cause | Fix |
| --- | --- |
| `generate_response` not implemented | Add the `generate_response` method to your agent class |
| Not yielding content | Make sure you yield text chunks in `generate_response` |
| LLM call failing silently | Add error logging around LLM calls |
| Event not reaching agent | Check graph connections with `add_edge` |

Check your implementation:

```python
async def generate_response(self):
    # This must yield strings
    yield "Hello!"  # Not return, yield
```

Tool Not Being Called

Symptoms: LLM never calls your tool even when it should.

Possible causes and fixes:

| Cause | Fix |
| --- | --- |
| Tool not discovered | Call `tool_registry.discover(self)` |
| Schemas not passed to LLM | Add `tools=self.tool_schemas` to the chat call |
| Poor docstring | Write a clear, descriptive docstring |
| Tool name too vague | Use specific, descriptive names |

Check your setup:

```python
def __init__(self):
    super().__init__(name="my-agent")

    self.tool_registry = ToolRegistry()
    self.tool_registry.discover(self)  # Must call this
    self.tool_schemas = self.tool_registry.get_schemas()  # Must get schemas

async def generate_response(self):
    response = await self.llm.chat(
        messages=self.context.messages,
        tools=self.tool_schemas,  # Must pass schemas
        stream=True
    )
```
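A vague name or docstring is the most common reason the LLM ignores a correctly registered tool. Here is a hypothetical example of a well-described tool; the name, parameter, and return value are illustrative, not part of the SDK:

```python
def get_order_status(order_id: str) -> str:
    """Look up the current shipping status of a customer order.

    Use this whenever the caller asks where their order is or when
    it will arrive. Requires the numeric order ID from the caller.
    """
    # Illustrative stand-in for a real lookup
    return f"Order {order_id}: in transit"
```

The LLM only sees the name, docstring, and parameter schema, so describe *when* to call the tool, not how it works internally.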

Audio Not Playing

Symptoms: Agent responds in logs but user hears nothing.

Possible causes and fixes:

| Cause | Fix |
| --- | --- |
| Server not running | Ensure `python agent.py` is running |
| Wrong port | Check the server is on the expected port (default 8080) |
| TTS configuration issue | Check voice settings in the dashboard |
| Empty responses | Ensure `generate_response` yields non-empty strings |
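For the empty-response case, a small guard makes it easy to confirm that your generator actually produces speakable text. This is a generic sketch, independent of the SDK; `non_empty_chunks` is a hypothetical helper you would wrap around your own chunk source:

```python
def non_empty_chunks(chunks):
    """Yield only chunks that contain speakable text.

    Empty or whitespace-only chunks produce no audio, so filtering
    them out makes silent responses easier to spot and log.
    """
    for chunk in chunks:
        if chunk and chunk.strip():
            yield chunk
```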

Connection Errors

Symptoms: CLI cannot connect to agent.

Fixes:

1. Check that the server is running:

   ```shell
   python agent.py
   ```

2. Verify the port:

   ```shell
   # Should show your agent process
   lsof -i :8080
   ```

3. Check for port conflicts:

   ```python
   app = AtomsApp(setup_handler=setup, port=8081)  # Use a different port
   ```

LLM Errors

Symptoms: LLM calls fail with errors.

Common errors and fixes:

| Error | Cause | Fix |
| --- | --- | --- |
| 401 Unauthorized | Invalid API key | Check the `OPENAI_API_KEY` environment variable |
| 429 Rate Limited | Too many requests | Add retry logic or reduce call frequency |
| 500 Server Error | Provider issue | Implement a fallback to another provider |
| Timeout | Slow response | Increase the timeout or use a faster model |

Add error handling:

```python
async def generate_response(self):
    try:
        response = await self.llm.chat(
            messages=self.context.messages,
            stream=True
        )
        async for chunk in response:
            if chunk.content:
                yield chunk.content

    except Exception as e:
        logger.error(f"LLM error: {e}")
        yield "I'm having trouble connecting. One moment."
```
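For 429s specifically, a generic exponential-backoff wrapper can sit around the chat call. This is a sketch, not an SDK feature; in practice, narrow the `except` clause to the rate-limit errors your provider actually raises:

```python
import asyncio
import random

async def with_retries(call, max_attempts=3, base_delay=1.0):
    """Retry an async call with exponential backoff plus jitter.

    `call` is a zero-argument coroutine function wrapping the LLM request.
    """
    for attempt in range(max_attempts):
        try:
            return await call()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts; let the caller handle it
            # 1s, 2s, 4s... scaled by base_delay, with jitter
            await asyncio.sleep(base_delay * (2 ** attempt + random.random()))
```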

Session Ending Early

Symptoms: Conversation ends unexpectedly.

Possible causes and fixes:

| Cause | Fix |
| --- | --- |
| Exception in `generate_response` | Add try/except around your code |
| Missing `wait_until_complete` | Ensure setup calls `await session.wait_until_complete()` |
| Tool raising an exception | Wrap tool logic in try/except |

Proper setup pattern:

```python
async def setup(session: AgentSession):
    agent = MyAgent()
    session.add_node(agent)

    await session.start()
    await session.wait_until_complete()  # Do not forget this
```
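For the tool case, catch exceptions inside the tool and return an error string the LLM can relay, rather than letting the exception end the session. A minimal sketch; `fetch` stands in for whatever I/O your real tool performs:

```python
import logging

logger = logging.getLogger(__name__)

async def safe_tool(fetch, city: str) -> str:
    """Run a tool's risky work inside try/except.

    On failure, log the error and return a string the LLM can relay,
    instead of raising and taking down the whole session.
    """
    try:
        temp = await fetch(city)
        return f"It is {temp}°C in {city}."
    except Exception as e:
        logger.error("weather lookup failed: %s", e)
        return "Sorry, I couldn't fetch the weather right now."
```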

Slow Response Times

Symptoms: Agent takes too long to respond.

Optimization strategies:

| Strategy | Implementation |
| --- | --- |
| Use a faster model | Switch to `gpt-4o-mini` or `claude-3-haiku` |
| Enable streaming | Always use `stream=True` |
| Reduce context | Limit conversation history length |
| Parallel tool calls | Use `parallel=True` in `registry.execute` |
| Shorter prompts | Trim the system prompt to essentials |

Limit context length:

```python
async def generate_response(self):
    # Keep only the last 10 messages
    messages = self.context.messages[-10:]

    response = await self.llm.chat(
        messages=messages,
        stream=True
    )
```
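Note that a bare `[-10:]` slice will silently drop the system prompt once the conversation grows past ten messages. A sketch of a trim that preserves it, assuming the common dict-style message format with a `role` key:

```python
def trim_history(messages, keep=10):
    """Keep any system messages plus the last `keep` other messages.

    Slicing messages[-keep:] alone can drop the system prompt in long
    conversations; this keeps it pinned at the front.
    """
    system = [m for m in messages if m.get("role") == "system"]
    rest = [m for m in messages if m.get("role") != "system"]
    return system + rest[-keep:]
```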

Memory Issues

Symptoms: Agent uses too much memory or crashes.

Fixes:

1. Clear old context periodically:

   ```python
   if len(self.context.messages) > 50:
       self.context.messages = self.context.messages[-20:]
   ```

2. Do not store large data in instance variables.

3. Clean up resources in `stop`:

   ```python
   async def stop(self):
       self.cached_data = None
       await super().stop()
   ```

Import Errors

Symptoms: Module not found or import errors.

Fixes:

1. Install the package:

   ```shell
   pip install smallestai
   ```

2. Check your Python version (requires 3.10+):

   ```shell
   python --version
   ```

3. Check that your virtual environment is active:

   ```shell
   source venv/bin/activate
   ```

Getting Help

If you cannot resolve an issue:

  1. Check the Discord for community help
  2. Search existing GitHub issues
  3. Open a new issue with:
    • Python version
    • SDK version (pip show smallestai)
    • Minimal code to reproduce
    • Full error traceback

Next Steps