---
title: Common Issues
description: Troubleshoot frequently encountered problems.
---

This page covers common issues and how to fix them.

## Agent Not Responding

**Symptoms**: Agent connects but does not respond to messages.

**Possible causes and fixes**:

| Cause                               | Fix                                                      |
| ----------------------------------- | -------------------------------------------------------- |
| `generate_response` not implemented | Add the `generate_response` method to your agent class   |
| Not yielding content                | Make sure you `yield` text chunks in `generate_response` |
| LLM call failing silently           | Add error logging around LLM calls                       |
| Event not reaching agent            | Check graph connections with `add_edge`                  |

**Check your implementation**:

```python
async def generate_response(self):
    # This must yield strings
    yield "Hello!"  # Not return, yield
```

## Tool Not Being Called

**Symptoms**: LLM never calls your tool even when it should.

**Possible causes and fixes**:

| Cause                     | Fix                                        |
| ------------------------- | ------------------------------------------ |
| Tool not discovered       | Call `tool_registry.discover(self)`        |
| Schemas not passed to LLM | Add `tools=self.tool_schemas` to chat call |
| Poor docstring            | Write a clear, descriptive docstring       |
| Tool name too vague       | Use specific, descriptive names            |

**Check your setup**:

```python
def __init__(self):
    super().__init__(name="my-agent")
    self.tool_registry = ToolRegistry()
    self.tool_registry.discover(self)  # Must call this
    self.tool_schemas = self.tool_registry.get_schemas()  # Must get schemas

async def generate_response(self):
    response = await self.llm.chat(
        messages=self.context.messages,
        tools=self.tool_schemas,  # Must pass schemas
        stream=True
    )
```

## Audio Not Playing

**Symptoms**: Agent responds in logs but user hears nothing.
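A quick first check is whether `generate_response` is actually producing non-empty text — if every yielded chunk is empty, the TTS stage has nothing to speak. A minimal stand-alone sketch (the `collect_chunks` helper is hypothetical, not part of the SDK) that drains a response generator and flags empty output:

```python
import asyncio

async def collect_chunks(agen):
    # Drain an async generator and warn if no speakable text came out
    chunks = [c async for c in agen]
    if not any(c.strip() for c in chunks):
        print("warning: generate_response produced no speakable text")
    return chunks

# Stand-in for an agent's generate_response
async def generate_response():
    yield "Hello!"

chunks = asyncio.run(collect_chunks(generate_response()))
print(chunks)  # -> ['Hello!']
```

If the warning fires (or nothing prints at all), the problem is in your response generation, not in the audio pipeline.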
**Possible causes and fixes**:

| Cause                   | Fix                                                 |
| ----------------------- | --------------------------------------------------- |
| Server not running      | Ensure `python agent.py` is running                 |
| Wrong port              | Check server is on expected port (default 8080)     |
| TTS configuration issue | Check voice settings in dashboard                   |
| Empty responses         | Ensure `generate_response` yields non-empty strings |

## Connection Errors

**Symptoms**: CLI cannot connect to agent.

**Fixes**:

1. Check the server is running:

   ```bash
   python agent.py
   ```

2. Verify the port:

   ```bash
   # Should show your agent process
   lsof -i :8080
   ```

3. Check for port conflicts:

   ```python
   app = AtomsApp(setup_handler=setup, port=8081)  # Use a different port
   ```

## LLM Errors

**Symptoms**: LLM calls fail with errors.

**Common errors and fixes**:

| Error              | Cause             | Fix                                         |
| ------------------ | ----------------- | ------------------------------------------- |
| `401 Unauthorized` | Invalid API key   | Check `OPENAI_API_KEY` environment variable |
| `429 Rate Limited` | Too many requests | Add retry logic or reduce call frequency    |
| `500 Server Error` | Provider issue    | Implement fallback to another provider      |
| Timeout            | Slow response     | Increase timeout or use a faster model      |

**Add error handling**:

```python
async def generate_response(self):
    try:
        response = await self.llm.chat(
            messages=self.context.messages,
            stream=True
        )
        async for chunk in response:
            if chunk.content:
                yield chunk.content
    except Exception as e:
        logger.error(f"LLM error: {e}")
        yield "I'm having trouble connecting. One moment."
```

## Session Ending Early

**Symptoms**: Conversation ends unexpectedly.
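The most common mechanism here is that an uncaught exception inside an async generator terminates the stream, and the session treats the finished stream as the end of the response. A self-contained sketch (plain `asyncio`, no SDK objects) contrasting an unguarded generator with a guarded one:

```python
import asyncio

# An uncaught exception inside an async generator ends the stream,
# which the session can interpret as the response being finished.
async def flaky_response():
    yield "Let me check that"
    raise RuntimeError("tool failed")

# Wrapping the body in try/except keeps the conversation alive.
async def safe_response():
    try:
        async for chunk in flaky_response():
            yield chunk
    except RuntimeError as e:
        yield f"Sorry, something went wrong ({e})."

async def main():
    return [c async for c in safe_response()]

chunks = asyncio.run(main())
print(chunks)
# -> ['Let me check that', 'Sorry, something went wrong (tool failed).']
```

The same pattern applies inside your agent's `generate_response`: catch the error, log it, and yield a fallback message instead of letting the generator die.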
**Possible causes**:

| Cause                            | Fix                                                      |
| -------------------------------- | -------------------------------------------------------- |
| Exception in `generate_response` | Add try/except around your code                          |
| Missing `wait_until_complete`    | Ensure setup calls `await session.wait_until_complete()` |
| Tool raising exception           | Wrap tool logic in try/except                            |

**Proper setup pattern**:

```python
async def setup(session: AgentSession):
    agent = MyAgent()
    session.add_node(agent)
    await session.start()
    await session.wait_until_complete()  # Do not forget this
```

## Slow Response Times

**Symptoms**: Agent takes too long to respond.

**Optimization strategies**:

| Strategy            | Implementation                            |
| ------------------- | ----------------------------------------- |
| Use faster model    | Switch to gpt-4o-mini or claude-3-haiku   |
| Enable streaming    | Always use `stream=True`                  |
| Reduce context      | Limit conversation history length         |
| Parallel tool calls | Use `parallel=True` in `registry.execute` |
| Shorter prompts     | Trim system prompt to essentials          |

**Limit context length**:

```python
async def generate_response(self):
    # Keep only the last 10 messages
    messages = self.context.messages[-10:]
    response = await self.llm.chat(
        messages=messages,
        stream=True
    )
```

## Memory Issues

**Symptoms**: Agent uses too much memory or crashes.

**Fixes**:

1. Clear old context periodically:

   ```python
   if len(self.context.messages) > 50:
       self.context.messages = self.context.messages[-20:]
   ```

2. Do not store large data in instance variables.

3. Clean up resources in `stop`:

   ```python
   async def stop(self):
       self.cached_data = None
       await super().stop()
   ```

## Import Errors

**Symptoms**: Module not found or other import errors.

**Fixes**:

1. Install the package:

   ```bash
   pip install smallestai
   ```

2. Check the Python version (requires 3.10+):

   ```bash
   python --version
   ```

3. Check that the virtual environment is active:

   ```bash
   source venv/bin/activate
   ```

## Getting Help

If you cannot resolve an issue:

1. Check the [Discord](https://discord.gg/smallest) for community help
2. Search existing [GitHub issues](https://github.com/smallest-inc/smallest-python-sdk/issues)
3. Open a new issue with:
   * Python version
   * SDK version (`pip show smallestai`)
   * Minimal code to reproduce
   * Full error traceback

## Next Steps

See complete working examples. Get help from the community.