Tool Execution
The ToolRegistry discovers your tools, generates schemas for the LLM, and executes tool calls when the LLM requests them.
Setup
Call ToolRegistry().discover(self) in __init__ to register all @function_tool methods.
discover(self) scans the agent for methods decorated with @function_tool().
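A minimal setup might look like the sketch below. `ToolRegistry`, `discover()`, and `@function_tool` are the framework names described above; the tiny stand-in definitions and the `SupportAgent` class are illustrative only, included so the sketch runs standalone.

```python
# Stand-ins for the real framework pieces, so this sketch is self-contained.
def function_tool():
    """Stand-in decorator: marks a method as a tool."""
    def wrap(fn):
        fn._is_tool = True
        return fn
    return wrap

class ToolRegistry:
    """Stand-in registry; the real one also generates schemas for the LLM."""
    def __init__(self):
        self.tools = {}

    def discover(self, agent):
        # Scan the agent instance for methods marked by @function_tool().
        for name in dir(agent):
            fn = getattr(agent, name)
            if callable(fn) and getattr(fn, "_is_tool", False):
                self.tools[name] = fn

class SupportAgent:
    """Hypothetical agent showing the setup pattern from this section."""
    def __init__(self):
        self.tool_registry = ToolRegistry()
        self.tool_registry.discover(self)  # registers all @function_tool methods

    @function_tool()
    def get_weather(self, city: str) -> str:
        """Return the current weather for a city."""
        return f"Sunny in {city}"
```

After `__init__` runs, every decorated method is registered and ready to be exposed to the LLM.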
Calling the LLM with Tools
Pass tool_registry.get_schemas() to give the LLM your tool definitions.
The LLM may respond with text, tool calls, or both.
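A sketch of the request you'd send: `get_schemas()` is the framework method from this doc, while the chat-completions-style payload shape and the example schema contents are assumptions for illustration.

```python
def build_request(messages, tool_schemas):
    """Combine conversation messages with tool definitions for the LLM call."""
    return {
        "messages": messages,
        # Passing the schemas here is what lets the LLM emit matching tool calls;
        # in the real code, tool_schemas would be tool_registry.get_schemas().
        "tools": tool_schemas,
    }

request = build_request(
    [{"role": "user", "content": "What's the weather in Paris?"}],
    [{"name": "get_weather", "parameters": {"city": {"type": "string"}}}],
)
```

The response to such a request can carry plain text, one or more tool calls, or both, so downstream code has to handle each case.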
Handling Tool Calls
Collect chunk.tool_calls during streaming, then run tool_registry.execute().
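The collection step can be sketched as below. `chunk.tool_calls` is the attribute named in this doc; the `Chunk` dataclass and the simulated stream are stand-ins for whatever the streaming client actually yields.

```python
from dataclasses import dataclass, field

@dataclass
class Chunk:
    """Stand-in for one streamed chunk from the LLM."""
    text: str = ""
    tool_calls: list = field(default_factory=list)

def collect(stream):
    """Accumulate streamed text and any tool calls the LLM emits."""
    text_parts, tool_calls = [], []
    for chunk in stream:
        if chunk.text:
            text_parts.append(chunk.text)
        if chunk.tool_calls:
            tool_calls.extend(chunk.tool_calls)
    return "".join(text_parts), tool_calls

# Simulated stream; in real code, pass the collected calls to tool_registry.execute().
text, calls = collect([Chunk(text="Let me check."), Chunk(tool_calls=["get_weather"])])
```

Only after the stream ends do you have the complete set of tool calls to execute.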
Feeding Results Back
After executing tools, add the results to context and call the LLM again:
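A sketch of that feedback step: the `"tool"` role and message fields below are assumptions modeled on common chat APIs, not confirmed framework behavior.

```python
def append_tool_results(messages, results):
    """Add each tool result to the conversation so the LLM can see it."""
    for call_id, output in results:
        messages.append({
            "role": "tool",            # assumed role name for tool output
            "tool_call_id": call_id,   # ties the result back to the LLM's request
            "content": str(output),
        })
    return messages

messages = [{"role": "user", "content": "Where are my orders?"}]
append_tool_results(messages, [("call_1", {"orders": 3})])
# ...then call the LLM again with the updated messages (and the same schemas).
```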
ToolRegistry API
Parallel Execution
By default, tools run in parallel when the LLM requests multiple:
[!WARNING] If your tools have dependencies (e.g., `get_user_id()` returns a value needed by `get_user_orders(user_id)`), `parallel=True` will break them, because both tools run simultaneously. Use `parallel=False` for dependent tools.
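The difference between the two modes can be sketched with plain asyncio; the `run_parallel`/`run_sequential` helpers below are illustrative, not the framework's API, which exposes this choice through the `parallel` flag.

```python
import asyncio

async def run_parallel(tools):
    """All tools start at once: fine when they are independent."""
    return await asyncio.gather(*(t() for t in tools))

async def run_sequential(tools):
    """Each tool waits for the previous one: required for dependent tools."""
    results = []
    for t in tools:
        results.append(await t())
    return results

async def get_time():
    """Hypothetical independent tool."""
    await asyncio.sleep(0.01)
    return "now"

results = asyncio.run(run_parallel([get_time, get_time]))
```

A dependent pair like `get_user_id` and `get_user_orders` would need the sequential variant, since the second call cannot start until the first has returned.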
Tips
Prefer parallel=True
Unless your tools have dependencies on each other, parallel execution is faster.
Check for tool_calls before executing
The LLM doesn’t always call tools. Only run the execution code if tool_calls is non-empty.
Log tool calls for debugging
Print tc.name and tc.arguments before execution to debug unexpected behavior.
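The last two tips combine into one guard, sketched below. The `name`/`arguments` fields follow this doc; the `ToolCall` dataclass and `handle` helper are hypothetical stand-ins.

```python
from dataclasses import dataclass

@dataclass
class ToolCall:
    """Stand-in for one tool call parsed from the LLM response."""
    name: str
    arguments: dict

def handle(tool_calls, execute):
    """Skip execution when the LLM returned no tool calls; log each call first."""
    if not tool_calls:  # the LLM doesn't always call tools
        return []
    for tc in tool_calls:
        print(f"tool call: {tc.name} {tc.arguments}")  # debug unexpected behavior
    return execute(tool_calls)

# Simulated executor; in real code this would be tool_registry.execute().
out = handle([ToolCall("get_weather", {"city": "Paris"})],
             lambda calls: [c.name for c in calls])
```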

