AI TOOLS EXECUTOR
Feb 2026 - Mar 2026 • LIBRARY / CLI
SNAPSHOT
A zero-dependency Python library that helps AI agents discover and execute tools efficiently through three meta-tools (search_tools, execute, and describe_tool), with safe AST-based parsing, partial-failure handling, and pluggable search strategies.
THE PROBLEM
LLM agents often receive full schemas for all available tools on every turn, which bloats context, increases token cost, and hurts tool-selection accuracy.
THE RESULT
Built and published a Python package on PyPI that introduces a three-meta-tool execution layer, reducing tool-context overhead by exposing tool schemas on demand and accepting Python function-call syntax with AST-based validation. Added robust parameter validation, consistent error formatting, and 90 automated tests.
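The discovery side of the idea can be shown with a minimal sketch. The registry contents and helper signatures below are illustrative assumptions, not the package's actual API; the point is that the agent first sees only matching tool names, then pulls full docs for one tool on demand:

```python
# Hypothetical in-memory registry; real tools would map names to callables
# plus schemas. Names and shapes here are assumptions for illustration.
registry = {
    "get_weather": {"description": "Current weather for a city",
                    "params": {"city": "str"}},
    "send_email": {"description": "Send an email",
                   "params": {"to": "str", "body": "str"}},
}

def search_tools(query: str) -> list[str]:
    # Discovery meta-tool: return only matching tool names,
    # not every schema, keeping the agent's context small.
    q = query.lower()
    return [name for name, meta in registry.items()
            if q in name or q in meta["description"].lower()]

def describe_tool(name: str) -> dict:
    # Deep-docs meta-tool: full schema fetched only when needed.
    return registry[name]

search_tools("weather")        # → ['get_weather']
describe_tool("send_email")    # full schema for one tool only
```

The agent therefore pays the token cost of a schema only for tools it actually intends to call.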
HOW IT WORKS
AI Tools Executor is an executor abstraction between LLM agents and real tool functions. Instead of sending every tool schema to the model each turn, the agent works through three meta-tools: search_tools for discovery, execute for running one or many calls, and describe_tool for deep docs. The execution path parses Python call syntax via ast.parse (without executing arbitrary code), validates parameters against a registry, and returns structured per-call outcomes so multi-call requests can partially succeed. The library includes async execution, thread-safe registry operations, customizable search strategies, OpenAI-compatible meta-tool schema helpers, and a consistent error format designed for agent self-correction. The project is packaged for PyPI and targets Python 3.10+ with no runtime dependencies.
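The execution path described above can be sketched with the standard-library ast module. This is a simplified stand-in, not the library's actual implementation: parse_call, execute, and the TOOLS registry are hypothetical names, but the technique is the same one the paragraph describes, parsing call syntax without running code and returning a structured outcome per call so a batch can partially succeed:

```python
import ast

def parse_call(source: str) -> tuple[str, dict]:
    # Parse a Python-style call string without executing it.
    tree = ast.parse(source, mode="eval")
    call = tree.body
    if not isinstance(call, ast.Call) or not isinstance(call.func, ast.Name):
        raise ValueError("expected a simple function call")
    # ast.literal_eval accepts only literals, so no arbitrary code can run.
    return call.func.id, {kw.arg: ast.literal_eval(kw.value) for kw in call.keywords}

TOOLS = {"add": lambda a, b: a + b}  # stand-in tool registry

def execute(calls: list[str]) -> list[dict]:
    # Each call gets its own structured outcome, so one bad call
    # does not abort the rest of the batch.
    results = []
    for src in calls:
        try:
            name, kwargs = parse_call(src)
            results.append({"call": src, "ok": True,
                            "result": TOOLS[name](**kwargs)})
        except Exception as exc:
            # Consistent error shape gives the agent something to self-correct on.
            results.append({"call": src, "ok": False,
                            "error": f"{type(exc).__name__}: {exc}"})
    return results

execute(["add(a=2, b=3)", "missing(x=1)"])
# first entry succeeds with result 5; second carries a structured error
```

Keeping errors as data rather than raised exceptions is what makes partial success possible: the agent can retry only the failed call with a corrected name or arguments.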