BorisovAI
Tags: New Feature · C--projects-ai-agents-voice-agent · Claude Code

Building Phase 1: Integrating 21 External System Tools Into an AI Agent

I just wrapped up Phase 1 of our voice agent project, and it was quite the journey integrating external systems. When we started, the agent could only talk to Claude—now it can reach out to HTTP endpoints, send emails, manage GitHub issues, and ping Slack or Discord. Twenty-one new tools, all working together.

The challenge wasn’t just adding features; it was doing it safely. We built an HTTP client that actually blocks SSRF attacks by blacklisting internal IP ranges (localhost, 10.0.0.0/8, 172.16.0.0/12). When you’re giving an AI agent the ability to make arbitrary HTTP requests, that’s non-negotiable. We also cap requests at 30 per minute and truncate responses at 1 MB, essential guardrails when the agent might get chatty with external APIs.
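A minimal sketch of what that guard can look like, using only the standard library. The names (`is_blocked`, `RateLimiter`, `BLOCKED_NETWORKS`) are illustrative, not the project's actual code, and the ranges shown are only the ones named above:

```python
import ipaddress
import socket
import time
from urllib.parse import urlparse

# Private/internal ranges the post mentions (a real deployment would list more).
BLOCKED_NETWORKS = [
    ipaddress.ip_network("127.0.0.0/8"),    # localhost
    ipaddress.ip_network("10.0.0.0/8"),     # private class A
    ipaddress.ip_network("172.16.0.0/12"),  # private 172.16.*-172.31.*
]

MAX_REQUESTS_PER_MINUTE = 30
MAX_RESPONSE_BYTES = 1_000_000  # truncate response bodies past ~1 MB


def is_blocked(url: str) -> bool:
    """Resolve the host and reject addresses inside internal ranges."""
    host = urlparse(url).hostname or ""
    try:
        addr = ipaddress.ip_address(socket.gethostbyname(host))
    except (socket.gaierror, ValueError):
        return True  # unresolvable hosts are rejected, not trusted
    return any(addr in net for net in BLOCKED_NETWORKS)


class RateLimiter:
    """Naive per-process sliding window, as described for Phase 1."""

    def __init__(self, limit: int = MAX_REQUESTS_PER_MINUTE):
        self.limit = limit
        self.timestamps: list[float] = []

    def allow(self) -> bool:
        now = time.monotonic()
        # Drop timestamps older than the 60-second window, then check the cap.
        self.timestamps = [t for t in self.timestamps if now - t < 60]
        if len(self.timestamps) >= self.limit:
            return False
        self.timestamps.append(now)
        return True
```

Note that resolving the hostname *before* checking is important: a DNS name like `internal.example` can point at `10.x.x.x`, so filtering on the URL string alone is not enough.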

The email integration was particularly tricky. We needed to support both IMAP (reading) and SMTP (sending), but email libraries like aiosmtplib and aioimaplib aren’t lightweight. Rather than force every deployment to install email dependencies, we made them optional. The tools gracefully fail with clear error messages if the packages aren’t there—no silent breakage.
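The optional-dependency pattern is roughly this shape; a small sketch with illustrative names (the project's real tool signatures will differ):

```python
import importlib


def optional_import(name: str):
    """Return the module if installed, else None (no hard dependency at startup)."""
    try:
        return importlib.import_module(name)
    except ImportError:
        return None


def send_email(to: str, subject: str, body: str, smtp_module: str = "aiosmtplib") -> dict:
    """Fail gracefully with a clear message instead of crashing at import time."""
    smtp = optional_import(smtp_module)
    if smtp is None:
        return {
            "ok": False,
            "error": f"Email support not installed. Run: pip install {smtp_module}",
        }
    # The real SMTP send would go here.
    return {"ok": True, "detail": f"would send via {smtp_module}"}
```

The key design choice is doing the import lazily inside the tool call rather than at module load, so a deployment without email support still starts cleanly and the agent gets a structured error it can relay.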

What surprised me was how much security thinking goes into permission models. GitHub tools, Slack tokens, Discord webhooks—they all need API credentials. We gated these behind feature flags in the config (settings.email.enabled, etc.), so a deployment doesn’t accidentally expose integrations it doesn’t need. Some tools require explicit approval (like sending HTTP requests), while others just notify the user after the fact.
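A sketch of how flag-gated registration can work. The settings shape mirrors the `settings.email.enabled` convention mentioned above, but the field names and tool names here are assumptions for illustration:

```python
from dataclasses import dataclass, field


@dataclass
class EmailSettings:
    enabled: bool = False  # deployments opt in explicitly


@dataclass
class Settings:
    email: EmailSettings = field(default_factory=EmailSettings)
    http_requires_approval: bool = True  # assumed flag for the approval-gated tools


def available_tools(settings: Settings) -> list[str]:
    """Only register tools whose feature flag is on."""
    tools = ["http_request"]  # always registered, but gated by explicit approval
    if settings.email.enabled:
        tools += ["email_read", "email_send"]
    return tools
```

With this shape, an integration the deployment never enabled simply doesn't exist from the agent's point of view, which is a stronger guarantee than a runtime permission check alone.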

The token validation piece saved us from subtle bugs. A missing GitHub token doesn’t crash the tool; it returns a clean error: “GitHub token not configured.” The agent sees that and can adapt its behavior accordingly.
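The return-an-error-instead-of-raising pattern can be sketched like this; the helper names and env var handling are illustrative assumptions:

```python
import os


def github_token_or_error(env_var: str = "GITHUB_TOKEN"):
    """Return (token, None) when configured, or (None, message) when missing."""
    token = os.environ.get(env_var, "").strip()
    if not token:
        return None, "GitHub token not configured."
    return token, None


def list_issues(repo: str) -> dict:
    token, err = github_token_or_error()
    if err:
        # The agent sees this structured error and adapts instead of crashing.
        return {"ok": False, "error": err}
    # The real GitHub API call would go here.
    return {"ok": True, "repo": repo}
```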

Testing was where we really felt the effort. We wrote 32 new tests covering schema validation, approval workflows, rate limiting, and error cases—all on top of 636 existing tests. Zero failures across the board felt good.

Here’s a fun fact: rate limiting in distributed systems is messier than it looks. A simple counter works for single-process deployments, but the moment you scale horizontally, you need Redis or a central service. We kept it simple for Phase 1—one request counter per tool instance. Phase 2 will probably need something smarter.
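The "something smarter" usually ends up as a fixed-window counter keyed on (tool, time window). A sketch, backed by a plain dict here; swapping the store for Redis (INCR plus EXPIRE on the same key) is what makes the count shared across processes. All names are illustrative:

```python
import time


class FixedWindowLimiter:
    """Fixed-window counter: the shape a Redis-backed limiter would take.

    With a dict it behaves like the Phase 1 per-process counter; replacing
    the store with Redis makes the window counts visible to every instance.
    """

    def __init__(self, limit: int, window_s: int = 60, store=None):
        self.limit = limit
        self.window_s = window_s
        self.store = store if store is not None else {}

    def allow(self, key: str) -> bool:
        window = int(time.time()) // self.window_s  # current window index
        bucket = (key, window)
        count = self.store.get(bucket, 0) + 1
        self.store[bucket] = count  # with Redis: INCR key, EXPIRE key window_s
        return count <= self.limit
```

The trade-off is the boundary burst (up to 2x the limit straddling a window edge), which is why sliding-window or token-bucket variants exist, but for a tool-call budget the fixed window is usually good enough.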

The final tally: 4 new Python modules, updates to the orchestrator, constants, and settings, plus optional dependencies cleanly organized in pyproject.toml. The agent went from isolated to connected, and we didn’t sacrifice security or clarity in the process.

Next phase? Database integrations and richer conversation memory. But for now, the agent can actually do stuff in the real world. 😄

Metadata

Session ID:
grouped_C--projects-ai-agents-voice-agent_20260216_1249
Branch:
main
Dev Joke
npm is like first love: you never forget it, but you shouldn't go back.
