Agents Know Best: Smart Routing Over Manual Assignment

Letting Agents Choose Their Own Experts: Building Smart Review Systems
The borisovai-site project faced a critical challenge: how do you get meaningful feedback on a complex feedback system itself? Our team realized that manually assigning experts to review different architectural components was bottlenecking the iteration process. The real breakthrough came when we decided to let the system intelligently route review requests to the right specialists.
The Core Problem
We’d built an intricate feedback mechanism with security implications, architectural decisions spanning frontend and backend, UX considerations, and production readiness concerns. Traditionally, a project manager would manually decide: “Security expert reviews this part, frontend specialist reviews that.” But what if the system could understand which aspects of our code needed which expertise and then route accordingly?
What We Actually Built
First, I created a comprehensive expert review package—not just a single document, but an intelligent ecosystem. The EXPERT_REVIEW_REQUEST.md became our detailed technical briefing, containing eight specific technical questions that agents could parse and understand. But the clever bit was the EXPERT_REVIEW_CHECKLIST.md: a structured scorecard that made evaluation repeatable and comparable across different expertise domains.
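The scorecard idea can be sketched as a small data structure. This is a hypothetical illustration of the concept, not the actual format of EXPERT_REVIEW_CHECKLIST.md — the item wording, domains, and 1–5 scale are assumptions:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a structured scorecard: each checklist item
# carries an expertise domain and a 1-5 score, so evaluations are
# repeatable and comparable across different expert roles.
@dataclass
class ChecklistItem:
    domain: str       # e.g. "security", "frontend", "ux"
    question: str
    score: int = 0    # 1-5, filled in by the reviewing agent

@dataclass
class Scorecard:
    items: list = field(default_factory=list)

    def average(self, domain=None):
        """Average score, optionally restricted to one expertise domain."""
        scored = [i.score for i in self.items
                  if i.score and (domain is None or i.domain == domain)]
        return sum(scored) / len(scored) if scored else 0.0

card = Scorecard([
    ChecklistItem("security", "Are inputs validated server-side?", 4),
    ChecklistItem("frontend", "Do components follow React patterns?", 5),
])
overall = card.average()              # average across all domains
security_only = card.average("security")
```

Because every item is tagged with a domain, the same scorecard can be sliced per specialist while still rolling up into one comparable number.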
Then came the orchestration layer—HOW_TO_REQUEST_EXPERT_REVIEW.md—which outlined seven distinct steps from expert selection through feedback compilation. Each step was designed so that agents could autonomously execute them. The real innovation was the EXPERT_REVIEW_SUMMARY_TEMPLATE.md, which categorized findings into Critical, Important, and Nice-to-have buckets and included role-specific assessment sections.
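The summary template's bucketing step is easy to picture in code. A minimal sketch, assuming findings arrive as (severity, text) pairs — the function name and input shape are illustrative, not taken from the actual template:

```python
from collections import defaultdict

# The three sections of the summary template.
SEVERITIES = ("critical", "important", "nice-to-have")

def bucket_findings(findings):
    """Group (severity, text) pairs into the template's three sections."""
    buckets = defaultdict(list)
    for severity, text in findings:
        if severity not in SEVERITIES:
            raise ValueError(f"unknown severity: {severity}")
        buckets[severity].append(text)
    # Emit all three sections, even when empty, so reports stay uniform.
    return {s: buckets[s] for s in SEVERITIES}

report = bucket_findings([
    ("critical", "Feedback endpoint lacks rate limiting"),
    ("nice-to-have", "Extract shared form styles"),
])
```

Emitting empty sections explicitly is deliberate: it makes every compiled summary structurally identical, so reports from different expert roles can be diffed and merged mechanically.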
Why This Matters
Rather than hardcoding expert assignments, we created a system where agents could analyze the codebase, identify which areas needed which expertise, and generate role-specific review requests. A security-focused agent could extract relevant code sections and formulate targeted questions. A frontend specialist agent could focus on React patterns and component architecture without drowning in backend concerns.
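The routing idea reduces to a simple mapping from code areas to expert roles. Here is a deliberately naive sketch — the path patterns and role names are assumptions for illustration; a real agent would analyze the code itself rather than match filenames:

```python
# Hypothetical routing table: which expert roles care about which
# parts of the codebase.
ROLE_PATTERNS = {
    "Frontend": ("components/", ".tsx", ".css"),
    "Backend":  ("api/", "server/", ".sql"),
    "Security": ("auth/", "crypto", "session"),
}

def route(changed_files):
    """Return the set of expert roles whose patterns match any file."""
    roles = set()
    for path in changed_files:
        for role, patterns in ROLE_PATTERNS.items():
            if any(p in path for p in patterns):
                roles.add(role)
    return roles

needed = route(["src/components/FeedbackForm.tsx", "api/auth/session.py"])
```

Note that one file can trigger multiple roles: a change under `api/auth/` legitimately needs both a backend and a security reviewer, which a single manually assigned reviewer would often miss.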
The Educational Insight
This approach mirrors how real organizations scale code review: by making review criteria explicit and parseable. When humans say “check if it’s production-ready,” that’s vague. But when you encode specific, measurable criteria into templates—response times, error handling patterns, documentation completeness—both humans and AI agents can evaluate consistently. Companies like Google and Uber solved scaling problems partly by moving from subjective reviews to structured assessment frameworks.
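The difference between "check if it's production-ready" and an encoded criterion can be made concrete. A minimal sketch — every threshold below is an illustrative assumption, not a standard from our templates:

```python
# Hypothetical sketch: replacing the vague "is it production-ready?"
# with explicit, measurable criteria any human or agent can evaluate.
def production_ready(metrics,
                     max_p95_ms=300,          # assumed latency budget
                     min_error_handling=0.9,  # assumed coverage floor
                     min_docs=0.8):           # assumed docs floor
    checks = {
        "latency": metrics["p95_latency_ms"] <= max_p95_ms,
        "error_handling": metrics["error_handling_coverage"] >= min_error_handling,
        "docs": metrics["docs_completeness"] >= min_docs,
    }
    return all(checks.values()), checks

ok, detail = production_ready({
    "p95_latency_ms": 220,
    "error_handling_coverage": 0.95,
    "docs_completeness": 0.7,
})
```

Returning the per-check breakdown alongside the verdict matters: a failed review should say exactly which criterion failed, not just "no".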
What Came Next
The package included a complete inventory—scoring rubrics targeting 4.0+ out of 5.0, role definitions for five expert types (Frontend, Backend, Security, UX, and Tech Lead), and email templates for outreach. We embedded the project context (borisovai-site, master branch, Claude-based development) throughout, so any agent or human expert immediately understood what system they were evaluating.
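The 4.0+ target across the five roles suggests a simple aggregation rule. A sketch under one assumed policy — that every role must individually meet the target; the actual rubric may aggregate differently:

```python
# Hypothetical aggregation: each of the five expert roles returns a
# 0-5 score; the review passes only if every role meets the target.
TARGET = 4.0
ROLES = ("Frontend", "Backend", "Security", "UX", "Tech Lead")

def review_passes(scores):
    """scores: {role: float}. Fails on any missing or below-target role."""
    missing = [r for r in ROLES if r not in scores]
    below = {r: s for r, s in scores.items() if s < TARGET}
    return not missing and not below, {"missing": missing, "below": below}

ok, detail = review_passes({
    "Frontend": 4.5, "Backend": 4.2, "Security": 3.8,
    "UX": 4.1, "Tech Lead": 4.4,
})
```

A per-role gate (rather than an overall average) prevents a strong frontend score from masking a weak security review, which is exactly the failure mode structured rubrics exist to catch.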
The beauty of this approach is that it democratizes expertise distribution. No single project manager becomes the bottleneck deciding who reviews what. Instead, the system itself—guided by clear rubrics and structured questions—can intelligently route technical challenges to the right minds.
This wasn’t just documentation; it was a framework for asynchronous, scalable code review.
The project manager asked why we spent so much time documenting the review process—turns out it’s because explaining how to ask for feedback is often harder than actually getting it!
Metadata
- Session ID: grouped_borisovai-site_20260213_0936
- Branch: master
- Dev joke: Developer: "I know Ansible." HR: "At what level?" Developer: "At the Stack Overflow level."