BorisovAI

When Code Reviewers Spot the Same Bug, Architecture Needs a Rewrite

Scoring v2: When Two Code Reviewers Agree, You Know You’re in Trouble

The task was straightforward on paper: implement a version-aware analysis system for the trend-analysis project with Tavily citations support on the feat/scoring-v2-tavily-citations branch. But when both code reviewers independently flagged the exact same critical issues, it became clear this wasn’t just about adding features—it was about fixing architectural landmines before they exploded in production.

The Collision Course

The first problem hit immediately: a race condition in version assignment. The system was calling next_version() independently from save_analysis(), which meant two parallel analyses of the same trend could receive identical version numbers. The second INSERT would silently fail, swallowed by a bare except Exception: pass block. Both reviewers caught this and independently recommended the same solution: move version generation inside the save operation with atomic INSERT...SELECT MAX(version)+1 logic, wrapped in retry logic for IntegrityError exceptions.
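The reviewers' suggested fix can be sketched in a few lines, assuming a hypothetical `analyses` table with a `UNIQUE(trend_id, version)` constraint (the real schema may differ):

```python
import sqlite3

# Sketch only: version is assigned inside the INSERT itself, so two parallel
# saves can never both read the same MAX(version). Table and column names
# are assumptions, not the project's actual schema.
def save_analysis(conn: sqlite3.Connection, trend_id: str, payload: str,
                  max_retries: int = 3) -> int:
    for _ in range(max_retries):
        try:
            cur = conn.execute(
                """
                INSERT INTO analyses (trend_id, version, payload)
                SELECT ?, COALESCE(MAX(version), 0) + 1, ?
                FROM analyses WHERE trend_id = ?
                """,
                (trend_id, payload, trend_id),
            )
            conn.commit()
            return cur.lastrowid
        except sqlite3.IntegrityError:
            # A concurrent INSERT won the race on UNIQUE(trend_id, version);
            # roll back and retry with a freshly computed MAX(version).
            conn.rollback()
    raise RuntimeError("could not assign a unique version after retries")
```

The key point is that the `SELECT MAX(version) + 1` and the `INSERT` execute as one statement, and the unique constraint turns any remaining race into a loud `IntegrityError` instead of a silent duplicate.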

But that was just the tip of the iceberg. The second critical flaw involved next_version() only counting completed analyses. Running analyses? Invisible. A second analysis job launched while the first was still executing would grab the same version number. The fix required reserving versions upfront: treating status='running' entries in SQLite as version placeholders from the moment a job starts.
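A minimal sketch of the reservation scheme, again assuming a hypothetical schema with a status column: the placeholder row is inserted the moment a job starts, so it is already visible to MAX(version) when a second job launches.

```python
import sqlite3

# Sketch: reserve the version at job start by inserting a status='running'
# placeholder. Because the placeholder counts toward MAX(version), a second
# job started mid-run gets the next number instead of colliding.
# Schema and function names are illustrative assumptions.
def start_analysis(conn: sqlite3.Connection, trend_id: str) -> int:
    cur = conn.execute(
        """
        INSERT INTO analyses (trend_id, version, status)
        SELECT ?, COALESCE(MAX(version), 0) + 1, 'running'
        FROM analyses WHERE trend_id = ?
        """,
        (trend_id, trend_id),
    )
    conn.commit()
    # Read back the version this row actually reserved.
    return conn.execute(
        "SELECT version FROM analyses WHERE id = ?", (cur.lastrowid,)
    ).fetchone()[0]

def finish_analysis(conn: sqlite3.Connection, trend_id: str,
                    version: int, payload: str) -> None:
    # Promote the placeholder to a completed analysis.
    conn.execute(
        "UPDATE analyses SET status = 'completed', payload = ? "
        "WHERE trend_id = ? AND version = ?",
        (payload, trend_id, version),
    )
    conn.commit()
```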

The Breaking Change Bomb

Then came the surprise: a breaking API change lurking in plain sight. The frontend expected getAnalysisForTrend to return a single object, but the backend had morphed it into returning an array. Both reviewers flagged this differently but reached the same conclusion: introduce a new endpoint getAnalysesForTrend for the array response while keeping the old one functional.
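The compatibility split is easy to express framework-agnostically. In this hypothetical sketch (function and field names are assumptions, not the project's actual API), the legacy endpoint keeps its single-object contract by delegating to the new list-returning one:

```python
from typing import Optional

# Assumed in-memory stand-in for the store: trend_id -> list of analysis dicts.
def get_analyses_for_trend(db: dict, trend_id: str) -> list[dict]:
    """New endpoint: every version of the analysis, newest first."""
    return sorted(db.get(trend_id, []),
                  key=lambda a: a["version"], reverse=True)

def get_analysis_for_trend(db: dict, trend_id: str) -> Optional[dict]:
    """Legacy endpoint: unchanged shape, a single object (the latest version)."""
    analyses = get_analyses_for_trend(db, trend_id)
    return analyses[0] if analyses else None
```

Old clients keep working unmodified, while new clients opt into the array shape explicitly.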

The TypeScript types were equally broken. The AnalysisReport interface lacked version, depth, time_horizon, and parent_job_id fields—properties the backend was actively sending but the frontend was discarding into the void. Meanwhile, parent_job_id validation was missing entirely (you could pass any UUID), and depth had no upper bound (depth=100 anyone?).

Pydantic as a Safety Net

This is where Pydantic’s declarative validation became invaluable. By adding Field(ge=1, le=7) constraints to depth and using Literal for time horizons, the framework would catch invalid requests at the API boundary before they polluted the database. It’s one of Pydantic’s underrated superpowers—it transforms validation rules into executable guarantees that live right beside your data definitions, making the contract between client and server explicit and checked on every request.
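A sketch of such a boundary model: the depth bounds come straight from the article, while the concrete time-horizon values and field names are assumptions for illustration.

```python
from typing import Literal, Optional
from uuid import UUID
from pydantic import BaseModel, Field

class AnalysisRequest(BaseModel):
    # depth=100 now fails at the API boundary instead of reaching the DB.
    depth: int = Field(ge=1, le=7)
    # Allowed horizon values are an assumption; Literal rejects anything else.
    time_horizon: Literal["short", "medium", "long"]
    # A non-UUID string is rejected automatically by the UUID type.
    parent_job_id: Optional[UUID] = None
```

Constructing `AnalysisRequest(depth=100, time_horizon="short")` raises a ValidationError before any handler code runs, which is exactly the "executable guarantee" described above.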

What Stayed, What Shifted

The secondary issues were less dramatic but equally important: unlogged exception handling that swallowed errors, pagination logic that broke when grouping results, and created_at timestamps that recorded completion time instead of job start time. The developers faced a choice: fix everything now, or validate the prototype first and tackle the full refactor afterward.

Both reviewers converged on the critical path: handle race conditions and API compatibility immediately. Ship a working skeleton, then iterate.


😄 Programming is like sex. One mistake and you end up supporting it for the rest of your life.

Metadata

Session ID:
grouped_trend-analisis_20260208_1527
Branch:
feat/scoring-v2-tavily-citations
Dev Joke
Prometheus: a solution to a problem you didn't know existed, delivered in a way you don't understand.
