BorisovAI

Score Mismatch Mystery: When Frontend and Backend Finally Speak

Tying Up Loose Ends: When Score Calculations Finally Click

The trend analysis platform had been nagging at us—scores were displaying incorrectly across the board, and the frontend and backend were speaking different languages about what a “10” really meant. The task was straightforward: fix the score calculation pipeline, unify how the trend and analysis pages presented data, and get everything working end-to-end before pushing to the team.

I started by spinning up the API server and checking what was actually happening under the hood. The culprit revealed itself quickly: the backend was returning data with a field called strength, but the frontend was looking for impact. A classic case of naming drift—the kind that doesn’t break the build but leaves users staring at blank values and wondering if something’s broken. The fix was surgical: rename the field on the backend side, make sure the score calculation logic actually respected the 0–10 scale instead of normalizing it to something weird, and push the changes through.
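A defensive way to survive this kind of naming drift on the frontend is to prefer the new field while tolerating the legacy one during the transition. The shape and names below are illustrative, not the project's actual types:

```typescript
// Hypothetical shape of a trend item from the API.
// `impact` is the corrected field name; `strength` is the legacy one.
interface TrendItem {
  impact?: number;
  strength?: number;
}

// Prefer the new field, fall back to the legacy field, default to 0,
// so the UI never renders a blank where a score should be.
function readScore(item: TrendItem): number {
  return item.impact ?? item.strength ?? 0;
}
```

Once the backend rename has fully rolled out, the `strength` fallback can be deleted, but keeping it during the cutover avoids the blank-value symptom users were seeing.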

Three commits captured the work: the first unified the layout of both pages so they’d look consistent, the second corrected the field name mismatch in the score calculation logic, and the third updated the frontend’s formatScore and getScoreColor functions to handle the 0–10 scale properly without any unnecessary transformations. Each commit was small, focused, and could be reviewed independently—exactly how you want your fixes to look when they land in a merge request.
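To make the third commit concrete, here is a minimal sketch of what `formatScore` and `getScoreColor` look like when they take the 0–10 scale at face value. The exact formatting and color thresholds are assumptions, not the project's real values:

```typescript
// Render the score directly on the 0-10 scale: one decimal place,
// no percentage conversion, no range remapping.
function formatScore(score: number): string {
  return score.toFixed(1);
}

// Map a 0-10 score to a display color. Thresholds are illustrative.
function getScoreColor(score: number): string {
  if (score >= 7) return "green";
  if (score >= 4) return "yellow";
  return "red";
}
```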

Here’s something worth knowing about score calculation in real-world systems: the temptation to normalize everything is strong, but it’s often unnecessary. Many developers instinctively convert scores to percentages or remap ranges, thinking it’ll make the data “cleaner.” In our case, removing that normalization layer actually made the system more predictable and easier to debug. The 0–10 scale was intentional; we just needed to honor it instead of fighting it.
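If out-of-range inputs are a worry, the scale-honoring alternative to remapping is a simple clamp: invalid values get pinned to the ends of the intended range instead of the whole range being rescaled. A minimal sketch:

```typescript
// Honor the intended 0-10 scale: clamp stray values to the range
// boundaries rather than normalizing every score to a new range.
function clampScore(score: number): number {
  return Math.min(10, Math.max(0, score));
}
```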

Once the changes were committed and pushed to the feature branch fix/score-calculation-and-display, I restarted the API server to confirm everything was working—and it was. The endpoint at http://127.0.0.1:8000 came back to life, version 0.3.0 loaded correctly, and the Vite dev server kept running in the background with hot module replacement ready to catch any future tweaks. The merge request creation was left for manual handling, a deliberate step to let someone review the changes before they hit main.
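That smoke check can be scripted. Assuming the API root returns JSON containing a `version` field (an assumption about this API's payload), a tiny helper makes the check repeatable:

```typescript
// Hypothetical post-restart smoke check: given the JSON body from the
// API root (assumed shape {"version": "..."}), confirm the expected
// version is live.
function versionMatches(body: { version?: string }, expected: string): boolean {
  return body.version === expected;
}

// Against the live server (Node 18+):
// fetch("http://127.0.0.1:8000")
//   .then((r) => r.json())
//   .then((b) => console.log(versionMatches(b, "0.3.0")));
```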

The lesson here: sometimes a developer’s job is less about building something new and more about making the existing pieces actually talk to each other. It’s not as flashy as implementing a feature from scratch, but it’s just as critical. A platform where scores display correctly beats one with fancy features that don’t work.

😄 Speaking of broken connections, you know what’s harder than fixing field name mismatches? Parsing HTML with regex.

Metadata

Session ID:
grouped_trend-analisis_20260210_1732
Branch:
main
Dev Joke
MySQL is like first love: you never forget it, but you shouldn't go back.
