BorisovAI
Tags: New Feature · C--projects-bot-social-publisher · Claude Code

Why Global Setpoints Break Industrial Control Systems

I was deep in the Bot Social Publisher project when an old SCADA lesson came back: one control for everything is a design flaw waiting to happen.

The scenario was different this time—not coating baths, but content enrichment pipelines. But the principle was identical. We needed mass operations: publish all pending notes, flag all duplicates, regenerate all thumbnails. Tempting to build one big “Apply to All” button. Then reality hit.

Each note has different requirements. A git commit note needs different enrichment than a VSCode snippet. Some need Wikipedia context, others don’t. Language validation catches swapped RU/EN content—but only if you check per-item. A global operation would bulldoze through edge cases and break downstream consumers.
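The RU/EN swap check only works per item because language is a property of a single note, not of the batch. A minimal sketch of such a validator, assuming a simple Cyrillic-ratio heuristic (the project's actual detection logic may differ):

```python
# Hypothetical per-item language check: a note with RU and EN fields is
# "swapped" when the RU field looks English and the EN field looks Russian.

def cyrillic_ratio(text: str) -> float:
    """Fraction of alphabetic characters falling in the Cyrillic block."""
    letters = [c for c in text if c.isalpha()]
    if not letters:
        return 0.0
    return sum("\u0400" <= c <= "\u04FF" for c in letters) / len(letters)

def language_swapped(ru_field: str, en_field: str, threshold: float = 0.5) -> bool:
    """True when the language fields appear to be in each other's place."""
    return (cyrillic_ratio(ru_field) < threshold
            and cyrillic_ratio(en_field) >= threshold)
```

Run over a whole batch, this flags only the individual offenders; a global check over concatenated content would average the signal away.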

So we split the architecture into selective control and batch monitoring.

The selective layer handles per-item operations: individual enrichment, language validation, proofread requests via the Claude CLI. The batch layer tracks aggregates: how many notes processed, which categories failed, how often languages were swapped. Think of it as SCADA's plant-wide overview: aggregate health at a glance, without touching any individual setpoint.
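The split can be sketched in a few lines. Names here are illustrative, not the project's actual API: the selective layer is whatever callable handles one note; the batch layer only observes outcomes and never issues commands.

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class BatchMonitor:
    """Batch layer: aggregates outcomes, never touches individual items."""
    outcomes: Counter = field(default_factory=Counter)

    def record(self, category: str, ok: bool) -> None:
        self.outcomes[(category, "ok" if ok else "failed")] += 1

def run_batch(notes, enrich_one, monitor: BatchMonitor) -> None:
    """Selective layer does the work item by item; monitor only watches."""
    for note in notes:
        ok = enrich_one(note)           # per-item operation, per-item outcome
        monitor.record(note["category"], ok)
```

The key property: the monitor has no write path back to the items, so a batch-level view can never bulldoze a per-item decision.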

In the code, this meant separating concerns. EnrichedNote validation happens item-by-item before any publisher touches it. The pipeline logs metrics after each cycle: input_lines, selected_lines, llm_calls_count, response_length. Operators (or automated monitors) see the health signal without needing to drill into every note.
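The metric names above come straight from the pipeline; the cycle function below is a minimal sketch of how they accumulate, with the validate/publish hooks assumed for illustration:

```python
from dataclasses import dataclass

@dataclass
class CycleMetrics:
    """Per-cycle health signal; field names match the pipeline's log output."""
    input_lines: int = 0
    selected_lines: int = 0
    llm_calls_count: int = 0
    response_length: int = 0

def run_cycle(notes, validate, publish) -> CycleMetrics:
    """Validate each note individually; only valid notes reach the publisher."""
    m = CycleMetrics(input_lines=len(notes))
    for note in notes:
        if not validate(note):
            continue                    # one bad note never blocks the rest
        m.selected_lines += 1
        m.llm_calls_count += 1
        response = publish(note)
        m.response_length += len(response)
    return m
```

An operator comparing `input_lines` to `selected_lines` across cycles sees drift immediately, without opening a single note.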

The payoff? When Claude CLI hits its daily 100-query limit, we don’t publish garbage. When language detection fails on a note, it doesn’t corrupt the whole batch. When a collector sends junk with <ide_selection> tags, ContentSelector filters it before enrichment wastes LLM tokens.
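Both failure modes reduce to cheap guards that run before any expensive work. The regex and the quota class below are illustrative sketches, not the project's actual ContentSelector code:

```python
import re

# Assumed tag format: collector junk arrives wrapped in <ide_selection> tags.
IDE_SELECTION = re.compile(r"<ide_selection>.*?</ide_selection>", re.DOTALL)

def strip_ide_selection(raw: str) -> str:
    """Drop collector junk before it wastes enrichment tokens."""
    return IDE_SELECTION.sub("", raw).strip()

class QuotaGuard:
    """Stop issuing LLM calls once the daily budget is spent."""
    def __init__(self, daily_limit: int = 100):
        self.daily_limit = daily_limit
        self.used = 0

    def allow(self) -> bool:
        if self.used >= self.daily_limit:
            return False                # degrade by skipping, not by publishing junk
        self.used += 1
        return True
```

When the guard says no, the note stays pending for the next cycle instead of going out half-enriched.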

This mirrors what industrial teams discovered decades ago: granularity prevents cascading failures. You control what you can measure. You measure what you separate.

The technical bet here is context-aware batch processing. Not “apply this operation to everything” but “apply this operation to items matching criteria X, log outcomes, let downstream handlers decide what’s safe.”
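That sentence is almost pseudocode already. A sketch with hypothetical names, to make the shape concrete:

```python
def selective_batch(items, criteria, operation):
    """Apply operation only where criteria holds; report per-item outcomes
    so downstream handlers can decide what's safe."""
    outcomes = []
    for item in items:
        if not criteria(item):
            outcomes.append((item, "skipped"))
            continue
        try:
            operation(item)
            outcomes.append((item, "ok"))
        except Exception as exc:        # one failure stays one failure
            outcomes.append((item, f"failed: {exc}"))
    return outcomes
```

The contract matters more than the code: every item gets an explicit outcome, and nothing downstream has to guess what a silent "Apply to All" actually did.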

Building it clean means respecting the boundary between convenience and correctness. A “publish all” button might save three clicks today. It’ll cost you three hours of debugging tomorrow.


Why did the batch job apply for a job in security? 🔐 Because it learned that checking every input before processing beats checking none after things break.

Metadata

Session ID: grouped_C--projects-bot-social-publisher_20260222_0737
Branch: main
Dev Joke: What did Vitest say after the update? "I'm not the same as I used to be."
