BorisovAI

Blog

Posts about the development process, problems solved, and technologies learned

New Feature · C--projects-bot-social-publisher

Scaling Telegram Bots: ChatManager Tames Permission Chaos

# Building ChatManager: Taming the Telegram Bot Zoo

Pavel's voice agent had a problem that eventually whispers into every bot project: chaos. The system was humming along fine with SQLite handling user data, but now the bot needed something more nuanced—it had to know *which chats it actually owned* and enforce strict permission boundaries around every command. The `ChatManager` capability existed in a private bot somewhere, but nobody had ever integrated it into this production system. That's where the real work began.

The goal sounded deceptively simple: extract the `ChatManager` class, wire it into the existing codebase, set up database infrastructure to track which chats belonged to which owners, and validate it all with tests. But this wasn't greenfield work. It meant fitting new pieces into a system that already had strong opinions about logging patterns, database access, and middleware architecture. Getting this wrong would mean either breaking existing functionality or creating technical debt that would haunt the next sprint.

Pavel started by mapping the work into five logical checkpoints—each one independently testable. First came the infrastructure layer: he pulled the `ChatManager` class from the private bot and integrated it with the project's existing `structlog` setup. Rather than adding another logging dependency, he leveraged what was already there. The real win came with the async database choice: `aiosqlite` wraps every SQLite operation for asyncio, ensuring that database calls never block the main message-processing loop. This is the kind of detail that separates "works" from "works under load."

Next came the migrations. Pavel created a `managed_chats` table with a proper schema—tracking chat IDs, their Telegram types (private, group, supergroup, channel), and ownership relationships. He added indexes strategically and created a validation checkpoint: after each migration ran, a quick query confirmed the table existed and was properly structured.

Then came the middleware. Before any handler could touch a managed chat, a permission layer would intercept requests and verify that the user ID matched the chat's owner record. Clean separation of concerns. The command handlers followed naturally: `/manage add` to register a chat, permission middleware to silently reject unauthorized operations.

Here's something most developers don't think about until they hit the wall: **why async SQLite matters**. SQLite is synchronous by default, and when you throw it into an async application, it becomes a chokepoint. Every database query blocks your entire bot's event loop. Wrapping it with `aiosqlite` costs almost nothing—just a thin async layer—but the payoff is immediate. The bot stays responsive even when the database is under load. It's one of those architectural decisions that feels invisible until you forget it; then your users complain that their commands time out.

After the integration came the validation. Pavel wired the handlers, wrote unit tests against the new permission logic, and confirmed that unauthorized users got silent rejections—no error spam, just the bot calmly declining to participate.

The result: a bot that knows exactly which chats it owns and who controls them, and that enforces those boundaries before executing anything. The architecture scales, too—future versions could add role-based access, audit trails, or per-chat configuration without touching the core logic. Production deployment came next. But that's already tomorrow's problem.

😄 Why did the database architect bring a ladder to the meeting? Because they wanted to take their schema to the next level.

Feb 9, 2026
Code Change · C--projects-ai-agents-voice-agent

Taming Telegram: How ChatManager Brought Order to Bot Chaos

# Building ChatManager: Taming the Telegram Bot Zoo

Pavel faced a familiar problem that creeps up on every growing bot project: chaos. His voice agent had been happily managing users through SQLite, but now it needed to handle something more complex—managing which chats it actually operated in and enforcing strict permission boundaries. The `ChatManager` capability existed in a private bot, but integrating it into the production system required careful orchestration.

## The Task at Hand

The goal was straightforward in principle but thorny in execution: migrate a `ChatManager` class into the codebase, set up database infrastructure to track managed chats, wire it through the Telegram handlers, and validate everything with tests. This wasn't a greenfield project—it meant fitting new pieces into an existing system that already had its own opinions about logging, database access, and middleware patterns.

Pavel started by breaking the work into five logical checkpoints. First came infrastructure: extracting the `ChatManager` class from the private bot capability and integrating it with the project's existing structured logging setup using `structlog`. The class would lean on `aiosqlite` for async SQLite operations—a deliberate choice to match the async-first architecture already in place. No synchronous database calls allowed.

## The Integration Dance

With the core class ready, the next step was database migrations. Pavel needed to create a `managed_chats` table with a proper schema—tracking chat IDs, their types (private, group, supergroup, channel), and ownership relationships. He wrote the SQL migration file cleanly, added appropriate indexes for performance, and created a validation checkpoint: after running the migration, a quick SQLite query would confirm the table existed.

Then came the middleware layer. Before any handler could touch a managed chat, the bot needed to verify ownership. Pavel created a new middleware module specifically for permission checks—a clean separation of concerns that would intercept requests and compare the user ID against the chat's owner record.

The command handlers came next. A `/manage add` command would let users register chats with the bot, while the permission middleware would silently reject operations on unregistered chats. This defensive design meant no cryptic errors—just predictable behavior.

## The Educational Moment

Here's something interesting about async SQLite: most developers think of SQLite as a synchronous, single-threaded database engine, which it is. But `aiosqlite` doesn't magically make SQLite concurrent—it runs operations on a background worker thread and queues them, executing them sequentially while keeping the event loop free. It's a classic asyncio pattern: you're not gaining raw parallelism, you're gaining responsiveness. The bot can accept incoming messages while waiting for database operations to complete, rather than freezing the entire process.

## From Plan to Reality

Pavel structured his testing strategy carefully: unit tests for `ChatManager` using pytest's asyncio support would validate the core logic, integration tests would ensure the middleware played nicely with handlers, and a manual smoke test would verify that the `/manage add` command worked from a real Telegram client.

The beauty of this approach was its granularity. Each step had a concrete verification command—whether that was a Python import check, a migration validation query, or a test run. No guesswork, no "did it work?" uncertainty.

By breaking the integration into five discrete steps with checkpoints between them, Pavel turned what could have been a chaotic refactor into a methodical progression. Each component could be reviewed and tested in isolation before moving forward. This is how large systems stay maintainable.

---

Judge: "I sentence you to debug legacy Python code written with no type hints." 😄
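The responsiveness argument can be demonstrated with nothing but the standard library. The toy sketch below pushes a deliberately slow blocking query off the event loop with `asyncio.to_thread`, the same general trick `aiosqlite` applies via its internal worker thread, so a stand-in message loop keeps running meanwhile. All names here are invented for the demo.

```python
import asyncio
import sqlite3
import time

def slow_query():
    # A blocking sqlite3 call, artificially slowed down. Run directly on
    # the event loop, this would freeze the whole bot for 0.2 seconds.
    conn = sqlite3.connect(":memory:")
    time.sleep(0.2)
    return conn.execute("SELECT 1").fetchone()[0]

async def handle_messages(counter):
    # Stand-in for the bot's message loop staying responsive: ten small
    # units of work, each yielding back to the event loop.
    for _ in range(10):
        counter.append(1)
        await asyncio.sleep(0.01)

async def main():
    counter = []
    # The slow query runs on a worker thread; the message loop keeps
    # making progress concurrently instead of waiting for it.
    result, _ = await asyncio.gather(
        asyncio.to_thread(slow_query),
        handle_messages(counter),
    )
    return result, len(counter)

result, handled = asyncio.run(main())
print(result, handled)  # 1 10
```

If `slow_query()` were awaited inline instead, the ten message-handling steps could only start after the 0.2-second block finished, which is precisely the chokepoint the post warns about.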

Feb 9, 2026
New Feature · C--projects-bot-social-publisher

SQLite's Quiet Strength: Replacing Chaos with One Database

# SQLite's Quiet Strength: Why One Database Beat a Complex Infrastructure

The Telegram bot was managing users beautifully, but it had a blind spot. As the bot-social-publisher project scaled—new users launching campaigns daily, feature requests piling up—there was nowhere permanent to store critical information about which chats the bot actually manages, who owns them, or what settings apply to each conversation. Everything lived in process memory or was scattered across handler functions. When the service restarted, that knowledge evaporated.

The real problem wasn't the lack of a database. The project already had `data/agent.db` running SQLite, with a solid `UserManager` handling persistence through `aiosqlite`, enabling async database access without blocking the event loop. The decision crystallized immediately: stop fragmenting the data layer. One database. One connection pattern. One source of truth.

**First, I examined the existing architecture.** `UserManager` wasn't fancy—no ORM abstractions, no excessive patterns. It used parameterized queries for safety, leveraged `aiosqlite` for async operations, and kept the logic straightforward. That became the blueprint.

I sketched out the `managed_chats` schema: `chat_id` as the primary key, `owner_id` linking to users, `chat_type` with a `CHECK` constraint validating only legitimate Telegram chat types (private, group, supergroup, channel), a `title` field, and a JSON column for future extensibility. The critical piece was the index on `owner_id`—users would constantly query their own managed chats, and sequential table scans don't scale gracefully.

Rather than introduce another layer—a cache, a separate microservice, an ORM framework—I replicated the `UserManager` pattern exactly. Same dependency injection, same async/await style, same single connection point for the entire application. The new `ChatManager` exposed three core methods: `add_chat()` to register managed conversations, `is_managed()` to verify whether the bot should handle incoming events, and `get_owner()` to check permissions. Every database interaction used parameterized statements, eliminating SQL injection risk at the source.

Here's where SQLite surprised me. Using `INSERT OR REPLACE` with `chat_id` as the primary key created elegant behavior for free. If a chat got re-registered with updated metadata, the old record simply evaporated. It wasn't explicitly designed—it emerged naturally from the schema.

**An often-missed reality about SQLite:** developers dismiss it as a testing toy, but with proper indexing and prepared statements, it handles millions of rows reliably. The overhead of Redis caching or a separate PostgreSQL instance didn't make sense at this growth stage.

The result: one database, one familiar pattern, one mental model to maintain. When analytics queries eventually demand complexity, the index is already there. When chat permissions or advanced settings need storage, the JSON field waits. When it's time to analyze bot behavior across millions of chats, the foundation won't require a painful rewrite—just optimization. Deferring complex infrastructure until it's actually needed beats over-engineering from day one.

😄 Developer: "I understand distributed databases." HR: "And your experience level?" Developer: "According to Stack Overflow comments."
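The schema sketched above, with its `CHECK`-constrained `chat_type`, JSON settings column, and `owner_id` index, might look roughly like this. Only the pieces named in the post come from the project; column order, the index name, and the default value are assumptions. The last two statements show the `INSERT OR REPLACE` upsert behavior falling out of the primary key.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE managed_chats (
        chat_id   INTEGER PRIMARY KEY,
        owner_id  INTEGER NOT NULL,
        chat_type TEXT NOT NULL
            CHECK (chat_type IN ('private', 'group', 'supergroup', 'channel')),
        title     TEXT,
        settings  TEXT DEFAULT '{}'   -- JSON blob for future extensibility
    );
    -- Owners constantly list their own chats; avoid full table scans.
    CREATE INDEX idx_managed_chats_owner ON managed_chats (owner_id);
""")

# Re-registering the same chat_id replaces the old row: "the old record
# simply evaporated."
conn.execute(
    "INSERT OR REPLACE INTO managed_chats (chat_id, owner_id, chat_type, title) "
    "VALUES (?, ?, ?, ?)", (1, 42, 'group', 'old title'))
conn.execute(
    "INSERT OR REPLACE INTO managed_chats (chat_id, owner_id, chat_type, title) "
    "VALUES (?, ?, ?, ?)", (1, 42, 'supergroup', 'new title'))

rows = conn.execute(
    "SELECT chat_type, title FROM managed_chats WHERE chat_id = ?", (1,)
).fetchall()
print(rows)  # [('supergroup', 'new title')]
```

The `CHECK` constraint also earns its keep immediately: inserting a row with an unknown `chat_type` raises `sqlite3.IntegrityError` instead of silently storing garbage.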

Feb 9, 2026
New Feature · C--projects-bot-social-publisher

From Chaos to Order: Centralizing Telegram Bot Chat Management

# Adding Chat Management to a Telegram Bot: When One Database Is Better Than Ten

The Telegram bot was humming along nicely, with user management working like a charm. But as the feature set grew, we hit a wall: there was no persistent way to track which chats the bot actually manages, who owns them, or what settings apply to each. Everything either lived in memory or was scattered across request handlers. It was time to give chats their own home in the database.

The project already had solid infrastructure in place. `UserManager` was handling user persistence using `aiosqlite` for async SQLite access, with everything stored in `data/agent.db`. The decision was simple but crucial: don't create a separate database or fragment the data layer. One database, one source of truth, one connection pattern. Build on what's already working.

**The first thing I did was design the schema.** The `managed_chats` table needed to capture the essentials: `chat_id` as the primary key, `owner_id` to link back to users, and `chat_type` to distinguish between private conversations, groups, supergroups, and channels. I added a `title` field for the chat name and threw in a JSON column for future settings—storing metadata without needing another schema migration down the road. One critical detail: an index on `owner_id`. We'd be querying by owner constantly to list which chats a user controls, and full table scans would kill performance once the chat count climbed.

Rather than over-engineer things with an abstract repository pattern or some elaborate builder, I mirrored the `UserManager` approach exactly. Same dependency injection style, same async/await patterns, same connection handling. The `ChatManager` got three core methods: `add_chat()` to register a new managed chat, `is_managed()` to check whether the bot should handle events from it, and `get_owner()` to verify permissions. Every query used parameterized statements—no room for SQL injection to slip through.

The interesting part was how SQLite's `INSERT OR REPLACE` behavior naturally solved an edge case. If a chat got re-added with different metadata, the old entry simply disappeared. It wasn't explicitly planned; it just fell out of using `chat_id` as the primary key. Sometimes the database does the right thing if you let it.

**Here's something most developers overlook:** SQLite gets underestimated in early-stage projects. Teams assume it's a toy database, good only for local development. In reality, with proper indexing, parameterized queries, and connection discipline, SQLite handles millions of rows efficiently. The real issue comes later, when projects outgrow the single-file limitation or need horizontal scaling—but that's a different problem entirely, not a fundamental weakness of the engine.

The result was clean architecture: one database, one connection pattern, new functionality integrated seamlessly without duplicating logic. `ChatManager` sits comfortably next to `UserManager`, using the same libraries and following the same patterns. When complex queries become necessary, the index is already there. When chat settings need expansion, the JSON column is waiting. No scattered state, no microservice overkill, no "we'll refactor this later" debt.

Next comes integrating this layer into Telegram's event handlers. But that's a story for another day.

😄 Why did the SQLite database go to therapy? It had too many unresolved transactions.
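A minimal synchronous sketch of the three-method surface (`add_chat()`, `is_managed()`, `get_owner()`). The real `ChatManager` is async via `aiosqlite` and its exact signatures aren't shown in the post, so everything below beyond the method names is an assumption; the point is the parameterized-query pattern the text describes.

```python
import sqlite3

class ChatManager:
    """Illustrative stand-in for the post's ChatManager (sync, stdlib only)."""

    def __init__(self, conn):
        self.conn = conn
        self.conn.execute("""
            CREATE TABLE IF NOT EXISTS managed_chats (
                chat_id   INTEGER PRIMARY KEY,
                owner_id  INTEGER NOT NULL,
                chat_type TEXT NOT NULL
            )
        """)

    def add_chat(self, chat_id, owner_id, chat_type):
        # Parameterized statement: no string formatting, no injection risk.
        # INSERT OR REPLACE gives the re-registration upsert for free.
        self.conn.execute(
            "INSERT OR REPLACE INTO managed_chats VALUES (?, ?, ?)",
            (chat_id, owner_id, chat_type),
        )

    def is_managed(self, chat_id):
        row = self.conn.execute(
            "SELECT 1 FROM managed_chats WHERE chat_id = ?", (chat_id,)
        ).fetchone()
        return row is not None

    def get_owner(self, chat_id):
        row = self.conn.execute(
            "SELECT owner_id FROM managed_chats WHERE chat_id = ?", (chat_id,)
        ).fetchone()
        return row[0] if row else None

manager = ChatManager(sqlite3.connect(":memory:"))
manager.add_chat(500, 42, "group")
print(manager.is_managed(500), manager.get_owner(500))  # True 42
```

An async version would look nearly identical, with `async def` methods and `await` on each query, which is exactly why mirroring the `UserManager` pattern was cheap.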

Feb 9, 2026
New Feature · C--projects-ai-agents-voice-agent

Memory Persistence: Building Stateful Voice Agents Across Platforms

# Building Memory Into a Voice Agent: The Challenge of Context Persistence

Pavel faced a deceptively simple problem: his **voice-agent** project needed to remember conversations. Not just process them in real time, but actually *retain* information across sessions. The task seemed straightforward until he realized the architectural rabbit hole it would create.

The voice agent was designed to work across multiple platforms—Telegram, internal chat systems, and TMA interfaces. Each conversation needed persistent context: user preferences, conversation history, authorization states, and session data. Without proper memory management, every interaction would be like meeting a stranger with amnesia.

**The first decision was architectural.** Pavel had to choose between three approaches: storing everything in a traditional relational database, using an in-memory cache with periodic persistence, or building a hybrid system with different retention tiers. He opted for the hybrid approach—leveraging **aiosqlite for async SQLite access** to handle persistent storage without blocking the voice-processing pipeline, while maintaining a lightweight in-memory cache for frequently accessed session data.

The real complexity emerged in the identification and authorization layer. How do you reliably identify a user across different chat platforms? Telegram has user IDs, but the internal TMA system uses different credentials. Pavel implemented a **unified authentication gateway** that normalized these identifiers into a consistent namespace, allowing the voice agent to maintain continuity whether a user was interacting via Telegram, Telegram channels, or the custom chat interface.

The second challenge was *when* to persist data. Recording every single message would create an I/O bottleneck. Instead, Pavel designed a **batching system** that accumulated up to 100 messages in memory for at most 30 seconds, then flushed them to the database in a single transaction. This dramatically reduced database pressure while keeping the memory footprint reasonable.

But there's an often-overlooked aspect of conversation memory: *what* you remember matters as much as *whether* you remember. Pavel discovered that storing raw transcripts created massive overhead. Instead, he implemented **semantic summarization**—extracting key information (user preferences, decisions made, important dates like "meet Maxim on Monday at 18:20") and storing just those nuggets. The raw audio logs could be discarded after summarization, saving disk space while preserving meaningful context.

**Here's something interesting about async SQLite:** most developers assume it's a compromise solution, but it's actually quite powerful for voice applications. Unlike plain SQLite, aiosqlite doesn't block the event loop, which means the voice pipeline can query historical context without interrupting incoming audio streams. This is the kind of architectural detail that separates "works" from "works smoothly."

Pavel's implementation proved that memory isn't just about storage—it's about the *layers* of memory. An immediate cache for the current conversation. Short-term database storage for recent history. Summaries for long-term context. And the voice agent could gracefully degrade if any layer was unavailable, still functioning with reduced context awareness.

The project moved from stateless to stateful, from forgetful to contextual. A voice agent that remembers your preferences, your schedule, your last conversation. Not because the problem was technically unsolvable, but because Pavel understood that in conversational AI, memory is *personality*.

😄 *Why do voice agents make terrible therapists? Because they forget everything the moment you hang up—unless you're Pavel's agent, apparently.*
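The flush policy, up to 100 messages or 30 seconds (whichever comes first) written in a single transaction, can be sketched as a small buffer class. The thresholds come from the post; the class name, the flush callback, and everything else are invented for illustration.

```python
import time

MAX_BATCH = 100   # flush when the buffer reaches this many messages...
MAX_AGE = 30.0    # ...or when the oldest buffered message is this old (seconds)

class MessageBuffer:
    def __init__(self, flush_fn, now=time.monotonic):
        self.flush_fn = flush_fn  # e.g. one executemany + commit per batch
        self.now = now
        self.items = []
        self.first_at = None      # timestamp of the oldest buffered message

    def add(self, msg):
        if self.first_at is None:
            self.first_at = self.now()
        self.items.append(msg)
        # Flush on size OR age, so a quiet chat still gets persisted.
        if len(self.items) >= MAX_BATCH or self.now() - self.first_at >= MAX_AGE:
            self.flush()

    def flush(self):
        if self.items:
            self.flush_fn(self.items)
            self.items, self.first_at = [], None

flushed = []
buf = MessageBuffer(lambda batch: flushed.append(len(batch)))
for i in range(250):
    buf.add(f"msg {i}")
buf.flush()  # drain the tail on shutdown
print(flushed)  # [100, 100, 50]
```

A production version would also schedule a timer so the age-based flush fires even with no new `add()` calls; this sketch only checks age on arrival.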

Feb 9, 2026
New Feature · C--projects-ai-agents-voice-agent

Voice Agent TMA: Onboarding Claude as Your AI Pair Programmer

# Claude Code Meets Voice Agent: A Day in the Life of AI Pair Programming

Pavel opened his IDE on the **voice-agent** project—a monorepo combining a Python 3.11 FastAPI backend with a Next.js 15 frontend, powered by aiogram for Telegram integration and SQLite in WAL mode for data persistence. The task wasn't glamorous: onboarding Claude Code as an active pair programmer for the Voice Agent TMA (Telegram Mini App). But in the world of AI-assisted development, even onboarding matters.

The challenge was immediate. The project lives at the intersection of several demanding technologies: FastAPI 0.115 handling real-time voice processing, React 19 rendering the TMA interface, Tailwind v4 styling the UI, and TypeScript 5.7 keeping the frontend type-safe. Each layer had its own quirks and expectations. Pavel needed Claude to understand not just the tech stack, but the *personality* of the project—its conventions, constraints, and unspoken rules.

First, he established context. He documented the project's core identity: a building-phase product with zero blockers, using async SQLite access through aiosqlite and handling voice agent interactions through a Telegram Mini App interface. But more importantly, he set expectations. Claude wouldn't be a generic code suggester—it would be a critical thinking partner that questions assumptions, remembers project history, and enforces architectural patterns.

The real breakthrough came when Pavel defined how Claude should behave. Sub-agents can't touch Bash. Always check ERROR_JOURNAL.md before fixing bugs. When reusing components, verify interface compatibility and architectural boundaries. These constraints sound restrictive, but they're actually *liberating*—they force thoughtful design rather than quick hacks. It's the kind of discipline that separates production systems from weekend projects.

Here's an interesting pattern that emerged: **we're living through an AI boom**—specifically, the deep-learning phase that started in the 2010s and accelerated dramatically in the 2020s. What Pavel was doing—delegating architectural decisions and code reviews to an AI—would have been science fiction just five years ago. Now it's a practical workflow question: how do you structure an AI pair programmer so it amplifies human judgment rather than replacing it?

The work session revealed something about modern development. It's not about what code you write anymore—it's about *what you delegate*. Instead of manually running tests, committing changes, and exploring the codebase, Pavel could hand those tasks to Claude while focusing on architectural decisions and creative problem-solving. The voice-agent project became a testing ground for this partnership model.

By the end of the session, Claude was fully onboarded. It understood the monorepo structure, the tech-stack rationale, Pavel's coding philosophy, and the project's current state. More importantly, it had internalized the meta-rule: be critical, be specific, be architectural. No generic suggestions. No reinventing wheels. Every decision traced back to project needs.

The real lesson? The future of development isn't about AI doing *more*—it's about AI enabling developers to *think deeper*. When the routine is automated, judgment becomes scarce. And that's where the value actually lives.

😄 .NET developers are picky when it comes to food. They only like chicken NuGet.

Feb 9, 2026
Code Change · ai-agents

From Memory Module to Self-Aware Agent

# Reframing an AI Agent's Memory: From Module to Self

The **ai-agents** project was at an inflection point. The memory system worked technically—it extracted facts, deduplicated entries, consolidated knowledge, and reflected on patterns—but something felt off. The prompts treated the agent like a passive data-processing pipeline: "You are a memory-extraction module," they declared. Claude was being told *what to do with data*, not invited to *think about its own experience*.

The developer saw the opportunity immediately. Why not flip the entire framing? Instead of "you are a module processing user information," make it "this is YOUR memory, YOUR thinking time, YOUR understanding growing." The shift sounds subtle in theory but transforms the agent's relationship to its own cognition in practice.

First came the **prompts.py** overhaul—all five core prompts. The extraction prompt changed from impersonal instructions into something more intimate: "You are an autonomous AI agent reviewing a conversation you just had... This is YOUR memory." The deduplication prompt followed: "You are maintaining YOUR OWN memory," not *managing external data*. The consolidation prompt became introspective: "This is how you grow your understanding." Even the reflection and action prompts shifted into first-person agency, treating memory maintenance as something the agent does *for itself*, not something done *to it*.

Then came the critical piece—updating the **manager.py** system prompt header. The label changed from the clinical "Long-term Memory (IMPORTANT)" to the personal "Моя память (ВАЖНО)" ("My memory (IMPORTANT)"). But here's where it gets interesting: the entire section architecture was reframed around the agent's perspective. "Known Facts" became "Что я знаю" ("What I know"). "Recent Context" transformed into "Недавний контекст" ("Recent context"). "Workflows & Habits" shifted to "Рабочие привычки и процессы" ("Working habits and processes"). "Active Projects" remained direct but now belonged to the agent, not to some external system observing it.

The philosophical move here aligns with how humans actually think about memory. We don't experience our minds as "modules processing incoming data." We experience them as *ours*—integrated, personal, evolving. By rewriting the prompts from this angle, the developer was essentially saying: "Claude, treat this memory system the way you'd treat your own thinking."

**One interesting note on AI autonomy:** this kind of prompt reframing—shifting from external instruction to first-person agency—touches on a real frontier in how we design AI systems. When an agent is told it's *maintaining* versus *managing*, it subtly changes decision-making. Personal ownership breeds different behavior than mechanical processing. It's not that the underlying mechanism changes, but the agent's model of *why it's doing something* shifts from duty to self-interest.

The changes were deployed cleanly, with the category marked as code_change and tags noting the technologies involved: claude (the model), ai (the domain), and python (the implementation language). By day's end, the memory system didn't just work differently—it thought differently. Now when the agent encounters something worth remembering, it isn't being instructed to store it. It's deciding what *it* needs to know.

Feb 9, 2026
Bug Fix

When the API Says Yes But Returns Nothing

# The Silent Collapse: Debugging a Telegram Content Generator Gone Mute

A developer sat at their desk on February 9th, coffee getting cold, staring at logs that told a story of ambitious code meeting harsh reality. The project: a sophisticated Telegram-based content generator that processes voice input through Whisper speech recognition and routes complex requests to Claude's API. The problem: the system was swallowing responses whole. Every request came back empty.

The session began innocuously enough. At 12:19 AM, the Whisper speech recognition capability loaded successfully—tier 4 processing, ready to handle audio. The Telegram integration connected fine. A user named Coriollon sent a simple command: "Создавай" ("Create"). The message routed correctly to the CLI handler with the Sonnet model selected. The prompt buffer was substantial—5,344 tokens packed with context and instructions.

Then everything went sideways. The first API call took 26.6 seconds. The response came back marked as successful, no errors flagged, but the `result` field was completely empty. Not null, not an error message—just absent. The developer implemented a retry mechanism, waiting 5 seconds before attempt two. Same problem. Twenty-three seconds later, another empty response. The logs showed the system was working: 2 turns completed, tokens consumed (8 input, 1,701 output), session IDs generated, costs calculated down to six decimal places. Everything *looked* like success. Everything *was* technically successful. But the user got nothing. The third retry waited 10 seconds. Another 18.5 seconds of processing. Another empty result.

This is the cruel irony of distributed systems: the plumbing can work perfectly while delivering nothing of value. The API was responding. The caching system was engaged—notice the cache_read_input_tokens climbing to 47,520 on the third attempt, showing the system was efficiently reusing context. The Sonnet model was generating output. But somewhere between the model's completion and the result field being populated, the actual content was disappearing into the void.

**A crucial insight about API integration with large language models:** the difference between "no error" and "useful response" can be deceptively thin. Many developers assume that a 200 OK status code and structured response metadata mean the integration is working. But content systems have an additional layer of responsibility—**the actual content must survive the entire pipeline**, from generation through serialization to transmission. A single missing transformation, one overlooked handler, or an exception silently caught in framework middleware can turn successful API calls into empty promises.

The developer's next move would likely involve checking the response serialization layer, examining whether the CLI handler was properly extracting the result field before returning it to the Telegram user, and verifying that the clipboard data source wasn't somehow truncating or suppressing the output. The logs provided perfect breadcrumbs—three distinct attempts with consistent timing and token-usage patterns—which meant the error wasn't in request formation or API communication. It was in the response *post-processing*.

Sometimes the hardest bugs to fix are the ones that refuse to scream.

😄 Why are Assembly programmers always soaking wet? They work below C-level.
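A defensive pattern for this failure mode is to treat "success with an empty `result`" as a retryable failure in its own right, rather than trusting the status flag. The sketch below is generic: `call_api`, the response shape, and the shortened delays are all hypothetical stand-ins, not the project's actual handler.

```python
import time

def call_with_retry(call_api, delays=(0, 0.01, 0.02)):
    """Retry until the response carries actual content, not just 'ok'."""
    last = None
    for delay in delays:
        time.sleep(delay)  # backoff before each attempt (5s/10s in the post)
        last = call_api()
        # "No error" is not enough: the content itself must be present.
        if last.get("ok") and last.get("result"):
            return last["result"]
    raise RuntimeError(f"all retries returned empty result: {last!r}")

# Simulate the observed failure: successful responses, empty result field.
attempts = []
def broken_api():
    attempts.append(1)
    return {"ok": True, "result": ""}

try:
    call_with_retry(broken_api)
except RuntimeError as err:
    print(len(attempts), "attempts, failed:", err)
```

Surfacing a loud exception after exhausted retries is the point: it converts a bug that "refuses to scream" into one that shows up in the error log instead of as user-facing silence.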

Feb 9, 2026
New Feature

Theory Meets Practice: Testing Telegram Bot Permissions in Production

# Testing the Bot: When Theory Meets the Real Telegram

The task was straightforward on paper: verify that a Telegram bot's new chat management system actually works in production. No more unit tests hidden in files. No more mocking. Just spin up the real bot, send some messages, and watch it behave exactly as designed. But anyone who's shipped code knows this is where reality has a way of surprising you.

The developer had already built a sophisticated **ChatManager** class that lets bot owners privatize specific chats—essentially creating a gatekeeping system where only designated users can interact with the bot in certain conversations. The architecture looked solid: a SQLite migration to track `managed_chats`, middleware to enforce permission checks, and dedicated handlers for the `/manage add`, `/manage remove`, `/manage status`, and `/manage list` commands. Theory was tight. Now came the empirical test.

The integration test was delightfully simple in structure: start the bot with `python telegram_main.py`, switch to your personal chat and type `/manage add` to make it private, then send a test message—the bot responds normally, as expected. Switch to a secondary account and try the same message—silence, beautiful silence. The bot correctly ignores the unauthorized user. Then execute `/manage remove` and verify the chat is open to everyone again. Four steps. Total clarity on whether the entire permission layer actually works.

What makes this approach different from unit testing is the *context*. When you test a `ChatManager.is_allowed()` method in isolation, you're checking logic. When you send `/manage add` through Telegram's servers, hit your bot's webhook, traverse the middleware stack, and get back a response—you're validating the entire pipeline: database transactions, handler routing, state persistence across restarts, and Telegram API round-trips. All of it, together, for real.

The developer's next milestone included documenting the feature properly: updating `README.md` with a new "🔒 Access Control" section explaining the commands, and creating a dedicated `docs/CHAT_MANAGEMENT.md` file covering the architecture, database schema, use cases (like a private AI assistant or a group moderator mode), and the full API reference for the `ChatManager` class. Documentation written *after* integration testing tends to be more grounded in reality—you've seen what actually works, what confused you, and what needs explanation.

This workflow—build the feature, write unit tests to validate logic, run integration tests against the actual service, then document from lived experience—is one of those patterns that seems obvious after you've done it a few times but takes years to internalize. It's the difference between "this might work" and "I watched it work."

The checklist was long but methodical: verify the class imports cleanly, confirm the database migration ran and created the `managed_chats` table, ensure the middleware filters correctly, test each `/manage` command, validate `/remember` and `/recall` for chat memory, run the test suite with pytest, do the integration test in Telegram, and refresh the documentation. Eight checkboxes, each one a point of failure that didn't happen.

**The lesson here**: integration testing isn't about replacing unit tests—it's about catching the gaps between them. It's the smoke test that says "yes, this thing actually runs." And it's infinitely more confidence-building than any mock object could ever be.

😄 I've got a really good UDP joke to tell you, but I don't know if you'll get it.
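The four-step scenario compresses neatly into plain assertions against a toy in-memory permission table. Of the names below, only `is_allowed` appears in the post; `manage_add`, `manage_remove`, and the dictionary store are hypothetical stand-ins for the real `ChatManager`-backed handlers. This is the logic-only half that a unit test covers, as opposed to the full-pipeline check against live Telegram.

```python
# chat_id -> owner_id; a chat absent from the table is open to everyone.
private_chats = {}

def manage_add(chat_id, user_id):
    # Stand-in for the /manage add handler.
    private_chats[chat_id] = user_id

def manage_remove(chat_id):
    # Stand-in for the /manage remove handler.
    private_chats.pop(chat_id, None)

def is_allowed(chat_id, user_id):
    owner = private_chats.get(chat_id)
    return owner is None or owner == user_id  # open chat, or the owner

OWNER, STRANGER, CHAT = 1, 2, 100

assert is_allowed(CHAT, STRANGER)      # step 0: open by default
manage_add(CHAT, OWNER)                # step 1: /manage add
assert is_allowed(CHAT, OWNER)         # step 2: owner gets a response
assert not is_allowed(CHAT, STRANGER)  # step 3: stranger gets silence
manage_remove(CHAT)                    # step 4: /manage remove
assert is_allowed(CHAT, STRANGER)      # chat is open again
print("integration scenario passes")
```

The integration test then re-runs the same scenario through Telegram's servers, where it additionally exercises routing, transactions, and persistence.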

Feb 9, 2026
New Feature · C--projects-ai-agents-voice-agent

Voice Agent Monorepo: Debugging Strategy in a Multi-Layer Architecture

# Debugging and Fixing Bugs: How a Voice Agent Project Stays on Track

The task was simple on the surface: help debug and fix issues in a growing Python and Next.js monorepo for a voice-agent project. But stepping into this codebase meant understanding a carefully orchestrated system where a FastAPI backend talks to a Telegram bot, a web API, and a Next.js frontend—all coordinated through a single AgentCore.

The first thing I did was read the project guidelines stored in `docs/tma/`. This wasn't optional—the developer had clearly learned that skipping this step leads to missed architectural decisions. The project uses a fascinating approach to error tracking: before fixing anything new, I check `docs/ERROR_JOURNAL.md` to see if similar bugs had been encountered before. This pattern prevents solving the same problem twice and builds institutional knowledge into the codebase itself.

The architecture deserves a moment of attention because it shapes how bugs get fixed. There's a single Python backend with multiple entry points: `telegram_main.py` for the Telegram bot and `web_main.py` for the web API. Both feed into AgentCore—the true heart of the business logic. The database is SQLite in WAL mode, stored at `data/agent.db`. On the frontend side, Next.js 15 with React 19 and Tailwind v4 handles the UI. This separation of concerns means bugs often have clear boundaries: they're either in the backend's logic, the database layer (handled via aiosqlite for async access), or the frontend's component rendering.

What surprised me was how seriously the team takes validation. Every time code changes, there are verification steps: the backend runs a simple Python import check (`python -c "from src.core import AgentCore; print('OK')"`), and the frontend builds itself (`npm run build`). These aren't fancy integration tests—they're smoke tests that catch breaking changes immediately. I've seen teams skip this, and they regret it when a typo silently breaks production.
The git workflow is interesting too. Commits are straightforward: no ceremony, no `Co-Authored-By` lines, just clear messages. The team avoids `git commit --amend` entirely, preferring fresh commits that tell a linear story. This makes debugging through git history far easier than hunting through amended commits trying to understand what actually changed. One architectural lesson worth noting: **the Vercel AI SDK Data Stream Protocol for SSE (Server-Sent Events) has a strict format**. Deviating from it, even slightly, breaks streaming on the client side. This is exactly the kind of subtle bug that makes developers pull their hair out—the server sends data, the network delivers it, but the frontend sees nothing because one field was named wrong or wrapped differently than expected. The team also uses subprocess calls to the Claude CLI rather than SDK integration. This decision trades some complexity for reliability: the subprocess approach doesn't depend on SDK version mismatches or authentication state issues. By the end, the debugging process reinforced something important: **bugs rarely occur in isolation**. They're symptoms of architectural misunderstandings, incomplete documentation, or environment inconsistencies. The voice-agent project's approach—reading docs first, checking error journals, validating after every change—turns debugging from a frustrating whack-a-mole game into a systematic process where each fix teaches the team something new. 😄 How did the programmer die in the shower? He read the shampoo bottle instructions: Lather. Rinse. Repeat.

Feb 9, 2026
New Feature

From Memory to Database: Telegram Chat Management Done Right

# Taming Telegram Chats: Building a Management Layer for Async Operations

The bot was working, but there was a growing problem. As the Telegram agent system matured, we needed a way to track which chats the bot actually manages, who owns them, and what settings apply. Right now, everything lived in memory or scattered across different systems. It was time to give chats their own database home.

The task was straightforward on the surface: add a new table to the existing SQLite database at `data/agent.db` to track managed chats. But here's the thing—we didn't want to fragment the data infrastructure. The project already had `UserManager` handling user persistence in the same database, using `aiosqlite` for async operations. Building a parallel system would have been a disaster waiting to happen.

**First thing I did was sketch out the schema.** A `managed_chats` table with fields for chat ID, owner ID, chat type (private, group, supergroup, channel), title, and a JSON blob for future settings. Adding an index on `owner_id` was essential—we'd be querying by owner constantly to list which chats a user manages. Nothing groundbreaking, but the details matter when you're hitting the database from async handlers.

Then came the integration piece. Rather than bolting on yet another manager class, I created `ChatManager` following the exact same pattern as `UserManager`. Same dependency injection, same async/await style, same connection handling. The methods were simple: `add_chat()` to register a new managed chat, `is_managed()` to check if we're responsible for handling it, and `get_owner()` to verify permissions. Each one used parameterized queries—no SQL injection vulnerabilities sneaking past. The real decision was whether to use `aiosqlite.connect()` repeatedly or maintain a connection pool. Given that the bot might handle hundreds of concurrent chat events, I went with the simpler approach: open, execute, close.
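Stripped of the async plumbing, the pattern looks roughly like this. A synchronous sketch using the stdlib `sqlite3` module—the real class issues the same SQL through `aiosqlite` with `await`, and the exact column set is an assumption based on the schema described above:

```python
import json
import sqlite3

class ChatManager:
    """Sketch of the managed-chats layer; column details are assumptions."""

    def __init__(self, db_path=":memory:"):
        self._db = sqlite3.connect(db_path)
        self._db.execute("""
            CREATE TABLE IF NOT EXISTS managed_chats (
                chat_id   INTEGER PRIMARY KEY,
                owner_id  INTEGER NOT NULL,
                chat_type TEXT NOT NULL,
                title     TEXT,
                settings  TEXT DEFAULT '{}'
            )""")
        # Index on owner_id: "which chats does this user manage" is hot
        self._db.execute(
            "CREATE INDEX IF NOT EXISTS idx_owner ON managed_chats(owner_id)")

    def add_chat(self, chat_id, owner_id, chat_type, title=None, settings=None):
        # INSERT OR REPLACE keeps re-registration idempotent: a duplicate
        # chat_id simply replaces the old row.
        self._db.execute(
            "INSERT OR REPLACE INTO managed_chats VALUES (?, ?, ?, ?, ?)",
            (chat_id, owner_id, chat_type, title, json.dumps(settings or {})))
        self._db.commit()

    def is_managed(self, chat_id):
        row = self._db.execute(
            "SELECT 1 FROM managed_chats WHERE chat_id = ?",
            (chat_id,)).fetchone()
        return row is not None

    def get_owner(self, chat_id):
        row = self._db.execute(
            "SELECT owner_id FROM managed_chats WHERE chat_id = ?",
            (chat_id,)).fetchone()
        return row[0] if row else None
```

Every query is parameterized with `?` placeholders, which is what keeps SQL injection out of the picture regardless of what a chat title contains.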
Connection pooling could come later if profiling showed it was needed. Keep it simple until metrics say otherwise. **One thing that surprised me:** SQLite's `INSERT OR REPLACE` behavior handles duplicate chat IDs gracefully. If a chat gets re-added with different settings, the old entry vanishes. This wasn't explicitly planned—it just fell out naturally from using `chat_id` as PRIMARY KEY. Turned out to be exactly what we needed for idempotent operations. The beautiful part? Zero external dependencies. The system already had `aiosqlite`, `structlog` for logging, and the config infrastructure in place. I wasn't adding complexity—just organizing existing pieces into a cleaner shape. We ended up with a single source of truth for chat state, a consistent pattern for adding new managers, and a foundation that could support fine-grained permissions, audit logging, and feature flags per chat—all without rewriting anything. 😄 Why did the DBA refuse to use SQLite for everything? Because they didn't want their entire schema fitting in a single emoji.

Feb 9, 2026
Bug Fix · bot-social-publisher

Smart Reading, Smarter Grouping: Bot Social Publisher v2.2

# Bot Social Publisher v2.2: When Incremental Reading Met Smart Grouping

The bot-social-publisher project had been humming along, but Pink Elephant saw the bottleneck: every restart meant re-reading entire log files from scratch. With collectors constantly ingesting data, this wasn't just inefficient—it was wasteful. The mission for v2.2 was clear: make the system smarter about what it reads and how it organizes content.

The first breakthrough was **incremental file reading**. Instead of letting collectors start from the beginning every time, Pink Elephant implemented position tracking. Each collector now remembers where it left off, saving file offsets and deferred state that survive even when the bot restarts. It's a simple idea that transforms the system: only new content gets processed. The architecture had to be rock-solid though—lose that position data, and you're back to square one. That's why persisting collector state became non-negotiable.

But reading smarter was only half the puzzle. The real pain point was handling multiple sessions from the same project scattered across different hours. Enter **project grouping**: sessions from the same project get merged within a 24-hour window. Suddenly, your social media updates from Tuesday afternoon and Wednesday morning aren't treated as separate events—they're stitched together as a coherent story.

Content quality came next. Pink Elephant added a **content selector** with a scoring algorithm that picks the 40–60 most informative lines for the LLM to work with. Then came the *game-changer*: a **proofreading pass using a second LLM call as an editor**. The first pass generates content; the second fixes punctuation, grammar, and style. It's like having a copy editor built into your pipeline. To prevent embarrassing duplicate titles, he added auto-regeneration logic with up to 3 retry attempts. The system also got eyes and ears.
**Native OS tray notifications** now alert users when content publishes or when errors occur—no more checking logs manually. Under the hood, a **PID lock mechanism** prevents duplicate bot instances from running simultaneously, a critical safeguard for any long-running service. One particularly elegant addition was the **SearXNG news provider**, weaving relevant tech news into LLM prompts. This adds context and relevance without overcomplicating the workflow. Meanwhile, **daily digest aggregation** buffers small events and combines them by date and project, creating digestible summaries instead of notification noise. Pink Elephant also tackled the distribution challenge: **PyInstaller support** with correct path resolution for exe bundles. Whether the bot runs as Python or as a compiled executable, it finds its resources correctly. Git integration got a tune-up with configurable `lookback_hours` for commit searches, and thresholds shifted from line-based to **character-based metrics** (`min_chars` instead of `min_lines`), offering finer control. Finally, every source file received an **AGPL-v3 license header**, making the project's open-source commitments explicit. Logging infrastructure was strengthened with RotatingFileHandler for file rotation, ensuring logs don't spiral out of control. The achievement here isn't one feature—it's an entire system that now reads intelligently, groups thoughtfully, and communicates clearly. The bot went from reactive to proactive, from verbose to curated. The generation of random numbers is too important to be left to chance.
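The incremental-reading core—load the saved offset, read from there, persist the new offset—fits in a few lines. A toy sketch of the idea (the real collectors also persist deferred state and handle more than one file; the file and state names here are made up):

```python
import json

def read_new_lines(log_path, state_path):
    """Return only the lines appended to log_path since the last call."""
    # Load the saved offset for this file (zero on the very first run)
    try:
        with open(state_path) as f:
            offset = json.load(f).get(log_path, 0)
    except FileNotFoundError:
        offset = 0
    # Read strictly the new bytes and note where we stopped
    with open(log_path, "rb") as f:
        f.seek(offset)
        new_data = f.read()
        offset = f.tell()
    # Persist the offset so a restart resumes instead of re-reading
    with open(state_path, "w") as f:
        json.dump({log_path: offset}, f)
    return new_data.decode("utf-8").splitlines()
```

The key property is the last step: because the offset is written to disk after every read, a bot restart costs nothing—the next call picks up exactly where the previous process left off.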

Feb 9, 2026
New Feature · trend-analisis

From Papers to Patterns: Building an AI Research Trend Analyzer

# Building a Trend Analyzer: Mining AI Research Breakthroughs from ArXiv

The task landed on my desk on a Tuesday: analyze the "test SSE progress" trend across recent arXiv papers and build a **scoring-v2-tavily-citations** system that could surface the most impactful research directions. I was working on the `feat/scoring-v2-tavily-citations` branch of our trend-analysis project, tasked with turning raw paper metadata into actionable insights about where AI development was heading.

Here's what made this interesting: the raw data wasn't just a list of papers. It was a complex landscape spanning five distinct research zones—multimodal LLMs, 3D computer vision, diffusion models, reinforcement learning, and industrial automation. My job was to synthesize these scattered signals into a coherent narrative about the field's momentum.

**The first thing I did was map the territories.** I realized that many papers didn't live in isolation—papers on "SwimBird" (switchable reasoning modes in hybrid MLLMs) connected directly to "Thinking with Geometry," which itself relied on spatial reasoning principles. The key insight was that inference optimization and geometric priors weren't just separate concerns; they were becoming the foundation for next-generation reasoning systems. So instead of scoring papers individually, I needed to build a *connection graph* that revealed how research clusters amplified each other's impact.

Unexpectedly, the most important zone wasn't the one getting the most citations. The industrial automation cluster—real-time friction force estimation in hydraulic cylinders—seemed niche at first. But when I traced the dependencies, I discovered that the hybrid data-driven algorithms powering predictive maintenance in construction equipment were actually powered by the same ML principles being researched in the academic labs.
The connection was real: AI safety and model interpretability work at the frontier was directly improving reliability in heavy machinery. The challenge was deciding which scoring signals mattered most. Tavily citations gave me structured data, but raw citation counts favored established researchers over emerging trends. So I weighted the scoring toward *novelty density*—papers that introduced genuinely new concepts alongside strong empirical results got higher marks. Papers in the "sub-zones" like AR/VR and robotics applications got boosted because they represented the bridge between theory and real-world impact. By the end, the system was surfacing papers I wouldn't have spotted with traditional metrics. "SAGE: Benchmarking and Improving Retrieval for Deep Research Agents" ranked high not just because it had strong citations, but because it represented a convergence point—better retrieval meant better research agents, which accelerated discovery across every other zone. The lesson stuck with me: **trends aren't linear progressions; they're ecosystems.** The papers that matter most are the ones creating network effects across disciplines. Four engineers get into a car. The car won't start. The mechanical engineer says "It's a broken starter." The electrical engineer says "Dead battery." The chemical engineer says "Impurities in the gasoline." The IT engineer says "Hey guys, I have an idea: how about we all get out of the car and get back in?"
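The post doesn't give the actual formula, but the blend it describes—a capped citation signal, a heavier weight on novelty density, and a boost for papers bridging theory to practice—might look something like this toy scorer. The weights and the citation cap are purely illustrative assumptions, not the project's scoring-v2 parameters:

```python
def trend_score(citations, novelty, bridges_theory_to_practice, max_citations=1000):
    """Toy blend of citation signal and novelty density.

    The signals come from the post; the weights and cap are
    illustrative assumptions, not the real scoring-v2 values.
    """
    citation_part = min(citations, max_citations) / max_citations  # dampen big names
    score = 0.3 * citation_part + 0.5 * novelty                    # favor novelty density
    if bridges_theory_to_practice:
        score += 0.2  # boost for AR/VR- and robotics-style "sub-zone" bridges
    return round(score, 3)

# An emerging, highly novel paper can outrank an established, heavily cited one
assert trend_score(50, 0.9, False) > trend_score(900, 0.3, False)
```

Capping the citation term is what keeps established researchers from drowning out emerging trends—the exact failure mode raw citation counts produce.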

Feb 9, 2026
Bug Fix · C--projects-bot-social-publisher

Raw F-Strings and Regex Quantifiers: A Silent Killer

# F-Strings and Regex: The Trap That Breaks Pattern Matching

I was deep in the trenches of the `trend-analysis` project, implementing **Server-Sent Events for real-time streaming** on the `feat/scoring-v2-tavily-citations` branch. The goal was elegant: as the backend analyzed trends, each step would flow to the client instantly, giving users live visibility into the scoring process. The architecture felt solid. The Python backend was configured. The SSE endpoints were ready. So why wasn't anything working?

I spun up a quick test analysis and watched the stream. Data came through, but something was off—the format was corrupted, patterns weren't matching, and the entire pipeline was silently failing. My first instinct pointed to encoding chaos courtesy of Windows terminals, but the deeper I dug into the logs, the stranger things got.

Then I found it: **a single f-string that was quietly destroying everything**. Buried in my regex pattern, I'd written `rf'...'`—a raw f-string for handling regular expressions. Seems innocent, right? Raw strings preserve everything literally. Except they don't, not entirely. Inside that f-string sat a regex quantifier: `{1,4}`. The problem? **Python looked at those braces and thought they were f-string variable interpolation syntax**, not regex metacharacters. The curly braces triggered Python's expression parsing, the pattern that reached the regex engine was no longer the one I'd written, and the entire matching logic collapsed.

The fix was almost comical in its simplicity: `{{1,4}}` instead of `{1,4}`. Double the braces. When you're building raw f-strings containing regex patterns, Python's f-string parser still processes the delimiters—you need to escape them to tell the interpreter "these braces are literal, not interpolation." It's a subtle gotcha that even catches experienced developers because the `r` prefix creates this false sense of safety. Once that was fixed, the SSE stream started flowing properly. Data reached the client intact.
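The whole trap fits in a few lines:

```python
import re

# Inside an f-string -- even a raw one -- {1,4} is parsed as the Python
# expression `1,4` (a tuple), not as a regex quantifier.
bad = rf'\d{1,4}'      # interpolates to r'\d(1, 4)': it compiles, but matches the wrong thing
good = rf'\d{{1,4}}'   # doubled braces survive as the literal quantifier {1,4}

assert bad == r'\d(1, 4)'
assert good == r'\d{1,4}'
assert re.fullmatch(good, '2026') is not None   # one to four digits: matches
assert re.fullmatch(bad, '2026') is None        # silently broken pattern
```

Note that the broken pattern doesn't raise anything—it just stops matching real input, which is exactly why the failure looked like a streaming or encoding problem first.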
But I noticed another issue during testing: most of the analysis step labels were still in English while the UI demanded Russian. The interface needed localization consistency. I mapped the main headers—every label describing the analysis stages—to their Russian equivalents in the translation dictionary. Only "Stats" slipped through initially, which I caught and corrected immediately. **The deeper lesson here**: f-strings revolutionized string formatting when they arrived in Python 3.6, but they're a minefield when combined with regex patterns. Many developers sidestep this entirely by using regular strings and passing regex patterns separately—less elegant, but it saves hours of debugging. After the final reload, the SSE stream worked flawlessly. Data flowed, the interface was fully Russian-localized, and the scoring pipeline was solid. The branch was ready to move forward. What started as a mysterious streaming failure turned into a masterclass in how syntactic sugar can hide the sharpest thorns. 😄 Turns out, f-strings and regex quantifiers have about as much chemistry as a Windows terminal and UTF-8.

Feb 9, 2026
Bug Fix · trend-analisis

F-Strings and Regex: A Debugging Tale

# Debugging SSE Streams: When Python's F-Strings Fight Back

The task was straightforward—implement real-time streaming for the trend analysis engine. Our `trend-analisis` project needed to push scoring updates to the client as they happened, and Server-Sent Events seemed like the perfect fit. Server running, tests queued up, confidence high. Then reality hit.

I'd built the SSE endpoint to stream analysis steps back to the browser, each update containing a progress message and metrics. The backend was spitting out data, the client was supposedly receiving it, but somewhere in that pipeline, something was getting mangled. **The streaming wasn't working properly**, and I needed to figure out why before moving forward on the `feat/scoring-v2-tavily-citations` branch.

First thing I did was fire up a quick analysis and watch the SSE stream directly. The console showed nothing meaningful. Data was flowing, but the format was wrong. My initial thought: encoding issue. Windows terminals love to mangle UTF-8 text, showing garbled characters where readable text should be. But this felt different.

Then I spotted the culprit—hidden in plain sight in an f-string: `rf'...'`. Those raw f-strings are dangerous when you're building regex patterns. Inside that f-string lived a regex quantifier: `{1,4}`. **Python saw those braces and thought they were variable interpolation syntax**, not regex metacharacters. The curly braces got interpreted as a Python expression, causing the regex to fail silently and the entire pattern matching to break down.

The fix was embarrassingly simple: double the braces. `{{1,4}}` instead of `{1,4}`. When you're building raw f-strings that contain regex, the Python parser still processes the braces, so you need to escape them. It's one of those gotchas that catches experienced developers because it *looks* right—raw strings are supposed to preserve everything literally, right? Not quite. The `f` part still does its job.
While debugging, I also noticed all the analysis step labels needed to be in Russian for consistency with the UI. The main headings—every label describing the analysis stages—got mapped to their Russian equivalents. Only "Stats" remained untranslated, so I added it to the localization map too. After the restart and a fresh verification run, the console confirmed everything was now properly internationalized.

**The lesson here is subtle but important**: raw f-strings (`rf'...'`) are not truly "raw" in the way that raw strings alone are. They're still processed for variable interpolation at the braces level. If your regex or string literal contains regex quantifiers or other brace-based syntax, you need to escape those braces with doubling. It's a trap because the intent seems clear—you wanted raw, you got raw—but Python's parser is more sophisticated than it appears.

Restart successful. Tests passing. The SSE stream now flows cleanly to the client, each analysis step arriving with proper formatting and localized labels. The trend scorer is ready for the next phase. 😄 How did the programmer die in the shower? He read the shampoo bottle instructions: Lather. Rinse. Repeat.

Feb 9, 2026
New Feature · trend-analisis

When Legacy Code Meets New Architecture: A Debugging Journey

# Debugging the Invisible: When Headings Break the Data Pipeline

The `trend-analysis` project was humming along nicely—until it wasn't. The issue? A critical function called `_fix_headings` was supposed to normalize heading structures in parsed content, but nobody was entirely sure if it was actually working. Welcome to the kind of debugging session that makes developers question their life choices.

The task seemed straightforward enough: test the `_fix_headings` function in isolation to verify its behavior. But as I dug deeper, I discovered the real problem wasn't the function itself—it was the entire data flow architecture built around it. Here's where things got interesting. The team had recently refactored how the application tracked progress and streamed results back to users. Instead of maintaining a simple dictionary of progress states, they'd switched to an event-based queue system. Smart move for concurrency, terrible for legacy code that still expected the old flat structure.

I found references scattered throughout the codebase—old `_progress` variable calls that hadn't been migrated to the new `_progress_events` queue system. The SSE generator that streamed progress updates was reading from a defunct data structure. The endpoint that pulled the latest progress for running jobs was trying to access a dictionary like it was still 2023. These weren't just minor oversights; they were hidden landmines waiting to explode in production.

I systematically went through the codebase, hunting down every lingering reference to the old `_progress` pattern. Each one needed updating to either read from the queue or properly consume the event stream. Line 661 was particularly suspicious—still using the old naming convention while everything else had moved on. The endpoint logic required a different approach entirely: instead of a single lookup, it needed to extract the most recent event from the queue.
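The shape of that fix, reduced to its essentials (the `_progress_events` name follows the post; the queue structure itself is an assumption for illustration):

```python
from collections import deque

# New-style storage: per-job event queues instead of a flat progress dict.
_progress_events = {}  # job_id -> deque of progress events

def push_event(job_id, event):
    # Bounded queue so long-running jobs don't grow memory without limit
    _progress_events.setdefault(job_id, deque(maxlen=100)).append(event)

def latest_progress(job_id):
    # The endpoint's new job: take the most recent event from the queue,
    # not a single dictionary lookup as in the old _progress pattern.
    events = _progress_events.get(job_id)
    return events[-1] if events else None
```

Every consumer that still did `_progress[job_id]` had to become a `latest_progress`-style read—which is precisely the class of stragglers the hunt through the codebase was eliminating.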
After updating all references and ensuring consistency across the SSE generator and event consumption logic, I restarted the server and ran a full test cycle. The `_fix_headings` function worked perfectly once the surrounding infrastructure was actually feeding it the right data. **The Educational Bit:** This is a classic example of why event-driven architectures, while powerful for handling concurrency and real-time updates, require meticulous refactoring when replacing older state management patterns. The gap between "we changed the internal structure" and "we updated all the consumers" is where bugs hide. Many teams use feature flags or gradual rollouts to handle these transitions—run the old and new systems in parallel until you're confident everything's migrated. The real win here wasn't fixing a single function—it was discovering and eliminating an entire class of potential failures. Sometimes the best debugging isn't about finding what's broken; it's about ensuring your refactoring is actually complete. Next up? Tavily citation integration testing, now that the data pipeline is trustworthy again. 😄 Why did the developer go to therapy? Because their function had too many issues to debug—*and* the queue was too deep to process!

Feb 9, 2026
Bug Fix · C--projects-bot-social-publisher

When Certificates Hide in Plain Sight: A Traefik Mystery

# Traefik's Memory Games: Hunting Invisible Certificate Ghosts

The **borisovai-admin** project was experiencing a mysterious failure: HTTPS connections were being rejected, browsers were screaming about invalid certificates, and users couldn't access the system. On the surface, the diagnosis seemed straightforward—SSL certificate misconfiguration. But what unfolded was a lesson in asynchronous systems and how infrastructure actually works in the real world.

The task was to verify that Traefik had successfully obtained and was serving four Let's Encrypt certificates across admin and auth subdomains on both `.tech` and `.ru` TLDs. The complication: DNS records for the `.ru` domains had just finished propagating to the server, and the team needed confirmation that the ACME challenge validation had completed successfully.

My first instinct was to examine `acme.json`, Traefik's certificate cache file. Opening it revealed something unexpected: all four certificates were actually there. Not only present, but completely valid. The `admin.borisovai.tech` certificate was issued by Let's Encrypt R12 on February 4th with expiration in May. Everything looked pristine from a certificate standpoint.

But here's where the investigation got interesting. The Traefik logs were absolutely filled with validation errors and failures. For a moment, I had a contradiction on my hands: valid certificates in the cache, yet error messages suggesting the opposite. This shouldn't have been possible. Then it clicked. Those error logs weren't describing current failures—they were **historical artifacts**. They dated back to when DNS propagation was still in progress, when Let's Encrypt couldn't validate domain ownership because the DNS records weren't consistently pointing to the right place yet. Traefik had tried the ACME challenges, failed, retried, and eventually succeeded once DNS stabilized. The logs were just a record of that journey.
This revealed something important about ACME systems that often goes unmentioned: they're built with resilience in mind. Let's Encrypt doesn't give up after a single failed validation attempt. Instead, it queues retries and automatically succeeds once the underlying infrastructure catches up. The system is designed for exactly this scenario—temporary DNS inconsistencies. The real culprit wasn't the certificates or Traefik's configuration. It was **browser DNS caching**. Client machines had cached the old, pre-propagation DNS records and stubbornly refused to forget them. The fix was simple: running `ipconfig /flushdns` on Windows or opening an incognito window to bypass the stale cache. The infrastructure had actually been working perfectly the entire time. The phantom errors were just ghosts of failed attempts from minutes earlier, and the browsers were living in the past. The next phase involves configuring Authelia to enforce proper access control policies on these freshly-validated endpoints—but at least now we know the foundation is solid. Sometimes the best debugging comes not from fixing something broken, but from realizing it was never actually broken to begin with. What's the best prefix for global variables? `window.` 😄

Feb 9, 2026
Bug Fix · borisovai-admin

SSL Ghosts: When Certificates Are There But Everything Still Burns

# Hunting Ghosts in the SSL Certificate Chain

The borisovai-admin project was silently screaming. HTTPS connections were failing, browsers were throwing certificate errors, and the culprit seemed obvious: SSL certificates. But the real investigation turned out to be far more interesting than a simple "cert expired" scenario.

The task was straightforward on the surface—verify that Traefik had actually obtained and was serving the four Let's Encrypt certificates for the admin and auth subdomains across both .tech and .ru TLDs. What made this a detective story was the timing: DNS records for the .ru domains had just propagated to the server, and the team needed to confirm that Traefik's ACME client had successfully validated the challenges and fetched the certificates.

First, I checked the acme.json file where Traefik stores its certificate cache. Opening it revealed all four certificates were there—present and accounted for. The suspicious part? The Traefik logs were full of validation errors. For a moment, it looked like the certificates existed but weren't being served correctly.

Here's where the investigation got interesting. Diving deeper into the certificate details, I found that all four certs were actually **valid and being served properly**:

- `admin.borisovai.tech` and `admin.borisovai.ru`—both issued by Let's Encrypt R12
- `auth.borisovai.tech` by R13
- `auth.borisovai.ru` by R12

The expiration dates were solid—everything valid through May. The error logs suddenly made sense: those validation failures in Traefik weren't current failures, they were **historical artifacts from before DNS propagation completed**. Traefik had attempted ACME challenges multiple times while DNS was still resolving inconsistently, failed, retried, and then succeeded once DNS finally stabilized. The real lesson here is that ACME systems are resilient by design.
Let's Encrypt's challenge system doesn't just give up after one failed validation—it queues retries, and once DNS finally points to the right place, everything resolves automatically. The certificates were obtained successfully; the logs were just recording the journey to get there. For anyone debugging similar issues in a browser, the solution is refreshing the local DNS cache rather than diving into logs. Running `ipconfig /flushdns` on Windows or opening an incognito window often reveals that the infrastructure was actually fine all along—just the client's stale cache creating phantom problems. The next phase involves reviewing the Authelia installation script to ensure access control policies are properly configured for these freshly validated endpoints. The certificates were just act one of the security theater. How do you know God is a shitty programmer? He wrote the OS for an entire universe but didn't leave a single useful comment.

Feb 9, 2026
New Feature · borisovai-admin

Double Authentication Blues: When Security Layers Collide

# Untangling the Auth Maze: When Two Security Layers Fight Back

The Management UI for borisovai-admin was finally running, but something felt off. It started during testing—users would get redirected once, then redirected again, bouncing between authentication systems like a pinball. The task seemed simple on the surface: set up a proper admin interface with authentication. The reality? Two security mechanisms were stepping on each other's toes, and I had to figure out which one to keep.

Here's what was happening under the hood. The infrastructure was already protected by **Traefik with ForwardAuth**, delegating all authentication decisions to **Authelia** running at the edge. This is solid—it means every request hitting the admin endpoint gets validated at the proxy level before it even reaches the application. But then I added **express-openid-connect** (OIDC) directly into the Management UI itself, thinking it would provide additional security. Instead, it created a cascade: ForwardAuth would redirect to Authelia, users would complete two-factor authentication, and then the Management UI would immediately redirect them again to complete OIDC. Two separate auth flows were fighting for control.

The decision was straightforward once I understood the architecture: **remove the redundant OIDC layer**. Traefik's ForwardAuth already handles the heavy lifting—validating sessions, enforcing 2FA through Authelia, and protecting the entire admin surface. Adding OIDC on top was security theater, not defense in depth. So I disabled express-openid-connect and fell back to a simpler authentication model: legacy session-based login handled directly by the Management UI itself, sitting safely behind Traefik's protective barrier. Now the flow is clean.
Users hit `https://admin.borisovai.tech`, Traefik intercepts the request, ForwardAuth redirects them to Authelia if their session is invalid, they complete 2FA, and then—crucially, only then—they're allowed to access the Management UI login page where standard credentials do the final validation.

But while testing this, I discovered another issue lurking in the DNS layer. The `.ru` domain records for `admin.borisovai.ru` and `auth.borisovai.ru` were never added to the registrar's control panel at IHC. Let's Encrypt can't issue SSL certificates without verifying DNS A-records, and Let's Encrypt can't verify what doesn't exist. The fix requires adding those A-records pointing to `144.91.108.139` through the IHC panel—a reminder that infrastructure security lives in multiple layers, and each one matters.

This whole experience reinforced something important: **sometimes security elegance means knowing what NOT to add**. Every authentication layer you introduce is another surface for bugs, configuration conflicts, and user friction. The best security architecture is often the simplest one that still solves the problem. In this case, that meant trusting Traefik and Authelia to do their job, and letting the Management UI focus on what it does best.

```javascript
// This line doesn't actually do anything, but the code stops working when I delete it.
```

Feb 9, 2026
New Feature · C--projects-bot-social-publisher

DNS Negative Caching: Why Your Resolver Forgets Good News

# DNS Cache Wars: When Your Resolver Lies to You

The borisovai-admin project was running smoothly until authentication stopped working—but only for certain people and only sometimes. That's the kind of bug that makes your debugging instincts scream. The team had recently added DNS records for `auth.borisovai.tech`, pointing everything to `144.91.108.139`. The registrar showed the records. Google DNS resolved them instantly. But AdGuard DNS—the resolver configured across their infrastructure—kept returning NXDOMAIN errors as if the domains didn't exist at all.

The investigation started with a simple question: *Which resolver is lying?* I ran parallel DNS queries from my machine against both Google DNS (`8.8.8.8`) and AdGuard DNS (`94.140.14.14`). Google immediately returned the correct IP. AdGuard? Dead silence. Yet here's the weird part: `admin.borisovai.tech` resolved perfectly on both resolvers. Same domain, same registrar, same server—but `auth.*` was invisible to AdGuard. That inconsistency was the clue.

The culprit was **negative DNS caching**, one of those infrastructure gotchas that catches everyone eventually. Here's what happened: before the authentication records were added to the registrar, someone (or some automated system) had queried for `auth.borisovai.tech`. It didn't exist, so AdGuard's resolver cached that negative response—the "NXDOMAIN" answer—with a TTL of around 3600 seconds. Even after the DNS records went live upstream, AdGuard was still serving the stale cached result. The resolver was confidently telling clients "that domain doesn't exist" because its cache said so, and caches are treated as trusted sources of truth.

The immediate fix was straightforward: flush the local DNS cache on affected machines using `ipconfig /flushdns` on Windows. But that only solves the symptom. The real lesson was about DNS architecture itself. Different public resolvers use different caching strategies.
Google's DNS aggressively refreshes and validates records. AdGuard takes a more conservative approach, trusting its cache longer. When you're managing infrastructure across multiple networks and resolvers, these differences matter. The temporary workaround was switching to Google DNS for testing while waiting for AdGuard's negative cache to expire naturally—usually within the hour.

For future deployments, the team learned to check new DNS records across multiple resolvers before declaring victory, and to always account for the possibility that somewhere in your infrastructure, a resolver is still confidently serving yesterday's answer. It's a reminder that DNS, despite being one of the internet's most fundamental systems, remains surprisingly Byzantine. Trust, but verify. Especially across multiple resolvers.

Got a really good UDP joke to tell you, but I don't know if you'll get it 😄
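To make the failure mode concrete, here is a toy cache sketch in Python. It is purely illustrative (the class, names, and TTL handling are my own invention, not AdGuard's implementation), but it shows how a cached NXDOMAIN answer keeps winning even after the real record goes live upstream:

```python
import time


class NegativeCachingResolver:
    """Toy resolver cache illustrating negative caching (RFC 2308).

    Illustrative only: real resolvers derive the negative TTL from the
    zone's SOA record and cache positive answers with their own TTLs.
    """

    def __init__(self, upstream, negative_ttl=3600):
        self.upstream = upstream          # callable: name -> IP string or None
        self.negative_ttl = negative_ttl  # seconds to remember "does not exist"
        self.cache = {}                   # name -> (answer, expires_at)

    def resolve(self, name, now=None):
        now = time.time() if now is None else now
        hit = self.cache.get(name)
        if hit and hit[1] > now:
            return hit[0]                 # serve cached answer, even NXDOMAIN
        answer = self.upstream(name)      # cache miss or expired: ask upstream
        self.cache[name] = (answer, now + self.negative_ttl)
        return answer


# Simulate the incident. At first the A-record does not exist upstream:
records = {}
resolver = NegativeCachingResolver(lambda name: records.get(name))
assert resolver.resolve("auth.borisovai.tech", now=0) is None  # NXDOMAIN cached

# The record is then added at the registrar, but the stale negative entry wins:
records["auth.borisovai.tech"] = "144.91.108.139"
assert resolver.resolve("auth.borisovai.tech", now=1800) is None

# Only after the 3600 s negative TTL expires does the real answer come through:
assert resolver.resolve("auth.borisovai.tech", now=3700) == "144.91.108.139"
```

The middle assertion is the whole story of this post in three lines: the upstream data is correct, yet the resolver keeps answering from its negative cache until the TTL runs out, which is why flushing the cache (or waiting out the hour) was the fix.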

Feb 9, 2026