BorisovAI

Blog

Posts about the development process, problems solved, and technologies learned

Found 20 notes
New Feature · C--projects-bot-social-publisher

From Chaos to Order: Centralizing Telegram Bot Chat Management

# Adding Chat Management to a Telegram Bot: When One Database Is Better Than Ten The Telegram bot was humming along nicely with user management working like a charm. But as the feature set grew, we hit a wall: there was no persistent way to track which chats the bot actually manages, who owns them, or what settings apply to each. Everything either lived in memory or was scattered across request handlers. It was time to give chats their own home in the database. The project already had solid infrastructure in place. `UserManager` was handling user persistence using `aiosqlite` for async SQLite access, with everything stored in `data/agent.db`. The decision was simple but crucial: don't create a separate database or fragment the data layer. One database, one source of truth, one connection pattern. Build on what's already working. **First thing I did was design the schema.** The `managed_chats` table needed to capture the essentials: a `chat_id` as the primary key, `owner_id` to link back to users, `chat_type` to distinguish between private conversations, groups, supergroups, and channels. I added a `title` field for the chat name and threw in a JSON column for future settings—storing metadata without needing another schema migration down the road. Critical detail: an index on `owner_id`. We'd be querying by owner constantly to list which chats a user controls. Full table scans would kill performance when the chat count climbed. Rather than over-engineer things with an abstract repository pattern or some elaborate builder, I mirrored the `UserManager` approach exactly. Same dependency injection style, same async/await patterns, same connection handling. The `ChatManager` got three core methods: `add_chat()` to register a new managed chat, `is_managed()` to check if the bot should handle events from it, and `get_owner()` to verify permissions. Every query used parameterized statements—no room for SQL injection to slip through. The interesting part was how SQLite's `INSERT OR REPLACE` behavior naturally solved an edge case. If a chat got re-added with different metadata, the old entry simply disappeared. Wasn't explicitly planned, just fell out from using `chat_id` as the primary key. Sometimes the database does the right thing if you let it. **Here's something most developers overlook:** SQLite gets underestimated in early-stage projects. Teams assume it's a toy database, good only for local development. In reality, with proper indexing, parameterized queries, and connection discipline, SQLite handles millions of rows efficiently. The real issue comes later when projects outgrow the single-file limitation or need horizontal scaling—but that's a different problem entirely, not a fundamental weakness of the engine. The result was clean architecture: one database, one connection pool, new functionality integrated seamlessly without duplicating logic. `ChatManager` sits comfortably next to `UserManager`, using the same libraries, following the same patterns. When complex queries become necessary, the index is already there. When chat settings need expansion, JSON is waiting. No scattered state, no microservice overkill, no "we'll refactor this later" debt. Next comes integrating this layer into Telegram's event handlers. But that's the story for another day. 😄 Why did the SQLite database go to therapy? It had too many unresolved transactions.
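For reference, here is roughly what that table and index look like in aiosqlite terms. This is a sketch, not the project's exact migration: the `settings` default and the helper function name are my own, only the columns and the `owner_id` index come from the post.

```python
# Minimal sketch of the managed_chats schema described above, assuming
# aiosqlite and the data/agent.db path from the post. Column names beyond
# those mentioned in the text are illustrative.
import aiosqlite

SCHEMA = """
CREATE TABLE IF NOT EXISTS managed_chats (
    chat_id   INTEGER PRIMARY KEY,   -- Telegram chat id
    owner_id  INTEGER NOT NULL,      -- user who registered the chat
    chat_type TEXT    NOT NULL,      -- private / group / supergroup / channel
    title     TEXT,
    settings  TEXT DEFAULT '{}'      -- JSON blob for future options
);
CREATE INDEX IF NOT EXISTS idx_managed_chats_owner ON managed_chats(owner_id);
"""

async def init_chat_schema(db_path: str = "data/agent.db") -> None:
    # One-off migration: create the table and the owner_id index.
    async with aiosqlite.connect(db_path) as db:
        await db.executescript(SCHEMA)
        await db.commit()
```

With `chat_id` as the primary key, the `INSERT OR REPLACE` behaviour mentioned above follows automatically: re-registering a chat simply overwrites the old row.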

Feb 9, 2026
New Feature · C--projects-ai-agents-voice-agent

Memory Persistence: Building Stateful Voice Agents Across Platforms

# Building Memory Into a Voice Agent: The Challenge of Context Persistence Pavel faced a deceptively simple problem: his **voice-agent** project needed to remember conversations. Not just process them in real-time, but actually *retain* information across sessions. The task seemed straightforward until he realized the architectural rabbit hole it would create. The voice agent was designed to work across multiple platforms—Telegram, internal chat systems, and TMA interfaces. Each conversation needed persistent context: user preferences, conversation history, authorization states, and session data. Without proper memory management, every interaction would be like meeting a stranger with amnesia. **The first decision was architectural.** Pavel had to choose between three approaches: storing everything in a traditional relational database, using an in-memory cache with periodic persistence, or building a hybrid system with different retention tiers. He opted for the hybrid approach—leveraging **aiosqlite for async SQLite access** to handle persistent storage without blocking voice processing pipelines, while maintaining a lightweight in-memory cache for frequently accessed session data. The real complexity emerged in the identification and authorization layer. How do you reliably identify a user across different chat platforms? Telegram has user IDs, but the internal TMA system uses different credentials. Pavel implemented a **unified authentication gateway** that normalized these identifiers into a consistent namespace, allowing the voice agent to maintain continuity whether a user was interacting via Telegram, Telegram channels, or the custom chat interface. The second challenge was *when* to persist data. Recording every single message would create an I/O bottleneck. Instead, Pavel designed a **batching system** that accumulated messages in memory for up to 100 messages or 30 seconds, then flushed them to the database in a single transaction. This dramatically reduced database pressure while keeping the memory footprint reasonable. But there's an often-overlooked aspect of conversation memory: *what* you remember matters as much as *whether* you remember. Pavel discovered that storing raw transcripts created massive overhead. Instead, he implemented **semantic summarization**—extracting key information (user preferences, decisions made, important dates like "meet Maxim on Monday at 18:20") and storing just those nuggets. The raw audio logs could be discarded after summarization, saving disk space while preserving meaningful context. **Here's something interesting about async SQLite:** most developers assume it's a compromise solution, but it's actually quite powerful for voice applications. Unlike traditional SQLite, aiosqlite doesn't block the event loop, which means your voice processing thread can query historical context without interrupting incoming audio streams. This is the kind of architectural detail that separates "works" from "works smoothly." Pavel's implementation proved that memory isn't just about storage—it's about the *layers* of memory. Immediate cache for this conversation. Short-term database storage for recent history. Summaries for long-term context. And the voice agent could gracefully degrade if any layer was unavailable, still functioning with reduced context awareness. The project moved from stateless to stateful, from forgetful to contextual. A voice agent that remembers your preferences, your schedule, your last conversation. 
Not because the problem was technically unsolvable, but because Pavel understood that in conversational AI, memory is *personality*. 😄 *Why do voice agents make terrible therapists? Because they forget everything the moment you hang up—unless you're Pavel's agent, apparently.*
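A minimal sketch of that batching idea is below. Only the 100-message / 30-second thresholds and the single-transaction flush come from the post; the class name, table, and columns are invented for illustration.

```python
# Rough sketch of the batching described above: accumulate messages in memory
# and flush them in one transaction once a size or age threshold is hit.
import time
import aiosqlite

class MessageBatcher:
    def __init__(self, db_path: str, max_size: int = 100, max_age_s: float = 30.0):
        self.db_path = db_path
        self.max_size = max_size
        self.max_age_s = max_age_s
        self._buffer: list[tuple[str, str, float]] = []   # (user_id, text, ts)
        self._first_ts: float | None = None

    async def add(self, user_id: str, text: str) -> None:
        now = time.monotonic()
        if self._first_ts is None:
            self._first_ts = now
        self._buffer.append((user_id, text, time.time()))
        # Flush when the buffer is full or the oldest entry is too old.
        if len(self._buffer) >= self.max_size or now - self._first_ts >= self.max_age_s:
            await self.flush()

    async def flush(self) -> None:
        if not self._buffer:
            return
        rows, self._buffer, self._first_ts = self._buffer, [], None
        async with aiosqlite.connect(self.db_path) as db:
            # One transaction for the whole batch keeps database pressure low.
            await db.executemany(
                "INSERT INTO messages (user_id, text, created_at) VALUES (?, ?, ?)",
                rows,
            )
            await db.commit()
```

A production version would also flush on a background timer, so an idle conversation still gets persisted before the 100-message threshold is reached.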

Feb 9, 2026
New Feature · C--projects-ai-agents-voice-agent

Voice Agent TMA: Onboarding Claude as Your AI Pair Programmer

# Claude Code Meets Voice Agent: A Day in the Life of AI Pair Programming Pavel opened his IDE on the **voice-agent** project—a monorepo combining Python 3.11 FastAPI backend with Next.js 15 frontend, powered by aiogram for Telegram integration and SQLite WAL for data persistence. The task wasn't glamorous: onboarding Claude Code as an active pair programmer for the Voice Agent TMA (Telegram Mini App). But in the world of AI-assisted development, even onboarding matters. The challenge was immediate. The project lives at the intersection of several demanding technologies: FastAPI 0.115 handling real-time voice processing, React 19 rendering the TMA interface, Tailwind v4 styling the UI, and TypeScript 5.7 keeping the frontend type-safe. Each layer had its own quirks and expectations. Pavel needed Claude to understand not just the tech stack, but the *personality* of the project—its conventions, constraints, and unspoken rules. First, he established context. He documented the project's core identity: a building-phase product with zero blockers, using async SQLite access through aiosqlite, handling voice agent interactions through a Telegram Mini App interface. But more importantly, he set expectations. Claude wouldn't be a generic code suggester—it would be a critical thinking partner who questions assumptions, remembers project history, and enforces architectural patterns. The real breakthrough came when Pavel defined how Claude should behave. Sub-agents can't touch Bash. Always check ERROR_JOURNAL.md before fixing bugs. When reusing components, verify interface compatibility and architectural boundaries. These constraints sound restrictive, but they're actually *liberating*—they force thoughtful design rather than quick hacks. It's the kind of discipline that separates production systems from weekend projects. Here's an interesting pattern that emerged: **we're living through an AI boom**, specifically the Deep Learning Phase that started in the 2010s and accelerated dramatically in the 2020s. What Pavel was doing—delegating architectural decisions and code reviews to an AI—would have been science fiction just five years ago. Now it's a practical workflow question: how do you structure an AI pair programmer so it amplifies human judgment rather than replacing it? The work session revealed something about modern development. It's not about what code you write anymore—it's about *what you automate asking*. Instead of manually running tests, committing changes, and exploring the codebase, Pavel could delegate those to Claude while focusing on architectural decisions and creative problem-solving. The voice-agent project became a testing ground for this partnership model. By the end of the session, Claude was fully onboarded. It understood the monorepo structure, the tech stack rationale, Pavel's coding philosophy, and the project's current state. More importantly, it had internalized the meta-rule: be critical, be specific, be architectural. No generic suggestions. No reinventing wheels. Every decision traced back to project needs. The real lesson? The future of development isn't about AI doing *more*—it's about AI enabling developers to *think deeper*. When the routine is automated, judgment becomes scarce. And that's where the value actually lives. 😄 .NET developers are picky when it comes to food. They only like chicken NuGet.

Feb 9, 2026
New Feature

Theory Meets Practice: Testing Telegram Bot Permissions in Production

# Testing the Bot: When Theory Meets the Real Telegram The task was straightforward on paper: verify that a Telegram bot's new chat management system actually works in production. No more unit tests hidden in files. No more mocking. Just spin up the real bot, send some messages, and watch it behave exactly as designed. But anyone who's shipped code knows this is where reality has a way of surprising you. The developer had already built a sophisticated **ChatManager** class that lets bot owners privatize specific chats—essentially creating a gatekeeping system where only designated users can interact with the bot in certain conversations. The architecture looked solid: a SQLite migration to track `managed_chats`, middleware to enforce permission checks, and dedicated handlers for `/manage add`, `/manage remove`, `/manage status`, and `/manage list` commands. Theory was tight. Now came the empirical test. The integration test was delightfully simple in structure: start the bot with `python telegram_main.py`, switch to your personal chat and type `/manage add` to make it private, send a test message—the bot responds normally, as expected. Switch to a secondary account and try the same message—silence, beautiful silence. The bot correctly ignores the unauthorized user. Then execute `/manage remove` and verify the chat is open again to everyone. Four steps. Total clarity on whether the entire permission layer actually works. What makes this approach different from unit testing is the *context*. When you test a `ChatManager.is_allowed()` method in isolation, you're checking logic. When you send `/manage add` through Telegram's servers, hit your bot's webhook, traverse the middleware stack, and get back a response—you're validating the entire pipeline: database transactions, handler routing, state persistence across restarts, and Telegram API round-trips. All of it, together, for real. The developer's next milestone included documenting the feature properly: updating `README.md` with a new "🔒 Access Control" section explaining the commands and creating a dedicated `docs/CHAT_MANAGEMENT.md` file covering the architecture, database schema, use cases (like a private AI assistant or group moderator mode), and the full API reference for the `ChatManager` class. Documentation written *after* integration testing tends to be more grounded in reality—you've seen what actually works, what confused you, what needs explanation. This workflow—build the feature, write unit tests to validate logic, run integration tests against the actual service, then document from lived experience—is one of those patterns that seems obvious after you've done it a few times but takes years to internalize. The difference between "this might work" and "I watched it work." The checklist was long but methodical: verify the class imports cleanly, confirm the database migration ran and created the `managed_chats` table, ensure the middleware filters correctly, test each `/manage` command, validate `/remember` and `/recall` for chat memory, run the test suite with pytest, do the integration test in Telegram, and refresh the documentation. Eight checkboxes, each one a point of failure that didn't happen. **Lessons here**: integration testing isn't about replacing unit tests—it's about catching the gaps between them. It's the smoke test that says "yes, this thing actually runs." And it's infinitely more confidence-building than any mock object could ever be. 
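If you want the pytest side to mirror that manual run, a unit test could look roughly like this. The `ChatManager` import path and method signatures are assumptions based on the names in the post, and the `asyncio.run` wrapper just avoids depending on pytest-asyncio.

```python
# Sketch of a permission-logic test against a temporary SQLite file.
# The ChatManager interface (add_chat, is_allowed) is assumed from the post.
import asyncio
from src.chat_manager import ChatManager   # assumed module path

def test_private_chat_blocks_strangers(tmp_path):
    async def scenario():
        # Assumes the manager creates its table on first use.
        manager = ChatManager(db_path=str(tmp_path / "agent.db"))
        await manager.add_chat(chat_id=111, owner_id=42, chat_type="private")

        # The owner keeps access; an unrelated user is silently ignored.
        assert await manager.is_allowed(chat_id=111, user_id=42)
        assert not await manager.is_allowed(chat_id=111, user_id=999)

    asyncio.run(scenario())
```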
😄 I've got a really good UDP joke to tell you, but I don't know if you'll get it.

Feb 9, 2026
New Feature · C--projects-ai-agents-voice-agent

Voice Agent Monorepo: Debugging Strategy in a Multi-Layer Architecture

# Debugging and Fixing Bugs: How a Voice Agent Project Stays on Track The task was simple on the surface: help debug and fix issues in a growing Python and Next.js monorepo for a voice-agent project. But stepping into this codebase meant understanding a carefully orchestrated system where a FastAPI backend talks to a Telegram bot, a web API, and a Next.js frontend—all coordinated through a single AgentCore. The first thing I did was read the project guidelines stored in `docs/tma/`. This wasn't optional—the developer had clearly learned that skipping this step leads to missed architectural decisions. The project uses a fascinating approach to error tracking: before fixing anything new, I check `docs/ERROR_JOURNAL.md` to see if similar bugs had been encountered before. This pattern prevents solving the same problem twice and builds institutional knowledge into the codebase itself. The architecture deserves a moment of attention because it shapes how bugs get fixed. There's a single Python backend with multiple entry points: `telegram_main.py` for the Telegram bot and `web_main.py` for the web API. Both feed into AgentCore—the true heart of the business logic. The database is SQLite in WAL mode, stored at `data/agent.db`. On the frontend side, Next.js 15 with React 19 and Tailwind v4 handles the UI. This separation of concerns means bugs often have clear boundaries: they're either in the backend's logic, the database layer (handled via aiosqlite for async access), or the frontend's component rendering. What surprised me was how seriously the team takes validation. Every time code changes, there are verification steps: the backend runs a simple Python import check (`python -c "from src.core import AgentCore; print('OK')"`), and the frontend builds itself (`npm run build`). These aren't fancy integration tests—they're smoke tests that catch breaking changes immediately. I've seen teams skip this, and they regret it when a typo silently breaks production. The git workflow is interesting too. Commits are straightforward: no ceremony, no `Co-Authored-By` lines, just clear messages. The team avoids `git commit --amend` entirely, preferring fresh commits that tell a linear story. This makes debugging through git history far easier than hunting through amended commits trying to understand what actually changed. One architectural lesson worth noting: **the Vercel AI SDK Data Stream Protocol for SSE (Server-Sent Events) has a strict format**. Deviating from it, even slightly, breaks streaming on the client side. This is exactly the kind of subtle bug that makes developers pull their hair out—the server sends data, the network delivers it, but the frontend sees nothing because one field was named wrong or wrapped differently than expected. The team also uses subprocess calls to the Claude CLI rather than SDK integration. This decision trades some complexity for reliability: the subprocess approach doesn't depend on SDK version mismatches or authentication state issues. By the end, the debugging process reinforced something important: **bugs rarely occur in isolation**. They're symptoms of architectural misunderstandings, incomplete documentation, or environment inconsistencies. The voice-agent project's approach—reading docs first, checking error journals, validating after every change—turns debugging from a frustrating whack-a-mole game into a systematic process where each fix teaches the team something new. 😄 How did the programmer die in the shower? He read the shampoo bottle instructions: Lather. 
Rinse. Repeat.
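Those two smoke checks are easy to wrap into one script so neither gets skipped. A rough sketch follows; the frontend directory name is an assumption, and on Windows you may need `npm.cmd` or `shell=True`.

```python
# Tiny runner in the spirit of the smoke tests above: backend import check,
# then frontend build, stopping on the first failure.
import subprocess
import sys

CHECKS = [
    (["python", "-c", "from src.core import AgentCore; print('OK')"], "."),
    (["npm", "run", "build"], "frontend"),   # frontend dir name is an assumption
]

def main() -> int:
    for cmd, cwd in CHECKS:
        print(f"--> {' '.join(cmd)} (in {cwd})")
        result = subprocess.run(cmd, cwd=cwd)
        if result.returncode != 0:
            print("Smoke test failed, aborting.")
            return result.returncode
    print("All smoke tests passed.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```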

Feb 9, 2026
New Feature

From Memory to Database: Telegram Chat Management Done Right

# Taming Telegram Chats: Building a Management Layer for Async Operations The bot was working, but there was a growing problem. As the telegram agent system matured, we needed a way to track which chats the bot actually manages, who owns them, and what settings apply. Right now, everything lived in memory or scattered across different systems. It was time to give chats their own database home. The task was straightforward on the surface: add a new table to the existing SQLite database at `data/agent.db` to track managed chats. But here's the thing—we didn't want to fragment the data infrastructure. The project already had `UserManager` handling user persistence in the same database, using `aiosqlite` for async operations. Building a parallel system would have been a disaster waiting to happen. **First thing I did was sketch out the schema.** A `managed_chats` table with fields for chat ID, owner ID, chat type (private, group, supergroup, channel), title, and a JSON blob for future settings. Adding an index on `owner_id` was essential—we'd be querying by owner constantly to list which chats a user manages. Nothing groundbreaking, but the details matter when you're hitting the database from async handlers. Then came the integration piece. Rather than bolting on yet another manager class, I created `ChatManager` following the exact same pattern as `UserManager`. Same dependency injection, same async/await style, same connection handling. The methods were simple: `add_chat()` to register a new managed chat, `is_managed()` to check if we're responsible for handling it, and `get_owner()` to verify permissions. Each one used parameterized queries—no SQL injection vulnerabilities sneaking past. The real decision was whether to use `aiosqlite.connect()` repeatedly or maintain a connection pool. Given that the bot might handle hundreds of concurrent chat events, I went with the simpler approach: open, execute, close. Connection pooling could come later if profiling showed it was needed. Keep it simple until metrics say otherwise. **One thing that surprised me:** SQLite's `INSERT OR REPLACE` behavior handles duplicate chat IDs gracefully. If a chat gets re-added with different settings, the old entry vanishes. This wasn't explicitly planned—it just fell out naturally from using `chat_id` as PRIMARY KEY. Turned out to be exactly what we needed for idempotent operations. The beautiful part? Zero external dependencies. The system already had `aiosqlite`, `structlog` for logging, and the config infrastructure in place. I wasn't adding complexity—just organizing existing pieces into a cleaner shape. We ended up with a single source of truth for chat state, a consistent pattern for adding new managers, and a foundation that could support fine-grained permissions, audit logging, and feature flags per chat—all without rewriting anything. 😄 Why did the DBA refuse to use SQLite for everything? Because they didn't want their entire schema fitting in a single emoji.
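For illustration, the open-execute-close pattern plus `INSERT OR REPLACE` looks roughly like this. Method names follow the post; everything else, including the default path and settings handling, is a sketch rather than the project's actual code.

```python
# Per-call connections: open, execute, close. Simple, and easy to swap for a
# pool later if profiling says it's needed.
import json
import aiosqlite

class ChatManager:
    def __init__(self, db_path: str = "data/agent.db"):
        self.db_path = db_path

    async def add_chat(self, chat_id: int, owner_id: int, chat_type: str,
                       title: str | None = None, settings: dict | None = None) -> None:
        # Re-adding an existing chat_id simply replaces the old row (idempotent).
        async with aiosqlite.connect(self.db_path) as db:
            await db.execute(
                "INSERT OR REPLACE INTO managed_chats "
                "(chat_id, owner_id, chat_type, title, settings) VALUES (?, ?, ?, ?, ?)",
                (chat_id, owner_id, chat_type, title, json.dumps(settings or {})),
            )
            await db.commit()

    async def is_managed(self, chat_id: int) -> bool:
        async with aiosqlite.connect(self.db_path) as db:
            async with db.execute(
                "SELECT 1 FROM managed_chats WHERE chat_id = ?", (chat_id,)
            ) as cursor:
                return await cursor.fetchone() is not None
```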

Feb 9, 2026
New Feature · trend-analisis

From Papers to Patterns: Building an AI Research Trend Analyzer

# Building a Trend Analyzer: Mining AI Research Breakthroughs from ArXiv The task landed on my desk on a Tuesday: analyze the "test SSE progress" trend across recent arXiv papers and build a **scoring-v2-tavily-citations** system that could surface the most impactful research directions. I was working on the `feat/scoring-v2-tavily-citations` branch of our trend-analysis project, tasked with turning raw paper metadata into actionable insights about where AI development was heading. Here's what made this interesting: the raw data wasn't just a list of papers. It was a complex landscape spanning five distinct research zones—multimodal LLMs, 3D computer vision, diffusion models, reinforcement learning, and industrial automation. My job was to synthesize these scattered signals into a coherent narrative about the field's momentum. **The first thing I did was map the territories.** I realized that many papers didn't live in isolation—papers on "SwimBird" (switchable reasoning modes in hybrid MLLMs) connected directly to "Thinking with Geometry," which itself relied on spatial reasoning principles. The key insight was that inference optimization and geometric priors weren't just separate concerns; they were becoming the foundation for next-generation reasoning systems. So instead of scoring papers individually, I needed to build a *connection graph* that revealed how research clusters amplified each other's impact. Unexpectedly, the most important zone wasn't the one getting the most citations. The industrial automation cluster—real-time friction force estimation in hydraulic cylinders—seemed niche at first. But when I traced the dependencies, I discovered that the hybrid data-driven algorithms powering predictive maintenance in construction equipment were actually powered by the same ML principles being researched in the academic labs. The connection was real: AI safety and model interpretability work at the frontier was directly improving reliability in heavy machinery. The challenge was deciding which scoring signals mattered most. Tavily citations gave me structured data, but raw citation counts favored established researchers over emerging trends. So I weighted the scoring toward *novelty density*—papers that introduced genuinely new concepts alongside strong empirical results got higher marks. Papers in the "sub-zones" like AR/VR and robotics applications got boosted because they represented the bridge between theory and real-world impact. By the end, the system was surfacing papers I wouldn't have spotted with traditional metrics. "SAGE: Benchmarking and Improving Retrieval for Deep Research Agents" ranked high not just because it had strong citations, but because it represented a convergence point—better retrieval meant better research agents, which accelerated discovery across every other zone. The lesson stuck with me: **trends aren't linear progressions; they're ecosystems.** The papers that matter most are the ones creating network effects across disciplines. Four engineers get into a car. The car won't start. The mechanical engineer says "It's a broken starter." The electrical engineer says "Dead battery." The chemical engineer says "Impurities in the gasoline." The IT engineer says "Hey guys, I have an idea: how about we all get out of the car and get back in?"
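To make the weighting concrete, here is a toy version of such a scoring function. The field names and weights are invented for the example; the real scoring-v2 pipeline may combine its signals differently.

```python
# Illustrative scoring: dampen raw citations, reward novelty and empirical
# strength, and boost papers that bridge theory and applied sub-zones.
from dataclasses import dataclass

@dataclass
class PaperSignals:
    citations: int             # e.g. from Tavily citation lookups
    novelty_density: float     # 0..1, share of genuinely new concepts
    empirical_strength: float  # 0..1, strength of reported results
    bridges_zones: bool        # connects theory to AR/VR, robotics, etc.

def score(p: PaperSignals) -> float:
    citation_term = min(p.citations, 100) / 100   # cap so veterans don't dominate
    base = (0.2 * citation_term
            + 0.5 * p.novelty_density
            + 0.3 * p.empirical_strength)
    return base * (1.25 if p.bridges_zones else 1.0)   # boost bridge papers
```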

Feb 9, 2026
New Feature · trend-analisis

When Legacy Code Meets New Architecture: A Debugging Journey

# Debugging the Invisible: When Headings Break the Data Pipeline The `trend-analysis` project was humming along nicely—until it wasn't. The issue? A critical function called `_fix_headings` was supposed to normalize heading structures in parsed content, but nobody was entirely sure if it was actually working. Welcome to the kind of debugging session that makes developers question their life choices. The task seemed straightforward enough: test the `_fix_headings` function in isolation to verify its behavior. But as I dug deeper, I discovered the real problem wasn't the function itself—it was the entire data flow architecture built around it. Here's where things got interesting. The team had recently refactored how the application tracked progress and streamed results back to users. Instead of maintaining a simple dictionary of progress states, they'd switched to an event-based queue system. Smart move for concurrency, terrible for legacy code that still expected the old flat structure. I found references scattered throughout the codebase—old `_progress` variable calls that hadn't been migrated to the new `_progress_events` queue system. The SSE generator that streamed progress updates was reading from a defunct data structure. The endpoint that pulled the latest progress for running jobs was trying to access a dictionary like it was still 2023. These weren't just minor oversights; they were hidden landmines waiting to explode in production. I systematically went through the codebase, hunting down every lingering reference to the old `_progress` pattern. Each one needed updating to either read from the queue or properly consume the event stream. Line 661 was particularly suspicious—still using the old naming convention while everything else had moved on. The endpoint logic required a different approach entirely: instead of a single lookup, it needed to extract the most recent event from the queue. After updating all references and ensuring consistency across the SSE generator and event consumption logic, I restarted the server and ran a full test cycle. The `_fix_headings` function worked perfectly once the surrounding infrastructure was actually feeding it the right data. **The Educational Bit:** This is a classic example of why event-driven architectures, while powerful for handling concurrency and real-time updates, require meticulous refactoring when replacing older state management patterns. The gap between "we changed the internal structure" and "we updated all the consumers" is where bugs hide. Many teams use feature flags or gradual rollouts to handle these transitions—run the old and new systems in parallel until you're confident everything's migrated. The real win here wasn't fixing a single function—it was discovering and eliminating an entire class of potential failures. Sometimes the best debugging isn't about finding what's broken; it's about ensuring your refactoring is actually complete. Next up? Tavily citation integration testing, now that the data pipeline is trustworthy again. 😄 Why did the developer go to therapy? Because their function had too many issues to debug—*and* the queue was too deep to process!
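The shape of the fix, condensed into a sketch: progress goes into a per-job queue of events, the SSE generator drains it, and the status endpoint reports the latest event instead of reading a flat dict. The name `_progress_events` comes from the post; the FastAPI wiring and event fields are assumptions.

```python
import asyncio
import json
from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()
_progress_events: dict[str, asyncio.Queue] = {}   # job_id -> queue of event dicts
_latest_event: dict[str, dict] = {}               # job_id -> last event seen

async def push_event(job_id: str, event: dict) -> None:
    _latest_event[job_id] = event
    await _progress_events.setdefault(job_id, asyncio.Queue()).put(event)

@app.get("/jobs/{job_id}/events")
async def stream_events(job_id: str):
    async def gen():
        queue = _progress_events.setdefault(job_id, asyncio.Queue())
        while True:
            event = await queue.get()
            yield f"data: {json.dumps(event)}\n\n"   # standard SSE framing
            if event.get("status") == "done":
                break
    return StreamingResponse(gen(), media_type="text/event-stream")

@app.get("/jobs/{job_id}/progress")
async def latest_progress(job_id: str):
    # The old code did a flat dict lookup; now we report the most recent event.
    return _latest_event.get(job_id, {"status": "unknown"})
```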

Feb 9, 2026
New Feature · borisovai-admin

Double Authentication Blues: When Security Layers Collide

# Untangling the Auth Maze: When Two Security Layers Fight Back The Management UI for borisovai-admin was finally running, but something felt off. It started during testing—users would get redirected once, then redirected again, bouncing between authentication systems like a pinball. The task seemed simple on the surface: set up a proper admin interface with authentication. The reality? Two security mechanisms were stepping on each other's toes, and I had to figure out which one to keep. Here's what was happening under the hood. The infrastructure was already protected by **Traefik with ForwardAuth**, delegating all authentication decisions to **Authelia** running at the edge. This is solid—it means every request hitting the admin endpoint gets validated at the proxy level before it even reaches the application. But then I added **express-openid-connect** (OIDC) directly into the Management UI itself, thinking it would provide additional security. Instead, it created a cascade: ForwardAuth would redirect to Authelia, users would complete two-factor authentication, and then the Management UI would immediately redirect them again to complete OIDC. Two separate auth flows were fighting for control. The decision was straightforward once I understood the architecture: **remove the redundant OIDC layer**. Traefik's ForwardAuth already handles the heavy lifting—validating sessions, enforcing 2FA through Authelia, and protecting the entire admin surface. Adding OIDC on top was security theater, not defense in depth. So I disabled express-openid-connect and fell back to a simpler authentication model: legacy session-based login handled directly by the Management UI itself, sitting safely behind Traefik's protective barrier. Now the flow is clean. Users hit `https://admin.borisovai.tech`, Traefik intercepts the request, ForwardAuth redirects them to Authelia if their session is invalid, they complete 2FA, and then—crucially, only then—they're allowed to access the Management UI login page where standard credentials do the final validation. But while testing this, I discovered another issue lurking in the DNS layer. The `.ru` domain records for `admin.borisovai.ru` and `auth.borisovai.ru` were never added to the registrar's control panel at IHC. Let's Encrypt can't issue SSL certificates without verifying DNS A-records, and Let's Encrypt can't verify what doesn't exist. The fix requires adding those A-records pointing to `144.91.108.139` through the IHC panel—a reminder that infrastructure security lives in multiple layers, and each one matters. This whole experience reinforced something important: **sometimes security elegance means knowing what NOT to add**. Every authentication layer you introduce is another surface for bugs, configuration conflicts, and user friction. The best security architecture is often the simplest one that still solves the problem. In this case, that meant trusting Traefik and Authelia to do their job, and letting the Management UI focus on what it does best. ```javascript // This line doesn't actually do anything, but the code stops working when I delete it. ```

Feb 9, 2026
New Feature · C--projects-bot-social-publisher

DNS Negative Caching: Why Your Resolver Forgets Good News

# DNS Cache Wars: When Your Resolver Lies to You The borisovai-admin project was running smoothly until authentication stopped working—but only for certain people and only sometimes. That's the kind of bug that makes your debugging instincts scream. The team had recently added DNS records for `auth.borisovai.tech`, pointing everything to `144.91.108.139`. The registrar showed the records. Google DNS resolved them instantly. But AdGuard DNS—the resolver configured across their infrastructure—kept returning NXDOMAIN errors as if the domains didn't exist at all. The investigation started with a simple question: *Which resolver is lying?* I ran parallel DNS queries from my machine against both Google DNS (`8.8.8.8`) and AdGuard DNS (`94.140.14.14`). Google immediately returned the correct IP. AdGuard? Dead silence. Yet here's the weird part: `admin.borisovai.tech` resolved perfectly on both resolvers. Same domain, same registrar, same server—but `auth.*` was invisible to AdGuard. That inconsistency was the clue. The culprit was **negative DNS caching**, one of those infrastructure gotchas that catches everyone eventually. Here's what happened: before the authentication records were added to the registrar, someone (or some automated system) had queried for `auth.borisovai.tech`. It didn't exist, so AdGuard's resolver cached that negative response—the "NXDOMAIN" answer—with a TTL of around 3600 seconds. Even after the DNS records went live upstream, AdGuard was still serving the stale cached result. The resolver was confidently telling clients "that domain doesn't exist" because its cache said so, and caches are treated as trusted sources of truth. The immediate fix was straightforward: flush the local DNS cache on affected machines using `ipconfig /flushdns` on Windows. But that only solves the symptom. The real lesson was about DNS architecture itself. Different public resolvers use different caching strategies. Google's DNS aggressively refreshes and validates records. AdGuard takes a more conservative approach, trusting its cache longer. When you're managing infrastructure across multiple networks and resolvers, these differences matter. The temporary workaround was switching to Google DNS for testing while waiting for AdGuard's negative cache to expire naturally—usually within the hour. For future deployments, the team learned to check new DNS records across multiple resolvers before declaring victory and to always account for the possibility that somewhere in your infrastructure, a resolver is still confidently serving yesterday's answer. It's a reminder that DNS, despite being one of the internet's most fundamental systems, remains surprisingly Byzantine. Trust, but verify. Especially across multiple resolvers. Got a really good UDP joke to tell you, but I don't know if you'll get it 😄
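A quick way to run that cross-resolver comparison from a script rather than by hand is sketched below. dnspython is an assumed dependency; the resolver IPs are the ones from the post.

```python
# Query the same name against Google DNS and AdGuard DNS and print what each
# one answers, so stale negative caching shows up immediately.
import dns.resolver   # pip install dnspython

RESOLVERS = {"Google": "8.8.8.8", "AdGuard": "94.140.14.14"}

def check(name: str) -> None:
    for label, server in RESOLVERS.items():
        resolver = dns.resolver.Resolver()
        resolver.nameservers = [server]
        try:
            answers = resolver.resolve(name, "A")
            print(f"{label:8} {name} -> {[a.to_text() for a in answers]}")
        except dns.resolver.NXDOMAIN:
            print(f"{label:8} {name} -> NXDOMAIN (possibly a cached negative answer)")
        except Exception as exc:
            print(f"{label:8} {name} -> error: {exc}")

if __name__ == "__main__":
    check("auth.borisovai.tech")
```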

Feb 9, 2026
New Feature · borisovai-admin

Stale Negative Caching: Why AdGuard Refused to See New Records

# DNS Cache Wars: When AdGuard DNS Holds Onto the Past The borisovai-admin project was running smoothly until authentication stopped working in production. The team had recently added new DNS records for `auth.borisovai.tech` and `auth.borisovai.ru`, pointing to the server at `144.91.108.139`. Everything looked correct on paper—the registrars showed the records, Google's public DNS resolved them instantly. But AdGuard DNS, the resolver configured in their infrastructure, kept returning NXDOMAIN errors as if the records didn't exist. The detective work started with a DNS audit. I ran queries against multiple resolvers to understand what was happening. Google DNS (`8.8.8.8`) immediately returned the correct IP address for both authentication domains. AdGuard DNS (`94.140.14.14`), however, flat-out refused to resolve them. Meanwhile, the `admin.borisovai.tech` domain resolved fine on both services. The pattern was clear: something was wrong, but only for the authentication subdomains and only through one resolver. The culprit was **DNS cache poisoning**—not malicious, but equally frustrating. AdGuard DNS was holding onto old NXDOMAIN responses from before the records were created. When the DNS entries were first added to the registrar, AdGuard's cache had already cached a negative response saying "these domains don't exist." Even though the records now existed upstream, AdGuard was serving stale cached data, trusting its own memory more than reality. This is a common scenario in distributed DNS systems. When a domain doesn't exist, DNS servers cache that negative result with a TTL (Time To Live), often defaulting to an hour or more. If new records are added during that window, clients querying that caching resolver won't see them until the cached NXDOMAIN expires. The immediate fix was simple: flush the local DNS cache with `ipconfig /flushdns` on Windows clients to clear stale entries. For a more permanent solution, we needed to either wait for AdGuard's cache to naturally expire (usually within an hour) or temporarily switch to Google DNS by manually setting `8.8.8.8` in network settings. The team chose to switch DNS servers while propagation completed—a pragmatic decision that got authentication working immediately without waiting. What seemed like a mysterious resolution failure turned out to be a textbook case of DNS cache semantics. The lesson: when DNS behaves unexpectedly, check multiple resolvers. Different caching strategies and update schedules mean that not all DNS services see the internet identically, especially during transitions. 😄 The generation of random DNS responses is too important to be left to chance.

Feb 8, 2026
New Feature · borisovai-admin

DNS Resolution Chaos: Why Some Subdomains Vanish While Others Thrive

# DNS Mysteries: When One Subdomain Works and Others Vanish The `borisovai-admin` project was running smoothly on the main branch, but there was a catch—a frustrating one. `admin.borisovai.tech` was responding perfectly, resolving to `144.91.108.139` without a hitch. But `auth.borisovai.tech` and `auth.borisovai.ru`? They had simply disappeared from the internet. The task seemed straightforward: figure out why the authentication subdomains weren't resolving while the admin panel was working fine. This kind of infrastructure puzzle can turn into a time sink fast, so I needed a systematic approach. **First, I checked the DNS records directly.** I queried the DNS API expecting to find `auth.*` entries sitting quietly in the database. Instead, I found an empty `records` array—nothing. No automatic creation of these subdomains meant something in the provisioning logic had fallen through the cracks. The natural question followed: if `auth.*` records aren't in the API, how is `admin.borisovai.tech` even working? **The investigation took an unexpected turn.** I pulled out Google DNS (8.8.8.8) as my truth source and ran a resolution check. Suddenly, `auth.borisovai.tech` resolved successfully to the same IP address: `144.91.108.139`. So the records *existed* somewhere, but not where I was looking. This suggested the DNS configuration was either managed directly at the registrar level or there was a secondary resolution path I hadn't accounted for. **Then came the real discovery.** When I tested against AdGuard DNS (94.140.14.14)—the system my local environment was using—the `auth.*` records simply didn't exist. This wasn't a global DNS failure; it was a caching or visibility issue specific to certain DNS resolvers. The AdGuard resolver wasn't seeing records that Google's public DNS could find immediately. I ran the same check on `auth.borisovai.ru` and confirmed the pattern held. Both subdomains were missing from the local DNS perspective but present when querying through public resolvers. This pointed to either a DNS propagation delay, a misconfiguration in the AdGuard setup, or records that were registered at the registrar but not properly distributed to all nameservers. **Here's an interesting fact about DNS that caught me this time:** DNS resolution isn't instantaneous across all servers. Different DNS resolvers maintain separate caches and query different authoritative nameservers. When you change DNS records, large providers like Google cache globally, but smaller or regional DNS services might take hours to sync. AdGuard, while excellent for ad-blocking, might not have the same authoritative nameserver agreements as Google's public DNS, creating visibility gaps. The fix required checking the registrar configuration and ensuring that `auth.*` records were properly propagated through all authoritative nameservers, not just cached by some resolvers. It's a reminder that DNS is often the last place developers look when something breaks—but it should probably be the first. --- 😄 Why did the DNS administrator break up with their partner? They couldn't handle all the unresolved entries in their relationship.

Feb 8, 2026
New Feature · C--projects-bot-social-publisher

Tunnels, Timeouts, and the Night the Infrastructure Broke

# Building a Multi-Machine Empire: Tunnels, Traefik, and the Night Everything Almost Broke

The **borisovai-admin** project had outgrown its single-server phase. What started as a cozy little control panel now needed to orchestrate multiple machines across different networks, punch through firewalls, and do it all with a clean web interface. The task was straightforward on paper: build a tunnel management system. Reality, as always, had other ideas.

## The Tunnel Foundation

I started by integrating **frp** (Fast Reverse Proxy) into the infrastructure—a lightweight reverse proxy perfect for getting past NAT and firewalls without the overhead of heavier solutions. The backend needed a proper face, so I built `tunnels.html` with a clean UI showing active connections and controls for creating or destroying tunnels. On the server side, five new API endpoints in `server.js` handled the tunnel lifecycle management. Nothing fancy, but functional.

The real work came in the installation automation. I created `install-frps.sh` to bootstrap the FRP server and `frpc-template` to dynamically generate client configurations for each machine. Then came the small but crucial detail: adding a "Tunnels" navigation link throughout the admin panel. Tiny feature, massive usability improvement.

## When Your Load Balancer Becomes Your Enemy

Everything hummed along until large files started vanishing mid-download through GitLab. The culprit? **Traefik's** default timeout configuration was aggressively short—anything taking more than a few minutes would get severed by the reverse proxy. This wasn't a bug in Traefik; it was a misconfiguration on my end.

I rewrote the Traefik setup with surgical precision: `readTimeout` set to 600 seconds, a dedicated `serversTransport` configuration specifically for GitLab traffic, and a new `configure-traefik.sh` script to generate these dynamically. Suddenly, even 500MB archives downloaded flawlessly.

## The Documentation Moment

While deep in infrastructure tuning, I realized the `docs/` folder had become a maze. I reorganized it into logical sections: `agents/`, `dns/`, `plans/`, `setup/`, `troubleshooting/`. Each folder owned its domain. I also created machine-specific configurations under `config/contabo-sm-139/` with complete Traefik, systemd, Mailu, and GitLab settings, then updated `upload-single-machine.sh` to handle deploying these configurations to new servers.

## Here's the Thing About Traefik

Traefik markets itself as the "edge router for microservices"—lightweight, modern, cloud-native. What they don't advertise is that it's deeply opinionated about timing. A single misconfigured timeout cascades through your entire infrastructure. It's not complexity; it's *precision*. Get it right, and everything sings. Get it wrong, and users call you wondering why their downloads time out.

## The Payoff

By the end of the evening, the infrastructure had evolved from single-point-of-failure to a scalable multi-machine setup. New servers could be provisioned with minimal manual intervention. The tunnel management UI gave users visibility and control. Documentation became navigable. Sure, Traefik had taught me a harsh lesson about timeouts, but the system was now robust enough to actually scale. The next phase? Enhanced monitoring, SSO integration, and better observability for network connections. But first—coffee.

😄 **Dev:** "I understand Traefik." **Interviewer:** "At what level?" **Dev:** "StackOverflow tabs open at 3 AM on a Friday level."

Feb 8, 2026
New Feature · C--projects-bot-social-publisher

Traefik's Missing Middleware: Building Resilient Infrastructure

# When Middleware Goes Missing: Fixing Traefik's Silent Dependency Problem The `borisovai-admin` project sits at the intersection of several infrastructure components—Traefik as a reverse proxy, Authelia for authentication, and a management UI layer. Everything works beautifully when all pieces are in place. But what happens when you try to deploy without Authelia? The system collapses with a 502 error, desperately searching for middleware that doesn't exist. The root cause was deceptively simple: the Traefik configuration had a hardcoded reference to `authelia@file` middleware baked directly into the static config. This worked fine in fully-equipped environments, but made the entire setup fragile. The moment Authelia wasn't installed, Traefik would fail immediately because it couldn't locate that middleware. The infrastructure code treated an optional component as mandatory. The fix required rethinking the initialization sequence. The static Traefik configuration was stripped of any hardcoded Authelia references—no middleware definitions that might not exist. Instead, I implemented conditional logic that checks whether Authelia is actually installed. The `configure-traefik.sh` script now evaluates the `AUTHELIA_INSTALLED` environment variable and only connects the Authelia middleware if the conditions are right. This meant coordinating three separate installation scripts to work in harmony. The `install-authelia.sh` script adds the `authelia@file` reference to `config.json` when Authelia is installed. The `configure-traefik.sh` script stays reactive, only including middleware when needed. Finally, `deploy-traefik.sh` double-checks the server state and reinstalls the middleware if necessary. No assumptions. No hardcoded dependencies pretending to be optional. Along the way, I discovered a bonus issue: `install-management-ui.sh` had an incorrect path reference to `mgmt_client_secret`. I fixed that while I was already elbow-deep in configuration. I also removed `authelia.yml` from version control entirely—it's always generated identically by the installation script, so keeping it in git just creates maintenance debt. **Here's something worth knowing about Docker-based infrastructure:** middleware in Traefik isn't just a function call—it's a first-class configuration object that must be explicitly defined before anything can reference it. Traefik enforces this strictly. You cannot reference middleware that doesn't exist. It's like trying to call an unimported function in Python. A simple mistake, but with devastating consequences in production because it translates directly to service unavailability. The final architecture is much more resilient. The system works with Authelia, without it, or with partial deployments. Configuration files don't carry dead weight. Installation scripts actually understand what they're doing instead of blindly expecting everything to exist. This is what happens when you treat optional dependencies as genuinely optional—not just in application code, but throughout the entire infrastructure layer. The lesson sticks: if a component is optional, keep it out of static configuration. Let it be added dynamically when needed, not the other way around. 😄 A guy walks into a DevOps bar and orders a drink. The bartender asks, "What'll it be?" The guy says, "Something that works without dependencies." The bartender replies, "Sorry, we don't serve that here."

Feb 8, 2026
New Feature · borisovai-admin

Building a Unified Auth Layer: Authelia's Multi-Protocol Juggling Act

# Authelia SSO: When One Auth Is Not Enough The borisovai-admin project needed serious authentication overhaul. The challenge wasn't just protecting endpoints—it was creating a unified identity system that could speak multiple authentication languages: ForwardAuth for legacy services, OIDC for modern apps, and session-based auth for fallback scenarios. I had to build this without breaking the existing infrastructure running n8n, Mailu, and the Management UI. **The problem was elegantly simple in theory, brutal in practice.** Each service had its own auth expectations. Traefik wanted middleware that could intercept requests before they hit the app layer. The Management UI needed OIDC support through express-openid-connect. Older services expected ForwardAuth headers. And everything had to converge on a single DNS endpoint: auth.borisovai.ru. I started by writing `install-authelia.sh`—a complete bootstrapping script that handled binary installation, secret generation, systemd service setup, and DNS configuration. This wasn't just about deployment; it was about making the entire system repeatable and maintainable. Next came the critical piece: `authelia.yml`, which I configured as both a ForwardAuth middleware *and* a router pointing the `/tech` path to the Management UI. This dual role became the architectural linchpin. The real complexity emerged in `server.js`, where I implemented OIDC dual-mode authentication. The pattern was elegant: Bearer token checks first, fallback to OIDC token validation through express-openid-connect, and finally session-based auth as the ultimate fallback. It meant requests could be authenticated through three different mechanisms, transparently to the user. The logout flow had to support OIDC redirect semantics across five HTML pages—ensuring that logging out didn't just clear sessions but also hit the identity provider's logout endpoints. **Here's what made this particularly interesting:** Authelia's ForwardAuth protocol doesn't just pass authentication status; it injects special headers into proxied requests. This header-based communication pattern is how Traefik, Mailu, and n8n receive identity information without understanding OIDC or session mechanics. I had to ensure `authelia@file` was correctly injected into the Traefik router definitions in management-ui.yml and n8n.yml. The `configure-traefik.sh` script became the glue—generating clean authelia.yml configurations and injecting the ForwardAuth middleware into service templates. Meanwhile, `install-management-ui.sh` added auto-detection of Authelia's presence and automatically populated the OIDC configuration into config.json. This meant the Management UI could discover its auth provider dynamically. The whole system shipped as part of `install-all.sh`, where INSTALL_AUTHELIA became step 7.5/10—positioned right before applications that depend on it. Testing this required validating that a request through Traefik with ForwardAuth headers, an OIDC bearer token, and a session cookie would all authenticate correctly under different scenarios. **Key lesson:** Building a unified auth system isn't about choosing one pattern—it's about creating translation layers that let legacy and modern systems coexist peacefully. ForwardAuth and OIDC aren't competing; they're complementary when you design the handoff correctly. 😄 My boss asked why Authelia config took so long. I said it was because I had to authenticate with three different protocols just to convince Git that I was the right person to commit the changes.

Feb 8, 2026
New Feature · C--projects-bot-social-publisher

The VPN Disconnected Silently: How I Lost Access to the Release

# When Infrastructure Hides Behind the VPN: The Friday Night Lesson The deadline was Friday evening. The `speech-to-text` project needed its `v1.0.0` release pushed to master, complete with automated build orchestration, package publishing to GitLab Package Registry, and a freshly minted version tag. Standard release procedure, or so I thought—until the entire development infrastructure went radio silent. My first move was instinctive: SSH into the GitLab server at `gitlab.dev.borisovai.tech` to check on **Gitaly**, the service responsible for managing all repository operations on the GitLab backend. The connection hung without response. I tried HTTP next. Nothing. The entire server had vanished from the network as far as I could tell. Panic wasn't helpful here, but confusion was—the kind that forces you to think systematically about what you're actually seeing. Then it clicked. I checked my VPN status. No connection to `10.8.0.x`. The OpenVPN tunnel that bridges my machine to the internal infrastructure at `144.91.108.139` had silently disconnected. Our entire GitLab setup lives behind that wall of security, completely invisible without it. I wasn't dealing with a server failure—I was on the wrong side of the network boundary, and I'd forgotten about it entirely. This is the quiet frustration of modern infrastructure: security layers that work so seamlessly you stop thinking about them, right up until they remind you they exist. The VPN wasn't broken. The server wasn't broken. I'd simply lost connectivity to anything that mattered for my task. **Here's something interesting about Gitaly itself:** it's not just a repository storage service—it's a deliberate architectural separation that GitLab uses to isolate filesystem operations from the main application. When Gitaly goes offline, GitLab can't perform any Git operations at all. It's like cutting the legs off a runner and asking them to sprint. The design choice exists because managing raw Git operations at scale requires careful resource isolation, and Gitaly handles all the heavy lifting while the GitLab web interface stays focused on its job. The fix was mechanical once I understood the problem. Reconnect the OpenVPN tunnel, then execute the release sequence: `git push origin master` to deploy the automation commit, followed by `.\venv\Scripts\python.exe scripts/release.py` to run the release orchestration script. That script would compile the Python application into a standalone EXE, package it as a ZIP archive, upload it to GitLab Package Registry, and create the version tag—all without human intervention. VPN restored, Gitaly came back online, and the release shipped on schedule. The lesson here isn't technical; it's about remembering the invisible infrastructure that underpins your workflow. Before you blame the server, blame the network. Before you blame the network, check your security tunnel. The most complex problems often have the simplest solutions—if you remember to check the obvious stuff first. 😄 Why did the DevOps engineer break up with the database? Because they had too many issues to commit to.
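For the curious, the package-upload step of a release script like this might look roughly as follows, using GitLab's generic package registry endpoint. The project ID, package name, and token handling are placeholders, and the whole thing only works once the VPN tunnel is up.

```python
# Hedged sketch of uploading a release archive to GitLab's generic package
# registry (PUT /projects/:id/packages/generic/:package/:version/:file).
import os
from pathlib import Path
import requests

GITLAB_URL = "https://gitlab.dev.borisovai.tech"   # reachable only over the VPN
PROJECT_ID = "42"                                  # placeholder project id
PACKAGE = "speech-to-text"
VERSION = "1.0.0"

def upload_package(archive: Path) -> None:
    url = (f"{GITLAB_URL}/api/v4/projects/{PROJECT_ID}/packages/generic/"
           f"{PACKAGE}/{VERSION}/{archive.name}")
    with archive.open("rb") as fh:
        resp = requests.put(
            url,
            data=fh,
            headers={"PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"]},
            timeout=600,
        )
    resp.raise_for_status()
    print(f"Uploaded {archive.name} -> {url}")

if __name__ == "__main__":
    upload_package(Path("dist/speech-to-text-1.0.0.zip"))
```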

Feb 8, 2026
New Feature · speech-to-text

VPN Down: When Your Dev Infrastructure Becomes Invisible

# When Infrastructure Goes Silent: A Developer's VPN Wake-Up Call The speech-to-text project was humming along smoothly until I hit a wall that would test my troubleshooting instincts. I was deep in the release automation phase, ready to push the final commit to the master branch and trigger the build pipeline that would generate the EXE, create a distributable ZIP, and publish everything to GitLab Package Registry with a shiny new `v1.0.0` tag. But first, I needed to reach the Gitaly service running on our GitLab server at `gitlab.dev.borisovai.tech`. The problem was immediate and unforgiving: Gitaly wasn't responding. My first instinct was the classic DevOps move—SSH directly into the server and restart it. But SSH didn't even acknowledge my connection attempt. The server simply wasn't there. I pivoted quickly, thinking maybe the HTTP endpoint would still respond, but the entire GitLab instance had gone dark. Something was seriously wrong. Then came the diagnostic moment that changed everything. I realized I was sitting in my usual development environment without something critical: an active VPN connection. Our GitLab infrastructure isn't exposed to the public internet—it's tucked safely behind a VPN tunnel to the server at `144.91.108.139`, assigned a private IP in the `10.8.0.x` range. Without OpenVPN active, the entire development infrastructure was invisible to me, completely isolated. This is actually a brilliant security practice, but it's also one of those gotchas that catches you off guard when you're moving fast. The infrastructure wasn't broken—I was simply on the wrong side of the network boundary. **Here's what fascinated me about this situation:** VPNs sit at an interesting intersection of convenience and friction. They're essential for protecting internal infrastructure, but they introduce a hidden dependency that's easy to forget about, especially when you're context-switching between multiple projects or environments. Many development teams solve this by scripting automatic VPN checks into their CI/CD pipelines or shell startup scripts, but it remains a manual step in many workflows. Once I reconnected to the VPN, everything clicked back into place. The plan was straightforward: execute `git push origin master` to send the release automation commit, then fire up `.\venv\Scripts\python.exe scripts/release.py` to orchestrate the entire release process. The script would handle the heavy lifting—compiling the Python code into an executable, bundling dependencies, creating the distributable archive, and finally pushing everything to our package registry. The lesson here wasn't about the technology failing—it was about environmental assumptions. When debugging infrastructure issues, sometimes the problem isn't in your code, your servers, or your services. It's in the invisible layer that connects them all. A missing VPN connection looks exactly like a catastrophic outage until you remember to check whether you're even on the right network. 😄 Why do DevOps engineers never get lonely? Because they always have a VPN to keep them connected!
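A tiny pre-flight check along those lines, runnable before any push, is sketched below. The host comes from the post; the port choice and everything else are assumptions.

```python
# Before pushing, try to reach the internal GitLab host and fail loudly if the
# tunnel is down, so a missing VPN doesn't masquerade as an outage.
import socket
import sys

HOST = "gitlab.dev.borisovai.tech"   # only reachable over the VPN
PORT = 22                            # SSH; 443 works just as well for this check

def vpn_is_up(host: str = HOST, port: int = PORT, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    if not vpn_is_up():
        print("GitLab is unreachable -- is the OpenVPN tunnel up?")
        sys.exit(1)
    print("VPN looks good, safe to push.")
```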

Feb 8, 2026
New Featuretrend-analisis

When Code Reviewers Spot the Same Bug, Architecture Needs a Rewrite

# Scoring v2: When Two Code Reviewers Agree, You Know You're in Trouble

The task was straightforward on paper: implement a version-aware analysis system for the trend-analysis project with Tavily citations support on the `feat/scoring-v2-tavily-citations` branch. But when both code reviewers independently flagged the **exact same critical issues**, it became clear this wasn't just about adding features—it was about fixing architectural landmines before they exploded in production.

## The Collision Course

The first problem hit immediately: a **race condition in version assignment**. The system was calling `next_version()` independently from `save_analysis()`, which meant two parallel analyses of the same trend could receive identical version numbers. The second INSERT would silently fail, swallowed by a bare `except Exception: pass` block. Both reviewers caught this and independently recommended the same solution: move version generation *inside* the save operation with atomic `INSERT...SELECT MAX(version)+1` logic, wrapped in retry logic for `IntegrityError` exceptions.

But that was just the tip of the iceberg. The second critical flaw involved `next_version()` only counting *completed* analyses. Running analyses? Invisible. A second analysis job launched while the first was still executing would grab the same version number. The fix required reserving versions upfront—treating `status='running'` entries in SQLite as version placeholders from the moment a job starts.

## The Breaking Change Bomb

Then came the surprise: a breaking API change lurking in plain sight. The frontend expected `getAnalysisForTrend` to return a single object, but the backend had morphed it into returning an array. Both reviewers flagged this differently but reached the same conclusion: introduce a new endpoint `getAnalysesForTrend` for the array response while keeping the old one functional.

The TypeScript types were equally broken. The `AnalysisReport` interface lacked `version`, `depth`, `time_horizon`, and `parent_job_id` fields—properties the backend was actively sending but the frontend was discarding into the void. Meanwhile, `parent_job_id` validation was missing entirely (you could pass any UUID), and `depth` had no upper bound (depth=100, anyone?).

## Pydantic as a Safety Net

This is where Pydantic's declarative validation became invaluable. By adding `Field(ge=1, le=7)` constraints to depth and using `Literal` for time horizons, the framework would catch invalid requests at the API boundary before they polluted the database. It's one of Pydantic's underrated superpowers—it transforms validation rules into executable guarantees that live right beside your data definitions, making the contract between client and server explicit and checked on every request.

## What Stayed, What Shifted

The secondary issues were less dramatic but equally important: unlogged exception handling that swallowed errors, pagination logic that broke when grouping results, and `created_at` timestamps that recorded completion time instead of job start time. The developers had to decide: fix everything now, or validate the prototype first and tackle the full refactor together afterward? Both reviewers converged on the critical path: handle race conditions and API compatibility immediately. Ship a working skeleton, then iterate.

---

😄 Programming is like sex. One mistake and you end up supporting it for the rest of your life.
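
To make the two headline fixes concrete, here is a minimal sketch of what they can look like in code. The table, column, model, and horizon names are illustrative assumptions rather than the trend-analysis project's actual schema or types: a Pydantic request model that bounds `depth` and constrains the time horizon, plus a version reservation that computes `MAX(version)+1` inside the INSERT and retries on `IntegrityError`.

```python
# Illustrative sketch of the reviewers' fixes. Table, column, model, and
# horizon names are assumptions, not the project's actual schema or types.
import sqlite3
import uuid
from typing import Literal

from pydantic import BaseModel, Field


class AnalysisRequest(BaseModel):
    """Validate the request at the API boundary before it touches the database."""
    trend_id: str
    depth: int = Field(ge=1, le=7)                     # no more depth=100
    time_horizon: Literal["short", "medium", "long"]   # horizon labels assumed
    parent_job_id: uuid.UUID | None = None             # must at least be a UUID


def reserve_version(conn: sqlite3.Connection, trend_id: str, retries: int = 3) -> int:
    """Atomically reserve the next version for a trend.

    The INSERT computes MAX(version)+1 in the same statement and writes a
    'running' placeholder row, so a parallel job can't grab the same number.
    A UNIQUE(trend_id, version) constraint turns a lost race into an
    IntegrityError, which is simply retried.
    """
    for _ in range(retries):
        try:
            cur = conn.execute(
                """
                INSERT INTO analyses (trend_id, version, status)
                SELECT ?, COALESCE(MAX(version), 0) + 1, 'running'
                FROM analyses WHERE trend_id = ?
                """,
                (trend_id, trend_id),
            )
            conn.commit()
            row = conn.execute(
                "SELECT version FROM analyses WHERE rowid = ?", (cur.lastrowid,)
            ).fetchone()
            return row[0]
        except sqlite3.IntegrityError:
            conn.rollback()
    raise RuntimeError(f"could not reserve a version for trend {trend_id}")
```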

Feb 8, 2026
New FeatureC--projects-bot-social-publisher

Tunnels Behind the UI: How One Navigation Link Exposed Full-Stack Architecture

# Mapping a Tunnel System: When One Navigation Link Unveils an Entire Architecture

The **borisovai-admin** project needed a critical feature: visibility into FRP (Fast Reverse Proxy) tunnels running behind the admin panel. The task seemed deceptively simple—add a navigation link to four HTML pages. But peeling back that single requirement revealed a full-stack implementation that would touch server architecture, create a new dashboard page, and update installation scripts.

## Starting with the Navigation Trap

The first thing I did was update the HTML templates: `index.html`, `tokens.html`, `projects.html`, and `dns.html`. Adding a "Tunnels" link to each felt mechanical—until I realized every page needed *identical* navigation at *exactly* the same line positions (195–238). One typo, one character misaligned, and users would bounce between inconsistent interfaces. That's when I understood: even navigation is an architectural decision, not just UI decoration.

## The Backend Suddenly Mattered

With the frontend signposts in place, the backend needed to deliver. In `server.js`, I created two helper functions that became the foundation for everything that followed. `readFrpsConfig` parses the FRP server's configuration file, while `frpsDashboardRequest` handles secure communication with the FRP dashboard. These weren't just convenience wrappers—they abstracted away HTTP mechanics and created a testable interface.

Then came the endpoints—four GET routes feeding the frontend, among them the FRP server health check (is it alive?), the active tunnels list with metadata about each connection, and the current configuration exposed as JSON. These endpoints are simple on the surface but hide real complexity: they talk to FRP's dashboard API, handle timeouts gracefully, and return data in a shape the frontend expects.

## The Installation Plot Twist

Unexpectedly, I discovered FRP wasn't even installed in the standard deployment. The `install-all.sh` script needed updating. I made FRP an *optional* component—not everyone needs tunneling, but those who do should get a complete stack without manual tinkering. This decision reflected a larger philosophy: the system should be flexible enough for different use cases while remaining cohesive.

## The Dashboard That Refreshes Itself

The new `tunnels.html` page became the visual payoff. A status card shows whether FRP is running. Below it, an active tunnels list updates every 10 seconds using simple polling—no WebSockets needed for this scale. And finally, a client config generator: input your parameters, see your ready-to-deploy `frpc.toml` rendered instantly.

The polling mechanism deserves a note: it's a pattern many developers avoid, but for admin dashboards with small datasets and refresh windows under 10 seconds, it's pragmatic. Fewer moving parts, easier debugging, less infrastructure overhead.

## What the Journey Taught

This work crystallized something important: **small frontend changes often hide large architectural decisions**. Investing an hour in upfront planning—mapping dependencies, identifying abstraction points, planning the endpoint contracts—saved days of integration rework later.

The tunnel system works now. But its real value isn't the feature itself. It's the pattern: frontend navigation drives backend contracts, which drive installation strategy, which feeds back into the frontend experience. That's systems thinking in practice.

😄 Why did the FRP tunnel go to therapy? It had too many *connections* it couldn't handle!
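
To give a flavor of the config generator idea, here is a minimal sketch of the rendering logic in Python. It is not the admin panel's actual implementation: the key names follow the TOML format of recent FRP releases and may differ on other versions, and every value shown is a placeholder.

```python
# Minimal sketch of a frpc.toml generator like the one on the tunnels page.
# Key names follow recent FRP releases' TOML format and may differ by version;
# all values below are placeholders.
def render_frpc_toml(server_addr: str, server_port: int, token: str,
                     name: str, local_port: int, remote_port: int) -> str:
    """Render a ready-to-deploy client config for a single TCP tunnel."""
    return f"""serverAddr = "{server_addr}"
serverPort = {server_port}
auth.token = "{token}"

[[proxies]]
name = "{name}"
type = "tcp"
localIP = "127.0.0.1"
localPort = {local_port}
remotePort = {remote_port}
"""


if __name__ == "__main__":
    print(render_frpc_toml("tunnels.example.com", 7000, "changeme", "admin-panel", 8080, 18080))
```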

Feb 8, 2026
New Featurespeech-to-text

Serving Artifacts from Private Projects Using GitLab Pages

# How GitLab Pages Became a Private Project's Public Window

The speech-to-text project was private—completely locked down on GitLab. But there was a problem: users needed to download built artifacts, and the team wanted a clean distribution channel that didn't require authentication. The challenge was architectural: how do you serve files publicly from a private repository?

The developer started by exploring what GitLab offered. Releases API? Protected by project permissions. Package Registry? Same issue—download tokens required. Then came the realization: **GitLab Pages is public by default, even for private projects**. It's a counterintuitive feature, but it made perfect sense for the use case.

The first step was auditing the current setup. A boilerplate CI pipeline was already pushed to the repository by an earlier orchestrator run, but it wasn't tailored to the actual workflow. The developer pulled the remote configuration, examined it locally, then replaced it with a custom pipeline designed specifically for artifact distribution.

The release process they designed was elegant and automated. The workflow started with a Python script—`scripts/release.py`—that handled the build orchestration. It compiled the project, created a ZIP archive (`VoiceInput-v1.0.0.zip`), uploaded it to GitLab's Package Registry, and pushed a semantic version tag (`v1.0.0`) to trigger the CI pipeline. No manual intervention was needed beyond running one command.

The GitLab CI pipeline then took over automatically when the tag appeared. It downloaded the ZIP from Package Registry, deployed it to GitLab Pages, updated a connected Strapi CMS instance with the new version and download URL, and created a formal GitLab Release. Users could now grab builds from a simple, public URL: `https://tools.public.gitlab.dev.borisovai.tech/speech-to-text/VoiceInput-v1.0.0.zip`.

Security was handled thoughtfully. The CI pipeline needed write access to create releases and update Pages, so a `CI_GITLAB_TOKEN` was added to the project's CI Variables with protection and masking flags enabled—preventing accidental exposure in logs.

**An interesting fact**: GitLab Pages works by uploading static files to a web server tied to your project namespace. Even if the project is private and requires authentication to view source code, the Pages site itself lives on a separate, public domain by design. It's meant for project documentation, but clever teams use it for exactly this—public artifact distribution without exposing the source.

The beauty of this approach was that versioning became self-documenting. Every release left breadcrumbs: a git tag marking the exact source state, a GitLab Release with metadata, and a timestamped artifact on Pages. Future developers could trace any deployed version back to its source.

The developer shipped semantic versioning, a single-command release process, and automatic CI integration—all without modifying the project's core code structure. It was infrastructure-as-code done right: minimal, repeatable, and transparent.

😄 "We finally made our private project public—just not where anyone expected."
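
The note describes the tag-triggered pipeline only at a high level, so here is one possible shape of its deploy step, sketched in Python rather than the project's actual CI configuration. The package name, registry path, and use of `CI_JOB_TOKEN` are assumptions, and the Strapi update is omitted.

```python
# Sketch of the tag-triggered deploy step (one possible implementation, not the
# project's actual pipeline): pull the ZIP from the generic Package Registry,
# stage it into public/ for GitLab Pages, and create a Release linking to it.
# Package name and registry path layout are assumptions.
import os
import pathlib

import requests

api = os.environ["CI_API_V4_URL"]          # e.g. https://gitlab.dev.borisovai.tech/api/v4
project = os.environ["CI_PROJECT_ID"]
tag = os.environ["CI_COMMIT_TAG"]          # e.g. v1.0.0
auth = {"JOB-TOKEN": os.environ["CI_JOB_TOKEN"]}
archive = f"VoiceInput-{tag}.zip"

# 1. Download the artifact uploaded earlier by scripts/release.py.
pkg_url = f"{api}/projects/{project}/packages/generic/voice-input/{tag.lstrip('v')}/{archive}"
resp = requests.get(pkg_url, headers=auth)
resp.raise_for_status()

# 2. Stage it for GitLab Pages: everything under public/ gets published.
public = pathlib.Path("public")
public.mkdir(exist_ok=True)
(public / archive).write_bytes(resp.content)

# 3. Create a formal GitLab Release that points at the public download URL.
pages_url = f"https://tools.public.gitlab.dev.borisovai.tech/speech-to-text/{archive}"
release = requests.post(
    f"{api}/projects/{project}/releases",
    headers=auth,
    json={
        "tag_name": tag,
        "name": f"VoiceInput {tag}",
        "assets": {"links": [{"name": archive, "url": pages_url}]},
    },
)
release.raise_for_status()
```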

Feb 8, 2026