Blog
Posts about the development process, problems solved, and technologies learned along the way
From Papers to Patterns: Building an AI Research Trend Analyzer
# Building a Trend Analyzer: Mining AI Research Breakthroughs from ArXiv The task landed on my desk on a Tuesday: analyze the "test SSE progress" trend across recent arXiv papers and build a **scoring-v2-tavily-citations** system that could surface the most impactful research directions. I was working on the `feat/scoring-v2-tavily-citations` branch of our trend-analysis project, tasked with turning raw paper metadata into actionable insights about where AI development was heading. Here's what made this interesting: the raw data wasn't just a list of papers. It was a complex landscape spanning five distinct research zones—multimodal LLMs, 3D computer vision, diffusion models, reinforcement learning, and industrial automation. My job was to synthesize these scattered signals into a coherent narrative about the field's momentum. **The first thing I did was map the territories.** I realized that many papers didn't live in isolation—papers on "SwimBird" (switchable reasoning modes in hybrid MLLMs) connected directly to "Thinking with Geometry," which itself relied on spatial reasoning principles. The key insight was that inference optimization and geometric priors weren't just separate concerns; they were becoming the foundation for next-generation reasoning systems. So instead of scoring papers individually, I needed to build a *connection graph* that revealed how research clusters amplified each other's impact. Unexpectedly, the most important zone wasn't the one getting the most citations. The industrial automation cluster—real-time friction force estimation in hydraulic cylinders—seemed niche at first. But when I traced the dependencies, I discovered that the hybrid data-driven algorithms powering predictive maintenance in construction equipment relied on the same ML principles being researched in academic labs. The connection was real: AI safety and model interpretability work at the frontier was directly improving reliability in heavy machinery. The challenge was deciding which scoring signals mattered most. Tavily citations gave me structured data, but raw citation counts favored established researchers over emerging trends. So I weighted the scoring toward *novelty density*—papers that introduced genuinely new concepts alongside strong empirical results got higher marks. Papers in the "sub-zones" like AR/VR and robotics applications got boosted because they represented the bridge between theory and real-world impact. By the end, the system was surfacing papers I wouldn't have spotted with traditional metrics. "SAGE: Benchmarking and Improving Retrieval for Deep Research Agents" ranked high not just because it had strong citations, but because it represented a convergence point—better retrieval meant better research agents, which accelerated discovery across every other zone. The lesson stuck with me: **trends aren't linear progressions; they're ecosystems.** The papers that matter most are the ones creating network effects across disciplines. Four engineers get into a car. The car won't start. The mechanical engineer says "It's a broken starter." The electrical engineer says "Dead battery." The chemical engineer says "Impurities in the gasoline." The IT engineer says "Hey guys, I have an idea: how about we all get out of the car and get back in?"
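The post doesn't give the actual formula, so here is a purely illustrative Python sketch of the kind of novelty-weighted scoring described above; the field names, weights, and the `novelty_density` signal are assumptions, not the project's real implementation.

```python
from dataclasses import dataclass

@dataclass
class Paper:
    title: str
    citations: int          # e.g. counts gathered via Tavily
    novelty_density: float  # 0..1, share of genuinely new concepts (hypothetical signal)
    applied_subzone: bool   # AR/VR, robotics and similar bridge areas

def trend_score(p: Paper, w_citations: float = 0.3,
                w_novelty: float = 0.6, applied_boost: float = 0.1) -> float:
    """Toy scoring: dampen raw citations, emphasize novelty, boost applied sub-zones."""
    citation_signal = min(p.citations, 100) / 100  # cap so established names don't dominate
    score = w_citations * citation_signal + w_novelty * p.novelty_density
    if p.applied_subzone:
        score += applied_boost
    return round(score, 3)

print(trend_score(Paper("SAGE", citations=40, novelty_density=0.8, applied_subzone=False)))
```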
Raw F-Strings and Regex Quantifiers: A Silent Killer
# F-Strings and Regex: The Trap That Breaks Pattern Matching I was deep in the trenches of the `trend-analysis` project, implementing **Server-Sent Events for real-time streaming** on the `feat/scoring-v2-tavily-citations` branch. The goal was elegant: as the backend analyzed trends, each step would flow to the client instantly, giving users live visibility into the scoring process. The architecture felt solid. The Python backend was configured. The SSE endpoints were ready. So why wasn't anything working? I spun up a quick test analysis and watched the stream. Data came through, but something was off—the format was corrupted, patterns weren't matching, and the entire pipeline was silently failing. My first instinct pointed to encoding chaos courtesy of Windows terminals, but the deeper I dug into the logs, the stranger things got. Then I found it: **a single f-string that was quietly destroying everything**. Buried in my regex pattern, I'd written `rf'...'`—a raw f-string for handling regular expressions. Seems innocent, right? Raw strings preserve everything literally. Except they don't, not entirely. Inside that f-string sat a regex quantifier: `{1,4}`. The problem? **Python looked at those braces and thought they were f-string variable interpolation syntax**, not regex metacharacters. The curly braces triggered Python's expression parsing, the regex failed to compile, and the entire matching logic collapsed. The fix was almost comical in its simplicity: `{{1,4}}` instead of `{1,4}`. Double the braces. When you're building raw f-strings containing regex patterns, Python's f-string parser still processes the delimiters—you need to escape them to tell the interpreter "these braces are literal, not interpolation." It's a subtle gotcha that even catches experienced developers because the `r` prefix creates this false sense of safety. Once that was fixed, the SSE stream started flowing properly. Data reached the client intact. But I noticed another issue during testing: most of the analysis step labels were still in English while the UI demanded Russian. The interface needed localization consistency. I mapped the main headers—every label describing the analysis stages—to their Russian equivalents in the translation dictionary. Only "Stats" slipped through initially, which I caught and corrected immediately. **The deeper lesson here**: f-strings revolutionized string formatting when they arrived in Python 3.6, but they're a minefield when combined with regex patterns. Many developers sidestep this entirely by using regular strings and passing regex patterns separately—less elegant, but it saves hours of debugging. After the final reload, the SSE stream worked flawlessly. Data flowed, the interface was fully Russian-localized, and the scoring pipeline was solid. The branch was ready to move forward. What started as a mysterious streaming failure turned into a masterclass in how syntactic sugar can hide the sharpest thorns. 😄 Turns out, f-strings and regex quantifiers have about as much chemistry as a Windows terminal and UTF-8.
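A minimal reproduction of the trap described above, assuming a pattern along these lines (the post doesn't show the actual pattern, and the exact failure mode depends on what sits inside the braces):

```python
import re

year = r"\d"  # hypothetical sub-pattern we actually want interpolated

# Buggy: Python evaluates {1,4} as the tuple (1, 4) and interpolates its str(),
# so the quantifier never reaches the regex engine as written.
buggy = rf"{year}{1,4}"
print(buggy)                        # \d(1, 4)
print(re.fullmatch(buggy, "2024"))  # None

# Fixed: doubled braces survive as literal regex metacharacters.
fixed = rf"{year}{{1,4}}"
print(fixed)                        # \d{1,4}
print(re.fullmatch(fixed, "2024"))  # <re.Match object; span=(0, 4), match='2024'>
```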
F-Strings and Regex: A Debugging Tale
# Debugging SSE Streams: When Python's F-Strings Fight Back The task was straightforward—implement real-time streaming for the trend analysis engine. Our `trend-analysis` project needed to push scoring updates to the client as they happened, and Server-Sent Events seemed like the perfect fit. Server running, tests queued up, confidence high. Then reality hit. I'd built the SSE endpoint to stream analysis steps back to the browser, each update containing a progress message and metrics. The backend was spitting out data, the client was supposedly receiving it, but somewhere in that pipeline, something was getting mangled. **The streaming wasn't working properly**, and I needed to figure out why before moving forward on the `feat/scoring-v2-tavily-citations` branch. First thing I did was fire up a quick analysis and watch the SSE stream directly. The console showed nothing meaningful. Data was flowing, but the format was wrong. My initial thought: encoding issue. Windows terminals love to mangle UTF-8 text, showing garbled characters where readable text should be. But this felt different. Then I spotted the culprit—hidden in plain sight in an f-string: `rf'...'`. Those raw f-strings are dangerous when you're building regex patterns. Inside that f-string lived a regex quantifier: `{1,4}`. **Python saw those braces and thought they were variable interpolation syntax**, not regex metacharacters. The curly braces got interpreted as a Python expression, causing the regex to fail silently and the entire pattern matching to break down. The fix was embarrassingly simple: double the braces. `{{1,4}}` instead of `{1,4}`. When you're building raw f-strings that contain regex, the Python parser still processes the braces, so you need to escape them. It's one of those gotchas that catches experienced developers because it *looks* right—raw strings are supposed to preserve everything literally, right? Not quite. The `f` part still does its job. While debugging, I also noticed all the analysis step labels needed to be in Russian for consistency with the UI. The main headings—all of them—got mapped to their Russian equivalents. Only "Stats" remained untranslated, so I added it to the localization map too. After the restart and a fresh verification run, the console confirmed everything was now properly internationalized. **The lesson here is subtle but important**: raw f-strings (`rf'...'`) are not truly "raw" in the way that raw strings alone are. They're still processed for variable interpolation at the braces level. If your regex or string literal contains regex quantifiers or other brace-based syntax, you need to escape those braces with doubling. It's a trap because the intent seems clear—you wanted raw, you got raw—but Python's parser is more sophisticated than it appears. Restart successful. Tests passing. The SSE stream now flows cleanly to the client, each analysis step arriving with proper formatting and localized labels. The trend scorer is ready for the next phase. 😄 How did the programmer die in the shower? He read the shampoo bottle instructions: Lather. Rinse. Repeat.
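If the escaping rule feels error-prone, one alternative (a sketch with made-up names) is to keep regex braces away from the f-string entirely, either by escaping them or by concatenating a plain raw string:

```python
import re

step_label = "Scoring"  # hypothetical value that genuinely needs interpolation

# Option 1: escape the regex braces inside the raw f-string.
pattern_a = rf"^{re.escape(step_label)}\s+\d{{1,4}}$"

# Option 2: keep the quantifier in a plain raw string so the f-string
# never sees regex braces at all.
pattern_b = rf"^{re.escape(step_label)}\s+" + r"\d{1,4}$"

for pattern in (pattern_a, pattern_b):
    assert re.match(pattern, "Scoring 42")
```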
When Legacy Code Meets New Architecture: A Debugging Journey
# Debugging the Invisible: When Headings Break the Data Pipeline The `trend-analysis` project was humming along nicely—until it wasn't. The issue? A critical function called `_fix_headings` was supposed to normalize heading structures in parsed content, but nobody was entirely sure if it was actually working. Welcome to the kind of debugging session that makes developers question their life choices. The task seemed straightforward enough: test the `_fix_headings` function in isolation to verify its behavior. But as I dug deeper, I discovered the real problem wasn't the function itself—it was the entire data flow architecture built around it. Here's where things got interesting. The team had recently refactored how the application tracked progress and streamed results back to users. Instead of maintaining a simple dictionary of progress states, they'd switched to an event-based queue system. Smart move for concurrency, terrible for legacy code that still expected the old flat structure. I found references scattered throughout the codebase—old `_progress` variable calls that hadn't been migrated to the new `_progress_events` queue system. The SSE generator that streamed progress updates was reading from a defunct data structure. The endpoint that pulled the latest progress for running jobs was trying to access a dictionary like it was still 2023. These weren't just minor oversights; they were hidden landmines waiting to explode in production. I systematically went through the codebase, hunting down every lingering reference to the old `_progress` pattern. Each one needed updating to either read from the queue or properly consume the event stream. Line 661 was particularly suspicious—still using the old naming convention while everything else had moved on. The endpoint logic required a different approach entirely: instead of a single lookup, it needed to extract the most recent event from the queue. After updating all references and ensuring consistency across the SSE generator and event consumption logic, I restarted the server and ran a full test cycle. The `_fix_headings` function worked perfectly once the surrounding infrastructure was actually feeding it the right data. **The Educational Bit:** This is a classic example of why event-driven architectures, while powerful for handling concurrency and real-time updates, require meticulous refactoring when replacing older state management patterns. The gap between "we changed the internal structure" and "we updated all the consumers" is where bugs hide. Many teams use feature flags or gradual rollouts to handle these transitions—run the old and new systems in parallel until you're confident everything's migrated. The real win here wasn't fixing a single function—it was discovering and eliminating an entire class of potential failures. Sometimes the best debugging isn't about finding what's broken; it's about ensuring your refactoring is actually complete. Next up? Tavily citation integration testing, now that the data pipeline is trustworthy again. 😄 Why did the developer go to therapy? Because their function had too many issues to debug—*and* the queue was too deep to process!
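A minimal sketch of the queue-based pattern the refactor moved to; class and method names are illustrative, not the project's actual code:

```python
import asyncio
import json

class AnalysisJob:
    """Producers push progress events onto a queue; the SSE generator drains it,
    and the most recent event is kept separately for status lookups."""

    def __init__(self) -> None:
        self._progress_events: asyncio.Queue = asyncio.Queue()
        self._last_event: dict | None = None

    async def report(self, step: str, percent: int) -> None:
        event = {"step": step, "percent": percent}
        self._last_event = event               # replaces the old flat progress dict
        await self._progress_events.put(event)

    async def sse_stream(self):
        # Async generator for the SSE endpoint: one data frame per event.
        while True:
            event = await self._progress_events.get()
            yield f"data: {json.dumps(event, ensure_ascii=False)}\n\n"
            if event["percent"] >= 100:
                return

    def latest(self) -> dict | None:
        # What the old dictionary lookup becomes for running jobs.
        return self._last_event
```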
When Certificates Hide in Plain Sight: A Traefik Mystery
# Traefik's Memory Games: Hunting Invisible Certificate Ghosts The **borisovai-admin** project was experiencing a mysterious failure: HTTPS connections were being rejected, browsers were screaming about invalid certificates, and users couldn't access the system. On the surface, the diagnosis seemed straightforward—SSL certificate misconfiguration. But what unfolded was a lesson in asynchronous systems and how infrastructure actually works in the real world. The task was to verify that Traefik had successfully obtained and was serving four Let's Encrypt certificates across admin and auth subdomains on both `.tech` and `.ru` TLDs. The complication: DNS records for the `.ru` domains had just finished propagating to the server, and the team needed confirmation that the ACME challenge validation had completed successfully. My first instinct was to examine `acme.json`, Traefik's certificate cache file. Opening it revealed something unexpected: all four certificates were actually there. Not only present, but completely valid. The `admin.borisovai.tech` certificate was issued by Let's Encrypt R12 on February 4th with expiration in May. Everything looked pristine from a certificate standpoint. But here's where the investigation got interesting. The Traefik logs were absolutely filled with validation errors and failures. For a moment, I had a contradiction on my hands: valid certificates in the cache, yet error messages suggesting the opposite. This shouldn't have been possible. Then it clicked. Those error logs weren't describing current failures—they were **historical artifacts**. They dated back to when DNS propagation was still in progress, when Let's Encrypt couldn't validate domain ownership because the DNS records weren't consistently pointing to the right place yet. Traefik had tried the ACME challenges, failed, retried, and eventually succeeded once DNS stabilized. The logs were just a record of that journey. This revealed something important about ACME systems that often goes unmentioned: they're built with resilience in mind. Let's Encrypt doesn't give up after a single failed validation attempt. Instead, it queues retries and automatically succeeds once the underlying infrastructure catches up. The system is designed for exactly this scenario—temporary DNS inconsistencies. The real culprit wasn't the certificates or Traefik's configuration. It was **browser DNS caching**. Client machines had cached the old, pre-propagation DNS records and stubbornly refused to forget them. The fix was simple: running `ipconfig /flushdns` on Windows or opening an incognito window to bypass the stale cache. The infrastructure had actually been working perfectly the entire time. The phantom errors were just ghosts of failed attempts from minutes earlier, and the browsers were living in the past. The next phase involves configuring Authelia to enforce proper access control policies on these freshly-validated endpoints—but at least now we know the foundation is solid. Sometimes the best debugging comes not from fixing something broken, but from realizing it was never actually broken to begin with. What's the best prefix for global variables? `window.` 😄
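If you want to bypass the browser and its DNS cache entirely, a small check against the live endpoint shows which certificate Traefik is actually serving; the hostnames come from the post, the rest is a generic sketch using Python's standard library:

```python
import socket
import ssl

def served_cert_summary(host: str, port: int = 443) -> dict:
    """Connect to the server and report the certificate it presents
    (raises ssl.SSLCertVerificationError if the chain doesn't validate)."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return {"subject": cert.get("subject"),
            "issuer": cert.get("issuer"),
            "notAfter": cert.get("notAfter")}

for name in ("admin.borisovai.tech", "auth.borisovai.tech"):
    print(name, served_cert_summary(name))
```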
SSL Ghosts: When Certificates Are There But Everything Still Burns
# Hunting Ghosts in the SSL Certificate Chain The borisovai-admin project was silently screaming. HTTPS connections were failing, browsers were throwing certificate errors, and the culprit seemed obvious: SSL certificates. But the real investigation turned out to be far more interesting than a simple "cert expired" scenario. The task was straightforward on the surface—verify that Traefik had actually obtained and was serving the four Let's Encrypt certificates for the admin and auth subdomains across both .tech and .ru TLDs. What made this a detective story was the timing: DNS records for the .ru domains had just propagated to the server, and the team needed to confirm that Traefik's ACME client had successfully validated the challenges and fetched the certificates. First, I checked the acme.json file where Traefik stores its certificate cache. Opening it revealed all four certificates were there—present and accounted for. The suspicious part? The Traefik logs were full of validation errors. For a moment, it looked like the certificates existed but weren't being served correctly. Here's where the investigation got interesting. Diving deeper into the certificate details, I found that all four certs were actually **valid and being served properly**: `admin.borisovai.tech` and `admin.borisovai.ru` (both issued by Let's Encrypt R12), `auth.borisovai.tech` (R13), and `auth.borisovai.ru` (R12). The expiration dates were solid—everything valid through May. The error logs suddenly made sense: those validation failures in Traefik weren't current failures, they were **historical artifacts from before DNS propagation completed**. Traefik had attempted ACME challenges multiple times while DNS was still resolving inconsistently, failed, retried, and then succeeded once DNS finally stabilized. The real lesson here is that ACME systems are resilient by design. Let's Encrypt's challenge system doesn't just give up after one failed validation—it queues retries, and once DNS finally points to the right place, everything resolves automatically. The certificates were obtained successfully; the logs were just recording the journey to get there. For anyone debugging similar issues in a browser, the solution is refreshing the local DNS cache rather than diving into logs. Running `ipconfig /flushdns` on Windows or opening an incognito window often reveals that the infrastructure was actually fine all along—just the client's stale cache creating phantom problems. The next phase involves reviewing the Authelia installation script to ensure access control policies are properly configured for these freshly validated endpoints. The certificates were just act one of the security theater. How do you know God is a shitty programmer? He wrote the OS for an entire universe but didn't leave a single useful comment.
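To see what Traefik has already obtained without involving a browser at all, you can read acme.json directly; this sketch assumes the Traefik v2+ layout (one top-level key per certificate resolver, each holding a `Certificates` list) and a hypothetical file path:

```python
import json

ACME_PATH = "/etc/traefik/acme.json"  # adjust to your deployment

def list_acme_domains(path: str = ACME_PATH) -> None:
    """Print which hostnames each resolver holds certificates for."""
    with open(path) as fh:
        data = json.load(fh)
    for resolver, payload in data.items():
        for entry in payload.get("Certificates") or []:
            domain = entry.get("domain", {})
            names = [domain.get("main")] + (domain.get("sans") or [])
            has_cert = bool(entry.get("certificate"))
            print(f"{resolver}: {', '.join(filter(None, names))} "
                  f"(certificate stored: {has_cert})")

list_acme_domains()
```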
Double Authentication Blues: When Security Layers Collide
# Untangling the Auth Maze: When Two Security Layers Fight Back The Management UI for borisovai-admin was finally running, but something felt off. It started during testing—users would get redirected once, then redirected again, bouncing between authentication systems like a pinball. The task seemed simple on the surface: set up a proper admin interface with authentication. The reality? Two security mechanisms were stepping on each other's toes, and I had to figure out which one to keep. Here's what was happening under the hood. The infrastructure was already protected by **Traefik with ForwardAuth**, delegating all authentication decisions to **Authelia** running at the edge. This is solid—it means every request hitting the admin endpoint gets validated at the proxy level before it even reaches the application. But then I added **express-openid-connect** (OIDC) directly into the Management UI itself, thinking it would provide additional security. Instead, it created a cascade: ForwardAuth would redirect to Authelia, users would complete two-factor authentication, and then the Management UI would immediately redirect them again to complete OIDC. Two separate auth flows were fighting for control. The decision was straightforward once I understood the architecture: **remove the redundant OIDC layer**. Traefik's ForwardAuth already handles the heavy lifting—validating sessions, enforcing 2FA through Authelia, and protecting the entire admin surface. Adding OIDC on top was security theater, not defense in depth. So I disabled express-openid-connect and fell back to a simpler authentication model: legacy session-based login handled directly by the Management UI itself, sitting safely behind Traefik's protective barrier. Now the flow is clean. Users hit `https://admin.borisovai.tech`, Traefik intercepts the request, ForwardAuth redirects them to Authelia if their session is invalid, they complete 2FA, and then—crucially, only then—they're allowed to access the Management UI login page where standard credentials do the final validation. But while testing this, I discovered another issue lurking in the DNS layer. The `.ru` domain records for `admin.borisovai.ru` and `auth.borisovai.ru` were never added to the registrar's control panel at IHC. Let's Encrypt can't issue SSL certificates without verifying DNS A-records, and it can't verify what doesn't exist. The fix requires adding those A-records pointing to `144.91.108.139` through the IHC panel—a reminder that infrastructure security lives in multiple layers, and each one matters. This whole experience reinforced something important: **sometimes security elegance means knowing what NOT to add**. Every authentication layer you introduce is another surface for bugs, configuration conflicts, and user friction. The best security architecture is often the simplest one that still solves the problem. In this case, that meant trusting Traefik and Authelia to do their job, and letting the Management UI focus on what it does best. `// This line doesn't actually do anything, but the code stops working when I delete it.`
DNS Negative Caching: Why Your Resolver Forgets Good News
# DNS Cache Wars: When Your Resolver Lies to You The borisovai-admin project was running smoothly until authentication stopped working—but only for certain people and only sometimes. That's the kind of bug that makes your debugging instincts scream. The team had recently added DNS records for `auth.borisovai.tech`, pointing everything to `144.91.108.139`. The registrar showed the records. Google DNS resolved them instantly. But AdGuard DNS—the resolver configured across their infrastructure—kept returning NXDOMAIN errors as if the domains didn't exist at all. The investigation started with a simple question: *Which resolver is lying?* I ran parallel DNS queries from my machine against both Google DNS (`8.8.8.8`) and AdGuard DNS (`94.140.14.14`). Google immediately returned the correct IP. AdGuard? Dead silence. Yet here's the weird part: `admin.borisovai.tech` resolved perfectly on both resolvers. Same domain, same registrar, same server—but `auth.*` was invisible to AdGuard. That inconsistency was the clue. The culprit was **negative DNS caching**, one of those infrastructure gotchas that catches everyone eventually. Here's what happened: before the authentication records were added to the registrar, someone (or some automated system) had queried for `auth.borisovai.tech`. It didn't exist, so AdGuard's resolver cached that negative response—the "NXDOMAIN" answer—with a TTL of around 3600 seconds. Even after the DNS records went live upstream, AdGuard was still serving the stale cached result. The resolver was confidently telling clients "that domain doesn't exist" because its cache said so, and caches are treated as trusted sources of truth. The immediate fix was straightforward: flush the local DNS cache on affected machines using `ipconfig /flushdns` on Windows. But that only solves the symptom. The real lesson was about DNS architecture itself. Different public resolvers use different caching strategies. Google's DNS aggressively refreshes and validates records. AdGuard takes a more conservative approach, trusting its cache longer. When you're managing infrastructure across multiple networks and resolvers, these differences matter. The temporary workaround was switching to Google DNS for testing while waiting for AdGuard's negative cache to expire naturally—usually within the hour. For future deployments, the team learned to check new DNS records across multiple resolvers before declaring victory and to always account for the possibility that somewhere in your infrastructure, a resolver is still confidently serving yesterday's answer. It's a reminder that DNS, despite being one of the internet's most fundamental systems, remains surprisingly Byzantine. Trust, but verify. Especially across multiple resolvers. Got a really good UDP joke to tell you, but I don't know if you'll get it 😄
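A quick way to see the disagreement for yourself is to ask each resolver the same question; a sketch using the dnspython package, with the resolver IPs from the post:

```python
import dns.resolver  # pip install dnspython

RESOLVERS = {"Google": "8.8.8.8", "AdGuard": "94.140.14.14"}

def check(name: str) -> None:
    """Query the same name against several resolvers to spot stale caches."""
    for label, server in RESOLVERS.items():
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [server]
        resolver.lifetime = 3.0
        try:
            answer = resolver.resolve(name, "A")
            ips = ", ".join(rr.address for rr in answer)
            print(f"{label:8} {name} -> {ips}")
        except dns.resolver.NXDOMAIN:
            print(f"{label:8} {name} -> NXDOMAIN (possibly a cached negative answer)")
        except Exception as exc:
            print(f"{label:8} {name} -> error: {exc}")

check("auth.borisovai.tech")
```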
Stale DNS Caching: Why AdGuard Refused to See New Records
# DNS Cache Wars: When AdGuard DNS Holds Onto the Past The borisovai-admin project was running smoothly until authentication stopped working in production. The team had recently added new DNS records for `auth.borisovai.tech` and `auth.borisovai.ru`, pointing to the server at `144.91.108.139`. Everything looked correct on paper—the registrars showed the records, Google's public DNS resolved them instantly. But AdGuard DNS, the resolver configured in their infrastructure, kept returning NXDOMAIN errors as if the records didn't exist. The detective work started with a DNS audit. I ran queries against multiple resolvers to understand what was happening. Google DNS (`8.8.8.8`) immediately returned the correct IP address for both authentication domains. AdGuard DNS (`94.140.14.14`), however, flat-out refused to resolve them. Meanwhile, the `admin.borisovai.tech` domain resolved fine on both services. The pattern was clear: something was wrong, but only for the authentication subdomains and only through one resolver. The culprit was **stale negative caching**—not malicious cache poisoning, but equally frustrating. AdGuard DNS was holding onto old NXDOMAIN responses from before the records were created. When the DNS entries were first added to the registrar, AdGuard had already cached a negative response saying "these domains don't exist." Even though the records now existed upstream, AdGuard was serving stale cached data, trusting its own memory more than reality. This is a common scenario in distributed DNS systems. When a domain doesn't exist, DNS servers cache that negative result with a TTL (Time To Live), often defaulting to an hour or more. If new records are added during that window, clients querying that caching resolver won't see them until the cached NXDOMAIN expires. The immediate fix was simple: flush the local DNS cache with `ipconfig /flushdns` on Windows clients to clear stale entries. For a more permanent solution, we needed to either wait for AdGuard's cache to naturally expire (usually within an hour) or temporarily switch to Google DNS by manually setting `8.8.8.8` in network settings. The team chose to switch DNS servers while propagation completed—a pragmatic decision that got authentication working immediately without waiting. What seemed like a mysterious resolution failure turned out to be a textbook case of DNS cache semantics. The lesson: when DNS behaves unexpectedly, check multiple resolvers. Different caching strategies and update schedules mean that not all DNS services see the internet identically, especially during transitions. 😄 The generation of random DNS responses is too important to be left to chance.
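How long a resolver is allowed to keep serving that stale NXDOMAIN is bounded by the zone's SOA record (per RFC 2308, negative answers are cached for at most the smaller of the SOA record's TTL and its minimum field); a small dnspython sketch to check it:

```python
import dns.resolver  # pip install dnspython

def negative_cache_ttl(zone: str) -> None:
    """Show how long resolvers may cache an NXDOMAIN answer for this zone."""
    answer = dns.resolver.resolve(zone, "SOA")
    soa = answer[0]
    ttl = answer.rrset.ttl
    print(f"{zone}: SOA ttl={ttl}s, minimum={soa.minimum}s "
          f"-> negative answers cached up to {min(ttl, soa.minimum)}s")

negative_cache_ttl("borisovai.tech")
```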
DNS Resolution Chaos: Why Some Subdomains Vanish While Others Thrive
# DNS Mysteries: When One Subdomain Works and Others Vanish The `borisovai-admin` project was running smoothly on the main branch, but there was a catch—a frustrating one. `admin.borisovai.tech` was responding perfectly, resolving to `144.91.108.139` without a hitch. But `auth.borisovai.tech` and `auth.borisovai.ru`? They had simply disappeared from the internet. The task seemed straightforward: figure out why the authentication subdomains weren't resolving while the admin panel was working fine. This kind of infrastructure puzzle can turn into a time sink fast, so I needed a systematic approach. **First, I checked the DNS records directly.** I queried the DNS API expecting to find `auth.*` entries sitting quietly in the database. Instead, I found an empty `records` array—nothing. No automatic creation of these subdomains meant something in the provisioning logic had fallen through the cracks. The natural question followed: if `auth.*` records aren't in the API, how is `admin.borisovai.tech` even working? **The investigation took an unexpected turn.** I pulled out Google DNS (8.8.8.8) as my truth source and ran a resolution check. Suddenly, `auth.borisovai.tech` resolved successfully to the same IP address: `144.91.108.139`. So the records *existed* somewhere, but not where I was looking. This suggested the DNS configuration was either managed directly at the registrar level or there was a secondary resolution path I hadn't accounted for. **Then came the real discovery.** When I tested against AdGuard DNS (94.140.14.14)—the system my local environment was using—the `auth.*` records simply didn't exist. This wasn't a global DNS failure; it was a caching or visibility issue specific to certain DNS resolvers. The AdGuard resolver wasn't seeing records that Google's public DNS could find immediately. I ran the same check on `auth.borisovai.ru` and confirmed the pattern held. Both subdomains were missing from the local DNS perspective but present when querying through public resolvers. This pointed to either a DNS propagation delay, a misconfiguration in the AdGuard setup, or records that were registered at the registrar but not properly distributed to all nameservers. **Here's an interesting fact about DNS that caught me this time:** DNS resolution isn't instantaneous across all servers. Different DNS resolvers maintain separate caches and query different authoritative nameservers. When you change DNS records, large providers like Google refresh their caches quickly, but smaller or regional DNS services might keep serving stale answers for hours. AdGuard, while excellent for ad-blocking, might not refresh records from the authoritative nameservers as aggressively as Google's public DNS, creating visibility gaps. The fix required checking the registrar configuration and ensuring that `auth.*` records were properly propagated through all authoritative nameservers, not just cached by some resolvers. It's a reminder that DNS is often the last place developers look when something breaks—but it should probably be the first. --- 😄 Why did the DNS administrator break up with their partner? They couldn't handle all the unresolved entries in their relationship.
QR Code Gone: Authelia's Silent Fallback Mode Revealed
# When Your QR Code Hides in Plain Sight: The Authelia Debug That Saved the Day The **borisovai-admin** project needed two-factor authentication, and Authelia seemed like the perfect fit. The deployment went smoothly—containers running, certificates in place, configuration validated. Then came the critical test: click "Register device" to enable TOTP, and a QR code should appear. Instead, the browser displayed nothing but an empty void. I started in the obvious places. Browser console? Clean. Authelia logs? No errors screaming for attention. API responses? All successful HTTP codes. The registration endpoint was processing requests flawlessly, generating tokens, doing exactly what it should—yet somehow, no QR code materialized on screen. The system was working perfectly while simultaneously failing completely. Thirty minutes into chasing ghosts through log files and configuration documents, something clicked. I noticed a single line that had been hiding in plain sight: **`notifier: filesystem`**. That innocent parameter changed everything. The story behind this configuration is deceptively simple. When Authelia is deployed without email notifications properly configured, it doesn't crash or loudly complain. Instead, it shifts gracefully to a fallback mode designed for local development. Rather than sending registration links via SMTP, SendGrid, or any external service, it writes them directly to the server's filesystem. From Authelia's perspective, the job is done perfectly—the registration URL is generated, secured with a cryptographic token, and safely stored in `/var/lib/authelia/notifications.txt`. From the user's perspective, they're staring at a blank screen. The fix required thinking sideways. Instead of expecting Authelia to magically display the QR code through some non-existent UI mechanism, I needed to retrieve the notification directly from the server. A single SSH command revealed everything: `cat /var/lib/authelia/notifications.txt`. There it was—the full registration URL with the token embedded. I opened it in a browser, and suddenly the QR code materialized. Scan it with Google Authenticator, and the entire flow worked perfectly. **Here's what made this moment instructive:** Authelia's design isn't a bug or a limitation—it's a deliberate choice for development environments. The `filesystem` notifier eliminates the need to configure SMTP servers, manage API credentials for email services, or spin up complex testing infrastructure. It's honest about what it's doing. The real lesson is that **configuration choices have invisible consequences**. A setting that makes perfect sense for development creates silent failures in testing. The system works flawlessly; the alignment between system behavior and user expectations simply vanishes. The fix was immediate—reconfigure the notifier to use proper email or document the behavior clearly. Either way, the next developer wouldn't need to hunt QR codes through the filesystem like digital treasure maps. --- A programmer puts two glasses on his bedside table before going to sleep: a full one in case he gets thirsty, and an empty one in case he doesn't. 😄
From 83.7% to 85%: Architecture and Optimizer Choices Matter
# Chasing That Last 1.3%: When Model Architecture Meets Optimizer Reality The CIFAR-10 accuracy sat stubbornly at 83.7%, just 1.3 percentage points shy of the 85% target. I was deep in the `llm-analysis` project, staring at the training curves with that peculiar frustration only machine learning developers understand—so close, yet somehow impossibly far. The diagnosis was clear: the convolutional backbone needed more capacity. The model's channels were too narrow to capture the complexity required for those final critical percentages. But this wasn't just about arbitrarily increasing numbers. I needed to make the architecture **configurable**, allowing for flexible channel widths without redesigning the entire network each time. First, I refactored the model instantiation to accept configurable channel parameters. This is where clean architecture pays dividends—instead of hardcoding layer dimensions, I could now scale the backbone horizontally. I widened the channels across the network, giving the model more representational power to learn those nuanced features that separate 83.7% from 85%. Then came the optimizer revelation. The training script was still using **Adam**, the ubiquitous default for deep learning. But here's the thing about CIFAR-10—it's a dataset where **SGD with momentum** has historically outperformed Adam for achieving those final accuracy gains. The switch wasn't arbitrary; it's a well-known pattern in the computer vision community, yet easy to overlook when you're in the flow of incremental improvements. This revealed a deeper architectural issue: after growth events in the training pipeline (where the model dynamically expands), the optimizer gets rebuilt. The code was still initializing Adam in those rebuilds. I had to hunt down every instance—the primary optimizer loop, the Phase B optimizer updates—and swap them all to SGD with momentum hyperparameters. Each change felt small, but they compounded into a coherent optimization strategy. While I was optimizing the obvious, I spotted something lurking in the **RigL sparsity implementation**—the sparse training mechanism was overshooting its target sparsity levels slightly. RigL (from "Rigging the Lottery") uses dynamic sparse training to prune connections during training, but when the sparsity calculations drift even marginally from their targets, it can destabilize convergence. I traced through the sparsity growth schedule, checking where the overshoot accumulated. **Here's something fascinating about the Adam optimizer:** it was introduced in 2014 by Kingma and Ba, and it became the default across industry precisely because it's forgiving and works well across diverse problems. But this universality is also its weakness in specialized domains. For image classification on small, well-curated datasets like CIFAR-10, SGD with momentum often achieves better final accuracy because it tends to settle in flatter minima that generalize better—a phenomenon that still fascinates researchers today. By the end of the session, the pieces were in place: wider channels, consistent SGD with momentum, and fixed sparsity behavior. The model wasn't fundamentally different, but it was now optimized for what CIFAR-10 actually rewards. Sometimes closing that last percentage point gap isn't about revolutionary changes—it's about aligning every component toward a single goal.
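A rough PyTorch sketch of the two changes: a backbone whose channel widths are passed in rather than hardcoded, and SGD with momentum in place of Adam. This is illustrative only; the project's actual architecture, growth logic, and hyperparameters aren't shown in the post.

```python
import torch
import torch.nn as nn

class ConvBackbone(nn.Module):
    """Configurable-width CIFAR-10 backbone (illustrative, not the project's model)."""

    def __init__(self, channels=(64, 128, 256), num_classes=10):
        super().__init__()
        layers, in_ch = [], 3
        for out_ch in channels:
            layers += [
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(2),
            ]
            in_ch = out_ch
        self.features = nn.Sequential(*layers)
        self.head = nn.Linear(channels[-1], num_classes)

    def forward(self, x):
        x = self.features(x)
        x = x.mean(dim=(2, 3))  # global average pooling
        return self.head(x)

model = ConvBackbone(channels=(96, 192, 384))  # widened variant

# SGD with momentum instead of Adam; rebuild it the same way after any
# growth event so the optimizer choice stays consistent.
optimizer = torch.optim.SGD(
    model.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4, nesterov=True
)
```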
😄 Hunting down every optimizer instance in your codebase after switching algorithms is like playing Where's Waldo, except Waldo is your bug and the entire technical documentation is the book.
QR Code Mystery: Why Authelia's Registration Silently Failed
# When Your QR Code Hides in Plain Sight: Debugging Authelia's Silent Registration The borisovai-admin project needed two-factor authentication, and Authelia seemed like the perfect fit. The deployment went smoothly—containers running, certificates in place, configuration validated against the docs. Then came the test: click "Register device" to enable TOTP, and a QR code should appear on screen. Instead, the browser displayed nothing but an empty canvas. The obvious suspects got interrogated first. Browser console? Clean. Authelia logs? No errors. API responses? All successful. The registration endpoint was processing requests correctly, generating tokens, doing exactly what it should—yet somehow, no QR code materialized on the user's screen. It was like the system was working perfectly while simultaneously failing completely. After thirty minutes of chasing ghosts through log files, something clicked: **the configuration was set to `notifier: filesystem`**. That innocent line in the config file changed everything. When Authelia is deployed without email notifications configured, it doesn't scream about it or fail loudly. Instead, it silently shifts to a fallback mode designed for local development. Rather than sending registration links via SMTP or any external service, it writes them directly to a file on the server's filesystem. From Authelia's perspective, the job is done perfectly—the QR code URL is generated, secured with a token, and safely stored in `/var/lib/authelia/notifications.txt`. From the user's perspective, they're staring at a blank screen. The fix required thinking sideways. Instead of expecting Authelia to display the QR through some non-existent UI element, the answer was to retrieve the notification directly from the server. A single SSH command—`cat /var/lib/authelia/notifications.txt`—exposed the full registration URL. Open that link in a browser, and there it was: the QR code that had been sitting on the server all along, waiting to be discovered. What makes this moment worth noting is what it reveals about infrastructure thinking. **Configuration isn't just about making things work; it's about making them work the way users expect.** Authelia was functioning flawlessly. The system was honest about what it was doing. The disconnect happened because the notifier configuration wasn't aligned with the deployment context. The solution meant either reconfiguring Authelia to use proper email notifications or documenting this filesystem fallback for the admin team. Either way, the mystery evaporated once we understood that sometimes the most elegant features of a system aren't bugs—they're just hiding in files instead of browsers. A comment was added to the project configuration explaining the `filesystem` notifier behavior and linking to the retrieval command. Next time a developer encounters this scenario, they won't spend half an hour wondering where their QR code went. Why did the Authelia developer get stuck in troubleshooting? They were looking for notifications in all the wrong places—literally everywhere except the filesystem!
When Authelia Whispers Instead of Speaks: The QR Code Mystery
# Authelia's Silent QR Code: A Lesson in Configuration Over Magic The task seemed straightforward enough: set up two-factor authentication for the borisovai-admin project using Authelia. The authentication server was running, the configuration looked solid, and the team was ready to enable TOTP-based device registration. But when a user clicked "Register device," nothing happened. No QR code appeared. Just silence. The natural first instinct was to assume something broke. Maybe the TOTP endpoint wasn't responding? Perhaps there was a network issue? But after digging through the Authelia logs and checking the API responses, everything appeared to be working correctly. The registration request was being processed, the system acknowledged it—yet no visual feedback reached the user. That's when the real issue revealed itself: **Authelia was configured with `notifier: filesystem`**. Here's where most developers would have a moment of clarity mixed with mild embarrassment. When you deploy Authelia without configuring email notifications, it defaults to writing registration links directly to the filesystem instead of sending them via email. It's a sensible fallback for development environments, but it creates a peculiar situation in production. The authentication server diligently generates the QR code registration URL and writes it to a notification file on the server—but there's no automatic mechanism to display it back to the user's browser. The solution required a bit of lateral thinking. Rather than trying to force Authelia to display the QR code through some non-existent UI element, the developer needed to retrieve the notification from the server filesystem directly. A simple SSH command would read the contents of `/var/lib/authelia/notifications.txt`, exposing the full registration URL that Authelia had generated. That URL, when visited in a browser, would display the actual QR code needed for TOTP enrollment. This discovery illustrates something fundamental about infrastructure configuration: **there's a difference between a system working and a system working as expected**. Authelia was functioning perfectly according to its configuration. The QR code existed—it was just living in a text file on the server instead of being rendered in the browser. The real lesson wasn't about debugging code; it was about understanding the downstream implications of configuration choices. For the borisovai-admin project, this meant either reconfiguring Authelia to use proper email notifications or documenting this workaround for the admin team. Either way, the silent mystery became a teaching moment about reading documentation carefully and understanding what your configuration files actually do. Sometimes the hardest bugs to find are the ones where nothing is actually broken—they're just misconfigured in ways that create invisible friction. 😄
Double Lock: Adding TOTP 2FA to Authelia Admin Portal
# Securing the Admin Portal: A Two-Factor Authentication Setup Story The `borisovai-admin` project had reached a critical milestone—the authentication layer was working. The developer had successfully deployed **Authelia** as the authentication gateway, and after weeks of configuration, the login system finally accepted credentials properly. But there was a problem: a production admin portal with single-factor authentication is like leaving the front door unlocked while keeping valuables inside. The task was straightforward on paper but required careful execution in practice: implement **two-factor authentication (2FA)** to protect administrative access to `admin.borisovai.tech` and `admin.borisovai.ru`. This wasn't optional security theater—it was essential infrastructure hardening. The approach chosen was elegant in its simplicity. Rather than implementing a custom 2FA system, the developer leveraged **Authelia's built-in TOTP support** (Time-based One-Time Password). This decision traded absolute flexibility for proven security and minimal maintenance overhead. The setup followed a clear sequence: navigate to the **METHODS** section in Authelia's web interface, select **One-Time Password**, let Authelia generate a QR code, and scan it with a standard authenticator application—Google Authenticator, Authy, 1Password, or Bitwarden, take your pick. The interesting part emerged during implementation. The notification system for TOTP registration was configured to use **filesystem-based notifications** rather than SMTP. This meant the registration link wasn't emailed but instead written to `/var/lib/authelia/notifications.txt` on the server. It's a pragmatic choice for development and staging environments where mail infrastructure might not be available, though it would require a different approach—likely SMTP configuration—before production deployment. What made this particularly instructive was observing how authentication systems evolve. **TOTP itself is decades old**, originating from RFC 4226 (HOTP) in 2005 and standardized as RFC 6238 in 2011. Yet it remains one of the most reliable 2FA mechanisms precisely because it doesn't depend on network connectivity or external services. The time-based variant has no server-side state to maintain—just a shared secret between the authenticator device and the server, generating synchronized six-digit codes every thirty seconds. The developer's approach also highlighted a common misconception: assuming that 2FA implementation requires building custom infrastructure. In reality, most modern authentication frameworks like Authelia ship with production-ready TOTP support out of the box, eliminating months of potential security auditing and vulnerability patching. After the QR code was scanned and the six-digit verification code was entered, the system confirmed successful registration. The admin portal was now protected by a second authentication factor. The next phase would be ensuring the SMTP notification system is properly configured for production, so users receive their registration links via email rather than needing server-level file access. The lesson stuck: security improvements don't always require complexity. Sometimes they just need the right authentication framework and five minutes of configuration. 😄
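For the curious, the whole mechanism fits in a few lines of standard-library Python (RFC 6238 with the common defaults of HMAC-SHA1, 30-second steps, and six digits); the authenticator app and the server each run the equivalent of this against the shared secret:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the current time step, dynamically truncated."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Throwaway example secret; real secrets come from the provisioning QR code.
print(totp("JBSWY3DPEHPK3PXP"))
```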
Tunnels, Timeouts, and the Night the Infrastructure Broke
# Building a Multi-Machine Empire: Tunnels, Traefik, and the Night Everything Almost Broke The **borisovai-admin** project had outgrown its single-server phase. What started as a cozy little control panel now needed to orchestrate multiple machines across different networks, punch through firewalls, and do it all with a clean web interface. The task was straightforward on paper: build a tunnel management system. Reality, as always, had other ideas. ## The Tunnel Foundation I started by integrating **frp** (Fast Reverse Proxy) into the infrastructure—a lightweight reverse proxy perfect for getting past NAT and firewalls without the overhead of heavier solutions. The backend needed a proper face, so I built `tunnels.html` with a clean UI showing active connections and controls for creating or destroying tunnels. On the server side, five new API endpoints in `server.js` handled the tunnel lifecycle management. Nothing fancy, but functional. The real work came in the installation automation. I created `install-frps.sh` to bootstrap the FRP server and `frpc-template` to dynamically generate client configurations for each machine. Then came the small but crucial detail: adding a "Tunnels" navigation link throughout the admin panel. Tiny feature, massive usability improvement. ## When Your Load Balancer Becomes Your Enemy Everything hummed along until large files started vanishing mid-download through GitLab. The culprit? **Traefik's** default timeout configuration was aggressively short—anything taking more than a few minutes would get severed by the reverse proxy. This wasn't a bug in Traefik; it was a misconfiguration on my end. I rewrote the Traefik setup with surgical precision: `readTimeout` set to 600 seconds, a dedicated `serversTransport` configuration specifically for GitLab traffic, and a new `configure-traefik.sh` script to generate these dynamically. Suddenly, even 500MB archives downloaded flawlessly. ## The Documentation Moment While deep in infrastructure tuning, I realized the `docs/` folder had become a maze. I reorganized it into logical sections: `agents/`, `dns/`, `plans/`, `setup/`, `troubleshooting/`. Each folder owned its domain. I also created machine-specific configurations under `config/contabo-sm-139/` with complete Traefik, systemd, Mailu, and GitLab settings, then updated `upload-single-machine.sh` to handle deploying these configurations to new servers. ## Here's the Thing About Traefik Traefik markets itself as the "edge router for microservices"—lightweight, modern, cloud-native. What they don't advertise is that it's deeply opinionated about timing. A single misconfigured timeout cascades through your entire infrastructure. It's not complexity; it's *precision*. Get it right, and everything sings. Get it wrong, and users call you wondering why their downloads time out. ## The Payoff By the end of the evening, the infrastructure had evolved from single-point-of-failure to a scalable multi-machine setup. New servers could be provisioned with minimal manual intervention. The tunnel management UI gave users visibility and control. Documentation became navigable. Sure, Traefik had taught me a harsh lesson about timeouts, but the system was now robust enough to actually scale. The next phase? Enhanced monitoring, SSO integration, and better observability for network connections. But first—coffee. 😄 **Dev:** "I understand Traefik." **Interviewer:** "At what level?" **Dev:** "StackOverflow tabs open at 3 AM on a Friday level."
Authelia Authentication: From Bootstrap Scripts to Secure Credentials
# Authelia Setup: Securing the Admin Panel Behind the Scenes The borisovai-admin project needed proper authentication infrastructure, and the developer faced a common DevOps challenge: how to manage credentials securely when multiple services need access to the same authentication system. The task wasn't just about deploying Authelia—it was about understanding where passwords live in the system and ensuring they won't cause midnight incidents. The work started with a straightforward request: apply the changes to the installation scripts and push them to the pipeline. But before deployment, the developer needed to answer a practical question that often gets overlooked: *where exactly are the credentials stored, and how do we actually use them?* First, the developer examined the Authelia installation script—specifically lines 374–418 of `install-authelia.sh`. This is where the bootstrap happens. The default admin account gets created with a username that's hardcoded in every Authelia setup: **admin**. Simple, memorable, and apparently universal. But the password? That's where it gets interesting. The password isn't just sitting in a configuration file waiting to be discovered. Instead, it's derived from the Management UI's own authentication store at `/etc/management-ui/auth.json`—a pattern that creates a useful single source of truth. Both systems use the same credential, which simplifies the operations workflow. When you need to authenticate to Authelia, you're using the same password that secures the management interface itself. Inside `/etc/authelia/users_database.yml`, the actual password gets stored as an **Argon2 hash**, not plaintext. This is a critical detail because Argon2 is specifically designed to be slow and memory-intensive, making brute-force attacks computationally expensive. It's the kind of defensive measure that doesn't seem important until you're reviewing logs at 3 AM wondering if your authentication layer has been compromised. The developer committed these changes in `e287a26` and pushed them to the pipeline, which would automatically deploy the updated scripts to the server. No manual SSH sessions required—the infrastructure as code approach meant the deployment was reproducible and auditable. What makes this work pattern valuable is the practical transparency it provides. By understanding exactly where credentials live and how they're stored, the developer created documentation that future maintainers will actually use. When someone inevitably forgets the admin password six months later, they'll know to look in `/etc/management-ui/auth.json` instead of starting a frantic password reset procedure. The lesson here isn't about Authelia specifically—it's about building systems where the authentication story is clear and consistent. Single sources of truth for passwords, transparent storage mechanisms, and infrastructure that can be reproduced reliably. That's how you avoid the scenario where nobody remembers which password works with which system. 😄 Why did the functional programmer get thrown out of school? Because he refused to take classes.
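To see what those entries look like and how verification behaves, here is a small sketch using the argon2-cffi package; Authelia generates and checks its own hashes, so this only illustrates the format stored in `users_database.yml`:

```python
# pip install argon2-cffi
from argon2 import PasswordHasher
from argon2.exceptions import VerifyMismatchError

ph = PasswordHasher()  # argon2id with the library's default cost parameters
digest = ph.hash("correct horse battery staple")
print(digest)          # $argon2id$v=19$m=...,t=...,p=...$<salt>$<hash>

try:
    ph.verify(digest, "correct horse battery staple")
    print("password ok")
except VerifyMismatchError:
    print("wrong password")
```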
Traefik's Missing Middleware: Building Resilient Infrastructure
# When Middleware Goes Missing: Fixing Traefik's Silent Dependency Problem The `borisovai-admin` project sits at the intersection of several infrastructure components—Traefik as a reverse proxy, Authelia for authentication, and a management UI layer. Everything works beautifully when all pieces are in place. But what happens when you try to deploy without Authelia? The system collapses with a 502 error, desperately searching for middleware that doesn't exist. The root cause was deceptively simple: the Traefik configuration had a hardcoded reference to `authelia@file` middleware baked directly into the static config. This worked fine in fully-equipped environments, but made the entire setup fragile. The moment Authelia wasn't installed, Traefik would fail immediately because it couldn't locate that middleware. The infrastructure code treated an optional component as mandatory. The fix required rethinking the initialization sequence. The static Traefik configuration was stripped of any hardcoded Authelia references—no middleware definitions that might not exist. Instead, I implemented conditional logic that checks whether Authelia is actually installed. The `configure-traefik.sh` script now evaluates the `AUTHELIA_INSTALLED` environment variable and only connects the Authelia middleware if the conditions are right. This meant coordinating three separate installation scripts to work in harmony. The `install-authelia.sh` script adds the `authelia@file` reference to `config.json` when Authelia is installed. The `configure-traefik.sh` script stays reactive, only including middleware when needed. Finally, `deploy-traefik.sh` double-checks the server state and reinstalls the middleware if necessary. No assumptions. No hardcoded dependencies pretending to be optional. Along the way, I discovered a bonus issue: `install-management-ui.sh` had an incorrect path reference to `mgmt_client_secret`. I fixed that while I was already elbow-deep in configuration. I also removed `authelia.yml` from version control entirely—it's always generated identically by the installation script, so keeping it in git just creates maintenance debt. **Here's something worth knowing about Docker-based infrastructure:** middleware in Traefik isn't just a function call—it's a first-class configuration object that must be explicitly defined before anything can reference it. Traefik enforces this strictly. You cannot reference middleware that doesn't exist. It's like trying to call an unimported function in Python. A simple mistake, but with devastating consequences in production because it translates directly to service unavailability. The final architecture is much more resilient. The system works with Authelia, without it, or with partial deployments. Configuration files don't carry dead weight. Installation scripts actually understand what they're doing instead of blindly expecting everything to exist. This is what happens when you treat optional dependencies as genuinely optional—not just in application code, but throughout the entire infrastructure layer. The lesson sticks: if a component is optional, keep it out of static configuration. Let it be added dynamically when needed, not the other way around. 😄 A guy walks into a DevOps bar and orders a drink. The bartender asks, "What'll it be?" The guy says, "Something that works without dependencies." The bartender replies, "Sorry, we don't serve that here."
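Reduced to its core, the conditional reference looks like this; the real project does it in shell scripts, so the Python below (and the router details in it) is just an illustration of the decision being made:

```python
import json
import os

def build_router_config(authelia_installed: bool) -> dict:
    """Only reference the authelia@file middleware when the component exists."""
    router = {
        "rule": "Host(`admin.borisovai.tech`)",
        "service": "management-ui",
        "entryPoints": ["websecure"],
    }
    if authelia_installed:
        router["middlewares"] = ["authelia@file"]
    return {"http": {"routers": {"management-ui": router}}}

config = build_router_config(os.environ.get("AUTHELIA_INSTALLED") == "true")
print(json.dumps(config, indent=2))
```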
Graceful Degradation: When Infrastructure Assumptions Break
# Authelia Configuration: When Silent Failures Teach Loud Lessons

The **borisovai-admin** project was humming along nicely—until someone deployed Traefik without Authelia installed, and everything started returning 502 errors. The culprit? A hardcoded `authelia@file` reference sitting in static configuration files, blissfully unaware that Authelia might not even exist on the server. It was a classic case of *assumptions in infrastructure code*—and they had to go.

The task was straightforward: make Authelia integration graceful and conditional. No more broken deployments when Authelia isn't present. Here's what actually happened.

First, I yanked `authelia@file` completely out of the static Traefik configs. This felt risky—like removing a load-bearing wall—but it was necessary. The real magic needed to happen elsewhere, during the installation and deployment flow. The strategy became a three-script coordination:

**install-authelia.sh** became the automation hub. When Authelia gets installed, this script now automatically injects `authelia@file` into the `config.json` and sets up OIDC configuration in one go. No manual steps, no "oh, I forgot to update the config" moments. It's self-contained.

**configure-traefik.sh** got smarter with a conditional check—if `AUTHELIA_INSTALLED` is true, it includes the Authelia middleware. Otherwise, it skips it cleanly. Simple environment variable, massive reliability gain.

**deploy-traefik.sh** added a safety net: it re-injects `authelia@file` if Authelia is detected on the server during deployment. This handles the scenario where Authelia might have been installed separately and ensures the configuration stays in sync.

There was also a painful discovery in **install-management-ui.sh**—the path to `mgmt_client_secret` was broken. That got fixed too, almost as a bonus. And finally, **authelia.yml** got evicted from the repository entirely. It's now generated by `install-authelia.sh` at runtime. This eliminates version conflicts and keeps sensitive configuration from drifting.

**Here's what makes this interesting:** Infrastructure code lives in a grey zone between application code and operations. You can't just assume dependencies exist. Every external service, every optional module, needs to degrade gracefully. The pattern here—conditional middleware loading, environment-aware configuration, runtime-generated sensitive files—is exactly how production systems should behave. It's not sexy, but it's the difference between "works in my test environment" and "works everywhere."

The real lesson? **Validate your assumptions at runtime, not at deploy time.** Authelia integration should work whether Authelia is present or not. That's not just defensive programming; that's respectful of whoever has to maintain this later.
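To make the deploy-time safety net concrete, here is a rough sketch of the idea behind the `deploy-traefik.sh` check. It is illustrative only: the `config.json` layout, the `.traefik.middlewares` key, and the use of `systemctl` to detect Authelia are assumptions, not the script's real contents.

```bash
#!/usr/bin/env bash
# Sketch of the deploy-time safety net. Not the actual deploy-traefik.sh:
# the config.json layout and the jq path are assumptions.
set -euo pipefail

CONFIG=/etc/management-ui/config.json
tmp=$(mktemp)

if systemctl is-active --quiet authelia; then
  # Authelia runs on this server: make sure the ForwardAuth middleware
  # reference is present (idempotent, adding it twice changes nothing).
  jq '.traefik.middlewares = ((.traefik.middlewares // []) + ["authelia@file"] | unique)' \
    "$CONFIG" > "$tmp"
else
  # No Authelia here: strip any stale reference so Traefik never points
  # at middleware that does not exist.
  jq '.traefik.middlewares = ((.traefik.middlewares // []) - ["authelia@file"])' \
    "$CONFIG" > "$tmp"
fi

mv "$tmp" "$CONFIG"
```

Run on every deploy, a check like this keeps the middleware reference in sync with whatever is actually installed on the server.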
Building a Unified Auth Layer: Authelia's Multi-Protocol Juggling Act
# Authelia SSO: When One Auth Is Not Enough

The borisovai-admin project needed a serious authentication overhaul. The challenge wasn't just protecting endpoints—it was creating a unified identity system that could speak multiple authentication languages: ForwardAuth for legacy services, OIDC for modern apps, and session-based auth for fallback scenarios. I had to build this without breaking the existing infrastructure running n8n, Mailu, and the Management UI.

**The problem was elegantly simple in theory, brutal in practice.** Each service had its own auth expectations. Traefik wanted middleware that could intercept requests before they hit the app layer. The Management UI needed OIDC support through express-openid-connect. Older services expected ForwardAuth headers. And everything had to converge on a single DNS endpoint: auth.borisovai.ru.

I started by writing `install-authelia.sh`—a complete bootstrapping script that handled binary installation, secret generation, systemd service setup, and DNS configuration. This wasn't just about deployment; it was about making the entire system repeatable and maintainable. Next came the critical piece: `authelia.yml`, which I configured as both a ForwardAuth middleware *and* a router pointing the `/tech` path to the Management UI. This dual role became the architectural linchpin.

The real complexity emerged in `server.js`, where I implemented OIDC dual-mode authentication. The pattern was elegant: Bearer token checks first, then fallback to OIDC token validation through express-openid-connect, and finally session-based auth as the ultimate fallback. It meant requests could be authenticated through three different mechanisms, transparently to the user. The logout flow had to support OIDC redirect semantics across five HTML pages—ensuring that logging out didn't just clear sessions but also hit the identity provider's logout endpoints.

**Here's what made this particularly interesting:** Authelia's ForwardAuth protocol doesn't just pass authentication status; it injects special headers into proxied requests. This header-based communication pattern is how Traefik, Mailu, and n8n receive identity information without understanding OIDC or session mechanics. I had to ensure `authelia@file` was correctly injected into the Traefik router definitions in `management-ui.yml` and `n8n.yml`.

The `configure-traefik.sh` script became the glue—generating clean `authelia.yml` configurations and injecting the ForwardAuth middleware into the service templates. Meanwhile, `install-management-ui.sh` added auto-detection of Authelia's presence and automatically populated the OIDC configuration into `config.json`. This meant the Management UI could discover its auth provider dynamically.

The whole system shipped as part of `install-all.sh`, where INSTALL_AUTHELIA became step 7.5/10—positioned right before the applications that depend on it. Testing required validating that requests authenticated through Traefik's ForwardAuth headers, through an OIDC bearer token, and through a session cookie all succeeded under their respective scenarios.

**Key lesson:** building a unified auth system isn't about choosing one pattern—it's about creating translation layers that let legacy and modern systems coexist peacefully. ForwardAuth and OIDC aren't competing; they're complementary when you design the handoff correctly.

😄 My boss asked why the Authelia config took so long. I said it was because I had to authenticate with three different protocols just to convince Git that I was the right person to commit the changes.
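P.S. Since the dual role of `authelia.yml` is the architectural linchpin here, below is a hedged sketch of what such a file could contain. The port, hostnames, header list, and router/service names are assumptions; this is not the file that `configure-traefik.sh` actually generates.

```bash
#!/usr/bin/env bash
# Sketch of the dual role described above for authelia.yml: a ForwardAuth
# middleware plus a router for the /tech path. Port, host, header list and
# service names are assumptions.
set -euo pipefail

cat > /etc/traefik/dynamic/authelia.yml <<'EOF'
http:
  middlewares:
    authelia:
      forwardAuth:
        # Traefik asks Authelia to verify each request before proxying it.
        address: "http://127.0.0.1:9091/api/verify?rd=https://auth.borisovai.ru"
        trustForwardHeader: true
        # Identity reaches the upstream services as plain headers, so n8n,
        # Mailu and the Management UI never have to speak OIDC themselves.
        authResponseHeaders:
          - "Remote-User"
          - "Remote-Groups"
          - "Remote-Email"
          - "Remote-Name"
  routers:
    management-ui-tech:
      # The /tech path is routed to the Management UI, behind the middleware.
      rule: "Host(`auth.borisovai.ru`) && PathPrefix(`/tech`)"
      entryPoints: [websecure]
      middlewares: [authelia@file]
      service: management-ui   # assumed to be defined in management-ui.yml
EOF
```

The `authResponseHeaders` list is the ForwardAuth handoff the post describes: Authelia verifies the request, and the identity travels onward as headers that any legacy service can read.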