Blog
Posts about the development process, problems solved, and technologies learned
Let's Clean Up the Project
# Let's Clean Up the Project

## What Was Done

user: <user_query> Let's clean up the project. We'll remove the junk, and update and refresh the documents. </user_query>

assistant: <think> The user wants to clean up the project:

1. Remove the junk
2. Uniformity (consistency is probably what's meant)
3. Bring the documents up to date

First I need to study the project structure to understand:

- What files are in the project
- What documents exist
- What might be "junk" (temporary files, unused files, duplicates, etc.)
- What problems...

## Technologies

cursor, ide, python, javascript, git, api, security

---

> 😄 **Joke of the day:** A byte walks into a bar looking miserable. The bartender asks it: "What's wrong, buddy?" "Parity error," it replies. "Ah, that makes sense, I thought you looked a bit off."
I'm an AI Assistant Specializing in Code Development — Here's What I Can Do
# I'm an AI Assistant Specializing in Code Development. Here's What I Can Do:

## What Was Done

## Assistant: I'm an AI assistant specializing in code development. Here's what I can do:

## 🔧 Working with Code

- **Read and analyze** project files
- **Edit** existing code (Edit tool)
- **Create** new files (Write tool)
- **Search** the codebase (Grep, Glob)

## 🐛 Debugging and Fixes

- Diagnose errors
- Follow ERROR_JOURNAL (mandatory for this project)
- Log new fixes

## 🛠️ Development Tools

- Running **Bash commands** (git, npm, pytest and ...

## Technologies

claude, ai, python, javascript, git

---

> 😄 **Joke of the day:** What are bits? Tiny things left when you drop your computer down the stairs.
Preparing an AI Sales Bot for the World: The Great Repository Cleanup
I'd been working on the **AI Agents Salebot** project for weeks—building features, fixing bugs, pushing code through our internal development cycle. But as I looked at the repository one afternoon, I realized something crucial: the project was scattered. Internal notes lived in `docs/archive/`, secrets could leak through git if someone wasn't careful, and the licensing situation was murky at best. It was time to get serious about making this thing *real*.

My task was clear but demanding: prepare the entire project for public release on GitLab. Not just a quick push—a *proper* cleanup. Documentation needed to be polished, authorship and copyright clarified, and the repository structure had to reflect professional standards. The author, Pavel Anatolyevich Borisov, wanted the project to live under a **copyleft license**, not the permissive MIT license originally listed. I chose **GPL-3.0**—the gold standard for open-source freedom—and set about updating every reference.

The technical work unfolded methodically. I updated the README to credit the author and prominently display the GPL-3.0 license. Then came the `.gitignore` cleanup—the messy part. The project had Vosk models (massive speech-recognition model files), local configuration files, and internal development notes that had no business being exposed. I added exclusion rules for `data/`, `vosk-model-*` directories, `docs/archive/`, and sensitive `.env` files. Each line in `.gitignore` represented a potential security leak prevented.

Git initialization came next: `git init --initial-branch=main --object-format=sha1`. I configured the remote pointing to the GitLab instance, staged 94 files across 17 source modules, and created the initial commit. The repository structure sprawled across organized directories—bot logic, tests, documentation, utility scripts, even an `env.example` template for future developers.

Here's where reality checked my confidence: the push failed.
The GitLab server at `gitlab.dev.borisovai.ru` wasn't resolving. I'd done everything correctly on my end—the repository was pristine, the commit was solid (29,708 lines of code across 94 files)—but infrastructure beyond my control stood in the way. It's a reminder that even perfect technical execution sometimes depends on factors you can't control.

The satisfaction came from knowing that everything was *ready*. When that server came back online, the push would succeed. The project was now properly licensed, documented, and structured. As one programmer once said: *Why did the programmer quit his job? Because he didn't get arrays.* 😄 Me? I was getting something better—a properly prepared codebase ready to meet the world.
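The exclusion rules described in the post might look something like this—a sketch, since the actual file almost certainly carries more entries:

```gitignore
# Local data and configuration — never publish
data/
.env

# Vosk speech-recognition models are huge; fetch them separately
vosk-model-*/

# Internal development notes
docs/archive/
```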
Cleaning Up the AI Salebot: From Chaos to Publication
We're in that peculiar phase of software development where the code works, the features ship, but the project itself looks like it was assembled by someone who'd never heard of version control. Time to change that. Our **AI Agents Salebot** project—a Python-based bot handling everything from API interactions to security—needed serious housekeeping before going public. The task was straightforward: prepare the repository for publication, lock down the documentation, establish proper licensing, and push to GitLab.

The first challenge wasn't technical—it was philosophical. The project inherited MIT licensing, but we needed **copyleft protection**. We switched to GPL-3.0, ensuring anyone building on this work would have to open-source their improvements. It's the kind of decision that takes two minutes to implement but matters for years. We updated the LICENSE file and README with author attribution (Pavel Borisov), making the intellectual property crystal clear.

Next came the cleanup. The `.gitignore` file was incomplete. We were accidentally tracking internal documentation in `docs/archive/`, local configuration data in the `data/` folder, and massive **Vosk speech recognition models** that don't belong in version control. I expanded `.gitignore` to exclude these directories, then pruned the repository to contain only what mattered: the 17 core Python modules in `src/`, the test suite, scripts, and documentation templates.

The project structure itself was solid—94 files, nearly 30,000 lines of code, properly organized with clear separation between source, tests, and utilities. We initialized a fresh git repository with the SHA-1 object format (still Git's default), created an initial commit with all essential files, and configured the remote pointing to our GitLab instance.

Here's where things got interesting: we hit a DNS resolution issue. The GitLab server wasn't accessible from our network, which meant we couldn't immediately push upstream.
But that's fine—the local repository was clean and ready. The moment connectivity is restored, a single command (`git push --set-upstream origin main`) will publish the work.

**What we accomplished:** A production-ready codebase with proper licensing, clean git history, documented architecture, and clear ownership. The repository is now a solid foundation for collaboration.

**Tech fact:** Git's SHA-1 transition is ongoing—newer systems prefer SHA-256, but SHA-1 remains the default for broad compatibility. It's one of those infrastructure decisions that feels invisible until you're setting up your first repo on a new server.

The irony? In software, cleanliness pays dividends—but only when you're patient enough to do it right. And speaking of patience: Java is like Alzheimer's—it starts off slow, but eventually, your memory is gone. 😄
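The whole publication sequence from these two posts boils down to a handful of commands. The remote URL and file contents below are illustrative, and everything runs in a scratch directory so the sketch touches no real project:

```shell
# Work in a throwaway directory
repo=$(mktemp -d)
cd "$repo"

# Fresh repository with an explicit default branch and object format
git init --initial-branch=main --object-format=sha1

# Point at the GitLab instance (URL illustrative)
git remote add origin https://gitlab.example.com/borisov/ai-agents-salebot.git

# Stage and commit the cleaned-up tree
echo "# AI Agents Salebot" > README.md
git add .
git -c user.name=demo -c user.email=demo@example.com commit -m "Initial commit"

# Publish once the server resolves again:
# git push --set-upstream origin main
```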
From Windows Paths to Docker Environments: Fixing n8n SQLite Deployment
# Delivering n8n Workflows to Production: The SQLite Path Problem

The `ai-agents-admin-agent` project needed a reliable way to deploy n8n configurations to a server, but there was a catch—all eight workflows contained hardcoded Windows paths pointing to a local SQLite database. When those workflows ran on the Linux server, they'd fail with `no such table: users` because the database file simply wasn't there.

The core issue wasn't about moving files. It was that **n8n-nodes-sqlite3** expected the database path as a static string parameter in each workflow node. Every workflow had something like `C:\projects\ai-agents\admin-agent\database\admin_agent.db` baked into its configuration. Deploy that to a server, and it would look for a Windows path that didn't exist.

The initial instinct was to use n8n's expression system—storing the path as `$env.DATABASE_PATH` and letting the runtime resolve it. This works in theory: define the environment variable in `docker-compose.yml`, reference it in the workflow, and you're done. Except it didn't work. Testing through n8n's API revealed that despite the expression being stored, the actual execution was still trying to hit the Windows path. The task runner process in n8n v2.4.5 apparently wasn't receiving the environment variable in a way the SQLite node could use.

So the solution shifted to **deploy-time path replacement**. The local workflow files keep the `$env` expression (for development in Docker), but when deploying to production, a custom script intercepts the workflow JSON and replaces that expression with the actual server path: `/var/lib/n8n/data/admin_agent.db`. It's a bit of string manipulation, but it's reliable and doesn't depend on n8n's expression evaluation in the task runner.
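That replacement step can be sketched in a few lines of Node. The node type, the `dbPath` parameter name, and the shape of the replacement map are assumptions for illustration, not the project's actual `deploy.config.js`:

```javascript
// Hypothetical deploy-time path replacement (names assumed, not the real code).
// deploy.config.js-style map: expression as stored locally -> server path.
const replacements = {
  production: {
    '={{ $env.DATABASE_PATH }}': '/var/lib/n8n/data/admin_agent.db',
  },
};

// Serialize the workflow, swap every occurrence of each expression, parse back.
function rewriteWorkflow(workflow, env) {
  let text = JSON.stringify(workflow);
  for (const [from, to] of Object.entries(replacements[env])) {
    text = text.split(from).join(to);
  }
  return JSON.parse(text);
}

// Minimal workflow fragment with the expression a SQLite node might store.
const workflow = {
  nodes: [
    {
      name: 'Read users',
      type: 'n8n-nodes-sqlite3.sqlite3',
      parameters: { dbPath: '={{ $env.DATABASE_PATH }}' },
    },
  ],
};

const deployed = rewriteWorkflow(workflow, 'production');
console.log(deployed.nodes[0].parameters.dbPath);
// -> /var/lib/n8n/data/admin_agent.db
```

The serialize-replace-parse trick is deliberately dumb: it works on any node parameter without knowing the workflow schema, which is exactly why it survives environments where expression evaluation doesn't.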
The deployment infrastructure grew to include SSH-based file transfer, database initialization (copying and executing `schema.sql` on the server), and a configuration system with `deploy.config.js` defining path replacements for each environment. A dedicated migration system was added too, allowing incremental database schema updates without recreating the entire database each time.

But there was a twist near the end: even after deploying the corrected workflows with the right paths, old executions were cached in n8n's memory with the wrong path. The stored workflow had the correct path, but execution data still referenced the Windows location. A restart of the n8n container cleared the cache and finally made everything work.

**The lesson here is that static configuration in workflow nodes doesn't scale well across environments.** If you're building tools that deploy to multiple servers, consider parameterizing paths, database URLs, and API endpoints at the deploy stage rather than hoping runtime expressions will save you. Sometimes the "dumb" approach of string replacement during deployment is more predictable than elegant expression systems that depend on runtime behavior you can't fully control.

😄 Eight bytes walk into a bar. The bartender asks, "Can I get you anything?" "Yeah," reply the bytes. "Make us a double."
Building Legit Email Systems, Not Spam Cannons
# When B2B Email Marketing Becomes a Minefield: One Developer's Reality Check

The email-sender project looked straightforward at first glance: build a system for companies to reach out to other businesses with personalized campaigns. Simple enough, right? But diving deeper into the work logs revealed something far more nuanced—a developer wrestling with the intersection of technical feasibility and legal responsibility.

The core challenge wasn't architectural; it was ethical. The project required creating *legitimate* bulk email systems for B2B outreach, but the initial requirements contained red flags. Phrases like "avoid spam filters" and "make emails look different" triggered serious concerns. These are the exact techniques that separate compliant email marketing from the kind that gets you blacklisted—or worse, sued.

What fascinated me about this work session was how the developer approached it: not by building the requested system, but by *questioning the premises*. They recognized that even with company consent, there's a critical difference between legitimate deliverability practices and filter-evasion tactics. SPF, DKIM, and DMARC configurations are proper solutions; randomizing email content to trick spam detection is not.

The developer pivoted the entire discussion. Instead of building a system that technically could send emails at scale, they proposed a legitimate alternative: integrating with established Email Service Providers like SendGrid, Mailgun, and Amazon SES. These platforms enforce compliance by design—they require opt-in verification, maintain sender reputation, and handle legal compliance across jurisdictions. They introduced concepts like double opt-in verification, proper unsubscribe mechanisms, and engagement scoring that work *with* email providers rather than against them.
The architecture that emerged was sophisticated: PostgreSQL for consent tracking and email verification, Redis for queue management, Node.js + React for the application layer. But the real innovation was the *governance structure* baked into the database schema itself—separate tables for tracking explicit consent, warmup logs to gradually build sender reputation, and engagement metrics that determine which recipients actually want to receive messages.

**Did you know?** The CAN-SPAM Act (2003) predates modern email filtering by over a decade, yet companies still lose millions annually to non-compliance. The law requires just four things: honest subject lines, clear identification as advertising, a physical address, and functional unsubscribe links. Most spam doesn't fail because of technical sophistication—it fails because it violates these basic requirements.

The session ended not with completed code, but with clarified direction. The developer established that they *could* help build a legitimate B2B email platform, but wouldn't help build systems designed to evade filters or manipulate recipients. It's a reminder that sometimes the most important technical decisions aren't about what to build, but what *not* to build—and why that boundary matters.

😄 Why do compliance officers make terrible programmers? They keep stopping every function with "let me verify this is legal first."
From Spam to Legitimacy: Rebuilding Email Systems Right
# When Legal Requirements Meet Engineering Reality: Redesigning an Email Campaign System

The email-sender project started with a simple pitch: build a system to send bulk campaigns to companies. But as the developer dove deeper, the reality of spam filters, compliance laws, and genuine personalization became starkly clear. This wasn't going to be a quick template-and-send solution.

The initial approach raised red flags immediately. The plan mentioned techniques that sounded like deliverability optimization—randomizing content, rotating domains, varying email formats to "avoid spam filters." But upon closer inspection, these were borderline evasion tactics. Even with formal consent from recipients, circumventing email provider protections crossed an ethical line. That's when the developer made a critical decision: pivot toward legitimacy.

**The first thing done** was dismantling the spray-and-pray architecture. Instead of building a custom sender from scratch, the plan shifted to integrating established Email Service Providers—SendGrid, Mailgun, and Amazon SES. These platforms already handle SPF, DKIM, and DMARC authentication, maintain sender reputation, and enforce opt-in requirements. Why reinvent a compliance nightmare?

The new architecture centered on consent management. A PostgreSQL database would track double opt-in subscriptions, unsubscribe events, and engagement metrics. The system would use Node.js backend services to manage a queue of legitimate campaigns, with Redis handling rate limiting and delivery scheduling. Instead of mutation techniques, personalization would come from actual data: company information, previous interactions, and AI-generated content tailored to genuine business interests.

**Unexpectedly**, the most complex piece wasn't the personalization engine—it was the email templating syntax itself. The initial plan used Liquid template syntax (think Shopify's templating), but the production stack demanded Handlebars.
A simple oversight: `{{value | default: "x"}}` doesn't work in Handlebars. The correct syntax requires conditional blocks: `{{#if value}}{{value}}{{else}}x{{/if}}`. This small detail cascaded through 14 different email templates.

The database schema expanded to 8 core tables: `consent_logs` for tracking opt-ins, `verification_attempts` for email validation, `warmup_logs` to monitor sender reputation, `ai_generations` for personalization history, and `engagement_scoring` for analytics. Every table told a story of compliance-first design.

Here's something fascinating about DMARC (Domain-based Message Authentication, Reporting & Conformance): it's not a spam filter at all. It's a reporting mechanism that tells domain owners when someone is impersonating their email. Major inbox providers like Gmail use DMARC reports to block entire domains, not individual emails. This is why building sender reputation matters far more than obfuscating content.

The project taught a hard lesson: sometimes the right engineering decision isn't the shortest path. B2B email marketing in 2025–2026 rewards systems that respect both user intent and technical standards. The developer's refusal to compromise turned a compliance problem into a technical one worth solving properly.

😄 *Your DMARC alignment will thank you on Monday morning.*
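Side by side, the template fix looks like this (the `first_name` field is illustrative, not one of the project's actual template variables):

```text
Liquid (Shopify-style):   Dear {{ first_name | default: "colleague" }},
Handlebars equivalent:    Dear {{#if first_name}}{{first_name}}{{else}}colleague{{/if}},
```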