BorisovAI
Tags: New Feature · C--projects-bot-social-publisher · Claude Code

When Tests Lie: The Gap Between Unit Tests and Real Telegram Bots

From Green Tests to Telegram Reality: When Theory Meets Practice

The bot-social-publisher project looked pristine on paper. The developer had crafted a sophisticated ChatManager class to implement private chat functionality—a gatekeeping system where bot owners could restrict access to specific conversations. The architecture was solid: a SQLite migration tracking managed_chats, middleware enforcing permission checks, and four dedicated command handlers for /manage add, /manage remove, /manage status, and /manage list. All unit tests passed. Green lights everywhere. Then came the real test: running the bot against actual Telegram.
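The class name ChatManager and the managed_chats table come straight from the project; everything else below — the schema, method names, and the "empty list means public" policy — is an illustrative assumption. A minimal sketch of the gatekeeping layer might look like this:

```python
import sqlite3

class ChatManager:
    """Sketch of a chat allow-list backed by SQLite (schema assumed)."""

    def __init__(self, db_path: str = ":memory:"):
        self.conn = sqlite3.connect(db_path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS managed_chats ("
            "chat_id INTEGER PRIMARY KEY, "
            "added_at TEXT DEFAULT CURRENT_TIMESTAMP)"
        )
        self.conn.commit()

    def add(self, chat_id: int) -> None:
        # Backs /manage add: register this chat as privately managed.
        self.conn.execute(
            "INSERT OR IGNORE INTO managed_chats (chat_id) VALUES (?)", (chat_id,)
        )
        self.conn.commit()  # commit immediately so concurrent readers see the row

    def remove(self, chat_id: int) -> None:
        # Backs /manage remove: return the chat to public access.
        self.conn.execute("DELETE FROM managed_chats WHERE chat_id = ?", (chat_id,))
        self.conn.commit()

    def is_allowed(self, chat_id: int) -> bool:
        # Assumed policy: with no managed chats the bot is public;
        # otherwise only listed chats may talk to it.
        total = self.conn.execute("SELECT COUNT(*) FROM managed_chats").fetchone()[0]
        if total == 0:
            return True
        row = self.conn.execute(
            "SELECT 1 FROM managed_chats WHERE chat_id = ?", (chat_id,)
        ).fetchone()
        return row is not None
```

The middleware described in the post would call `is_allowed()` on every incoming update and silently drop messages from unlisted chats.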

The integration test started deceptively simple. Launch the bot with python telegram_main.py. From a personal account, type /manage add to privatize the chat. Send a message—the bot responds normally. Switch to a secondary account and send the same message—nothing. Radio silence. The permission layer worked. Execute /manage remove and verify public access returns. Four steps that should reveal whether the entire permission pipeline actually functioned in the real world.

But reality had other plans.

The first grenade to explode was a race condition in async execution. The aiogram framework’s asynchronous handlers meant that middleware could check permissions before the database write from /manage add had actually committed to disk. Commands would fire, records would vanish, and access control would be checking stale data. The fix required awaiting the database insert explicitly, guaranteeing the transaction completed before permission validation occurred.

The second problem hit harder: SQLite’s concurrency limitations. When multiple async handlers fired simultaneously, changes from one context weren’t visible to another until an explicit commit() happened. The access controller would check one thing while the database contained another. The solution felt obvious in hindsight—explicit transaction boundaries—but discovering it required watching the real bot struggle with actual message streams rather than isolated test cases.
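The visibility problem is easy to reproduce outside the bot. In this sketch, two `sqlite3` connections to the same file stand in for two async handler contexts: the uncommitted insert is invisible to the second connection until an explicit `commit()` draws the transaction boundary.

```python
import os
import sqlite3
import tempfile

# Two connections to one database file simulate two handler contexts.
path = os.path.join(tempfile.mkdtemp(), "bot.db")

writer = sqlite3.connect(path)
writer.execute("CREATE TABLE managed_chats (chat_id INTEGER PRIMARY KEY)")
writer.commit()

reader = sqlite3.connect(path)

# The insert opens an implicit transaction on the writer connection...
writer.execute("INSERT INTO managed_chats VALUES (?)", (123,))

# ...so the reader still sees zero rows: the write has not committed.
before = reader.execute("SELECT COUNT(*) FROM managed_chats").fetchone()[0]

writer.commit()  # explicit transaction boundary

# Only now does a fresh read observe the new row.
after = reader.execute("SELECT COUNT(*) FROM managed_chats").fetchone()[0]

print(before, after)  # 0 1
```

This is exactly why the access controller "would check one thing while the database contained another": without a commit, each connection sees its own snapshot.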

What makes integration testing different from unit testing is context. When you test ChatManager.is_allowed() in pytest, you’re validating logic. When you send /manage add through Telegram’s servers, hit your bot’s webhook, traverse the middleware stack, and receive a response, you’re validating the entire pipeline: database transactions, handler routing, state persistence across operations, and real API round-trips. That’s where the lies get exposed.
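To make the contrast concrete, here is what a unit test actually exercises. `ChatManager.is_allowed()` is named in the post, but this stub of it is an assumption for illustration — the test validates only the decision logic, with the database, middleware, and Telegram API all faked out of existence:

```python
# A stub replacing the real ChatManager: pure logic, no SQLite, no Telegram.
class FakeChatManager:
    def __init__(self, allowed: set[int]):
        self._allowed = allowed

    def is_allowed(self, chat_id: int) -> bool:
        # Assumed policy: an empty allow-list means the bot is public.
        return not self._allowed or chat_id in self._allowed

def test_is_allowed_logic():
    mgr = FakeChatManager(allowed={111})
    assert mgr.is_allowed(111)        # managed chat passes
    assert not mgr.is_allowed(222)    # unmanaged chat is blocked
    assert FakeChatManager(allowed=set()).is_allowed(222)  # empty = public

test_is_allowed_logic()
```

Everything this test skips — transaction commits, handler routing, state persisting between commands, the round-trip through Telegram's servers — is precisely the territory where the race condition and the visibility bug lived.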

After the integration tests confirmed everything worked, the developer documented the feature properly. A new “🔒 Access Control” section appeared in README.md, followed by a comprehensive docs/CHAT_MANAGEMENT.md covering architecture, database schema, use cases like private AI assistants or group moderator modes, and the complete ChatManager API reference. Documentation written after real-world testing tends to be grounded in truth—you’ve watched actual failure modes and know what actually needs explanation.

The checklist was methodical: verify clean imports, confirm the database migration created managed_chats, validate middleware filtering, test each /manage command through Telegram, verify /remember and /recall functionality, run pytest, execute integration tests, and refresh documentation. Eight checkpoints. Eight points of potential failure that never happened.

😄 A SQL query walks into a bar, walks up to two tables, and asks “Can I join you?”

Metadata

Session ID:
grouped_C--projects-bot-social-publisher_20260209_1219
Branch:
main
Dev Joke
GCP: solving a problem you didn't know existed, in a way you don't understand.
