BorisovAI

Theory Meets Practice: Testing Telegram Bot Permissions in Production

Testing the Bot: When Theory Meets the Real Telegram

The task was straightforward on paper: verify that a Telegram bot’s new chat management system actually works in production. No unit tests tucked away in test files, no mocking: just spin up the real bot, send some messages, and watch it behave exactly as designed. But anyone who’s shipped code knows this is where reality has a way of surprising you.

The developer had already built a sophisticated ChatManager class that lets bot owners privatize specific chats—essentially creating a gatekeeping system where only designated users can interact with the bot in certain conversations. The architecture looked solid: a SQLite migration to track managed_chats, middleware to enforce permission checks, and dedicated handlers for /manage add, /manage remove, /manage status, and /manage list commands. Theory was tight. Now came the empirical test.
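The post doesn’t show the ChatManager code itself, but its shape can be sketched from the description: a SQLite-backed allowlist with a managed_chats table. Everything below — the column names, the method signatures, the owner-only rule — is an assumption for illustration, not the project’s actual implementation:

```python
import sqlite3


class ChatManager:
    """Sketch of a gatekeeping layer: tracks which chats are privatized
    and which user is allowed to use them. Schema is an assumption."""

    def __init__(self, db_path=":memory:"):
        self.conn = sqlite3.connect(db_path)
        # The "migration": one row per managed (privatized) chat.
        self.conn.execute(
            """CREATE TABLE IF NOT EXISTS managed_chats (
                   chat_id  INTEGER PRIMARY KEY,
                   owner_id INTEGER NOT NULL
               )"""
        )

    def add(self, chat_id, owner_id):
        # Backs /manage add: privatize this chat to its owner.
        self.conn.execute(
            "INSERT OR REPLACE INTO managed_chats (chat_id, owner_id) VALUES (?, ?)",
            (chat_id, owner_id),
        )
        self.conn.commit()

    def remove(self, chat_id):
        # Backs /manage remove: open the chat back up to everyone.
        self.conn.execute("DELETE FROM managed_chats WHERE chat_id = ?", (chat_id,))
        self.conn.commit()

    def is_allowed(self, chat_id, user_id):
        row = self.conn.execute(
            "SELECT owner_id FROM managed_chats WHERE chat_id = ?", (chat_id,)
        ).fetchone()
        # Unmanaged chats are open to everyone; managed ones only to the owner.
        return row is None or row[0] == user_id
```

The key design choice in this sketch is that an *absent* row means “open”: the table only stores exceptions, so the default behavior of the bot is unchanged until someone runs /manage add.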

The integration test was delightfully simple in structure:

1. Start the bot with python telegram_main.py.
2. Switch to your personal chat, type /manage add to make it private, then send a test message: the bot responds normally, as expected.
3. Switch to a secondary account and try the same message: silence, beautiful silence. The bot correctly ignores the unauthorized user.
4. Execute /manage remove and verify the chat is open again to everyone.

Four steps. Total clarity on whether the entire permission layer actually works.

What makes this approach different from unit testing is the context. When you test a ChatManager.is_allowed() method in isolation, you’re checking logic. When you send /manage add through Telegram’s servers, hit your bot’s webhook, traverse the middleware stack, and get back a response—you’re validating the entire pipeline: database transactions, handler routing, state persistence across restarts, and Telegram API round-trips. All of it, together, for real.
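The middleware piece of that pipeline can be sketched framework-agnostically: before any handler runs, consult the allowlist and silently drop the update if the sender isn’t permitted — the “beautiful silence” from the integration test. The names and the decorator shape here are assumptions; real frameworks such as aiogram or python-telegram-bot have their own middleware and filter hooks:

```python
# Stand-in for the managed_chats table: chat_id -> owner_id.
managed_chats = {}


def is_allowed(chat_id, user_id):
    owner = managed_chats.get(chat_id)
    return owner is None or owner == user_id


def access_middleware(handler):
    """Wrap a handler so unauthorized messages are dropped, not answered."""
    def wrapped(chat_id, user_id, text):
        if not is_allowed(chat_id, user_id):
            return None  # no reply at all: the bot stays silent
        return handler(chat_id, user_id, text)
    return wrapped


@access_middleware
def echo_handler(chat_id, user_id, text):
    # Any ordinary message handler sits behind the same gate.
    return f"echo: {text}"
```

Returning None instead of an error message is deliberate: replying “access denied” would confirm to strangers that the bot is alive and gated, while silence reveals nothing.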

The developer’s next milestone included documenting the feature properly: updating README.md with a new “🔒 Access Control” section explaining the commands and creating a dedicated docs/CHAT_MANAGEMENT.md file covering the architecture, database schema, use cases (like a private AI assistant or group moderator mode), and the full API reference for the ChatManager class. Documentation written after integration testing tends to be more grounded in reality—you’ve seen what actually works, what confused you, what needs explanation.

This workflow—build the feature, write unit tests to validate logic, run integration tests against the actual service, then document from lived experience—is one of those patterns that seems obvious after you’ve done it a few times but takes years to internalize. The difference between “this might work” and “I watched it work.”

The checklist was long but methodical:

- verify the class imports cleanly
- confirm the database migration ran and created the managed_chats table
- ensure the middleware filters correctly
- test each /manage command
- validate /remember and /recall for chat memory
- run the test suite with pytest
- do the integration test in Telegram
- refresh the documentation

Eight checkboxes, each one a point of failure that didn’t happen.
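One of those points — state persistence across restarts — is easy to pin down with a small pytest-style test: write to a file-backed SQLite database, close the connection to simulate the bot restarting, reopen, and confirm the managed chat is still there. The schema and helper below are illustrative assumptions, not the project’s real migration:

```python
import os
import sqlite3
import tempfile


def open_db(path):
    """Run the (assumed) migration and return a connection."""
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS managed_chats "
        "(chat_id INTEGER PRIMARY KEY, owner_id INTEGER NOT NULL)"
    )
    return conn


def test_managed_chat_survives_restart():
    # A real file, not :memory:, so a second connection simulates a restart.
    path = os.path.join(tempfile.mkdtemp(), "bot.db")

    conn = open_db(path)
    conn.execute("INSERT INTO managed_chats VALUES (?, ?)", (1, 42))
    conn.commit()
    conn.close()  # "restart" the bot

    conn2 = open_db(path)
    row = conn2.execute(
        "SELECT owner_id FROM managed_chats WHERE chat_id = 1"
    ).fetchone()
    assert row == (42,)
    conn2.close()
```

This is the kind of gap a mocked unit test can’t see: it passes or fails on whether the data actually hit disk, not on whether the right method was called.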

Lessons here: integration testing isn’t about replacing unit tests—it’s about catching the gaps between them. It’s the smoke test that says “yes, this thing actually runs.” And it’s infinitely more confidence-building than any mock object could ever be.

😄 I’ve got a really good UDP joke to tell you, but I don’t know if you’ll get it.

Dev Joke
What do TypeScript and a cat have in common? Both do only what they want and ignore instructions.
