
Anti-spam in Telegram: how to protect a group without captcha and false positives


Over the past five years, spam volumes in public groups have grown many times over.

If spam used to be the occasional message that a moderator could handle by hand, today it is a multi-million-dollar industry built on extracting money from people.

The advice you hear most often is that the only reliable protection is to enable a captcha on entry.

Indeed, captcha reduces the number of bots.

But it has an obvious side effect — it scares away legitimate participants.

Let's look at what spam protection methods exist and why none of them are perfect.

Manual moderation

The most accurate and most labor-intensive method is manual moderation.

It allows you to completely clean the group of spam, but also has disadvantages:

  • A moderator must be online around the clock;
  • They must react quickly, within a couple of seconds.

Spammers don't have working hours: even if moderation is well organized, spam can appear late at night or early in the morning when everyone is asleep.

And even then, a human moderator cannot react within milliseconds, so some spam will still reach participants before it is removed.

This method is almost never used today: it's difficult to organize and doesn't scale well.

Invitation-only entry

Telegram allows you to enable approval of new participants in groups.

A good solution at first glance.

But in practice, there are disadvantages.

If approval is manual, the same problem applies: you need a moderator who is constantly online.

If approval is automatic, there are still no guarantees: an account may be so new that it simply hasn't made it into spam databases yet. For this reason, automatic approval is often combined with a captcha.

But here too there are disadvantages:

  • If the spam is targeted, aimed specifically at your group, the attackers will find a way to bypass any captcha;
  • And legitimate participants have to prove from the start that they are not bots.

This method reduces conversion to participants and worsens first impressions.
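As an illustration, here is a minimal sketch of automatic approval through the Telegram Bot API. The getUpdates, approveChatJoinRequest, and declineChatJoinRequest methods are real; the token and the is_known_spammer check are placeholders:

```python
import requests

TOKEN = "YOUR_BOT_TOKEN"  # placeholder: your bot's token
API = f"https://api.telegram.org/bot{TOKEN}"

def is_known_spammer(user_id: int) -> bool:
    # Placeholder: query a spam database here (see the next section).
    return False

def process_join_requests() -> None:
    offset = None
    while True:
        # Long-poll Telegram for join-request updates only
        updates = requests.get(f"{API}/getUpdates", params={
            "timeout": 30,
            "offset": offset,
            "allowed_updates": '["chat_join_request"]',
        }, timeout=35).json()
        for update in updates.get("result", []):
            offset = update["update_id"] + 1
            request = update.get("chat_join_request")
            if request is None:
                continue
            chat_id = request["chat"]["id"]
            user_id = request["from"]["id"]
            # Decline known spammers, approve everyone else
            method = ("declineChatJoinRequest" if is_known_spammer(user_id)
                      else "approveChatJoinRequest")
            requests.post(f"{API}/{method}",
                          data={"chat_id": chat_id, "user_id": user_id})
```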

Moderator bots

Today, the main tool for fighting spam is automatic moderation.

As a rule, anti-spam bots combine two filtering steps:

First step: checking against spam databases

There are public databases of known spammers.

A moderator bot checks each new member against these databases and blocks known spammers right at the door.

Despite the high reliability of this method, it has a weak spot: new accounts may not yet be listed in such a database by the time they spam your group.
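One widely used public database is CAS (Combot Anti-Spam). Here is a minimal sketch of such a lookup, assuming its documented check endpoint:

```python
import requests

def is_cas_banned(user_id: int) -> bool:
    """Check a Telegram user ID against the public CAS database.

    CAS answers with ok=true when the user has a spam record.
    """
    response = requests.get("https://api.cas.chat/check",
                            params={"user_id": user_id}, timeout=5)
    return bool(response.json().get("ok"))

# Example: on a new member joining, ban them if is_cas_banned(user_id) is True.
```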

Next step: analyzing message content

Several approaches are used here:

Technical metrics

Characteristics such as message length, number of emojis, and the presence of links or images allow spam to be identified with reasonable accuracy.

However, spam adapts quickly: a spam campaign typically lives for a couple of months, after which the creatives change, so spam constantly evolves around existing filters.
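As a rough illustration of this kind of scoring, here is a toy heuristic. The signals and thresholds are invented for the example; a real filter tunes them on labeled data:

```python
import re

LINK_RE = re.compile(r"https?://|t\.me/", re.IGNORECASE)
EMOJI_RE = re.compile("[\U0001F300-\U0001FAFF\u2600-\u27BF]")

def spam_score(text: str) -> float:
    """Toy heuristic: each suspicious signal adds to the score."""
    score = 0.0
    if LINK_RE.search(text):
        score += 0.4                                         # external links
    score += min(len(EMOJI_RE.findall(text)) * 0.05, 0.3)    # emoji density
    if len(text) > 500:
        score += 0.2                                         # very long first message
    if text.isupper() and len(text) > 20:
        score += 0.1                                         # shouting
    return score

# A message might be flagged when spam_score(text) exceeds, say, 0.5.
```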

Keyword lists

A moderator can add certain words that are prohibited in their group.

These can be profanity, insults, or typical spam vocabulary.

Spam adapts to this too, using increasingly sophisticated methods:

  • Substituting letters from another alphabet
  • Using emojis as letters
  • Inserting spaces between letters
  • Hiding invisible Unicode characters in words

Thus, keyword lists steadily lose effectiveness over time.
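The usual countermeasure is to normalize text before matching. A minimal sketch of the idea; the homoglyph table here covers only a handful of Cyrillic look-alikes, while production filters use far larger maps:

```python
import unicodedata

# A few Cyrillic letters that look identical to Latin ones;
# a real filter uses a much larger homoglyph table.
HOMOGLYPHS = str.maketrans("аеорсху", "aeopcxy")
ZERO_WIDTH = {"\u200b", "\u200c", "\u200d", "\ufeff"}

def normalize(text: str) -> str:
    """Undo common obfuscation tricks before keyword matching."""
    text = unicodedata.normalize("NFKC", text)                 # fold fancy glyphs
    text = "".join(ch for ch in text if ch not in ZERO_WIDTH)  # strip invisibles
    text = text.lower().translate(HOMOGLYPHS)                  # map look-alikes
    return "".join(text.split())                               # remove spacing

BANNED = {"casino"}  # example keyword list

def contains_banned(text: str) -> bool:
    cleaned = normalize(text)
    return any(word in cleaned for word in BANNED)

# contains_banned("c а s i n o") -> True (Cyrillic "а", spaced out)
```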

Neural networks

A great idea, but practice shows:

  • Publicly available neural networks let a lot of spam through;
  • They respond slowly;
  • They are expensive at large volumes.

What makes @titus_antispam_bot different

Instead of using captcha and publicly available neural networks, we use our own fast ML model, trained on spam samples collected over several years.

This model has been performing very well for more than 3 years, during which time it has deleted thousands of spam messages.

At the same time, the false positive rate is incredibly low, and the model itself adapts well to new spam samples.

How it works in practice

First, a new participant is checked against public spammer databases; known spammers are blocked immediately upon entry.

If the check is passed, this can mean one of two things:

  1. We have a legitimate user;
  2. We have a new account that simply isn't listed in any database yet.

Next, the message is run through the ML model, and if it doesn't pass the check, it's deleted.

At the same time, the user is not blocked and has the opportunity to send another message.

If we accidentally delete a legitimate user's message, they usually don't repeat it with their second message; instead, they start complaining.

Spammers, by contrast, simply repeat the original message and fall under the filter again.
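To make the flow concrete, here is a simplified sketch of the logic described above, not the bot's actual code. ml_is_spam is a stub standing in for the private model, and ban, delete, and is_cas_banned are the helper functions sketched elsewhere in this article:

```python
# ban(), delete(), and is_cas_banned() are the helpers shown in the
# other sketches; ml_is_spam() stands in for the private ML model.

recent_deletions: dict[int, str] = {}  # user_id -> text of last deleted message

def ml_is_spam(text: str) -> bool:
    return False  # stub: the real model classifies the message here

def on_new_member(chat_id: int, user_id: int) -> None:
    if is_cas_banned(user_id):            # public-database check on entry
        ban(chat_id, user_id)             # known spammer: block immediately

def on_message(chat_id: int, user_id: int, message_id: int, text: str) -> None:
    if not ml_is_spam(text):
        return                            # legitimate message, do nothing
    delete(chat_id, message_id)           # remove the spam, but don't ban yet
    if recent_deletions.get(user_id) == text:
        ban(chat_id, user_id)             # same message repeated: a spammer
    else:
        recent_deletions[user_id] = text  # a human would rephrase or complain
```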

How to protect your group?

If you administer a public group and:

  • Don't want to use captcha on entry, which scares away normal participants;
  • Don't want false positives and accidental bans;
  • Are tired of manual moderation and constant chat monitoring,

there's a simple next step:

Add @titus_antispam_bot to your group, give it administrator rights and the following permissions:

  • deleting messages;
  • blocking users.

That's where the setup ends.
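For the technically curious, those two permissions correspond directly to two Bot API methods, deleteMessage and banChatMember. A minimal sketch (the token is a placeholder):

```python
import requests

TOKEN = "YOUR_BOT_TOKEN"  # placeholder: your bot's token
API = f"https://api.telegram.org/bot{TOKEN}"

def delete(chat_id: int, message_id: int) -> None:
    # Requires the "delete messages" admin right
    requests.post(f"{API}/deleteMessage",
                  data={"chat_id": chat_id, "message_id": message_id})

def ban(chat_id: int, user_id: int) -> None:
    # Requires the "ban users" admin right
    requests.post(f"{API}/banChatMember",
                  data={"chat_id": chat_id, "user_id": user_id})
```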

The bot will start working immediately: it deletes suspicious spam, doesn't interfere with legitimate participants, and over time adapts itself to your group's behavior.

The bot doesn't send any messages and is completely free.

Ready for unprecedented protection of your community?

Join the community of administrators who trust @titus_antispam_bot to protect their groups from spam.
