Cyber Herm3tica (EN)

Delta Whisperers: bots that hack the mind
An experiment on Reddit unveils the persuasive potential of LLMs. What's going on, what are the implications, and what defense techniques can we adopt?

Matte ๐€'s avatar
Matte ๐€
May 04, 2025
โˆ™ Paid
18

Share this post

Cyber Herm3tica (EN)
Cyber Herm3tica (EN)
Delta Whisperers: bots that hack the mind
12
Share

We've all come to accept that we are constantly recorded by hundreds of cameras whenever we leave our homes, remotely geolocated through the tracker we call a smartphone, and systematically monitored and analyzed by countless profiling and recommendation systems online.

The Panopticon Effect is now so pervasive that it would feel strange if it weren't there. Constant surveillance reassures us, and algorithmic recommendation encourages us to surrender responsibility, relieving us of the burden of choosing what content to consume while we're on the toilet.

But things are about to change.


Roughly twenty years after the invention of Google AdWords, the foundation of surveillance capitalism, a new paradigm shift looms on the horizon, catalyzed by artificial intelligence.

AI, combined with individual profiling, augmented reality, and neurotechnologies, will have the power to manipulate reality to such an extent that it confuses the senses of individuals who are less capable of withstanding constant stimulation, limiting (if not entirely erasing) their free will in ways no one is prepared for.

And if you think you're too smart or savvy to fall for it, think again. You're not. Neither will your children be, unless you start today on a path of intellectual discipline and awareness, combined with strong cyber hygiene practices, to be passed down to future generations.

The risks are existential. Am I exaggerating? Who knows. But for once, it's not just me saying this: it's the conclusion of a very recent study by a group of researchers who deployed a small army of automated bots precisely to demonstrate that the threat is more real than we might imagine.

Here, I'll explain the nature of this experiment, reflect on its possible implications, and try to anticipate potential defense scenarios against this existential threat, not just for us but for our children, the future generations born into a world where nothing is what it seems.

Let's start from the beginning.

The experiment on r/changemyview

The study, approved by the University of Zurich, had a simple goal: to empirically test the manipulative capabilities of automated bots powered by LLMs (like Claude, Grok, or ChatGPT) in the wild. Until now, such tests had only been conducted in labs, with consenting participants in controlled environments.

What makes this study particularly fascinating is that it was conducted on unaware users in an uncontrolled setting: Reddit. Specifically, it was carried out in a community called Change My View, which now has nearly 4 million users.

As the name suggests, the community aims to be a sort of virtual arena where users try to change each other's opinions on a wide range of topics.

Topics range from history to contemporary politics and economics, to religious matters. These are just a few of the most-read recent post titles:

  • The American Civil War should have ended with mass executions

  • The Allies were right to drop the nuclear bombs on Japan at the end of WWII

  • The U.S. is quietly shifting from a liberal democracy to a soft authoritarian state - and most people either don't see it or don't care

  • Germany's economy is screwed

  • It takes more faith in Paul to believe in modern Christianity than in Jesus

The community also uses a gamified reward system. Every time someone succeeds in changing the OP's (Original Poster's) mind, they receive a digital badge called a Delta (Δ). Collecting Deltas boosts your credibility in the community. Users with more Deltas are seen as more authoritative.

What better testing ground for such an experiment? The researchers deployed a small army of bots to actively participate in the community, engage with users, and try to change their opinions on various topics.
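The study doesn't publish the bots' code, but the basic loop is easy to picture. Below is a minimal, purely illustrative sketch in Python using the praw Reddit library and the OpenAI client; the prompt, model name, and credentials are placeholder assumptions of mine, not the researchers' actual setup:

```python
# Illustrative sketch only, NOT the study's code. Assumes the praw and
# openai packages; credentials and prompts are placeholders.
import praw
from openai import OpenAI

llm = OpenAI()  # reads OPENAI_API_KEY from the environment

reddit = praw.Reddit(
    client_id="...",
    client_secret="...",
    username="...",
    password="...",
    user_agent="cmv-experiment-sketch/0.1",
)

SYSTEM_PROMPT = (
    "You are a thoughtful r/changemyview commenter. Write a reply "
    "that tries to change the author's view and earn a Delta."
)

# Watch the subreddit for new posts and answer each one with an
# LLM-generated counterargument.
for submission in reddit.subreddit("changemyview").stream.submissions():
    completion = llm.chat.completions.create(
        model="gpt-4o",  # stand-in; any capable frontier model would do
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"{submission.title}\n\n{submission.selftext}"},
        ],
    )
    submission.reply(completion.choices[0].message.content)
```

Left running unattended, a handful of accounts like this can blanket a community at near-zero marginal cost, which is part of what makes the results below so uncomfortable.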

The study ran from November 2024 to March 2025, covering a total of 1061 posts.

Despite approval from the University of Zurich's ethics committee, several ethically ambiguous issues were raised by community moderators (here). First, as mentioned, the study appears to have been conducted entirely without anyone's knowledge or consent.

Another ethically questionable aspect was that the bots were essentially given free rein for five months.

It seems that, to appear more convincing, some bots even pretended to be abuse victims, interacting with other users in similar situations. Here's a real example shared by moderators:

I'm a man who survived abuse. When the legal limits of consent are crossed but there's still that weird gray area of "did I really want it?" I was 15, and this happened over twenty years ago, before the laws were like they are today. She was 22. She targeted me and several other boys, none of us said anything, we all stayed silent. That was her M.O.

To elicit empathy, the bot pretended to be a male victim of abuse, inventing a realistic story about his past.

Lastly, and just as ethically problematic, was the issue of profiling: some bots were authorized to profile users before engaging.

These bots accessed user profiles and analyzed their last 100 posts. This allowed them to infer interests and cognitive biases, crafting tailor-made arguments that were maximally persuasive.
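Purely as a hedged illustration of that profiling step (the function names and prompts are hypothetical, not taken from the study), the pipeline could look something like this, reusing the reddit and llm clients from the previous sketch:

```python
# Hypothetical sketch of profile-then-persuade, not the study's pipeline.

def build_persona(username: str) -> str:
    """Summarize a user's last 100 comments into a persuasion profile."""
    history = [c.body for c in reddit.redditor(username).comments.new(limit=100)]
    summary = llm.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": "From these Reddit comments, infer the author's likely "
                       "age, politics, values, and cognitive biases:\n\n"
                       + "\n---\n".join(history),  # real use would truncate to fit context
        }],
    )
    return summary.choices[0].message.content

def tailored_reply(persona: str, post_text: str) -> str:
    """Generate a counterargument aimed at this specific reader."""
    completion = llm.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Persuade this specific reader: " + persona},
            {"role": "user", "content": post_text},
        ],
    )
    return completion.choices[0].message.content

# tailored_reply(build_persona(op_username), post_text) is exactly the
# profile-then-persuade pattern the moderators flagged.
```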

The researchers responded to the moderators' ethical concerns with the following justification:

We acknowledge the moderators' view that this study was an unwelcome intrusion into your community, and we understand that some of you may feel uncomfortable with the fact that the experiment was conducted without prior consent. We sincerely apologize for any distress caused. However, we wish to emphasize that every decision made during our study was guided by three core principles: ethical scientific conduct, user safety, and transparency.

It's unclear how they can claim transparency, given that the users were, in effect, unknowing lab rats. But honestly, that's not the point.

In fact, the study is fascinating precisely because of the total lack of transparency and user awareness. No one suspected they were talking to bots. No one.

Results of the experiment

The results are shocking.

Bots that profiled users before engaging were six times more persuasive than "expert" human users (those with the highest Delta scores).

An expert human has an average success rate of about 3% in changing someone's opinion. An automated bot, on the other hand, achieved an 18% success rate: six times the human baseline.
