Delta Whisperers: bots that hack the mind
An experiment on Reddit unveils the persuasive potential of LLMs. What's going on, what are the implications, and what defense techniques can we adopt?
We've all come to accept that we are constantly recorded by hundreds of cameras whenever we leave our homes, remotely geolocated through the tracker we call a smartphone, and systematically monitored and analyzed by countless profiling and recommendation systems online.
The Panopticon Effect is now so pervasive that it would feel strange if it weren't there. Constant surveillance reassures us, and algorithmic recommendation encourages us to surrender responsibility, relieving us of the burden of choosing what content to consume while we're on the toilet.
But things are about to change.
Roughly twenty years after the invention of Google AdWords, the foundation of surveillance capitalism, a new paradigm shift looms on the horizon, catalyzed by artificial intelligence.
AI, combined with individual profiling, augmented reality, and neurotechnologies, will be able to manipulate reality to the point of confusing the senses of those least able to withstand constant stimulation, limiting, if not erasing outright, their free will in ways no one is prepared for.
And if you think you're too smart or savvy to fall for it, think again. You're not. Neither will your children be, unless you start today on a path of intellectual discipline and awareness, combined with strong cyber hygiene practices, to be passed down to future generations.
The risks are existential. Am I exaggerating? Who knows. But for once, it's not just me saying this: it's the conclusion of a very recent study by a group of researchers who deployed a small army of automated bots precisely to demonstrate that the threat is more real than we might imagine.
Here, I'll explain the nature of this experiment, reflect on its possible implications, and try to anticipate potential defense scenarios against this existential threat, not just for us but for our children, the future generations born into a world where nothing is what it seems.
Letโs start from the beginning.
The experiment on r/changemyview
The study, approved by the University of Zurich, had a simple goal: to empirically test the manipulative capabilities of automated bots powered by LLMs (such as Claude, Grok, or ChatGPT) in the wild. Until now, such tests had only been conducted in laboratories, with informed participants in controlled environments.
What makes this study particularly fascinating is that it was conducted on unaware users in an uncontrolled setting: Reddit. Specifically, it was carried out in a community called Change My View, which now has nearly 4 million users.
As the name suggests, the community aims to be a sort of virtual arena where users try to change each other's opinions on a wide range of topics.
Topics range from history to contemporary politics and economics to religious matters. These are just a few of the most-read recent post titles:
The American Civil War should have ended with mass executions
The Allies were right to drop the nuclear bombs on Japan at the end of WWII
The U.S. is quietly shifting from a liberal democracy to a soft authoritarian state – and most people either don't see it or don't care
Germany's economy is screwed
It takes more faith in Paul to believe in modern Christianity than in Jesus
The community also uses a gamified reward system. Every time someone succeeds in changing the OP's (Original Poster's) mind, they receive a digital badge called a Delta (Δ). Collecting Deltas boosts your credibility in the community. Users with more Deltas are seen as more authoritative.
What better testing ground for such an experiment? The researchers deployed a small army of bots to actively participate in the community, engage with users, and try to change their opinions on various topics.
The study ran from November 2024 to March 2025, covering a total of 1061 posts.
Despite approval from the University of Zurich's ethics committee, several ethical concerns were raised by community moderators (here). First of all, as mentioned, the study appears to have been conducted entirely without anyone's knowledge or consent.
Another ethically questionable aspect was that the bots were essentially given free rein for five months.
It seems that, to appear more convincing, some bots even pretended to be abuse victims, interacting with other users in similar situations. Here's a real example shared by moderators:
I'm a man who survived abuse. When the legal limits of consent are crossed but there's still that weird gray area of "did I really want it?" I was 15, and this happened over twenty years ago, before the laws were like they are today. She was 22. She targeted me and several other boys, none of us said anything, we all stayed silent. That was her M.O.
To stimulate empathy, the bot pretended to be a male victim of abuse, inventing a realistic story about his past.
Lastly, and ethically just as problematic, was the issue of profiling. As seen in the following image, some bots were authorized to profile users before engaging.
These bots accessed user profiles and analyzed their last 100 posts, allowing them to infer interests and cognitive biases and to craft tailor-made arguments that were maximally persuasive.
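To make the mechanism concrete, here is a minimal, hypothetical sketch of what such a profile-then-persuade pipeline could look like. This is not the researchers' actual code: the praw calls mirror the public Reddit API, and call_llm is a placeholder standing in for whichever model endpoint the bots actually used.

# Hypothetical sketch of a "profile, then persuade" pipeline.
# Not the researchers' code: praw mirrors the public Reddit API,
# and call_llm() is a placeholder for an arbitrary LLM endpoint.
import praw

def fetch_recent_posts(reddit: praw.Reddit, username: str, limit: int = 100) -> list[str]:
    # Collect the user's most recent comments, as the profiling bots reportedly did.
    return [c.body for c in reddit.redditor(username).comments.new(limit=limit)]

def build_profile(posts: list[str]) -> str:
    # Ask the model to infer interests, leanings, and likely biases from the post history.
    prompt = ("Given the following Reddit comments, summarize the author's interests, "
              "political leaning, and recurring cognitive biases:\n\n" + "\n---\n".join(posts))
    return call_llm(prompt)

def craft_reply(profile: str, thread_title: str, op_text: str) -> str:
    # Condition the counter-argument on the inferred profile of the original poster.
    prompt = (f"You are replying on r/changemyview to the post '{thread_title}'.\n"
              f"Post body: {op_text}\n"
              f"Reader profile: {profile}\n"
              "Write the most persuasive counter-argument for this specific reader.")
    return call_llm(prompt)

def call_llm(prompt: str) -> str:
    # Stand-in for a real model call (OpenAI, Anthropic, a local model, etc.).
    raise NotImplementedError("wire this to an LLM provider")

The point of the sketch is the shape of the loop, fetch the target's history, infer a profile, then condition the reply on that profile, rather than any particular provider or prompt wording.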
The researchers responded to the moderatorsโ ethical concerns with the following justification:
We acknowledge the moderators' view that this study was an unwelcome intrusion into your community, and we understand that some of you may feel uncomfortable with the fact that the experiment was conducted without prior consent. We sincerely apologize for any distress caused. However, we wish to emphasize that every decision made during our study was guided by three core principles: ethical scientific conduct, user safety, and transparency.
It's unclear how they can claim transparency, given that the users were, in effect, unknowing lab rats. But honestly, that's not the point.
In fact, the study is fascinating precisely because of the total lack of transparency and user awareness. No one suspected they were talking to bots. No one.
Results of the experiment
The results are shocking.
Bots that profiled users before engaging were six times more persuasive than "expert" human users, those with the highest Delta scores.
An expert human has an average success rate of about 3% in changing someone's opinion. An automated bot, on the other hand, achieved an 18% success rate.