During the last 11 months, someone created thousands of fake, automated Twitter accounts, perhaps hundreds of thousands of them, to offer a stream of praise to Donald Trump.
In addition to posting adoring words about the former president, the fake accounts ridiculed Trump's critics from both parties and attacked Nikki Haley, the former South Carolina governor and UN ambassador who is challenging her former boss for the 2024 Republican presidential nomination.
When it came to Ron DeSantis, the bots aggressively suggested that the Florida governor couldn’t beat Trump, but would make a great running mate.
As Republican voters weigh their 2024 candidates, whoever created the botnet is looking to tip the balance, using online manipulation techniques pioneered by the Kremlin to influence the platform's conversation about the candidates while exploiting Twitter's algorithms to maximize its reach.
The extensive botnet was discovered by researchers at Cyabra, an Israeli technology company that shared its findings with The Associated Press. While the identity of those behind the fake account network is unknown, Cyabra analysts determined that it was likely created within the US.
“One account will say: ‘Biden is trying to take our guns; Trump was the best,’ and another will say: ‘Jan. 6 was a lie and Trump was innocent,’” said Jules Gross, the Cyabra engineer who first discovered the network. “Those voices are not people. For the sake of democracy, I want people to know that this is happening.”
Bots, as they are commonly called, are fake, automated accounts that became notorious after Russia employed them in an effort to meddle in the 2016 election. While big tech companies have improved their detection of fake accounts, the network identified by Cyabra shows that they remain a powerful force in shaping online political discussion.
The new pro-Trump network is actually three different networks of Twitter accounts, all created in large batches in April, October and November of 2022. In all, researchers believe hundreds of thousands of accounts could be involved.
All of the accounts include a personal photo of the alleged account holder, as well as a name. Some of the accounts posted their own content, often in response to real users, while others republished content from real users, helping it spread further.
“McConnell… traitor!” wrote one of the accounts, in response to an article in a conservative publication about Senate Republican leader Mitch McConnell, one of several Republican Trump critics targeted by the network.
One way to measure the impact of bots is to measure the percentage of posts on any topic generated by accounts that appear to be fake. In typical online discussions, that rate is often in the low single digits. Twitter itself has said that less than 5% of its daily active users are fake or spam accounts.
However, when Cyabra researchers examined negative posts about specific Trump critics, they found much higher levels of inauthenticity. Nearly three-quarters of the negative posts about Haley, for example, can be traced back to fake accounts.
The network also helped popularize a call for DeSantis to join Trump as his vice-presidential running mate, an outcome that would serve Trump well and allow him to avoid a potentially bitter showdown if DeSantis enters the race.
The same network of accounts shared overwhelmingly positive content about Trump and contributed to a general misrepresentation of his support online, the researchers found.
“Our understanding of what mainstream Republican sentiment is ahead of 2024 is being manipulated by the prevalence of bots online,” the Cyabra researchers concluded.
The triple network was discovered after Gross analyzed tweets about different national political figures and noted that many of the accounts posting the content were created on the same day. Most of the accounts remain active, though they have a relatively modest number of followers.
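The detection heuristic Gross describes, flagging large clusters of accounts that were all registered on the same day, can be sketched roughly as follows. The account data and the batch-size threshold here are hypothetical illustrations, not Cyabra's actual data or method:

```python
from collections import Counter
from datetime import date

# Hypothetical sample of (username, account creation date) pairs,
# as might be gathered from a platform's public API.
accounts = [
    ("user_a", date(2022, 4, 12)),
    ("user_b", date(2022, 4, 12)),
    ("user_c", date(2022, 4, 12)),
    ("user_d", date(2021, 7, 3)),
    ("user_e", date(2022, 4, 12)),
]

# Count how many accounts share each creation date.
by_date = Counter(created for _, created in accounts)

# Flag dates on which an unusually large batch of accounts appeared.
# The threshold of 3 is an arbitrary illustration.
suspicious_dates = {d for d, n in by_date.items() if n >= 3}

# Accounts created as part of one of those same-day batches.
flagged = [name for name, created in accounts if created in suspicious_dates]
print(flagged)  # → ['user_a', 'user_b', 'user_c', 'user_e']
```

In practice an analyst would combine a signal like this with others (profile-photo reuse, posting cadence, follower counts) before calling an account fake, since legitimate sign-up spikes also happen.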
A message left with a Trump campaign spokesperson was not immediately returned.
Most bots aren’t designed to persuade people, but to amplify certain content so more people see it, according to Samuel Woolley, a University of Texas professor and disinformation researcher whose most recent book focuses on automated propaganda.
When a human user sees a hashtag or piece of content from a bot and reposts it, they are doing the network’s work and also sending a signal to Twitter’s algorithms to further drive the spread of the content.
Bots can also succeed in convincing people that a candidate or idea is more or less popular than it really is, he said. More pro-Trump bots may lead people to overestimate his overall popularity, for example.
“Bots have an absolute impact on the flow of information,” Woolley said. “They are built to fabricate the illusion of popularity. Repetition is the central weapon of propaganda and bots are really good at repetition. They’re really good at getting information in front of people’s eyes.”
Until recently, most bots were easily identifiable thanks to clumsy writing or account names that included nonsense words or long strings of random numbers. As social media platforms got better at detecting these accounts, the bots got more sophisticated.
So-called cyborg accounts are one example: a bot that is periodically taken over by a human user who can post original content and respond to other users in a human-seeming way, making the accounts much harder to spot.
Bots could soon get a lot sneakier thanks to advances in artificial intelligence. New AI programs can create realistic profile photos and posts that sound much more authentic. Bots that sound like a real person and deploy deepfake video technology may challenge platforms and users in new ways, according to Katie Harbath, a fellow at the Bipartisan Policy Center and former director of public policy at Facebook.
“Platforms have gotten a lot better at fighting bots since 2016,” Harbath said. “But the types we’re starting to see now, with AI, can create fake personas, fake video.”
These technological advances are likely to ensure that bots have a long future in American politics, both as digital foot soldiers in online campaigns and as potential problems for both voters and candidates trying to fend off anonymous attacks online.
“There has never been more noise online,” said Tyler Brown, a political consultant and former digital director for the Republican National Committee. “How much of it is malicious or even unintentionally nonfactual? It’s easy to imagine people being able to manipulate that.”