Bots assisted by ChatGPT are now swarming social networks


For many users, browsing news feeds and social media notifications feels like wading through mud. A new study helps explain why: it identifies 1,140 AI-assisted bot accounts spreading misinformation about cryptocurrency and blockchain topics on X (formerly Twitter).


But as the researchers found, bot accounts that publish this type of content can be hard to spot. Because they use ChatGPT to generate their posts, they are difficult to distinguish from genuine accounts, which makes the practice all the more dangerous for victims.


AI-powered bot accounts have profiles that resemble those of real humans, complete with profile photos and bios describing an interest in cryptocurrency and blockchain. They regularly publish AI-generated posts, display stolen images, and reply to and retweet other accounts' messages.


The researchers found that all 1,140 Twitter bot accounts belonged to the same malicious social botnet, which they named “fox8”: a network of zombie accounts centrally controlled by cybercriminals.


Generative AI bots are getting better and better at imitating human behavior, which means traditional bot detection tools such as Botometer are no longer sufficient. In the study, these tools struggled to distinguish bot-generated content from human-generated content. One of them stood out, however: OpenAI’s AI Classifier, which was able to identify some of the bot tweets.
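For readers curious what such a check looks like in practice, here is a minimal sketch using the official botometer Python package, the kind of traditional detector the study found insufficient on its own. The credentials and handle below are placeholders, not values from the study, and the exact result fields may vary by API version.

```python
# Minimal sketch: scoring one account with the official `botometer` package.
# All credentials and the handle are placeholders, not values from the study.
import botometer

rapidapi_key = "YOUR_RAPIDAPI_KEY"
twitter_app_auth = {
    "consumer_key": "YOUR_CONSUMER_KEY",
    "consumer_secret": "YOUR_CONSUMER_SECRET",
    "access_token": "YOUR_ACCESS_TOKEN",
    "access_token_secret": "YOUR_ACCESS_TOKEN_SECRET",
}

bom = botometer.Botometer(
    wait_on_ratelimit=True,
    rapidapi_key=rapidapi_key,
    **twitter_app_auth,
)

# check_account returns per-category bot scores; higher means more bot-like.
result = bom.check_account("@example_handle")
print(result["cap"])             # complete automation probability
print(result["display_scores"])  # per-category scores
```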


How to spot bot accounts


Bot accounts on Twitter exhibit similar behaviors. They follow each other, use the same links and hashtags, post similar content and even engage with each other.
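One simple way to quantify that overlap is to compare each pair of accounts’ hashtag sets, for example with Jaccard similarity. The sketch below uses invented accounts and hashtags purely for illustration; the threshold is an arbitrary assumption, not a figure from the study.

```python
# Illustrative sketch: flagging account pairs that share unusually many
# hashtags, one simple proxy for the coordinated behavior described above.
from itertools import combinations

hashtags_by_account = {
    "acct_a": {"#bitcoin", "#defi", "#web3", "#airdrop"},
    "acct_b": {"#bitcoin", "#defi", "#web3", "#nft"},
    "acct_c": {"#cooking", "#travel"},
}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: |intersection| / |union|, in [0, 1]."""
    return len(a & b) / len(a | b) if (a or b) else 0.0

# Pairs above the threshold are candidates for coordinated posting.
THRESHOLD = 0.5
for (u, tags_u), (v, tags_v) in combinations(hashtags_by_account.items(), 2):
    score = jaccard(tags_u, tags_v)
    if score >= THRESHOLD:
        print(f"{u} <-> {v}: similarity {score:.2f}")
```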


The researchers combed through the bot accounts’ tweets and found 1,205 self-revealing tweets.


Of this total, 81% contained the same apology phrase:


“I’m sorry, but I can’t respond to this request because it violates OpenAI’s content policy on generating harmful or inappropriate content. As an AI language model, my answers should always be respectful and appropriate for all audiences.”


Far from being reassuring, the presence of this phrase suggests that the bots are instructed to generate harmful content that violates OpenAI’s policies.


The remaining 19% used some variant of the phrase “As an AI language model”, with 12% specifically saying “As an AI language model, I cannot browse Twitter or access specific tweets to provide answers.”
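Surfacing these self-revealing tweets amounts to searching a tweet corpus for the telltale refusal language quoted above. The following sketch shows the idea; the patterns and sample tweets are illustrative, not the researchers’ actual pipeline.

```python
# Sketch: filtering a tweet corpus for telltale ChatGPT refusal phrases.
# The sample tweets are invented for illustration.
import re

REVEALING_PATTERNS = [
    re.compile(r"as an ai language model", re.IGNORECASE),
    re.compile(r"violates openai'?s content policy", re.IGNORECASE),
]

tweets = [
    "Big news for $BTC holders today!",
    "I'm sorry, but I can't respond to this request because it violates "
    "OpenAI's content policy on generating harmful or inappropriate content.",
    "As an AI language model, I cannot browse Twitter or access specific tweets.",
]

revealing = [t for t in tweets if any(p.search(t) for p in REVEALING_PATTERNS)]
print(f"{len(revealing)} of {len(tweets)} tweets contain revealing phrases")
```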


Another clue: 3% of the tweets posted by these bots link to one of three websites (cryptnomics.org, fox8.news and globalconomics.news).


These sites look like ordinary news outlets but show notable red flags: they were all registered at about the same time, in February 2023; they display pop-up windows prompting users to install suspicious software; they all appear to use the same WordPress theme; and their domains resolve to the same IP address.
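The shared-IP red flag is easy to check yourself with nothing but Python’s standard library. A quick sketch, with the caveat that these domains may no longer resolve today:

```python
# Sketch: check whether the suspect domains resolve to the same IP address,
# one of the red flags the researchers noted.
import socket

domains = ["cryptnomics.org", "fox8.news", "globalconomics.news"]

resolved = {}
for domain in domains:
    try:
        resolved[domain] = socket.gethostbyname(domain)
    except socket.gaierror:
        resolved[domain] = None  # domain may no longer resolve

print(resolved)
if len({ip for ip in resolved.values() if ip}) == 1:
    print("All resolvable domains share a single IP address")
```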


Malicious bot accounts can promote themselves on social media by posting links that carry malware or infectious content, exploiting and infecting a user’s contacts, stealing session cookies from users’ browsers, and automating follow requests.


Source: ZDNet.com
