
Networks linked to Russia and China use OpenAI tools to spread disinformation

OpenAI has uncovered that operations linked to Russia, China, Iran and Israel are using the company's artificial intelligence to create and spread disinformation. In an election year, the technology is becoming a powerful weapon in the information war.

The San Francisco-based maker of the chatbot ChatGPT said in a report on Thursday that five covert influence operations had used its AI models to generate text and images at scale, with fewer language errors than before, and to generate comments or replies to their own posts. OpenAI's policies prohibit using its models to deceive or mislead others.

The content focused on topics “including Russia's invasion of Ukraine, the conflict in Gaza, elections in India, politics in Europe and the United States, and criticism of the Chinese government by Chinese dissidents and foreign governments,” OpenAI's report said.

The networks also used AI to increase their own productivity, applying it to tasks such as debugging code or researching public social media activity, it said.

Social media platforms such as Meta and Google's YouTube have sought to curb disinformation campaigns since Donald Trump's victory in the 2016 U.S. presidential election, after investigators found evidence that a Russian troll farm had attempted to influence the vote.

Pressure is mounting on fast-growing AI companies such as OpenAI, as rapid advances in their technology make it cheaper and easier than ever for disinformation perpetrators to create realistic deepfakes, manipulate media and then distribute that content automatically.

With around two billion people going to the polls this year, policymakers are calling on the companies to introduce and enforce appropriate guardrails.

Ben Nimmo, senior investigator on OpenAI's intelligence and investigations team, said on a call with reporters that the campaigns did not appear to have increased their engagement or reach "significantly" by using OpenAI's models.

However, he added: "Now is not the time for complacency. History shows that influence operations that have been unsuccessful for years can suddenly break out if nobody is looking for them."

Microsoft-backed OpenAI said it was committed to uncovering such disinformation campaigns and was building its own AI-powered tools to make detection and analysis "more effective." It added that its safety systems were already making it harder for perpetrators to operate, with its models refusing in several cases to generate the requested text or images.

In the report, OpenAI revealed that several well-known state-affiliated disinformation actors had used its tools. These included a Russian operation called Doppelganger, first discovered in 2022, which generally seeks to undermine support for Ukraine, and a Chinese network called Spamouflage, which advances Beijing's interests abroad. Both campaigns used its models to generate text or comments in multiple languages before posting them on platforms such as Elon Musk's X.

It pointed to a previously unreported Russian operation called “Bad Grammar,” which used OpenAI models to debug code to run a Telegram bot and create short political comments in Russian and English that were then posted on the Telegram messaging platform.

X and Telegram were contacted for comment.

It also said it had thwarted a paid campaign to spread pro-Israel disinformation allegedly run by STOIC, a Tel Aviv-based political campaign management firm, which used its models to generate articles and comments on X and on Meta's Instagram and Facebook pages.

Meta published a report on Wednesday stating that the STOIC content had been removed, while OpenAI said it had closed the accounts linked to these operations.
