OpenAI has banned a group of ChatGPT accounts linked to an Iranian influence operation that was generating content about the US presidential election, the company announced in a blog post on Friday. OpenAI says the campaign produced AI-generated articles and social media posts, though it doesn't appear they reached a large audience.
This isn't the first time OpenAI has suspended accounts linked to state actors using ChatGPT maliciously. In May, the company banned five campaigns that were using ChatGPT to manipulate public opinion.
These incidents are reminiscent of state actors using social media platforms like Facebook and Twitter to try to influence previous election cycles. Now similar groups (or perhaps the same ones) are using generative AI to flood social channels with misinformation. Much like those social media companies, OpenAI appears to be taking a whack-a-mole approach, banning accounts associated with these efforts as soon as they surface.
OpenAI says its investigation into this cluster of accounts was informed by a Microsoft Threat Intelligence report published last week, which identified the group (which Microsoft calls Storm-2035) as part of a broader campaign to influence the US election that has been ongoing since 2020.
According to Microsoft, Storm-2035 is an Iranian network with several websites imitating news outlets that is "actively targeting US voter groups at opposite ends of the political spectrum with polarizing messaging on issues such as the US presidential candidates, LGBTQ rights, and the Israel-Hamas conflict." The playbook, as seen in other influence operations, isn't necessarily to promote one policy or another but to sow discord and conflict.
OpenAI identified five website fronts for Storm-2035 that presented themselves as both progressive and conservative news outlets, with convincing domain names like "evenpolitics.com." The group used ChatGPT to draft several long-form articles, including one claiming that "X is censoring Trump's tweets," something Elon Musk's platform certainly hasn't done (if anything, Musk is encouraging former President Donald Trump to engage more on X).
On social media, OpenAI identified a dozen X accounts and one Instagram account controlled by this operation. The company says ChatGPT was used to rewrite various political comments that were then posted on these platforms. One of those tweets falsely and confusingly claimed that Kamala Harris was attributing “increased immigration costs” to climate change, followed by “#DumpKamala.”
OpenAI says it has seen no evidence that Storm-2035's articles were shared widely, noting that the vast majority of its social media posts received few or no likes, shares, or comments. That is often the case with these operations, which can be spun up quickly and cheaply using AI tools like ChatGPT. Expect to see many more such efforts surface as the election approaches and partisan bickering online intensifies.