OpenAI says Russian and Israeli groups used its tools to spread disinformation


OpenAI on Thursday released its first-ever report on how its artificial intelligence tools are being used for covert influence operations, revealing that the company had disrupted disinformation campaigns originating from Russia, China, Israel and Iran.

Malicious actors used the company’s generative AI models to create and post propaganda content across social media platforms, and to translate their content into different languages. None of the campaigns gained traction or reached large audiences, according to the report.


As generative AI has become a booming industry, there has been widespread concern among researchers and lawmakers over its potential for increasing the quantity and quality of online disinformation. Artificial intelligence companies such as OpenAI, which makes ChatGPT, have tried with mixed results to assuage these concerns and place guardrails on their technology.

OpenAI’s 39-page report is one of the most detailed accounts from an artificial intelligence company on the use of its software for propaganda. OpenAI claimed its researchers found and banned accounts associated with five covert influence operations over the past three months, which were from a mix of state and private actors.

In Russia, two operations created and spread content criticizing the US, Ukraine and several Baltic nations. One of the operations used an OpenAI model to debug code and create a bot that posted on Telegram. China’s influence operation generated text in English, Chinese, Japanese and Korean, which operatives then posted on Twitter and Medium.

Iranian actors generated full articles that attacked the US and Israel, which they translated into English and French. An Israeli political firm called Stoic ran a network of fake social media accounts that created a range of content, including posts accusing US student protests against Israel’s war in Gaza of being antisemitic.

Several of the disinformation spreaders that OpenAI banned from its platform were already known to researchers and authorities. The US Treasury in March sanctioned two Russian men who were allegedly behind one of the campaigns that OpenAI detected, while Meta also banned Stoic from its platform this year for violating its policies.

The report also highlights how generative AI is being incorporated into disinformation campaigns to improve certain aspects of content generation, such as making more convincing foreign-language posts, though it is not the sole tool for propaganda.

“All of these operations used AI to some degree, but none used it exclusively,” the report stated. “Instead, AI-generated material was just one of many types of content they posted, alongside more traditional formats, such as manually written texts, or memes copied from across the internet.”

While none of the campaigns resulted in any notable impact, their use of the technology shows how malicious actors are finding that generative AI allows them to scale up production of propaganda. Writing, translating and posting content can now all be done more efficiently through the use of AI tools, lowering the bar for creating disinformation campaigns.

Over the past year, malicious actors have used generative AI in countries around the world to attempt to influence politics and public opinion. Deepfake audio, AI-generated images and text-based campaigns have all been employed to disrupt election campaigns, leading to increased pressure on companies like OpenAI to restrict the use of their tools.

OpenAI stated that it plans to periodically release similar reports on covert influence operations, and to remove accounts that violate its policies.
