OpenAI halted five political influence ops over the last three months


OpenAI is weeding out more bad actors using its AI models. And, in a first for the company, it has identified and removed Russian, Chinese, and Israeli accounts used in political influence operations.

According to a new report from the company’s threat detection team, OpenAI discovered and terminated accounts tied to five covert influence operations, including propaganda-laden bots, social media scrubbers, and fake article generators.

“OpenAI is committed to enforcing policies that prevent abuse and to improving transparency around AI-generated content,” the company wrote. “That is especially true with respect to detecting and disrupting covert influence operations (IO), which attempt to manipulate public opinion or influence political outcomes without revealing the true identity or intentions of the actors behind them.”

Terminated accounts include those behind a Russian Telegram operation dubbed “Bad Grammar” and those run by the Israeli company STOIC. STOIC was found to be using OpenAI models to generate articles and comments praising Israel’s current military siege, which were then posted across Meta platforms, X, and elsewhere.

OpenAI says the covert actors used its tools for a “range of tasks, such as generating short comments and longer articles in a range of languages, making up names and bios for social media accounts, conducting open-source research, debugging simple code, and translating and proofreading texts.”

In February, OpenAI announced it had terminated several “foreign bad actor” accounts found engaging in similarly suspicious behavior, including using OpenAI’s translation and coding services to bolster potential cyberattacks. That effort was carried out in collaboration with Microsoft Threat Intelligence.

As communities around the world gear up for a series of elections, many are keeping a close eye on AI-boosted disinformation campaigns. In the U.S., deepfaked AI video and audio of celebrities, and even presidential candidates, prompted a federal call on tech leaders to stop their spread. And a report from the Center for Countering Digital Hate found that, despite electoral integrity commitments from many AI leaders, AI voice-cloning tools are still easily exploited by bad actors.

Learn more about how AI might be at play in this year’s election, and how you can respond to it.

