NEW YORK (Reuters) – Meta said on Wednesday it had identified “likely AI-generated” content being used deceptively on its Facebook and Instagram platforms, including comments praising Israel’s handling of the Gaza war posted under reports from global news organizations and US legislators.
The social media company said in its quarterly security report that the accounts impersonated Jewish students, African Americans and other concerned citizens and targeted audiences in the United States and Canada. It attributed the campaign to Tel Aviv political marketing firm STOIC.
STOIC did not immediately respond to requests for comment on the allegations.
WHY IT’S IMPORTANT
While Meta has identified basic AI-generated profile photos in influence operations since 2019, the report is the first to reveal the use of more sophisticated generative AI technologies since their introduction in late 2022.
Researchers are concerned that generative artificial intelligence, which can quickly and cheaply produce human-like text, images and audio, could lead to more effective disinformation campaigns and influence elections.
On a press call, Meta security executives said they did not believe new artificial intelligence technologies had hampered their ability to disrupt influence networks, which are coordinated attempts to spread messages.
The executives said they had not seen AI-generated images of politicians so realistic that they could be confused with genuine photographs.
KEY QUOTE
“These networks have several examples of how they are using likely generative AI tools to create content. Perhaps this gives them the ability to do it faster or do it at a higher volume. But this didn’t really impact our ability to detect them,” said Meta Threat Investigations Director Mike Dvilansky.
IN NUMBERS
The report highlights six covert influence operations that Meta disrupted in the first quarter.
In addition to the STOIC network, Meta shut down an Iran-based network focusing on the conflict between Israel and Hamas, although it did not identify any use of generative AI in the campaign.
CONTEXT
Meta and other tech giants are grappling with how to combat the potential abuse of new artificial intelligence technologies, especially in elections.
Researchers have found examples of image generators from companies like OpenAI and Microsoft producing photos with voting-related misinformation, despite those companies having policies against such content.
Companies have focused on digital tagging systems to label AI-generated content as it is created, although these tools do not work with text and researchers question their effectiveness.
WHAT’S NEXT
Meta faces key tests of its defenses in the European Union elections in early June and the US elections in November.