Meta, the parent company of Facebook and Instagram, said Tuesday it is ramping up its capability to detect and label images generated by artificial intelligence (AI) ahead of elections in the U.S. and abroad.
Meta said it is working with other companies to develop common standards for identifying AI use so the platform can label images created with outside tools from companies such as Google, OpenAI, Microsoft, Adobe, Midjourney and Shutterstock.
Detection will rely on metadata that certain AI tools embed in the images they create.
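For readers curious how metadata-based detection can work in practice, here is a minimal, illustrative sketch in Python. It assumes the generating tool embedded the IPTC “Digital Source Type” value trainedAlgorithmicMedia in the image file’s metadata, one widely used convention for marking AI-generated media; it is not Meta’s actual detection pipeline, and the check is deliberately crude, since embedded metadata can be stripped when an image is re-encoded or screenshotted.

```python
# Illustrative sketch only: scan an image file's raw bytes for the IPTC
# "Digital Source Type" value that some AI tools embed in generated media.
# Real detection systems parse the XMP/metadata structure properly; this
# simplified byte search is just to show the idea.

AI_MARKER = b"trainedAlgorithmicMedia"  # IPTC value denoting AI-generated media

def looks_ai_generated(path: str) -> bool:
    """Return True if the file's raw bytes contain the AI-generation marker."""
    with open(path, "rb") as f:
        data = f.read()
    return AI_MARKER in data

if __name__ == "__main__":
    import sys
    for path in sys.argv[1:]:
        status = "AI-labeled" if looks_ai_generated(path) else "no AI metadata found"
        print(f"{path}: {status}")
```

A marker like this is only as durable as the file that carries it, which is why, as the article notes, Meta is also pursuing shared standards across companies rather than relying on any single tool’s metadata.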
“We’re building this capability now, and in the coming months we’ll start applying labels in all languages supported by each app. We’re taking this approach through the next year, during which a number of important elections are taking place around the world,” Meta President of Global Affairs Nick Clegg said in a blog post.
Those labels will be applied across Meta’s platforms, which also include Threads, the text-based platform that launched in July.
Clegg said similar labeling efforts for AI tools that generate audio and video have not yet begun at the same scale.
As the industry “works toward this capability,” Clegg said Meta will add a feature for people to disclose when they share AI-generated video or audio so the company can add a label to it.
For AI-generated or altered images, video or audio that “creates a particularly high risk of materially deceiving the public on a matter of importance,” Meta may add a “more prominent label if appropriate” to provide users with additional information and context, he said.
Meta’s update comes amid an increase in AI-generated election content as the 2024 race heats up. In addition to AI use in social media posts, a robocall in New Hampshire that seemingly used deepfake audio of President Biden’s voice raised concerns about the impact of AI-generated content on the election.
Congress is also weighing measures to add guardrails for AI, but has yet to pass any bills that put rules in place.
Asked about government regulation on “Good Morning America,” Clegg did not back any specific proposals but said there is “definitely a role for governments.”
“I think it’s right that you have certain guardrails in place to make sure there’s proper transparency about how these big AI models are built, make sure they are properly stress tested so they’re as safe as they can be,” Clegg said.