Generative AI tools, which can create text, audio and video content, have surged in use in the year since the explosive public release of OpenAI’s ChatGPT.
Lawmakers on both sides of the aisle have raised concerns about how AI could amplify the spread of misinformation, especially around critical current events and elections.
The Senate held its fifth AI Insight Forum last week, covering the impact of AI on elections and democracy.
As Congress considers proposals to regulate AI, leading tech companies are crafting their own policies that aim to police the use of generative AI in political ads.
Meta, the parent company of Facebook and Instagram, and Google have announced policies that would require political ads to disclose when they have used AI.
Robert Weissman, president of the consumer advocacy group Public Citizen, said the policies are “good steps” but are “not enough from the companies and not a substitute for government action.”
“The platforms can obviously only cover themselves; they can’t cover all outlets,” Weissman said.
Read more in a full report at TheHill.com.