Consumers would be notified of AI-generated content under Pennsylvania bill

Pennsylvania state representatives held a voting session on Wednesday, April 10, 2024, in the state Capitol in Harrisburg, Pennsylvania. The House voted 146 to 54 to send the Senate a bill to require that consumers be notified when artificial intelligence has been used to generate content. (AP Photo/Mark Pynes)

HARRISBURG, Pa. (AP) — Pennsylvania consumers would be notified when content has been generated by artificial intelligence, and defendants couldn’t argue that child sexual abuse material created by artificial intelligence isn’t illegal, under a bill the House passed Wednesday.

The bill’s prime sponsor, Rep. Chris Pielli, said it was designed to place guardrails around the use of artificial intelligence to protect consumers.

“This bill is simple,” Pielli, a Democrat from Chester County, said in floor remarks. “If it’s AI, it has to say it’s AI. Buyer beware.”

Lawmakers voted 146-54 to send the measure to the state Senate for its consideration. All Democrats were in favor, while Republicans were roughly split.

The bill would change the state’s Unfair Trade Practices and Consumer Protection Law to require “clear and conspicuous disclosure” when artificial intelligence has been used to create written text, images, audio or video.

The notice would have to be displayed when the content is first shown to consumers. To be liable, violators would have to knowingly or recklessly post AI-generated content without disclosure, a standard Pielli said would help protect news organizations that unwittingly publish AI content.

The bill is opposed by the Pennsylvania Chamber of Business and Industry on the grounds that it could expose businesses to civil litigation and is not limited to deceptive material. The group specifically objects to the consumer notification portion of the bill, a chamber spokesman said.

Another provision of the bill prohibits defendants from arguing that child sexual abuse material created by artificial intelligence isn’t illegal under criminal laws.

Public disclosure of AI’s use is an emerging theme across hundreds of state bills in U.S. legislatures that seek to regulate the new technology.

AI filters job and rental applications, determines medical care in some cases and helps create images that find huge audiences on social media, but there are scant laws requiring companies or creators to disclose that AI was used at all. That has left Americans largely in the dark about the technology, even as it spreads to every corner of life.

Margaret Durkin, TechNet executive director for Pennsylvania and the mid-Atlantic, said in a statement Wednesday that her organization expects to work with lawmakers on the definition of AI, “to decrease the uncertainty of who and what is affected.”

TechNet is a trade group of senior executives that lobbies for tech companies such as Meta and Google. Spokesman Steve Kidera said the group hopes to work with lawmakers to move from an opposed position to a neutral one.

“For example, how does a football broadcast that uses AI to show predictive visual cues know when it’s the first time a consumer is interacting with their AI? If a copywriter uses a generative AI product to help them write something, are they obligated to present a disclosure? And how do they do that?” Durkin asked.

The Washington, D.C.-based BSA The Software Alliance, which advocates for the global software industry, said that as of early February there were several hundred AI-related bills pending before about 40 state legislatures. Topics covered by the bills include the risk of bias and discrimination, and deepfakes.

___

This story has been edited to correct the TechNet official’s name to Durkin, not Durking.

___

Associated Press writer Jesse Bedayn in Denver contributed to this report.