The views expressed by contributors are their own and not the view of The Hill

Government AI regulators won’t survive political pressures, but there’s a better way

The FDA, CDC and NIH started as non-political, scientific entities focused on research and ensuring that new drugs are safe and effective. Now, a massive transformation has taken place, and not for the better.

We have observed agency employees actively censoring inconvenient online content, while the policy preferences of political insiders, such as teachers unions, were promoted at the expense of science. Academic debate surrounding the science of COVID was shut down on social media, while pharmaceutical companies made their greatest profits to date.

All this was happening with the helping hand of these now-politicized government agencies. 

We are now entering the world of artificial intelligence and all it has to offer. So how do we regulate it? Whether legislation designates a Federal Digital Platform Commission, a department inside the FTC, or an AI agency as proposed by Sam Altman, such government entities will inevitably be tainted by the political process of approving agency leaders and budgets. And that is before considering the non-stop political culture of Washington, D.C., and regulatory capture by the largest AI and online content platform companies.

Most leaders recognize that our government should not create an agency that would operate as a Ministry of Truth to combat misinformation. Besides violating our constitutional protections of freedom of speech and the press, such an entity will inevitably fail due to the fundamental problem of "Who decides?" Our own government has regularly promoted false information and fought to censor disagreement on many important topics, whether it is WMD in Iraq, collusion by a presidential candidate with Russia, or the Wuhan lab as the source of COVID.

Democrats who want government agencies to regulate AI and online content need to remember that political party control has historically shifted at least every eight years. Would Democrats want a future Republican-appointed agency to decide which online information is true and which is false?

Although falsehoods can be harmful, the Supreme Court has regularly made clear that the effort by government to determine what is true or false is far more harmful. What is needed is to shift the test on online content away from whether it is true or false, to a new test of whether it is imminently harmful to people.  

A set of four simple, broad guardrails provides straightforward criteria for AI, search, and online content moderation. The first is safety: content that is imminently harmful to persons should be moderated.

The second is neutrality: a platform with a sufficient number of users should avoid taking sides on controversial issues.

The third is transparency: AI training data, content moderation rules, and enforcement actions should be published.

The fourth is accountability: an entity independent of both government and the social media platforms should adjudicate appeals.

The safety, neutrality and transparency principles are straightforward. For accountability, FINRA, a non-government body, provides an example of an existing entity that independently adjudicates appeals of disputes, in its case between retail investors and their financial advisers.

A similar non-government entity, independent of the online platforms, could be carefully constructed to ensure that appeals of user disputes are handled fairly at large scale and without excessive litigation. Like FINRA, this new entity could draw on the thousands of available arbitrators who are experienced at assessing harm and adjudicating disputes, in this case between users and the monopoly-scale platform companies.

Both sides of the legislative aisle need to recognize the country’s need for a non-political, non-government solution that both protects online users from harm and ensures all voices and viewpoints can be equally heard. After all, what goes around comes around, and whether it’s 2024 or 2028, the pendulum will swing back the other way soon. 

Michael Matthys is cofounder of the Institute for a Better Internet.