Stop letting governments request social media censorship in secret

Governments around the world routinely ask global social media and search platforms to remove content. This can be a positive thing when the content in question is clearly harmful. But it can be nefarious when the content is merely inconvenient or disagreeable to a government's position on a particular current news topic.

Unfortunately, when governments around the world request or demand censorship, they do so with the expectation that the request will remain private. Some online platforms report a summary of these government requests; others are completely silent.

This has been observed recently in content removal requests from a powerful court justice in Brazil. X brought the most recent requests into the glare of publicity with its initial refusals to comply, but it is clear that Brazil and other governments around the world intend to silence their opposition online without the embarrassment of making such requests in public.

The massive publicity around Australia's recent request to remove content has triggered a national discussion there on content moderation and the role of government in making specific removal requests.

There are also undoubtedly numerous government requests to remove content that creates a truly imminent threat of harm, yet platforms are sometimes slow or unable to remove such content quickly. Transparency for all these government requests, both good and nefarious, will improve both online safety and viewpoint neutrality.

Governments, including the EU and U.S. federal agencies, have built or are building organizational infrastructure to make content removal requests. These agencies expect that their requests will not be made public and that the social media platforms will quietly comply as directed to avoid regulatory, legal or financial consequences.

When these requests are made public, however, as we have seen with X publicly refusing the recent requests from Brazil and Australia, they can be scrutinized and judged by the public in those countries as well as globally.

If social media platforms are forced to provide full transparency, it will expose government requests that prove to be nefarious efforts to censor inconvenient or disagreeable content that is not imminently harmful. That exposure will make governments much more cautious about making such requests.

Similarly, under such a transparency mandate, if government requests are legitimate and seek to remove imminently harmful content such as threats of violence or child pornography, then the publication of such requests will drive social media platforms to improve their content moderation and do a better job of ensuring such content is quickly removed and not widely available in the future.

Thus, transparency is a win-win.

Unfortunately, social media platforms currently provide minimal transparency about their enforcement actions. They release spreadsheet reports that merely apply the generic label "government request" to a long list of unspecified enforcement actions, without reporting the type of content, which government made the request, what enforcement action was taken, or whether the targeted user was notified of the specific reasons for the action.

While our divided Congress cannot easily agree on how to define even the basic guardrails of online safety and viewpoint neutrality, it is far easier and far less controversial to mandate enhanced transparency for all social media content moderation activities.

This would include specific explanations to users affected by content enforcement actions, including the exact content rules broken, the exact content that broke those rules and any third-party fact-checkers involved. It would also include quarterly reports detailing all enforcement actions (including hidden de-amplification actions); the content rules broken; the specific types of content affected; all government requests (excepting actual national security or law enforcement requests); and the public entities or figures affected, without compromising the privacy of non-public individual users.

Explanations of enforcement actions will improve user trust in the platforms. The quarterly reports will allow the media, academics and the public to compare the performance of search and social media platforms on both online safety and viewpoint neutrality.

Government requests that are clearly nefarious, or aimed merely at inconvenient viewpoints, will suffer from the glare of publicity in the court of public opinion.

Mike Matthys is co-founder of the Institute for a Better Internet.