
Time to follow Europe’s lead on social media regulation


How did harmful content and misinformation online get so bad, despite so much time, money, and scores of people trying to limit the spread of inaccurate, violent, obscene, and harmful material?

For starters, the problem is much bigger than it used to be, and human-driven efforts cannot keep up. Even with tens of thousands of people tackling the problem (Meta’s Trust & Safety team has swelled to an army of more than 40,000), the sheer volume of digital content is overwhelming. Moderating this content by human review is not only time-consuming, ineffective, and error-prone; it can also endanger the mental health of the moderators who must sift through objectionable content day in and day out.

This go-it-alone approach to self-regulation has failed. Now, legislators’ patience has worn thin, and users are more vulnerable than ever.

All signs indicate a turning point in how business leaders and government bodies approach social media regulation. But what exactly will this look like, and how will platforms balance supporting free speech with getting a handle on the rampant misinformation, conspiracy theories, and promotion of fringe, extremist content that contribute to real-world harm?

Misinformation is especially tricky to regulate because of the danger to freedom of speech when governments decide what counts as misinformation. Not regulating it means the harm continues; regulating it means governments are telling people what information they are allowed to share.

A global regulation deluge

Now is a good time to turn to Europe for a lesson in thoughtful and effective regulation that looks at the whole problem rather than just its parts.

Recent examples include the Digital Services Act (DSA) in the EU, the Online Safety Bill in the UK, and Australia’s Safety by Design initiative. All told, 14 international jurisdictions have recently introduced or announced more restrictive online content regulations, together creating several hundred new obligations for platforms that host this content.

The DSA, for example, requires platforms to remove illegal content and goods faster, explain how their algorithms work, and take stricter action against the spread of misinformation. There are also requirements for age verification to protect minors, a very good thing. And the fines are stiff: up to 6 percent of a platform’s global annual turnover for noncompliance. Most of these regulations have been announced or proposed but are not yet implemented; still, they point to positive change.

What to watch for as regulations unfold

There are several trends to watch as these regulations are implemented. So far, many of the obligations apply only to the largest platforms, but we expect them to be rolled out to a broader set of smaller companies.

Independent audits will become commonplace, with less reliance on platforms’ self-reporting. Age gating and verification to protect minors is one of the most pressing areas, and regulators are working with urgency to push new laws through. The algorithms and machine learning systems that social media platforms use to promote and distribute content will be scrutinized for their impact on user safety. Performance requirements will be defined, and measurement and monitoring will be required.

The ultimate goal is to prevent harm to users by reducing objectionable and harmful content, while protecting freedom of expression online. After all, what stops governments from overzealous regulation that serves their own political goals and suppresses dissent? Some governments, such as China’s, have always been heavy-handed in regulating speech online, but others are increasingly following suit.

Many recent laws, announced under the guise of “online safety,” are also restricting what people can say. Is there a good balance? Yes, but laws should focus on the areas of greatest harm to physical safety. Regulations should also include citizen involvement, transparency and regular review, measurement and monitoring, and the development of industry standards.

The upshot for trust & safety teams

For now, trust & safety teams at social media companies around the world are feeling the squeeze and anxiously tracking hundreds of proposed laws in many countries.

This flurry of new laws forces companies to focus on compliance, transparency, and improved enforcement. It will lead to increased funding and executive attention. The stakes for delivering in this area are higher than ever, but trust & safety teams already have a lot on their plates. They now also have to deal with laws that differ across jurisdictions, lack specificity, and carry tight timelines, making compliance even harder to achieve.

It is going to get worse for trust & safety teams before it gets better, but there may be a silver lining in adept regulation and technologies that can assist them in identifying high-risk and unsafe content, accounts, and transactions at scale.

Specifically, ML-based classifiers and rules engines from companies that specialize in trust & safety are now helping humans better assess fraud and safety risks on their platforms and prevent user harm, much as GDPR guided companies toward stronger data protection and user privacy.

Look toward Europe for what’s to come

As global tech companies scramble to solidify prudent strategies for policing content, they would be wise to take the EU’s comparatively stringent regulations as their yardstick.

Both tech platforms and lawmakers in the U.S. intent on reining in harmful content can benefit from looking to the EU’s rules for inspiration. The top priority: Freedom of speech needs to be protected, and guardrails must be defined. 

My recommendation: Company leadership needs to put trust & safety compliance at the top of their priority lists.

Long term, we need industry standards and more proactive self-governance; otherwise, an increasingly dense web of disjointed and hard-to-implement laws will make the task of overburdened trust & safety teams nearly impossible to accomplish.

Tom Siegel is co-founder of Trust Lab (San Francisco), a leading technology platform to measure and manage safety on the web, and former VP of Trust and Safety at Google. He serves on a number of trust & safety advisory boards and industry initiatives.

