A safe, open internet with transatlantic rules is easier than it sounds
The internet is global, but the laws that govern it are local.
Putin has lowered a digital iron curtain around Russia to camouflage his brutal invasion of Ukraine by blocking truth and spewing disinformation. In response, transatlantic democracies, tech leaders and the public should join forces to forge an open and safer internet that respects free expression and human rights. The future of the global internet may well hinge on greater transatlantic alignment to present a clear alternative to the internet of despotic regimes.
On both sides of the Atlantic, internet users face the same issues of disinformation and online harm, amplified by the same powerful digital platforms. But despite increasingly similar governance ideas, transatlantic collaboration on a comprehensive digital regulatory regime is not in the cards, given the disparities between U.S. and European legal systems, norms and priorities, along with starkly different time frames.
Two weeks ago, the European Union reached a deal on the final terms of its landmark regulation, the Digital Services Act. The EU leads the way on what it hopes will become the global standard, following in the footsteps of the General Data Protection Regulation (GDPR), its privacy and data protection regime. The United Kingdom is not far behind, with the possible adoption of its massive Online Safety Bill early next year.
In contrast, the United States is still at the legislative starting gate. Congress is flooded with bills to regulate the tech industry, yet none commands a clear path to enactment, given the lack of consensus on what is needed and the scarcity of legislative days left this term. While the U.S. plods behind, neither the EU nor the U.K. will stand still.
While full alignment is impossible, there is a way right now for transatlantic democracies to collaborate on specific functions. It’s called modularity. Working with civil society and the tech industry, transatlantic governments would recognize common “modules” — discrete standards, protocols, codes of conduct, or oversight systems — as satisfying the requirements of their separate regulatory regimes.
Examples of possible modules include systems for vetting researchers and approving their access to platform data; vetting procedures, minimum standards and oversight for independent auditors seeking to conduct risk assessments and algorithm impact audits; and minimum disclosure and archiving rules for political advertising.
Legislators, academics and industry experts already are partnering on a variety of standards, protocols and best practices that could serve as a foundation for such modules.
There are compelling benefits to be gained when disparate governments recognize common modules. Greater regulatory consistency would conserve regulators' limited resources, since they would not have to create separate mechanisms from scratch. It would improve platform compliance while reducing the cost and uncertainty of conflicting rules. And it would enable module updates to apply immediately across jurisdictions. Sunset provisions could be built in to ensure that the modules are regularly assessed for effectiveness.
This approach isn’t just good for governments. For the private sector, abiding by common modules across jurisdictions would make their compliance burdens more consistent, predictable and manageable. For internet users worldwide, it would improve their experience and set clearer expectations. In countries where the modules are required by law, platforms would be obligated to conform; but even where compliance is optional, the benefits of consistency would encourage voluntary adoption, as we’ve seen with the GDPR.
Encouragingly, both the Digital Services Act and the current Online Safety Bill draft anticipate that a variety of stakeholders will develop these codes of practice and other requirements. For example, the Digital Services Act requires the European Commission to support international standards bodies that are developing voluntary standards for platform audits and to build industry and civil society participation in drawing up codes of conduct.
The Online Safety Bill similarly requires Ofcom, the U.K.’s communications regulator, to consult with various stakeholders before drafting regulations and codes to implement the law. Critically, the Online Safety Bill also envisions that compliance with equivalent standards may suffice, which could open the door to acceptance of modular standards and codes.
What is missing is an explicit agreement by transatlantic governments to work together to allow these common modules to satisfy requirements in their laws, and to add enabling legislation where it is needed.
Here is their chance. This month, the G7 Digital Ministers will meet in Germany and the U.S.-EU Trade and Technology Council will convene in Paris. Initiating a discussion of modularity at these gatherings, for further development afterward, would be exceptionally timely and productive.
This moment presents a powerful opportunity for democracies to collaborate on the technical systems and protocols that underpin governance of the digital realm, and to ensure the survival of an open and safer internet that respects free expression and human rights.
Susan Ness is a distinguished fellow of the Annenberg Public Policy Center (University of Pennsylvania) and the German Marshall Fund of the United States, and is a former member of the U.S. Federal Communications Commission.
Chris Riley is the principal of Cedar Road Consulting, a senior fellow of internet governance at R Street. He is a global internet policy and technology researcher, a former director of policy for Mozilla, and a former internet freedom program manager at the U.S. Department of State. The views expressed in this piece are solely those of the authors.