
OpenAI’s warning shot shows the fragile state of EU regulatory dominance


On May 24, Sam Altman, CEO of OpenAI, the company behind ChatGPT, suggested that the company might cease operating in the European Union (EU). His remarks followed the European Parliament’s recent vote to advance its draft AI Act.

In response to criticism from EU industry chief Thierry Breton, Altman tweeted days later that OpenAI has no plans to leave Europe. Yet the very threat of such a departure underscores the need for continued dialogue on AI regulation. As competition intensifies, regulatory collisions within this multi-trillion-dollar industry could be disastrous: a U.S.-EU misalignment could generate enormous inefficiencies, duplicated compliance efforts and opportunity costs.

Most importantly, the mere possibility that OpenAI would depart could signal the demise or significant weakening of European regulatory primacy — something known as the “Brussels Effect” — given the company’s widespread influence and applications.

The Brussels Effect describes how the EU’s market power alone is enough to push multinational companies to comply voluntarily with its regulations and to encourage other countries to adopt similar laws of their own. In recent years, however, the EU has implemented various interventionist policies that critics argue hinder innovation both within and outside the region. One of these is the General Data Protection Regulation (GDPR). Adopted in 2016 and applicable since May 2018, it quickly inspired data privacy copycat laws worldwide, including in many U.S. states. Yet many have argued that its vague provisions and lack of compliance guidance have rendered it ineffective.

Another example is the EU’s Digital Markets Act (DMA), which entered into force in November 2022. It targets “gatekeepers,” the large platforms that provide core digital services, which the European Commission claims “prevent competition, leading to less innovation.” Critics counter that the DMA worsens services for consumers and that its “big, bad tech” approach actually reduces innovation.

The EU’s new AI Act shares similar flaws with the GDPR and the DMA. Altman has labeled the policy “over-regulating” and said that if compliance proves impossible, OpenAI will cease operating in the EU. His primary concern is the AI Act’s requirement that companies disclose the copyrighted material used to train and develop generative AI tools like ChatGPT. Complying with that particular rule would be essentially impossible for AI companies.

The EU’s reaction to Altman’s statement will reveal the extent of American firms’ direct influence on European regulatory decisions. If the EU amends its policies in line with OpenAI’s objections, it may signal a further weakening of de facto EU standards. Conversely, if the EU enforces the rule as written, OpenAI’s withdrawal would send a message to other countries and companies around the world that the EU market can be forgone.

Either way, there is a risk of divergence between EU and non-EU standards, resulting in a fragmented AI regulatory landscape with varying levels of accountability and limitations.

The U.S. and the EU already take starkly different approaches to regulating AI and promoting ethical, responsible innovation. Although both offer similar guidelines on non-discrimination, accuracy, robustness, security and data privacy, the EU’s approach is far more centralized and punitive. By contrast, the nonbinding U.S. Blueprint for an AI Bill of Rights delegates regulatory decisions to individual agencies, whose authority to enforce them is unclear, and favors a tailored, sector-specific approach.

Moreover, the U.S. federal government has been investing more heavily in AI innovation. In fact, on May 4, the White House announced $140 million in funding to launch seven new “National AI Research Institutes.” 

And when it comes to AI, U.S. states have already broken from their pattern of following EU tech regulatory standards.

Ultimately, Washington will have to decide whether the benefits of regulatory differentiation from the EU outweigh the costs. It remains to be seen whether an innovation-centered American strategy will give rise to new models of responsible AI. But if U.S. firms and agencies continue charting their own course on AI regulation, it could undermine the Brussels Effect, potentially eroding Europe’s sway over global tech norms.

April Liu is a research associate at the Libertas Institute, specializing in data privacy, tech and AI regulation.
