The EU should support Ireland’s bold move to regulate Big Tech

European Commissioner for Europe fit for the Digital Age Margrethe Vestager, left, and European Commissioner for Internal Market Thierry Breton during a news conference on the Digital Services Act and the Digital Markets Act at the European Commission headquarters in Brussels, Tuesday, Dec. 15, 2020. (AP Photo/Olivier Matthys, Pool)

A month ago, Dublin, Ireland, was stunned by its worst rioting in decades, riots that turned the city center into chaos and stemmed from far-right online rumors about an attack on children.

The riots, like Jan. 6, appear to be a direct outgrowth of the amplification ecosystem supported by social media networks such as TikTok, Google’s YouTube and Meta’s Instagram, which likely keep their European headquarters in Dublin for tax reasons.

Ireland, long ridiculed for bowing to Big Tech, has now come out with a powerful proposal to address the problems of algorithmic amplification. Earlier this year, it set up Coimisiún na Meán, a new enforcer charged with setting rules for digital platforms.

It has proposed a simple, easily enforceable rule that could change the game: All recommender systems based on intimately profiling people should be turned off by default.  

In practice, that means that the big platforms cannot automatically run algorithms that use information about a person’s political views, sex life, health or ethnicity. A person will be able to switch an algorithm on, but those toxic algorithms will no longer be on by default. Users will still have access to algorithmic amplification, but they will have to opt in to get it.  

Today, algorithms feed each user different information. They derive their power from the trove of personal data that platforms acquire about users, data that enables the identification and exploitation of emotional weak points, all to maximize engagement. 

Platforms do not acknowledge responsibility for downstream consequences. Some users respond best to cat videos, others to hate speech, disinformation and conspiracy theories. For many, the response to harmful content is involuntary, driven by fight or flight. Either way, users spend more time on the platform, which allows the company to make more money by showing them ads.

Artificially amplifying outrage may be lucrative, but it carries a terrible cost. Recommender systems enable extremist content to migrate from the fringe to the mainstream. An investigation revealed that Meta’s algorithms were key contributors to the murderous hate that cost thousands of people their lives in Myanmar’s 2017 Rohingya genocide.

Frances Haugen, a whistleblower, revealed in 2021 that Meta had known the danger of its algorithm for years. As long ago as 2016, Meta’s internal research had reported that “64 percent of all extremist group joins are due to our recommendation tools.” 

It continued: “Our recommendation systems grow the problem.”  

When 37,000 people allowed researchers to monitor their YouTube experience, nearly all the nasty videos they had encountered were pushed into their feed by YouTube’s algorithm. An experiment using simulated users found that Facebook, Instagram and X, the platform formerly known as Twitter, recommended antisemitic and conspiracy content to test users as young as 14 years old.

Earlier this year, the surgeon general spoke about the danger of algorithms that promote suicide and self-harm. A recent experiment by Amnesty International proves the point: TikTok’s algorithm recommended videos encouraging suicide to a test user posing as a 13-year-old only an hour after the account was created.  

Recommender systems didn’t really take hold until 2010. We have tried trusting the Big Tech platforms, but 13 years into the experiment, with plenty of data showing harm, it is clear we cannot trust technology companies to regulate themselves in the public interest.

We do not want the government to be in the business of sorting through what is and is not harmful if amplified. The brilliance of the Irish model is that it offers a way forward: rules that are content-neutral, giving users control of one critical aspect of their online experience.

After years of being a tax haven for Big Tech, Ireland is now offering the world a groundbreaking rule to protect democracy, public health and public safety. In under nine months, Coimisiún na Meán has gone from initial launch to tackling the machine at the heart of the disinformation crisis. 

The rule is necessary because current European Union regulations aren’t working and the new Digital Services Act is not designed to tackle the core problem. Under the EU’s General Data Protection Regulation (GDPR), tech firms are already supposed to get a person’s “explicit” (two-step) consent to process inferences about their political views, sexuality, religion, ethnicity or health. Several complaints that the big firms have failed to seek or receive this consent remain unresolved years after being brought. But for over five years, Big Tech’s primary GDPR authority, which is in Ireland, failed to notice or act.

Europe often trumpets its regulatory leadership in the world. But the so-called “Brussels Effect” of other countries heeding its rules began to dissipate when Europe failed to enforce its most famous law, the GDPR. The European Commission is understandably focused on the Digital Services Act, which goes into effect next month, but EU policymakers should welcome the new Irish rules.

Coimisiún na Meán’s bold move would ultimately make the Digital Services Act far more successful. Europe and the Irish government are stepping up at last to regulate harmful technology products. Social media may become social again.  

Zephyr Teachout is a professor of law at Fordham Law School, where she focuses on the intersection of corporate power and political power. Roger McNamee was an early investor in Facebook and an advisor to Mark Zuckerberg. He is the author of the New York Times bestseller “Zucked: Waking Up to the Facebook Catastrophe.”
