
Artificial intelligence doesn’t have to be inhumane


Artificial intelligence (AI) doomers warn us of mass extinction events, of AI setting off nuclear weapons and of supervillains planning catastrophic disasters. But life isn’t a movie. As AI experts, our worst-case scenario is a technocentric world where the blind pursuit of AI growth and optimization outweighs the imperative for human flourishing.

What does such a world look like? A technocentric world is highly optimized, so it has the veneer of productivity. People are online, on a screen and, in general, always “on.” We wear headsets, earbuds, goggles and microphones, immersing ourselves so deeply it’s as if we are hiding from something. Meanwhile, polite chirping constantly corrects us or nudges us to our next task.

And yet, we have no idea what we spend our time doing. We have a life of disappearing hours, animalistically consuming media that makes us feel extreme yet empty emotions. We are constantly surveilled by systems that are mapping our every movement, putting it into an algorithm, and determining whether we are driving safely, getting enough steps, deserving of a job, cheating on an exam or, simply, somewhere it doesn’t think we should be. We are so overwhelmed we feel nothing.

A technocentric world is based on the premise that humanity is flawed and technology will save us.

A world dominated by humanity-erasing technology isn’t too far from our future. A surgeon general’s advisory warns us that social media presents a “meaningful risk of harm” to youth — and yet 54 percent of teens say it would be hard to stop using it.

What’s to blame? Poor decisions by children? Bad parenting? Or revenue-driven engagement optimization?

But let’s not just point the finger at social media companies. Algorithmic management — the use of surveillance technologies to track and monitor employees — creates situations where workers are urinating in bottles because they must meet stringent time constraints (and algorithms don’t need to use the bathroom). Similarly, algorithms are used to inappropriately fire hard-working Army veterans in the most soulless manner: an automated email message. This lack of basic dignity in the workplace is an inhumane byproduct of hyperoptimization.

This wave of indifference isn’t limited to America. We trust our AI-developed content won’t be harmful because individuals in the Philippines, India and across the African continent are paid less than $2 per hour to sanitize our experience. Content moderation, a commonly used practice in all forms of AI-curated or AI-developed media, is known to cause post-traumatic stress disorder in moderators. We distance ourselves from human trauma behind glowing screens.

Nor is this only a problem for blue-collar workers. The first wave of layoffs due to AI automation has hit college-educated workers, from designers and copywriters to programmers. This was predicted by OpenAI, the company that built ChatGPT. And yet, all it seems we can do is wring our hands in despair.

We should be familiar with these problems; after all, these technologies simply amplify and entrench the inequalities, biases and harms that already existed.

What are we doing? Why are we doing it? And more importantly, what do we do about it?

The worst-case scenario about AI isn’t about AI at all. It’s about humans making active decisions to pursue technological growth at all costs. Both AI doomer-speak and AI utopia-speak use the same sleight of tongue when they anthropomorphize AI systems. Moral outsourcing is insidious; when we ask whether “AI will destroy/save us,” we erase the fact that human beings create and deploy AI in the first place. Human-like interfaces and the allure of data-driven efficiencies trick us into believing AI outputs are neutral and preordained. They are not.

Technoexceptionalism tells us that the problems AI introduces are unprecedented, and only those who built it can tell us how it can be governed. This is simply incorrect. Most technologists are ill-equipped to wrestle with the ethical challenges introduced by technology. Good governance exists to empower, and we need a group acting on behalf of the common good.

One way to stop our worst-case scenario is by investing in global governance — an independent body that collaborates with governments, civil society, researchers and companies to identify and address the problems of AI models. A group like this could confront the biggest societal challenges and arm the world’s existing governance ecosystem with the means to guide AI’s development for public benefit.

A global governance entity should have the mission of optimizing human flourishing. This doesn’t mean AI assistants, or AI “for good,” but investment in the intangibles of humankind. Humanity isn’t an inefficiency to be optimized away but something to be carefully protected and nurtured. An investment in humanity isn’t about enabling further billions for the builders of these technologies and their investors — it’s investment toward ensuring that society thrives in a way that respects democratic values and human rights for all. 

A mission of human flourishing sounds vague, nebulous and far-fetched — but isn’t it a fair match to the AI companies’ equally far-fetched goal of artificial general intelligence? Our efforts to preserve humanity must be on par with the investment and ambition being placed toward artificial intelligence.

Rumman Chowdhury is the Responsible AI Fellow at Harvard University’s Berkman Klein Center for Internet and Society.

Sue Hendrickson is executive director of Harvard University’s Berkman Klein Center for Internet and Society.

