Westworld shouldn’t frame debate over artificial intelligence

In the HBO show Westworld, guests are immersed in an experiential Western theme park populated by human-like robots bordering on self-awareness. The show exploits and exacerbates natural anxieties about artificial intelligence, warning of a world where humans are left jobless, powerless and at the mercy of robots.

In reality, we should welcome AI. It’s not that scary and it’s already improving our lives.

Unlike the robots in Westworld, which are designed to achieve human consciousness, the limited forms of AI that exist today allow computers to perform specific tasks that require learning or understanding, such as voice recognition or translation. Continued AI adoption and advancement means we will be more productive and prosperous, with more free time on our hands. The chores we find difficult, annoying or time-consuming will be streamlined. When we need to make decisions, we will have a greater wealth of relevant information at our fingertips.

For instance, Google’s Cloud Vision API can analyze images to understand text, detect objects and reveal the emotional sentiment of crowds. Amazon’s Alexa may soon use AI to respond appropriately to your mood, or predict when you’ll run out of paper towels. Even mattress manufacturers are embracing machine learning to improve sleep by adjusting shape and firmness to minimize snoring and restlessness. On the job, AI can act like your personal assistant, helping you automatically respond to emails or schedule meetings.

A September 2016 study by Accenture projects that AI could double annual economic growth rates in developed countries and increase labor productivity by up to 40 percent by 2035. Moreover, according to the Analysis Group, efficiency gains from AI will generate between $1.49 trillion and $2.95 trillion in global economic growth. Such gains will come in the form of better decision-making and better resource management.

AI will chug along in the background of our lives, improving simple things like image recognition, fraud detection and translation. As the technology progresses, it will take on more complex tasks, such as powering digital assistants, catching health problems early and enabling autonomous vehicles.

However, AI is already confronted by precautionary regulatory scrutiny – perhaps from regulators who have watched too much Westworld. Policymakers are concerned about the potential for AI decision-making to cause physical or social harm. For example, the EU’s General Data Protection Regulation restricts “automated individual decision-making” and states that individuals “shall have the right not to be subject to a decision based solely on automated processing.”

The EU’s approach spells doom for products that rely on sophisticated AI techniques like neural networks. Deep neural networks reach their conclusions through layers of algorithmic logic that even their designers cannot readily explain, though interpretability techniques can offer partial insight. It is still unclear how these restrictions will play out in practice when the GDPR comes into effect in May 2018. However, it could mean that Europeans lose access to a variety of AI-driven features on platforms like Facebook, Google and Amazon, as companies decide to steer clear of restricted markets rather than shoulder steep compliance costs and potential liability.

This so-called “right to explanation” in the GDPR gets at the heart of the debate over algorithmic transparency. There are legitimate concerns about facial recognition algorithms used by law enforcement, which may discriminate based on race, among other factors. But the GDPR restricts all decisions based solely on automated processing, including ones with no possible discriminatory outcome – like my digital assistant ordering me more trash bags.

Here in the United States, policymakers are taking interest. The White House released two reports last fall exploring the effects of AI on society and public policy, including automation’s impact on jobs. Additionally, the Federal Trade Commission’s newly minted Office of Technology Research and Investigation will be researching the extent to which software reinforces biases. U.S. policymakers should be careful not to follow the EU’s ham-fisted approach to regulating the technology.

We are not blindly heading toward certain demise, as the human overlords in Westworld are, nor is Skynet slated to become self-aware anytime soon. The AI anxiety induced by TV and movies threatens to detract from an understanding of the AI we already experience today and of how it is likely to develop in the future. An instinct to smash the machines will only prevent AI from making us all better off in the real world.

Anne Hobson is a technology policy fellow with the R Street Institute.


The views expressed by contributors are their own and are not the views of The Hill. 
