
Humanity’s future hinges on smarter AI use

18 July 2024, Cottbus, Brandenburg: A live interaction between a simulated patient (Doris Härtel) and a robot at a press event at the Carl-Thiem-Klinikum Cottbus. A pilot study at the Brandenburg University of Technology Cottbus-Senftenberg (BTU) is investigating how the interaction modalities of a social robot (Pepper) affect the engagement and perception of test subjects in typical care situations. (Photo by Patrick Pleul/picture alliance via Getty Images)

As critical as they are, the future of our species will not be determined by the crises in Ukraine, the Middle East or the Taiwan Strait, or even by the upcoming U.S. elections.

The ultimate story of this moment in history is that our species is gaining an ever-greater ability to engineer intelligence and reengineer life. The ultimate question is whether we will use these newfound and almost godlike superpowers wisely.

Far faster than most people appreciate, the intersecting AI, genetics, biotechnology and other technological revolutions will transform our personal and professional lives and, in many ways, our world. Like agriculture, writing and electricity, these technologies will, in one way or another, be woven into almost everything and bring with them a whole new set of possibilities.

Most of our jobs will change. We’ll upgrade our health care and agriculture, start growing animal products, fuels, plastics and other industrial materials from a few cells in labs, and take another big step forward in our species’ ongoing co-evolution with our technologies. As this happens, we’ll need to upgrade our governance capabilities to match our scientific and technological superpowers.

There’s a lot that could go wrong — from runaway AIs to synthetic biology pandemics, autonomous killer robots, dangerous human experimentation and gene drives inadvertently crashing entire ecosystems. Crises could be sparked by bad actors or even by well-intentioned scientists and others whose efforts to make the world a better place go wrong. We should be worried.

But there’s a reason evolution has maintained the emotion of anxiety — it’s nature’s way of inspiring us to prevent the outcomes we fear and organize around the ones we want.

As in a Georges Seurat painting, which looks like a jumble of dots up close but reveals a Paris park from a few steps back, each of us must take responsibility both for the dot of our own life and for the bigger story of the world we are creating together.

If we think of this collective process as a pyramid, the base of the pyramid is individual education, engagement and empowerment. The tip is global superstructure reform. The middle sections are what we do in our families, communities, countries and internationally. Because our species’ greatest challenges are global and common, we’ll need to recognize our deep-seated interdependence and enhance our abilities to solve global problems together.

On an individual level, each of us will need to start planning for what’s heading our way.

Because the AI and technology revolutions will eliminate some jobs, create others and alter nearly all that remain, we’ll each need to figure out how best to work alongside our AI co-pilots.

Most jobs can be broken down into a discrete number of core tasks. We should try to assess whether each of these tasks will in ten years be primarily a machine task, a human one or something in between. If it’s one best carried out by machines, we should be spending our time setting up those systems. Better for us to be first-rate humans than second-rate machines.

But we should be investing most of our time in functions we believe humans will do better than machines for the foreseeable future, perhaps including creative thinking, community building and systems integration. We’ll need to be continually figuring out how we and our machines can best work together.

We’ll also need to think more expansively about how and what we learn. All of us now engage in “line of sight” learning to fulfill the tasks immediately before us. But because change is coming at us from so many angles, that’s no longer enough. We should each be spending at least two hours a day exploring content and ideas adjacent to our direct fields of vision, letting curiosity be our guide.

Technology may bring us to this conversation, but this is ultimately about our humanity and our values. No technology comes with its own built-in value system or inevitable future. It’s up to us to infuse our best values into the technologies we develop and deploy.

Just as sailors traditionally oriented themselves by the North Star as they tacked back and forth toward their destinations, we must orient ourselves by a clear appreciation of who we are and what we hope to achieve. If we don’t, we’ll wind up in an unrecognizable and unfriendly world, unsure how we got there.

We may hope our businesses, governments and international institutions have plans for making sure our technological revolutions unfold optimally, but they don’t. Many of them are trying, but the job is just too big for them alone.

We can only build the kinds of systems and safeguards we need — and the futures we want — together.

Jamie Metzl is a leading futurist and the founder and chair of OneShared.World. He is the author of “Superconvergence: How the Genetics, Biotech, and AI Revolutions Will Transform Our Lives, Work, and World.”
