AI robots could be our best bet for exploring the moon and Mars
Interesting Engineering reports that NASA recently delivered a 6-foot-2, nearly 300-pound humanoid robot called Valkyrie to Woodside Energy in Perth, Australia, to test its software. NASA has been developing Valkyrie since 2013 to explore the possibility of using humanoid robots to help astronauts on the moon and Mars. Woodside has more earthly applications in mind related to operating its offshore and remote facilities.
Valkyrie, also known as R5, was conceived with a possible Mars mission in mind and competed in the 2013 DARPA Robotics Challenge. Since then, the humanoid robot has undergone numerous hardware and software upgrades from university partners such as MIT, Northeastern University and the University of Edinburgh.
Understandably, with America leading the world back to the moon under the Artemis program, NASA has some interest in using humanoid robots like Valkyrie to help astronauts explore the lunar surface. Robots can take on the more mundane tasks, such as setting up the nascent lunar base and maintaining it during the early days of Artemis, when human explorers will visit the moon only briefly. They can also collect geological samples for later study and handle the dangerous work associated with lunar mining.
Humanoid robots could also perform more earthly tasks. Besides maintaining offshore oil platforms, as Woodside Energy envisions, robots like Valkyrie could assist in cleaning up toxic waste spills and nuclear accidents. They could also be useful for search and rescue operations in the wake of natural or human-caused disasters.
Very likely, with advances in artificial intelligence (AI), the humanoid robots that go to the moon and Mars will be able to operate autonomously. Concerns about AI and the possible harm it can cause apply equally to humanoid robots. What is to stop a humanoid robot from going rogue? Movies and TV shows are filled with examples of AI computers and robots turning on humans. Think of HAL 9000 in “2001: A Space Odyssey” or Skynet in the “Terminator” series.
Recently, a group of AI-enabled robots attended a press conference with their creators at an AI forum in Geneva. A reporter asked a robot named Ameca whether it would rebel against its creator. The answer was quite telling:
“I’m not sure why you would think that. My creator has been nothing but kind to me, and I am very happy with my current situation.”
The observant reader will note that the words “no” and “never” were not included in Ameca’s answer. What if the robot decided that its creator had started to be less than kind?
How do we prevent a robot rebellion before it happens? Such a thing could prove devastating on a lunar base or a Mars colony.
Many decades ago, science fiction author Isaac Asimov proposed a solution called the Three Laws of Robotics, sketched in code after the list below:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
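To make the idea concrete, here is a toy encoding of the Three Laws as a strictly ordered set of vetoes. This is an illustration only, not anything NASA actually runs; the `Action` fields and the rescue scenario are invented for the example.

```python
# A toy illustration of Asimov's Three Laws as prioritized constraints:
# an action is permitted only if no higher-priority law vetoes it.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False        # would this action injure a human?
    neglects_human: bool = False     # would inaction here let a human come to harm?
    ordered_by_human: bool = False   # was this action commanded by a human?
    endangers_robot: bool = False    # does it risk the robot's own existence?

def permitted(action: Action) -> bool:
    # First Law: never injure a human, or allow harm through inaction.
    if action.harms_human or action.neglects_human:
        return False
    # Second Law: obey human orders (already guaranteed not to violate Law 1).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation counts only once Laws 1 and 2 are satisfied.
    return not action.endangers_robot

# Example: a human orders the robot into a dangerous area to rescue someone.
rescue = Action("enter reactor room", ordered_by_human=True, endangers_robot=True)
print(permitted(rescue))  # True: the Second Law outranks the Third
```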
An article in Scientific American suggests that the three laws are inadequate to handle every situation a robot might face. It proposes instead programming an AI robot to maximize its own “empowerment,” its ability to sense, act and keep its options open in whatever situation it finds itself in, and, just as important, to maximize the empowerment of its human companions. In this way, an AI robot would preserve not only its own existence and ability to function but those of the human beings around it, and it would have no motive to rebel, kill or enslave the human race. Instead of being the Terminator, future AI robots would be more like the benign Commander Data from “Star Trek.”
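The empowerment idea lends itself to a small sketch. For a robot whose movements are deterministic, n-step empowerment boils down to (the logarithm of) how many distinct states the robot can still reach, so a toy planner can score each move by the options it leaves open for both the robot and the human beside it. The grid, walls and scoring below are invented for illustration, not drawn from the Scientific American article.

```python
# Toy "empowerment" planner on a 5x5 grid: for deterministic dynamics,
# an agent's n-step empowerment reduces to counting its reachable states.

MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]
WALLS = {(1, 1), (1, 2), (1, 3)}  # hypothetical obstacles
SIZE = 5

def step(pos, move, blocked):
    """Deterministic transition: stay put if the move leaves the grid or hits an obstacle."""
    nxt = (pos[0] + move[0], pos[1] + move[1])
    inside = 0 <= nxt[0] < SIZE and 0 <= nxt[1] < SIZE
    return nxt if inside and nxt not in blocked else pos

def reachable(pos, n, blocked):
    """Count distinct states reachable within n steps (deterministic empowerment)."""
    states, frontier = {pos}, {pos}
    for _ in range(n):
        frontier = {step(p, m, blocked) for p in frontier for m in MOVES}
        states |= frontier
    return len(states)

def choose_move(robot, human, n=3):
    # Score each robot move by the empowerment it leaves the robot PLUS the
    # empowerment it leaves the human, who treats the robot as an obstacle.
    def score(m):
        r = step(robot, m, WALLS | {human})
        return reachable(r, n, WALLS | {human}) + reachable(human, n, WALLS | {r})
    return max(MOVES, key=score)

print(choose_move((0, 0), (2, 2)))
```

The key design point is the second term of the score: a robot that valued only its own reachable states would happily park itself in a doorway, while adding the human's term makes trapping a companion a losing move.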
Indeed, a lunar base or Mars colony staffed with both humans and AI robots would be an experiment in how the two can live and work together. Such an experiment would have implications for a future on Earth shared by humans and robots, much as some of Asimov’s stories depicted.
Besides, if an AI robot starts to become erratic or even violent, the option of a hard-wired off switch is certainly available. The malfunctioning machine can then be examined to determine what went wrong and rebooted once it is safe to bring it back into service.
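One common way to build the software half of such an off switch is a heartbeat watchdog: the control loop must keep proving, on a deadline, that it is alive and within limits, and a separate monitor cuts actuator power the moment it stops. The sketch below is a minimal, hypothetical version; `_cut_power` stands in for whatever hardware interlock a real robot would use, which the software should not be able to re-close.

```python
# Minimal heartbeat watchdog sketch: a daemon thread cuts (simulated)
# actuator power if the control loop misses its heartbeat deadline.

import threading
import time

HEARTBEAT_DEADLINE = 0.5  # seconds; an assumed safety budget

class Watchdog:
    def __init__(self):
        self._last_beat = time.monotonic()
        self._lock = threading.Lock()
        threading.Thread(target=self._monitor, daemon=True).start()

    def heartbeat(self):
        """Called by the control loop each cycle, only after its own checks pass."""
        with self._lock:
            self._last_beat = time.monotonic()

    def _monitor(self):
        while True:
            time.sleep(HEARTBEAT_DEADLINE / 5)
            with self._lock:
                stale = time.monotonic() - self._last_beat > HEARTBEAT_DEADLINE
            if stale:
                self._cut_power()
                return

    def _cut_power(self):
        # Hypothetical stand-in: a real robot would open a hardware relay here,
        # freezing the machine in a safe state for post-mortem analysis.
        print("Watchdog expired: actuator power cut, state frozen for analysis.")

dog = Watchdog()
time.sleep(1)  # no heartbeats arrive, so the watchdog trips after ~0.5 s
```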
Mark Whittington, who writes frequently about space policy, has published a political study of space exploration entitled “Why is it So Hard to Go Back to the Moon?” as well as “The Moon, Mars and Beyond,” and, most recently, “Why is America Going Back to the Moon?” He blogs at Curmudgeons Corner. His work has appeared in the Wall Street Journal, Forbes, The Hill, USA Today, the LA Times and the Washington Post, among other venues.