Think back on your all-time favorite space exploration movies — from “Star Wars” and “Star Trek” to “Interstellar” and “Aliens” — and chances are you’ll discover an all-star lineup of best supporting actors who are robots. Some are trustworthy, helpful companions like R2-D2 and C-3PO in “Star Wars,” while others are wickedly deceitful robots, such as Ash and David in the “Alien” movies.
Regardless of whether humans view robots as trustworthy, deceitful or even dangerous, one fact remains the same: The future of space exploration rests on a successful partnership between humans and intelligent machines.
Robots are built to accomplish things that would be impossible, dangerous or costly for humans to do. Robots can survive in space for many years without a return trip, cutting space exploration costs. Robots can also withstand harsh conditions that people cannot, like extreme temperatures or high radiation. Plus, sending robots to explore space reduces risks to human life — if a robotic mission fails, the humans involved with the mission stay safe.
You don’t need to leave the Earth to glimpse the future of humans, robots and artificial intelligence teaming up for space exploration missions. The General Human Operation of Systems as Teams (GHOST) Lab at Arizona State University is a new scientific test bed that is also an art installation. Researchers in the lab examine people’s ability to work with robots and AI in scenarios such as a life-threatening meteor strike on a lunar colony.
It’s also open to the community during public events such as ASU Open Door, as well as to visiting stakeholders, including external robotics and AI experts and others from industry, government and funding agencies.
ASU’s Center for Human, Artificial Intelligence and Robot Teaming (CHART) constructed the GHOST Lab to conduct research on coordinating teams of humans, robots and AI. Human interaction with robots and AI is increasing exponentially in areas like health care, manufacturing, transportation, space exploration and defense technologies. But information about how humans and intelligent systems work within teams remains scarce.
CHART’s work involves everything from how these teams communicate verbally and nonverbally, to how to coordinate swarms of robots, to the legal and ethical implications of increasingly autonomous technology. To accomplish this, robotics engineers and computer scientists work closely with researchers from social sciences, law and even the arts.
Lance Gharavi is associate director of ASU’s Interplanetary Initiative and associate professor in the School of Music, Dance and Theatre. He is not a roboticist, but an experimental artist, scholar and early pioneer in the field of digital performance who works on physical interactions between humans and robots. He leads the art installation portion of the GHOST Lab.
“Space is an incredibly dangerous place for humans,” he says. “It is not a place that is conducive to human life. It is much safer and frankly cheaper to have robots and machines do a lot of the exploration work. If we’re going to have a base on the moon or Mars, we’re going to want to send robots there in advance to do some of the prep work for us. And then once we get astronauts there, they’ll need to be working with artificial intelligences and robots as collaborators to create habitats and help keep systems operative that will sustain human life.”
But first, humans, robots and AI need to collaborate well, and that’s no easy feat.
“The biggest challenges tend to be trust,” Gharavi says. “You can’t have a collaboration without a degree of trust, and humans trusting an AI or robot has been a particular challenge in robotics.” For a glimpse of our inherent distrust, he asks: “Would you get in an AI-driven car to drive around Phoenix? If an AI looked at your MRI or X-ray and reported back that you have cancer, would you want a human doctor to look at it?”
GHOST Lab explores how humans can develop trust in robots and AI so they can work together seamlessly on space exploration missions.
The project is led by Nancy Cooke, a professor of human systems engineering at ASU’s Polytechnic School and director of CHART, a unit of ASU’s Global Security Initiative. A cognitive psychologist by training, she has spent years working to understand human teamwork and decision-making. She now applies this expertise to human-technology teams, including ones collaborating on space missions.
Last May, her research received funding from the Defense University Research Instrumentation Program, which allowed her to purchase members of a robotic dream team.
First there’s Husky, the size of a small dorm refrigerator on wheels, which can explore a two-mile radius under any type of weather or terrain conditions and bring back data. Then there’s Fetch, a type of robot that’s a hit at Amazon distribution centers because of its ability to lift heavy objects high in the air and retrieve items off top shelves. YuMi, a stationary robot that’s small in stature — about 2 feet tall — has sizeable manipulation skills, such as building things out of Legos.
There’s also a collection of swarm robots that are ideal for search and rescue missions.
CHART was also awarded an Air Force Office of Scientific Research seedling grant to conduct research at the GHOST Lab associated with Space Force, the space service branch of the U.S. Armed Forces.
The use of autonomous machines to explore space is on the rise. It took 24 years for five NASA planetary rovers to land on Mars — Sojourner in 1997, Spirit in 2004, Opportunity in 2004, Curiosity in 2012 and Perseverance in 2021. Now the pace is accelerating, with up to eight rovers expected to land on the moon over the next three years: Lunar Outpost’s MAPP and Carnegie Mellon University’s MoonRanger this year, NASA’s VIPER in 2023, and Lunar Outpost’s Lunar Vertex and up to four of NASA’s CADRE rovers in 2024.
GHOST Lab researchers are exploring the challenges that teams of humans, robots and AI could confront during space exploration by simulating potential scenarios and testing responses to them.
In one scenario, the year is 2030, and Earth is rapidly running out of energy. Distributed teams from NASA Mission Control, Jet Propulsion Laboratory, the International Space Station, mini space stations and a human-robot crew on the lunar surface conduct an emergency mission to collect a substance called “enerphoto,” a potential source of energy for Earth, on the moon and Mars. But the mission is fraught with danger. A meteor strikes the lunar colony, knocking out critical equipment that must be repaired before the team runs out of oxygen. Communications latency, with a one-way message taking 21 minutes to reach Mars from Earth, adds to the tension.
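The 21-minute figure corresponds to one-way light travel time when Earth and Mars are near their maximum separation; at closest approach the delay drops to roughly three minutes, and a signal to the moon takes only about a second. As a rough illustration, the minimal Python sketch below computes one-way delays as distance divided by the speed of light; the distances are approximate values assumed for illustration, not figures from the article.

```python
# Rough one-way light-time delays for the communication links in the scenario.
# Distances are approximate and assumed for illustration only.
SPEED_OF_LIGHT_KM_S = 299_792.458

links_km = {
    "Earth to Moon (average)": 384_400,
    "Earth to Mars (closest approach)": 54_600_000,
    "Earth to Mars (near maximum separation)": 378_000_000,  # roughly a 21-minute delay
}

for link, distance_km in links_km.items():
    delay_s = distance_km / SPEED_OF_LIGHT_KM_S  # time = distance / speed of light
    if delay_s >= 60:
        print(f"{link}: about {delay_s / 60:.1f} minutes one way")
    else:
        print(f"{link}: about {delay_s:.1f} seconds one way")
```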
Will the result be mission accomplished or mission impossible? Cooke says the answer lies in successful teamwork among humans, robots and AI.
“I always like to use the example of the dream basketball team made up of really stellar players who went on to lose the Olympics terribly because they were all hot shots on this team,” she says. “They didn’t train much together, so they weren’t used to playing together. It was just assumed if you put all these experts on the team, you’ll have a great team, but they were missing that important glue.”
Likewise, humans, AI and robots are hot shots in their own unique ways — and don’t necessarily play well together.
“Humans do higher-level cognition, adapt well to novel events and can solve problems on the fly,” Cooke says. “When it comes to deciding whether that’s a friend or foe on the battlefield, that’s a decision that shouldn’t be made by AI. I think it’s going to be the human’s job to take in all the expertise of AI, make sense of it and come to the final decision.”
“AI’s superpower is that it’s very fast computationally. It can take in, remember and manage lots and lots of data, much more than we can,” she says. “But AI doesn’t necessarily read emotions or facial expressions. It can be easily tricked and biased. And sometimes it doesn’t understand the whole task.”
“Robots can be really fast, really strong and good at repetitive motion. Humans can get an injury when they do too much repetitive movement. Robots are clunky, but they can go to places that humans can’t, such as a search and rescue mission inside a building that has collapsed. They can do surveillance over places that humans don’t even want to fly.”
Once a space mission runs into trouble, it exposes a weakness of both robots and AI.
“They don’t have social intelligence and may not share information with humans in a very timely manner, often waiting until they’re asked,” which fosters human distrust, Cooke says.
Perhaps the first seeds of human mistrust of robots were planted a century ago, when Czech playwright Karel Čapek coined the word “robot” in his play “R.U.R.” The play ends with a revolt in which the robots storm the factory and wipe out nearly all of humanity. Since then, the notion that some kind of artificial intelligence may supplant humankind as the dominant intelligent species on Earth has emerged as a common theme in science fiction, including films such as “The Terminator” and “The Matrix.”
Another science fiction theme is sentient robots feeling mistreated by humans, as in “Blade Runner” or “Westworld.” In a galaxy far, far away, “Star Wars” characters R2-D2 and C-3PO may just be droids, but in our world, they might be considered slaves. In fact, “robot” comes from the Czech word “robota,” meaning forced labor or servitude.
These themes raise an important question: Once we’re on the brink of developing truly intelligent machines, do we program them as equals or as a means to an end, existing only to do our will? GHOST Lab invites visitors to ponder this and other ethical issues.
Gharavi incorporated a slavery theme in the art installation, with cryptic words on slate gray walls drawn from historic American slave laws and Biblical references on how to treat slaves. The theme echoes through the sound installation created by Max Bernstein, an ASU clinical assistant professor jointly appointed in The Sidney Poitier New American Film School and the School of Music, Dance and Theatre. The piece artfully blends recordings from a Mars rover expedition with slave songs of survival.
“When we think about robots and AI, we might imagine a future that is a dystopia, with robots and AI presenting a threat. Or we might imagine it producing a new kind of utopia where we don’t have to work anymore because we have robots taking care of us. Our feelings about robots and artificial intelligence tend to be kind of ambivalent,” Gharavi says.
“Social encounters with robots and AI are going to be ubiquitous features in our lives in the future,” he adds. “What do we want those interactions to feel like? We need to shape that future thoughtfully and carefully. We need to design and rehearse for it. Artists need to be part of these processes because we need that imagination and creativity.
"We wanted to create a space where people could meditate and reflect on the past, the future and the present, and ways in which all these things are haunted by our histories, by our stories, by our fantasies and by our nightmares.”
As we contemplate the future of space exploration, we may be reminded of that famous line from “Star Trek,” “to boldly go where no one has gone before.” Today it is clear that when it comes to new frontiers beyond the Earth, our AI and robot partners will be boldly going by our sides.
The Global Security Initiative is partially supported by Arizona’s Technology and Research Initiative Fund. TRIF investment has enabled hands-on training for tens of thousands of students across Arizona’s universities, thousands of scientific discoveries and patented technologies, and hundreds of new start-up companies. Publicly supported through voter approval, TRIF is an essential resource for growing Arizona’s economy and providing opportunities for Arizona residents to work, learn and thrive.
Videos by Alexander D. Chapin, Jason Drees and Shreyas Krishna Raja. Top photo: An ASU researcher learns how to team with a robot at a new test bed on the Polytechnic campus. By Andy DeLisle