A month ago a group convened in the University Club dining room at Arizona State University to discuss the future of national security research. There were retired Army and Marine generals, agents from the CIA and a bevy of scientists.
Two trendlines popped out over the peppered bacon and frittatas: Nation states are vying for technological dominance, and the Holy Grail in that sphere is the successful pairing of humans and artificial intelligence.
Creating machines that think and act like us is as much grounded in the humanities as it is in engineering. Talk to engineers about the problem, and they’ll discuss things far outside the usual lanes of engineering, things like the nature of self, perception and free will. Designing artificial intelligence is not like making a better refrigerator.
Most of us hear about artificial intelligence in apocalyptic tabloid headlines. Elon Musk says it’s going to wipe us out! Stephen Hawking said robots will take over the world!
Right now, worrying about artificial intelligence doing anything of the sort is like discussing overcrowding in the Martian colonies. It’s so far off it’s not worth talking about.
What’s the current state of the research? What are scientists grappling with now? And what does the end game look like?
How machines learn
“We don’t know how we see the world, essentially,” said Subbarao “Rao” Kambhampati, a professor in the School of Computing, Informatics and Decision Systems Engineering in the Ira A. Fulton Schools of Engineering. Kambhampati is an expert in artificial intelligence, automated planning and machine learning. He is also chief AI officer at the AI Foundation.
We need to understand how humans work, said Heni Ben Amor.
Ben Amor studies artificial intelligence and human-machine interaction. An assistant professor in the School of Computing, Informatics and Decision Systems Engineering, Ben Amor directs the Interactive Robotics Laboratory.
“In order to create these machines and algorithms that adapt to a human, we first need to understand more about humans,” Ben Amor said. “That grey zone there in the middle, between understanding a human and creating products and algorithms for humans, that’s the interesting zone. That’s what we have to think about at the moment.”
Children see the world, manipulate it, play with it, and then they learn. Artificial intelligence has gone in the opposite direction.
“We see the world by learning how to see the world,” Kambhampati said.
That’s the only way we’ve been able to make machines see: You teach the machine how to recognize a dog by showing it millions of pictures of dogs. Immense databases of labeled images are available, thanks to the internet and smartphones.
Machine learning technology advances through very large sets of examples, finding patterns we can't actually describe ourselves.
“We don’t have a theory of a dog,” Kambhampati said. “We see enough examples, and we have some kind of a concept we dial up internally that we don’t know how to articulate.”
Think about what a cat is. Now try to write a set of rules for what a cat is. Pointed ears, whiskers, long tail and so on. The rules will always be wrong in some respect. (Foxes also have whiskers, pointed ears and long tails.)
Enter the huge datasets and the patterns within. That’s where artificial intelligence is right now: using perception as a learning technique. Machines learn by doing and from examples.
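The "learning from examples" idea can be sketched in a few lines. This is a toy illustration only, not how production image classifiers work: the feature vectors and labels below are invented, and a real system would learn from millions of actual images rather than four made-up triples.

```python
# Toy illustration of learning from labeled examples:
# a nearest-neighbor "classifier" over made-up feature vectors.
# The numbers and labels here are invented for illustration only.

import math

# Labeled training examples: (feature vector, label).
# Imagine each vector summarizing an image (ear shape, snout length, size).
training_data = [
    ((0.9, 0.8, 0.7), "dog"),
    ((0.8, 0.9, 0.6), "dog"),
    ((0.2, 0.1, 0.3), "cat"),
    ((0.1, 0.2, 0.2), "cat"),
]

def classify(features):
    """Predict the label of the closest labeled example."""
    nearest = min(training_data, key=lambda ex: math.dist(ex[0], features))
    return nearest[1]

print(classify((0.85, 0.75, 0.65)))  # near the dog examples -> "dog"
print(classify((0.15, 0.15, 0.25)))  # near the cat examples -> "cat"
```

Notice there is no "theory of a dog" anywhere in the code, just stored examples and a distance measure, which is the point Kambhampati is making.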
“Basically we are trying to figure out how to make learning more efficient,” Kambhampati said.
A hurdle on the way to true AI
Artificial intelligence learns from mistakes, somewhat as we do. But no one ever showed you 14 million pictures of dogs. Over a lifetime, you've maybe seen a million.
“You do this enough times, we can essentially get a reasonable performance in unseen images with dogs and cats,” he said. “It can actually predict them.”
And here’s the giant road block.
“Explicable AI is a big challenge,” said Spring Berman, an associate professor of mechanical and aerospace engineering. Berman works on the modeling, analysis, control and optimization of robotic swarms. She is also associate director of the Center for Human, Artificial Intelligence, and Robot Teaming — a unit of the Global Security Initiative at ASU. “It’s like a black box.”
When artificial intelligence doesn’t work, no one knows why it didn’t work.
“Essentially there is this issue of what’s called inscrutability, which is, ‘I do it right, but you don’t quite know how I do it right,’” Kambhampati said. “This has led to lots of fears about the use of machine learning. When they work, you’re happy. When they fail, you don’t know why they failed.”
Bottom line, it’s not there yet. An autonomous car can recognize people standing on a corner, but it can’t tell whether they’re going to cross the street or whether they’re just having a conversation.
“I’m not sure machine learning has reached the point where it can extrapolate or be creative like humans are,” Berman said. “There’s a database the algorithms learn from; they can recognize a stop sign in an image or something like that.”
How do we want to relate to our machines? And how do we want them to relate to us? Those two questions are top of mind for experts.
Human-machine interaction is nothing new, Ben Amor pointed out. Using a VCR was human-machine interaction.
“Most people remember interaction with a VCR as some horrible complex interaction where technically they would have had to read their manual but they didn’t and the rest was confusion everywhere,” he said. “The idea now is to create machines that don’t need a manual. They will adapt to you rather than making you adapt to them through a manual. That would have the advantage of creating a new class of machines that basically customize themselves to the human user.”
Many past failures have come from humans using a machine the wrong way; Chernobyl is a famous example.
“How can we make the robot really intelligent and react to the human partner?” Ben Amor said. “That’s what human-robot interaction is about: How can we have machines that reason about human intent — 'What is the human going to do next and what is his real goal?' — How can they complement our actions to achieve that goal?”
Should artificial intelligence replace us or augment us? Augmentation would be easier to implement.
“It should be as easy to work with them as it is to work with a human secretary,” Kambhampati said. Like a great executive assistant, it should know what you need before you do.
What the future holds
Kambhampati doesn’t believe artificial intelligence will (or should) develop free will. There is a question of trust. If you start working with a machine, after seeing explicable behavior over a period of time, “you will start trusting it,” he said.
People who have worked together for a long time may not need to talk while working, because they implicitly trust each other.
“That can happen between humans and machines too,” Kambhampati said. “The military are interested in getting to the point where there is implicit trust between machines and humans. At the same time, they are worried about that trust being misplaced.”
As it stands now, an artificial intelligence that has learned perceptually, from millions of pictures, is susceptible to manipulation. This is the realm of adversarial machine learning, in which attackers alter the data so that a machine suddenly starts making catastrophic mistakes on modified images that look unmodified to you.
For example: Take a picture of a dog and change some of the pixels. To you, it still looks like a dog, but the machine sees it as an ostrich. The machine is glomming on to some microscopic values in the picture. This is not understood, and it’s a huge worry from a security perspective. If an army drone sees a herd of cattle, and what’s really there is an enemy platoon, that’s a problem.
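The dog-to-ostrich trick can be sketched with a toy model. Everything below is invented for illustration: a real attack (such as the fast gradient sign method) targets a deep network, but the principle is the same — nudge each input value slightly in the direction that most hurts the model's decision.

```python
# Toy sketch of an adversarial perturbation against a linear classifier.
# The weights and "image" are made up; real attacks target deep networks,
# but the idea is identical: a tiny, targeted nudge flips the prediction.

# A linear "classifier": score > 0 means "dog", otherwise "not a dog".
weights = [0.5, -0.3, 0.8]
bias = -0.2

def score(x):
    return sum(w * xi for w, xi in zip(weights, x)) + bias

def classify(x):
    return "dog" if score(x) > 0 else "not a dog"

image = [0.6, 0.4, 0.2]      # the model sees this as a dog

# Nudge each "pixel" a tiny amount against the weight direction.
epsilon = 0.1
adversarial = [xi - epsilon * (1 if w > 0 else -1)
               for xi, w in zip(image, weights)]

print(classify(image))        # -> dog
print(classify(adversarial))  # the barely changed image flips the label
```

To a human, a 0.1 change per pixel is imperceptible, but it is enough to push the score across the decision boundary — the "microscopic values" Kambhampati describes.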
“Everything can be seen to be anything else,” Kambhampati said.
As associate director of the Center for Human, Artificial Intelligence, and Robot Teaming, Berman spends a lot of time thinking about how humans and machines might team up.
“Our goal is to think about how best to coordinate teams of humans, software agents and robots for a variety of applications which could be transportation, manufacturing, search and rescue or defense,” she said. “We look at creating control strategies for swarms of robots that you could give them a mission and they could then carry it out on their own.”
Potential applications include search and rescue in disaster scenarios (a lifeguard drone saved two swimmers in Australia in January), environmental monitoring, guarding harbors, construction in outer space and exploration.
“Robots are very good at things that people may not be good at, or robots can access hazardous environments, or do repetitive tasks that people don’t want to do,” Berman said.
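One classic flavor of swarm control strategy is consensus-style averaging, where each robot follows a simple local rule and a global behavior emerges with no central controller. The sketch below is a deliberately simplified illustration (robots on a line, positions invented); Berman's actual control strategies are far more sophisticated.

```python
# Toy sketch of a decentralized swarm "rendezvous" strategy: each robot
# repeatedly moves partway toward the average position of the group, and
# the swarm converges on a meeting point without a central controller.
# Positions are invented, and the robots live on a line for simplicity.

positions = [0.0, 4.0, 10.0, 6.0]

def step(positions):
    """Each robot moves halfway toward the group's mean position."""
    mean = sum(positions) / len(positions)
    return [p + 0.5 * (mean - p) for p in positions]

for _ in range(20):
    positions = step(positions)

# After enough steps, every robot sits (nearly) at the same point.
print([round(p, 3) for p in positions])
```

The appeal for missions like search and rescue is that no single robot needs a map of the whole operation; the collective behavior falls out of the local rule.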
How far away are we from intelligent robot helpers?
“It’s hard for me to say,” she said. “Because there’s so much work and testing, (autonomous cars) will be widespread before robot swarms.”