ASU student explores new frontiers of artificial intelligence, responsible innovation through collaboration, scenario planning
Miles Brundage had just defended his dissertation proposal when he was offered a fellowship position at Oxford University’s Future of Humanity Institute. Brundage, 29, now balances his work as an artificial intelligence policy research fellow at Oxford with his studies as a PhD candidate as he prepares to graduate next fall from ASU’s Human and Social Dimensions of Science and Technology (HSD) program in the School for the Future of Innovation in Society.
“The program includes a great group of people,” he said about the multidisciplinary focus of his ASU program. “Problems as tricky as science and technology need a lot of different perspectives. The cross-pollination is super useful.”
Brundage explores the usefulness of AI scenario planning at Oxford using the same type of cross-pollination that has served him well in his PhD program. He conducts workshops that help him understand how groups of people with different perspectives view the challenges, responsibilities and potential of this fast-emerging technology.
“In my work, I try to bring together different perspectives on the role of AI in society, and communicate with a variety of stakeholders,” he said. “Making AI socially beneficial is not something that any one group — policy makers, researchers, the public, etc. — can do on their own. It’s something distributed throughout our society and shaped by many actors.”
His workshops have included representatives from universities in the U.S., Europe and China; top companies involved in AI such as Google; and nonprofits such as the Electronic Frontier Foundation. They explore issues as varied as the long-term risks of AI, biased algorithms, job displacement, and whether the technology should be developed quickly, controlled tightly for safety or widely distributed.
Brundage and his workshop colleagues are looking at what is possible for AI through more than one lens.
“Do we approach it from the perspective of business as usual, where it often does not look very good?” he said. “We want to explore reflections on what’s possible and what’s desirable.”
His interest in AI began about five years ago when he was working on energy policy in Washington, D.C. He had expressed an interest in graduate school, and a friend steered him to ASU’s Consortium for Science, Policy & Outcomes and the HSD program. He ended up studying energy and technology policy in the program for a while, including solar and biofuels, and took a few AI classes on the side.
Although he found AI interesting, he wasn’t sure what to do with it. Yet, the more he explored, the more intriguing it became.
“The field has progressed,” he said. “It became clear that AI will have a great impact.”
He was motivated to study AI to explore all sides of the big questions in order to understand how best to approach the technology. Should there be laws governing AI? Should governments invest? Are there safety issues? Will there be an “arms race” for AI technology? He also wanted to learn how to influence AI policy. Who should develop and manage the technology? Should it be deployed on a small scale or internationally, and should governance take a soft approach or rely on formal regulation?
"Miles has taken exceptional steps to learn and become an expert in both the artificial intelligence and responsible innovation sides of his work,” said David Guston, director of the School for the Future of Innovation in Society and Brundage’s dissertation chair. “His work is creative enough and rigorous enough to make contributions to both scholarly communities."
His dual roles — PhD student and Oxford fellow — provide valuable opportunities for synergy. While he is thinking rigorously about the issues he studies through his PhD program, he is able to bring what he learns to his research at Oxford. That research, including the collaborative learning from his workshops, informs his studies at ASU.
“The UK is a good place to look at the issues,” he said. “Oxford and Cambridge are at the intersection of policy and technology.”
Although citizens and institutions have expressed concerns about AI — from displaced jobs and mass surveillance to AI “taking over the world” — it’s a general-purpose technology that can be applied to a wide variety of problems, and it must be managed well, Brundage said.
“AI isn't going to stop progressing anytime soon, so we'll see a growing number of domains in which machines can achieve or surpass human levels of performance,” he said. “The long-run implications are unclear and sometimes scary, but there are also a lot of exciting, positive applications of AI in health, science and elsewhere that I'm quite optimistic about.”