The pros and cons of AI



Science fiction books and movies have largely shaped the public's view of artificial intelligence, often clouding the truth about where we actually stand with the technology. Many are under the impression that “the machines” will eventually eliminate our jobs, police human beings and take over mankind; others think AI will only enhance our lives.

One thing’s for certain: Everybody's got a take on the matter.

ASU Now enlisted two scholars — Subbarao Kambhampati and Miles Brundage — to have a discussion on the pros and cons of AI, which has increasingly become a part of our everyday lives. 

Kambhampati, a professor of computer science in Arizona State University's Ira A. Fulton Schools of Engineering, works in artificial intelligence with a focus on planning and decision-making, especially in the context of human-machine collaboration. As president of the Association for the Advancement of Artificial Intelligence, Kambhampati believes the “AI as a threat to humankind” arguments are far-fetched and distract attention from discussions we need to have about the effects of increased autonomy and automation on our society.

Subbarao Kambhampati

Question: The general public’s view of AI is largely formed by science fiction books and movies. How accurate are some of these depictions?

Kambhampati: Whether it is the Ganesha of Hinduism or the Golem of Judaism, humans have always been fascinated and frightened by the possibility of creating non-human entities with intelligence. Science fiction has, for the most part, run with these primordial fears, adding modern detail to them. When the lay public sees signs of machine intelligence, even in the narrowest of domains, they are, in a way, primed to do an “example closure” and assume that the machines’ behavior is over and above everything we humans can already do. Unfortunately, that is far from the truth. Current-day AI agents may have superhuman abilities in narrow spheres, but they are still no match for the general intelligence of humans. It is said that a current-day AI agent can sit and make the perfect move in a chess game while the room is on fire.

Q: AI and robotics are often lumped together. What’s the major difference between the two and should we think of them as separate?

Kambhampati: AI is about creating artifacts — be they cyber or physical — that exhibit intelligent behavior. When the intelligent behavior is manifested in embodied agents inhabiting physical worlds, we have robotics. In these contexts, AI and robotics are as intimately intertwined as the conventional “mind” (AI) and “body” (robot). Even areas such as the kinematics and dynamics of robotic systems, which have traditionally been seen as the purview of model-based engineering approaches, are increasingly trending toward AI approaches.

Q: In your opinion, what are some of the greatest achievements of AI so far and how has it helped our lives?

Kambhampati: AI permeates our daily lives — from search engines to ride-share schedulers to digital personal assistants to ever-increasing diagnostic assistants. Most of these AI achievements stay in the background, as they “augment” us and increase our productivity rather than “compete with” or “replace” us. Ironically, AI is often in the news not for these myriad quotidian applications, but for those cases where it successfully dethrones human dominance in some area — be it chess, poker or Go. In the near future I expect to see AI applications that work by our side and can truly collaborate with us, entering our world with social and emotional intelligence and becoming our artificial teammates.

Q: A lot of AI’s depictions have a “doomsday” feel to them: robots replacing jobs, policing human beings. How far off are we from this reality and what’s a healthy viewpoint to hold?

Kambhampati: Every technology we have tamed, starting with fire, has been both a tool and a weapon. AI is certainly no exception. In fact, intelligence is perhaps the ultimate dual-use technology. Just as we learned to fight fire with fire, I am confident that we will learn to use AI technology itself to fight the adverse impacts of AI technology. In the future we will all have personal guardian AI agents that have our well-being as their sole aim, guarding us against spoofing and other AI-based safety threats. In the case of jobs, personalized tutoring systems based on AI can be on the front lines of reskilling the workforce.

Q: On the whole, are you optimistic or pessimistic about the progress and deployment of AI technology in the near future?

Kambhampati: I remain cautiously optimistic. On the technology side, we need to combine AI’s strides in cognitive intelligence (planning and reasoning) with its latest strides in perceptual intelligence (vision, speech and language). Crosscutting challenges will include common-sense reasoning and social intelligence. On the deployment side, I expect to see beneficial applications of AI becoming ubiquitous in our day-to-day lives, to the point that we will take them for granted. The ongoing revolution in the Internet of Things should accelerate data gathering and make almost all areas of our lives ripe for leveraging AI technology.

Of course, given the power of the technology, it behooves us to be cautious and vigilant about its inadvertent adverse impacts, be they data bias, threats to privacy and security or technological unemployment, and to promote its ethical use. This is part of the reason why the organizations I am involved in — AAAI and the Partnership on AI — are proactively engaged in studying the societal impacts of AI. Technology is not some uncontrollable force of nature; it is invented and deployed by us. I remain sanguine that a thoughtful approach to both technical and deployment issues will allow us to reap the benefits of AI while avoiding inadvertent dystopias.

***

Miles Brundage

Brundage is a doctoral candidate in human and social dimensions of science and technology in ASU's School for the Future of Innovation in Society, and a research fellow at the University of Oxford's Future of Humanity Institute. His research focuses on developing methods for rigorous AI policy analysis and on fostering international cooperation on AI. He believes AI can be dangerous in certain scenarios and needs to be watched carefully.

Q: Your paper, “The Malicious Use of Artificial Intelligence,” states that AI and machine learning capabilities are growing at an unprecedented rate. Are they growing too fast, in your opinion?

Brundage: I wouldn't say that AI is developing too fast, but it's certainly surprised a lot of people in the past few years. Some things that people thought were years away, such as pretty good image captioning, realistic synthetic image generation, and superhuman performance at the game of Go, have all occurred. Much of this progress has been driven by the area of "deep learning," which has received a lot of investment in the past few years, and other long-term trends, such as improved computing hardware and bigger datasets, are also pushing the field forward. There are still a lot of limitations to AI, and many achievements remain (or at least seem to remain) a long way off, but the research frontier is moving quickly.
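For readers unfamiliar with the term, "deep learning" refers to stacking layers of simple learned transformations and training the whole stack from examples by gradient descent. The snippet below is a minimal, illustrative sketch, not something from the interview: assuming the PyTorch library, it trains a tiny two-layer network to compute the XOR function, something no single linear layer can represent. The systems Brundage describes work on the same principle, just at vastly larger scale.

    # Toy sketch of deep learning: a small network learns XOR from examples.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Four input/output examples of the XOR function.
    X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = torch.tensor([[0.], [1.], [1.], [0.]])

    # "Deep" here just means stacked layers with a nonlinearity in between.
    model = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 1))
    optimizer = torch.optim.Adam(model.parameters(), lr=0.05)
    loss_fn = nn.MSELoss()

    for step in range(2000):
        optimizer.zero_grad()
        loss = loss_fn(model(X), y)   # how far predictions are from targets
        loss.backward()               # backpropagation computes gradients
        optimizer.step()              # gradient step nudges the weights

    print(model(X).detach().round().squeeze())  # typically tensor([0., 1., 1., 0.])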

Q: How is most AI used in today’s world?

Brundage: Most AI today is performing specific functions as part of consumer products that people use every day, such as speech recognition, image tagging on Facebook, and search engine result ranking. There's also a lot of focus on more ambitious applications like autonomous cars and drone delivery, and the research frontier is constantly pushing for more general-purpose systems and improved performance on specific tasks.

Q: What are some of the abuses of AI that the general public is not aware of and should be concerned about?

Brundage: In the same way that there are a lot of potential positive applications of AI (because it's a fairly generic technology), there are also a lot of potential malicious applications. Only some of these positive and malicious applications have been realized so far. On the malicious side, one concern is that AI could be used to generate a lot of fake images, audio, and video — this isn't totally novel, since forgery has been around forever and digital fakery has been around a long time, but AI could "democratize" the ability to do such things. An example of this is DeepFakes, a program that was designed to put someone's face into a pornographic video, but the potential applications are very wide-ranging and could make the "fake news" problem even more extreme.

In the cybersecurity domain, one concern is that AI could be used to make hacking and "spear phishing" (customized emails designed to trick someone into running malicious software) more widespread, as some of the human labor is turned over to machines. In both of these cases, I don't think the public should panic, but should continue to do things they should have done anyway, even without AI, such as being skeptical about their news intake and being vigilant against cyber threats (e.g., by using two-factor authentication). Finally, in the physical domain, there are ways in which AI could be used in drones to carry out terrorist attacks, or used in military combat between states.
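To make the two-factor authentication recommendation concrete, here is an illustrative sketch, not from the interview, of how the six-digit codes produced by authenticator apps are typically computed: a time-based one-time password (TOTP, RFC 6238) hashes a shared secret together with the current 30-second time step, so a stolen password alone is not enough to log in. The secret shown is a made-up placeholder.

    # Sketch of an RFC 6238 time-based one-time password (TOTP).
    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // interval        # current 30-second step
        msg = struct.pack(">Q", counter)              # 8-byte big-endian counter
        digest = hmac.new(key, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                    # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # Placeholder secret; a real one is issued when you enroll a device.
    print(totp("JBSWY3DPEHPK3PXP"))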

Q: What types of uniform regulation or safeguards are in place in the cybersecurity and AI worlds?

Brundage: There aren't really any regulations for AI per se, though the government is involved a little bit in various ways (issuing patents and funding research). The dialogue on AI policy is really just beginning. The cybersecurity area is more regulated, in both good and bad ways, as it has been in the public consciousness for longer, and some issues have been addressed through policy means (such as government-supported mechanisms for sharing information about attacks). It's important to think ahead about what sorts of policies are applicable in these different areas — we don't want government to fall too far behind the technology's development, but we also don't want to stop the many positive applications of AI being developed, so it's a bit of a balancing act.

Q: What are you working on next? 

Brundage: In addition to hopefully finishing my degree soon, my plan is to spend much of the next year exploring lessons from other technologies for what international governance of AI might look like. There's an emerging narrative of an "AI arms race," which I think is a potentially harmful way to look at things, and I'd like to help develop alternative, more cooperative models, like a "CERN for AI," which has been mentioned a few times but not really fleshed out.
