Fast-paced advances in artificial intelligence, or AI, are proving the technology to be an indispensable asset. In the national security field, experts are charting a course for AI’s impact on our collective defense strategy.
Paulo Shakarian is at the forefront of this critical work, using his expertise in symbolic AI and neuro-symbolic systems, advanced forms of AI technology, to meet the sophisticated needs of national security organizations.
Shakarian, an associate professor of computer science in the School of Computing and Augmented Intelligence, part of the Ira A. Fulton Schools of Engineering at Arizona State University, has been invited to AI Forward, a series of workshops hosted by the U.S. Defense Advanced Research Projects Agency, or DARPA.
The event includes two workshops: a virtual meeting that took place earlier this summer and an in-person event in Boston from July 31 to Aug. 2.
Shakarian is among 100 attendees working to advance DARPA’s initiative to explore new directions for AI research impacting a wide range of defense-related tasks, including autonomous systems, intelligence platforms, military planning, big data analysis and computer vision.
At the Boston workshop, Shakarian will be joined by Nakul Gopalan, an assistant professor of computer science, who was also selected to attend the event to explore how his research in human-robot communication might help achieve DARPA’s goals.
Shakarian will meet with a select group of researchers at the event to discuss how AI research can be applied to create security solutions for the U.S. Department of Defense, in addition to exploring current work in the field and challenges on the horizon.
In addition to his involvement in AI Forward, Shakarian is preparing to release a new book in September 2023. The book, titled “Neuro-symbolic Reasoning and Learning,” will explore the past five years of research in neuro-symbolic AI and help readers understand recent advances in the field.
As Shakarian prepared for the workshops, he took a moment to share his research expertise and his thoughts on the current landscape of AI.
Question: How would you explain your research focus areas of symbolic AI and neuro-symbolic systems?
Answer: To understand symbolic AI and neuro-symbolic systems, it’s important to talk about what AI looks like today, which is primarily deep-learning neural networks. These have driven a remarkable revolution over the last decade, achieving unexpectedly strong performance on tasks like image recognition and language generation. However, when looking at problems specifically relevant to the Department of Defense (DoD), these AI technologies have not performed as well.
One challenge is black box models and their explainability; these models do not clearly describe where their results are coming from. Another issue is that these systems are not inherently modular because they’re trained end to end. That means we are not able to modify these deep-learning systems once they are trained. Consider how one might evaluate each individual component of an airplane for safety as it’s being assembled. That’s not possible in deep learning: anything you establish about a given component during training may no longer hold when you go back to evaluate it later, which gives us a modularity issue. This is important in very complicated DoD systems.
There is another issue involving enforcing constraints. A common constraint in the military involves multiple aircraft sharing the same airspace, which must be deconflicted to make sure the aircraft are not interfering with each other. With neural networks, there’s no inherent way in the system to enforce constraints. Symbolic AI has been around longer than neural networks, but it is not data driven. Neural networks are data driven; they can learn symbolic things and repeat them back. Traditionally, symbolic AI has not demonstrated anywhere near the learning capacity of a neural network, but all of the issues I’ve mentioned are shortcomings of deep learning that symbolic AI can address.
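To make the deconfliction example concrete, here is a minimal sketch in plain Python of the kind of hard separation rule a symbolic layer can check over routes a neural planner might propose. The tracks, the five-nautical-mile threshold and the function are hypothetical illustrations, not a real deconfliction system.

```python
# Minimal sketch of a symbolic deconfliction constraint (hypothetical data).
# A neural planner might propose waypoints; a symbolic layer can then check
# a hard separation rule that the network itself cannot guarantee.

from itertools import combinations
from math import dist

MIN_SEPARATION_NM = 5.0  # assumed minimum lateral separation, in nautical miles

# Hypothetical proposed positions (x, y in nautical miles) per aircraft per timestep
proposed_tracks = {
    "AC1": [(0.0, 0.0), (4.0, 0.0), (8.0, 0.0)],
    "AC2": [(0.0, 6.0), (4.0, 3.0), (8.0, 0.5)],
}

def violations(tracks, min_sep):
    """Return every (timestep, aircraft pair) that breaks the separation rule."""
    bad = []
    for t in range(min(len(v) for v in tracks.values())):
        for a, b in combinations(tracks, 2):
            if dist(tracks[a][t], tracks[b][t]) < min_sep:
                bad.append((t, a, b))
    return bad

print(violations(proposed_tracks, MIN_SEPARATION_NM))
# [(1, 'AC1', 'AC2'), (2, 'AC1', 'AC2')] -> the plan must be repaired or rejected
```

Because the rule is explicit rather than learned, it can be inspected, tested in isolation and enforced on every output, which is exactly what a purely end-to-end network cannot promise.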
When you start to get into use cases with significant safety requirements, like defense, aerospace and autonomous driving, there is a desire to leverage a large amount of data while taking safety constraints, modularity and explainability into account. The study of neuro-symbolic AI uses data with those other parameters in mind. I’ve been thinking about how to combine machine learning ideas with symbolic ideas for close to a decade.
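One common pattern in the neuro-symbolic literature is to fold a symbolic rule into training itself, by adding a differentiable penalty for constraint violations to the usual data-driven loss. The sketch below illustrates this idea in PyTorch with an invented network, random stand-in data and a made-up rule; it is a generic illustration of the technique, not Shakarian’s specific method.

```python
# Sketch of constraint-aware training: the usual data-driven loss plus a
# differentiable penalty for violating a symbolic rule. The model, data and
# rule are all hypothetical illustrations, not a specific published method.

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 3))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
task_loss_fn = nn.CrossEntropyLoss()
CONSTRAINT_WEIGHT = 0.5  # assumed trade-off between fitting data and obeying the rule

x = torch.randn(32, 8)           # stand-in training batch
y = torch.randint(0, 3, (32,))   # stand-in labels
restricted = x[:, 0] > 0         # toy symbolic condition on the input

for _ in range(100):
    logits = model(x)
    probs = logits.softmax(dim=1)
    # Rule: "if the restricted condition holds, class 2 is forbidden."
    # The penalty is the probability mass the model puts on the forbidden class.
    violation = probs[restricted, 2].sum() / max(int(restricted.sum()), 1)
    loss = task_loss_fn(logits, y) + CONSTRAINT_WEIGHT * violation
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The appeal is that the system stays data driven while the rule remains explicit: the penalty term can be read, audited and adjusted independently of the learned weights.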
Q: Tell me about your research lab. What research are you currently working on?
A: The main project I’ve been working on in my lab, Lab V2, is a software package we call PyReason. One of the practical results of the neural network revolution has been really great software like PyTorch and TensorFlow, which streamline a lot of the work of building neural networks. Google and Meta put a lot of effort into these pieces of software and made them free to everyone. What we’ve noticed in the neuro-symbolic literature is everyone reinventing the wheel, in a sense, by creating a new subset of logic for their particular purposes, even though much of this work is already covered by a copious existing literature. In creating PyReason, my collaborators and I wanted to create the best possible logic platform designed to work with machine learning systems. We have about three or four active grants using it, and people have been downloading it, so it has been our primary work. We wanted to create a very strong piece of software to enable this research so you don’t have to keep reimplementing old bits of logic. This way, it’s all there, it’s mature and relatively bug free.
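As a rough picture of what a logic platform automates, the toy loop below performs naive forward chaining, repeatedly applying rules to known facts until nothing new can be derived. The facts and rules are invented for illustration, and this is deliberately generic Python, not PyReason’s actual API.

```python
# Naive forward chaining over propositional rules: a toy version of the kind
# of inference a logic platform industrializes (a generic illustration, not
# PyReason's API).

facts = {"sensor_contact(track1)", "in_airspace(track1)"}
rules = [
    # (body, head): if every atom in the body is known, conclude the head
    ({"sensor_contact(track1)", "in_airspace(track1)"}, "candidate(track1)"),
    ({"candidate(track1)"}, "alert(track1)"),
]

changed = True
while changed:  # iterate until a fixed point: no rule adds anything new
    changed = False
    for body, head in rules:
        if body <= facts and head not in facts:
            facts.add(head)
            changed = True

print(sorted(facts))
# ['alert(track1)', 'candidate(track1)', 'in_airspace(track1)', 'sensor_contact(track1)']
```

A mature platform handles the parts this sketch ignores, such as variables, uncertainty, time and scale, which is precisely the repeated reimplementation work PyReason aims to spare researchers.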
Q: What initially drew you to engineering and drove you to pursue work in this field?
A: I had an interesting journey to get to this point. Right out of high school, I went to the United States Military Academy at West Point, graduated, became a military officer and served in the U.S. Army’s 1st Armored Division. I had two combat tours in Iraq, and after my second combat tour, my unit sent me on a three-month temporary assignment to DARPA as an advisor because I had combat experience and a technical degree, a bachelor’s degree in computer science. At DARPA, I learned how some of our nation’s top scientists were applying AI to solve relevant defense problems, and I became very interested in both intelligence and autonomy. Having been trained in military intelligence and worked in infantry and armor units, I understood how intelligence assets were supporting the fight, and I saw that the work being done at DARPA was light-years beyond what I was doing manually. After that, I applied to a special program to go back to graduate school and earned my doctoral degree, focusing on AI. As part of that program, I also taught for a few years at West Point. After completing my military service, I joined the faculty at ASU in 2014.
Q: There’s so much hysteria and noise in the media about AI. Speaking as a professional researcher in this field, are we near any truly useful applications that are going to be game changers for life in various industries?
A: Yes, I think so. We’ve already seen what convolutional neural networks did for image recognition and how that capability has been embedded in everything from phones to security cameras. We’re going to see a very similar thing with large language models. Large language models have problems, mainly hallucinations, where a model confidently gives a wrong answer or fabricated information. We also can’t make strong safety guarantees about large language models when we can’t explain where their results came from, which is the same problem as with every other neural model. Companies like Google and OpenAI are doing a lot of testing to mitigate these issues, but there’s no way they could test every possible case.
Now, with that said, I expect to see things like the context window, or the amount of data you can put in a prompt, expand with large language models in the next year. That’s going to help improve both the training and use of these models. There have been a lot of techniques introduced in the past year that will significantly improve the accuracy in everyday use cases, and I think the public will see a very low error rate. Large language models are crucial in generating computer code, and that’s likely to be the most game-changing, impactful result. If we can write code faster, we can inherently innovate faster. Large language models are going to help researchers continue to act as engines of innovation, particularly here in the U.S., where these tools are readily available.
Q: Do you think AI’s applications in national security will ever get to a point where the general public sees this technology in use, such as the autonomous vehicles being tested on roads in and around Phoenix, or do you think it will stay behind the scenes?
A: When I ran my startup company, I learned that it was important for AI to be embedded in a solution that everyone understands on a daily basis. Even with autonomous vehicles, the only difference is that there’s no driver in the driver’s seat. The goal is to get these vehicles to behave like normal cars. I think the big exception to all of this is ChatGPT, which has really turned the world on its head. Even with these technologies, I have a little bit of doubt that our current interface is going to be the way we interact with these types of AI going forward, and the people at OpenAI agree.
I expect further development to better integrate technology like ChatGPT into normal workflows. We all have tools that we use to get work done, and there are always small costs associated with using them. With ChatGPT, there’s the cost of flipping to a new window, logging into ChatGPT and waiting for it to respond. If you’re using it to craft an email that’s only a few sentences long, it might not feel worth it, and then you don’t reach for the tool as often as you should. If ChatGPT were more integrated into processes, the use of it would be different. It’s such a compelling technology, and I think that’s why they were able to release it in this very simple, external chat format.