ASU cybercrime-fighter helps us compute with confidence: Q&A with Adam Doupé
CTF director believes the center will be a cybersecurity industry leader
Adam Doupé has always been interested in how computers work. When he was in high school, he read an online post about how to spoof email addresses and sent an email to his friends from santa@northpole.com. He said he was blown away by the fact that if you just understand how the technology works, you can get it to do something super cool.
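The prank works because SMTP, the protocol that delivers email, historically trusted whatever sender address the client supplied. Here is a minimal sketch of the idea in Python, assuming a permissive local test server on localhost:1025; the recipient address is hypothetical, and modern mail providers use SPF, DKIM and DMARC precisely to reject this kind of spoofing.

```python
import smtplib
from email.message import EmailMessage

# The From header is entirely client-supplied; SMTP itself never
# verifies it. Receiving servers now use SPF/DKIM/DMARC checks to
# catch this kind of spoofing.
msg = EmailMessage()
msg["From"] = "santa@northpole.com"   # spoofed sender
msg["To"] = "friend@example.com"      # hypothetical recipient
msg["Subject"] = "Ho ho ho"
msg.set_content("You made the nice list this year.")

# Assumes a permissive local test server, not a real provider,
# which would refuse or flag the spoofed sender.
with smtplib.SMTP("localhost", 1025) as server:
    server.send_message(msg)
```

To try it harmlessly, first stand up a local debugging server with `python -m aiosmtpd -n -l localhost:1025`, which simply prints whatever mail it receives.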
Doupé, director of the Center for Cybersecurity and Trusted Foundations at Arizona State University, said he loves working in cybersecurity because it's all about understanding how systems work and how a bad guy with that knowledge can affect how a system operates.
Doupé and his team aim to build trusted, secure technology and computing systems, as well as to understand the people who use those systems, including criminals.
“Sometimes you need to understand things better than the people who made them,” Doupé said. “Your background doesn’t matter … the only currency is knowledge. Anyone can learn these computing systems we've created. They're not magic. Humans made them.”
Doupé believes one of the reasons ASU's Global Security Initiative (GSI) is the perfect place to do this type of cybersecurity work is that it operates outside of any particular school or college, which allows his team to draw on expertise from across the university. While computer science and engineering faculty are critical partners, the team also connects with experts in areas such as human and cognitive psychology, fostering innovative solutions to complex cybersecurity challenges.
In this Q&A, Doupé discusses CTF's holistic focus on keeping people safe as they use computing systems.
Why is this work important to society?
A lot of our work focuses on keeping software secure. Security vulnerabilities in the software we use can let malicious people break into our phones or the websites we depend on. One of the key things we focus on is improving the security of software by finding these vulnerabilities before the bad guys do, so we can help keep people safe. From a national security perspective, we want to make sure that the software that runs our society is safe, and that means making sure people can't mess with our systems. The other thing we focus on is protecting people. Whether hackers use low-level software vulnerabilities or a phishing attack pretending to be your bank, we want to detect those attacks and prevent people from accidentally giving important information to a criminal.
What is something you consider one of the center’s biggest successes?
In August, our hacking team affiliated with CTF won $2 million in DARPA's AI Cyber Challenge by building a fully automated system that can autonomously identify vulnerabilities in software and patch them using modern AI approaches. The winning team was made up mostly of PhD students. It was a three-university effort with the University of California, Santa Barbara and Purdue University, with ASU as the lead. It really caps almost a decade of work that we in the center have been doing: participating in DARPA programs and pushing the state of the art forward in automated vulnerability analysis and finding vulnerabilities in software.
How are students involved in the center’s research?
We try to involve students at all levels. Our PhD students form the bulk of the center's research engine. They do a lot of the heavy lifting, but it's not just PhD students; we have master's students and undergrads. One of the other things I'm really proud of is the high school internship program, an annual summer program where high school students get to work with CTF PhD students on a research project. We received over 100 applications last year. It's been really interesting bringing in students from the Phoenix area, getting them involved and exposing them to research. We hope that by doing this, we can spur the next generation of PhD students and scientists.
If someone gave your center $100 million, what would you do with it?
With $100 million, we could take a truly comprehensive approach. Part of the reason I find cybersecurity so interesting is that many of the issues and problems arise at the interaction between different layers. We used to assume that our hardware was secure, but recent research has found breaks in the hardware that require software fixes. Solving this problem all the way from the hardware up through the software, and even to the humans who use the software, requires a really comprehensive approach. At ASU, we've been pushing for that. We've been hiring a lot in cybersecurity, great people who are working with our center, including on the human aspects. So with $100 million, I could build an incredibly well-positioned team to focus on these difficult challenges that cut across not just technology but also societal issues.
What are three of the more pressing challenges in your field?
One pressing issue is definitely the proliferation of software vulnerabilities. We want our iPhones and the servers we rely on to be secure, so developing automated techniques to find those vulnerabilities before the bad guys do is critically important. The second is combating cybercrime: identifying all the ways that criminals try to defraud people when they use computer systems is absolutely critical. And the third is understanding the humans behind the systems, because even the best, most technically beautiful security solution can be subverted if you put it in the hands of a person who then makes mistakes, which is natural because they're human. So keeping the human in the loop, in terms of cybersecurity, is incredibly important.
What are emerging challenges you foresee in your field and how is your center preparing for them?
One of the things on the horizon is definitely how AI is going to shape things. I'm really excited that, with the AI Cyber Challenge, we're on the forefront of trying to understand how we can harness AI to work for us, not against us. People are very worried about phishing attacks becoming much more convincing because they can be generated by an AI. Project further and you get to deepfakes that are entirely AI-generated. As those get better and criminals get better at using them, we're going to be trying to solve the identity problem of, "Am I talking to a real person, or is this an avatar being generated by a scammer?" How AI is going to change security is unknown, but we're trying to be at the forefront, attacking those problems head on and early.
What is on the horizon you’re optimistic about?
As a scientist, I think it's healthy to be skeptical about things, particularly things that have a lot of hype, and AI was one of those things. Sure, I played with it and it's fun. It can write a screenplay about some weird thing you came up with, but I didn't know if that would be useful. I was pretty skeptical until we started on the AI Cyber Challenge. It's not at the point yet where it solves everything; we still need all the other tools and techniques that we've built over the years. But we were able to solve problems that we did not have a good solution to before, such as writing a patch that fixes a software vulnerability. There were no good automated solutions for that before, but current AI techniques and large language models are very, very good at it.
I see a future where the automated techniques we've built work really well with LLM-based AI techniques to fix things and make people not just more productive, but safer. As developers are writing code, it's being checked by AI systems like ours, so that before they ever push the code out to people, the flaws are found and we can build everything on a much more secure foundation.
What happens now is people write software, test it internally, find bugs and then release it. Then bad hackers and good hackers alike look at it and find security vulnerabilities. The good hackers report them back to the company so they can be fixed, but even then, those bugs have been out there for months, if not years. We want to close that loop: as soon as the developer writes the code, the system checks it and says, "Hey, here's the suggested fix."
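As a hypothetical sketch of what closing that loop could look like (not CTF's actual system): a pre-commit check that collects the developer's staged diff and hands it to an automated checker. The `suggest_fix` helper below is a toy stand-in invented for illustration; a real system would drive program analysis and a large language model rather than a one-line heuristic.

```python
import subprocess

def staged_diff() -> str:
    """Collect the developer's staged changes from git."""
    result = subprocess.run(
        ["git", "diff", "--cached"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def suggest_fix(diff: str) -> str | None:
    """Toy stand-in for an automated vulnerability checker.
    A real system would combine program analysis with an LLM
    to propose an actual patch."""
    if "strcpy(" in diff:
        return "replace strcpy() with a bounds-checked copy such as snprintf()"
    return None

if __name__ == "__main__":
    fix = suggest_fix(staged_diff())
    if fix is not None:
        print("Hey, here's the suggested fix:", fix)
        raise SystemExit(1)  # block the commit until the flaw is addressed
```

Wired in as a git pre-commit hook, a check like this would flag the flaw before the code ever leaves the developer's machine, which is the point of closing the loop.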
Can you tell me about your center’s partnerships and how they propel your research?
As a center, we pride ourselves on our ability to work with different agencies. One of our main partners is DARPA, and we really like working with DARPA because they understand the hard problems and are willing to challenge people to go above and beyond. Over the years, we've been involved in eight different DARPA projects.
The other thing that's really exciting is getting more involved with ARPA-H, the National Institutes of Health's version of DARPA. They take a DARPA-hard approach to the problems that impact the health care domain. We're starting to understand this critical new domain that cybersecurity has essentially ignored.
One of the things I think we've been doing well is communicating to people, "Hey, when there's an update for your computer, you should install that update, because it's fixing security bugs." But with medical devices, if you want to apply an update, you often need to get the device retested and recertified. There's a lot more reluctance to upgrade something because once it works, it works, and you don't want to risk an upgrade causing problems in the health care domain, which has massive real-world impacts. Understanding how we can actually secure those devices and operate in this new domain has been really exciting.
How has your teaching or approach to education influenced your research, or vice versa?
One of the things we've tried to do is completely rethink how to teach cybersecurity to students. When I took cybersecurity, we maybe learned about one type of security vulnerability, and then, if we were lucky, we had an assignment where we actually had to exploit it. We realized that's very different from how you learn math. You don't just study the theory behind the quadratic formula and then do a single problem with it. You do a sequence of problems to really grasp what this thing is and how it works.
We realized a lot of cybersecurity education was missing that hands-on component. I think of it as putting fingers to keyboards. Theory is not enough; you have to be able to apply what you know. Computers are very demanding and very annoying. Anyone who has tried programming understands that if you miss one semicolon, the whole thing does not work. It's similar when you're learning about security vulnerabilities: if your 1 is in a place where a 0 should be, it's just not going to work. So by learning to exploit these security vulnerabilities, you get a really good understanding of the whole system.
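That bit-level precision is easy to see in a classic stack-smashing exercise. Here is a sketch with invented numbers: for a hypothetical 64-bit binary where 72 bytes of padding reach the saved return address, the payload only works if every byte lands exactly where it should.

```python
import struct

# Illustrative values for a hypothetical vulnerable binary; the
# offset and target address are invented, not from a real exploit.
OFFSET = 72          # bytes of padding up to the saved return address
TARGET = 0x00401196  # address we want execution to jump to

# Overwrite the saved return address with TARGET (little-endian, 64-bit).
payload = b"A" * OFFSET + struct.pack("<Q", TARGET)

# One byte of padding too few or too many and the 8-byte address
# straddles the wrong stack slot: the program crashes instead of
# jumping to TARGET. That is the "1 where a 0 should be" problem.
print(payload.hex())
```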
We've developed a platform called pwn.college, where we teach several cybersecurity courses at a time. It delivers hands-on, practice-based learning where you don't do just one of these challenges, but several. What's really cool is that we're seeing a lot of recognition and respect. We've been able to take our semester-long courses and condense them into two- to three-week intensive workshops that we've run for the DOD. These are employees who want additional training, so they take these classes and are able to accelerate their hands-on learning. We're really stoked to be able to do this at ASU.
As part of the center, we have the new American Cybersecurity Education (ACE) Institute, funded by DARPA to create a master's degree in hands-on offensive cybersecurity. The cool thing is that, in the spirit of openness and inclusion at ASU, all of the content on pwn.college is open source and all of the curriculum is available to everyone. The goal of the ACE Institute specifically is to create this master's degree curriculum and then allow it to be used by other universities. We're partnering with Dakota State University and several other universities to run similar classes at their institutions, so that we can increase the number of people going into cybersecurity not just at ASU, but nationwide, which I think is super exciting.