Artificial intelligence and the future of national security



Artificial intelligence is a “world-altering” technology that represents “the most powerful tools in generations for expanding knowledge, increasing prosperity and enriching the human experience,” and it will be a source of enormous power for the companies and countries that harness it, according to the recently released Final Report of the National Security Commission on Artificial Intelligence (NSCAI).

This is not hyperbole or a fantastical version of AI’s potential impact. This is the assessment of a group of leading technologists and national security professionals charged with offering recommendations to Congress on how to ensure American leadership in AI for national security and defense. Concerningly, the group concluded that the U.S. is not currently prepared to defend American interests or compete in the era of AI.

The NSCAI was chartered by Congress in August 2018 to review AI and related technologies and make recommendations to address U.S. national security and defense needs. The commission, chaired by former Google Chief Executive Officer Eric Schmidt, spent over two years collecting information and compiling its recommendations. The NSCAI’s final report is expected to inform policy around defense-related AI issues over the coming years.

Arizona State University’s Global Security Initiative engaged with the NSCAI on multiple occasions, and Global Security Initiative Executive Director Nadya Bliss contributed to a summary of the final report posted on the Computing Research Association’s blog.

Below, Bliss discusses the NSCAI’s recommendations, why the federal government should drive some technological development and potential risks of widespread use of AI in defense and security operations.

Question: Why is advancing artificial intelligence important to national security?

Answer: First, as the NSCAI notes in its report, AI is a dual-use technology that can provide a competitive advantage in any sector — military or civilian. In other words, this is not like creating a new, more advanced version of a helicopter. This is a technology that will advance all other technologies. The countries that lead in AI technology will have a significant advantage in any arena. 

Second, as AI becomes more ubiquitous, it can expand our threat landscape significantly. Information operations are already wreaking havoc on our civil society and democratic institutions and can be accelerated with more advanced AI. Cyber attacks could become more damaging and harder to defend against. And as the NSCAI notes, the competition to lead the world in AI development is not only about technological superiority. It is also about value systems. AI can be a powerful tool for authoritarian regimes to remain in power, and if the U.S. leads in AI, we and our like-minded allies can develop AI systems in alignment with democratic values.

Q: What do you see as the key takeaways from the NSCAI’s final report?

A: The report outlines a number of important recommendations, and I urge anyone interested to read it in full, but I’ll highlight three here:

First, the commission recommends substantial new investments in AI research and development, including more than quadrupling the Department of Defense’s annual spending on AI research, and increasing non-defense federal spending on AI research from a current level of approximately $1.5 billion to $32 billion per year by 2026 (through the Department of Energy, National Science Foundation, National Institutes of Health, National Aeronautics and Space Administration and other federal agencies).

Second, the commission makes multiple recommendations aimed at elevating AI development to the level of a top-tier national strategic priority, rather than treating it as a purely technical challenge. For example, it recommends establishing a “Technology Competitiveness Council” in the White House, led by the vice president, that would oversee a coordinated approach to the global technology competition.

Finally, the commission notes that we need to accompany technological advances with educational efforts to develop a national security workforce that can work with and implement AI effectively.

Q: Why should the government be involved in driving AI development? Isn’t that the role of the private sector?

A: Actually, the government has a compelling motivation to drive new technology development in certain circumstances, and government involvement has led to some of the greatest inventions in history.

Public- and private-sector motivations are appropriately different. While the private sector is primarily driven by market conditions, the public sector is driven by mission — in national security, defending the nation and working with allies to defend democratic values. This can lead to fundamentally different decisions about what to prioritize: security or convenience? Minimizing energy consumption so technology can operate longer in austere environments, or adding new capabilities?

This mission-driven approach also allows governments to take a much longer view than most companies and invest in basic research that may not lead to a profitable product for decades. The iPhone is a great example: many of its capabilities, including the touch screen, GPS and speech recognition, were initially created through government-funded research.

Q: History is full of new technologies spawning unintended negative consequences — a big one now is the rampant spread of disinformation online. What threats may arise or be accelerated with more advanced AI?

A: Artificial intelligence can provide more horsepower for decision-making, allowing us to process incredible amounts of data quickly and arrive at what we hope are the most effective decisions.

But of course the quality of those decisions depends on the quality of the data and the fairness of the algorithms driving the decision-making. AI-enabled technology, in the end, is created by people and reflects many of our flaws and faults. We’ve seen many instances of this, and a great MIT Technology Review article highlighted just how sexist and racist current versions of AI can be because they are trained on data scraped from the internet.
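To make the “quality of the data” point concrete, here is a minimal sketch in Python using entirely synthetic, hypothetical data (the skill and group features and every number are invented for illustration, not drawn from any real system): when historical labels favor one group, a simple classifier trained on those labels learns to lean on a protected attribute that says nothing about actual ability.

import numpy as np

rng = np.random.default_rng(0)
n = 5000
skill = rng.normal(size=n)          # the legitimate signal
group = rng.integers(0, 2, size=n)  # a protected attribute (0 or 1)
# Biased historical labels: group 1 was favored regardless of skill.
y = ((skill + 1.5 * group + rng.normal(scale=0.5, size=n)) > 0.75).astype(float)
X = np.column_stack([skill, group]).astype(float)

# Fit a logistic regression with plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y) / n)
    b -= 0.5 * float(np.mean(p - y))

print("weight on skill:              ", round(float(w[0]), 2))
print("weight on protected attribute:", round(float(w[1]), 2))
# The second weight comes out large and positive: the model has encoded
# the historical favoritism, so equally skilled people from the two
# groups get systematically different scores.

The point is not this toy model but the pattern: nothing in the training step is malicious, yet the bias in the records flows straight into the decisions.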

One major concern for the national security community is adversarial AI, or the possibility that an adversary could purposefully manipulate the data fed into an AI system so that it produces faulty information, leading to bad decisions. As one can imagine, this could be incredibly damaging in a battlefield scenario or when responding to a natural disaster like a hurricane, when split-second, life-or-death decisions need to be made.
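For intuition about how little manipulation this can take, here is a minimal sketch in Python, again with synthetic data and a toy logistic-regression classifier rather than any fielded system: a small, uniform nudge to every input feature, aimed along the model’s own weights (in the spirit of the well-known fast-gradient-sign family of attacks), flips a correct prediction to a wrong one.

import numpy as np

rng = np.random.default_rng(1)
n, d = 2000, 20
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = (X @ true_w > 0).astype(float)  # synthetic, linearly separable labels

# Fit a logistic regression with plain gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(1000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y) / n)
    b -= 0.5 * float(np.mean(p - y))

# Pick an input the model currently classifies correctly.
correct = ((X @ w + b) > 0) == (y == 1)
i = int(np.argmax(correct))
x, label = X[i], y[i]
score = float(x @ w + b)
print("clean prediction correct:      ", (score > 0) == (label == 1))

# Nudge every feature along the sign of the model's weights, just far
# enough to push the score across the decision boundary.
step = -np.sign(w) if label == 1 else np.sign(w)
eps = 1.1 * abs(score) / float(np.abs(w).sum())
x_adv = x + eps * step
adv_score = float(x_adv @ w + b)
print("adversarial prediction correct:", (adv_score > 0) == (label == 1))
print("per-feature change:", round(eps, 3))

In a real setting an attacker rarely sees the model’s weights, but comparable black-box attacks work by probing a system’s outputs, which is one reason the research community treats this as a serious open problem.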

Q: What can we do to decrease the chances those threats come to fruition?

A: There’s a lot that needs to be done, but I’m going to highlight actions in three areas here — research, education and policy. 

In research, we need a heavy focus on developing AI that is trustworthy and functions as a good teammate. AI will not replace humans; it will work alongside us and should be developed to complement our skills and serve as an effective teammate. We also need more interdisciplinary research to complement computer science, which is too often focused on the question “can it be done?” when we should also be asking “should it be done, and what are the potential vulnerabilities if it is done?” The social sciences and humanities can bring this critical perspective into technology development. This idea is outlined further in a recent Computing Research Association white paper that I co-authored.

In education, we need broad initiatives aimed at expanding the pipeline of science, technology, engineering and math talent for defense and national security, and at reorienting today’s workforce so people can thrive when new technologies are introduced into their work environments. The report calls the talent deficit in the Department of Defense and the intelligence community the greatest impediment to being “AI-ready by 2025.” To close that deficit, we need to focus on the full spectrum of learning pathways: K–12 efforts, community colleges, college degrees and graduate programs, and upskilling and reskilling.

In terms of policy, the NSCAI report calls on the U.S. government to work with allies to develop a rules-based international order for the development and adoption of AI. It recommends that the State Department take the lead and develop an International Science and Technology Strategy in concert with allies.

Funding sources: Global Security Initiative is partially supported by Arizona’s Technology and Research Initiative Fund. TRIF investment has enabled thousands of scientific discoveries, over 950 patents, 328 startup companies and hands-on training for nearly 39,000 students across Arizona’s universities. Publicly supported through voter approval, TRIF is an essential resource for growing Arizona’s economy and providing opportunities for Arizona residents to work, learn and thrive.

