Artificial Intelligence, Cyber Attacks and Nuclear Weapons: A Dangerous Combination
Artificial intelligence (AI)—defined by John McCarthy, one of the doyens of the field, as “the science and engineering of making intelligent machines”—is slowly gaining relevance in the military domain. While commercial use of AI is widening, only three countries are reported to be developing serious military AI capabilities: the United States, China and Russia. AI promises a significant advantage to a nation’s offensive and defensive military capabilities.
AI can now be merged with sophisticated but untried new weaponry, such as offensive cyber capabilities. This is an alarming development, as it has the potential to destabilize the balance of military power among the leading industrial nations. Notably, with the advent of machine learning and AI, more targets have become available to computer hackers: critical infrastructure—banking systems, airport flight tracking, hospital records, the programs that run nuclear power reactors—is now vulnerable to attack.
One of the most pressing problems, however, lies in the destabilizing effect that sophisticated cyber weaponry, enhanced by AI, could have on the balance of nuclear power. There is no definitive proof that nuclear command and control systems are vulnerable to cyber attacks, but because these systems are digital, the vulnerability exists.
The destabilizing effect of sophisticated AI cyber weaponry is of special concern for U.S.-Russia relations. Indeed, defending against such weapons and protecting a nation’s hardware, software and data from attack have become issues in bilateral relations on par with nuclear arms control and accidental conventional military escalation.
Today, the owners of the largest stockpiles of nuclear weapons—Russia and the United States—are locked in what is arguably a new Cold War. Many of the traditional communication channels, including military ones, are compromised or broken, destabilizing nuclear diplomacy between the two countries. With the U.S. withdrawal from the Intermediate-Range Nuclear Forces (INF) Treaty, the arms control regime created during the Cold War can no longer guarantee strategic stability. New technologies, such as cyber capabilities paired with AI, will only amplify this destabilizing trend.
In fact, a new arms race, technological rather than nuclear and centered on AI-enabled cyber capabilities, may have already begun. Notably, AI may be especially dangerous in combination with other information technologies: cloud computing, big data and the Internet of Things, among others. The recently published U.S. Department of Defense Artificial Intelligence Strategy states that “the most transformative AI-enabled capabilities will arise from experiments…this includes creating a common foundation of shared data, reusable tools, frameworks and standards and cloud and edge services.” Self-learning technologies, for example, need data to learn from; as a result, they require constant access to frequently updated databases over fast communication links. Self-driving cars are a good example from the civilian world. As Ted Senator, a program manager in DARPA’s Defense Sciences Office, stated: “Thanks to massive amounts of data that include rare-event experiences collected from tens of millions of autonomous miles, self-driving technology is coming into its own.”
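To make that dependency concrete, here is a minimal sketch, assuming nothing beyond an ordinary online-learning setup: a classifier that must be refreshed with every new batch of data to keep tracking a drifting environment. The model, the synthetic data and the drift schedule are all hypothetical illustrations, not a depiction of any real system.

```python
# Purely illustrative: an online ("self-learning") classifier that needs a
# steady stream of fresh data to keep up with a changing environment.
# The data, model and drift schedule below are hypothetical.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(seed=0)
model = SGDClassifier(loss="log_loss")  # logistic regression trained by SGD

def fresh_batch(drift: float):
    """Simulate a newly collected batch whose distribution shifts over time."""
    X = rng.normal(loc=drift, scale=1.0, size=(200, 4))
    y = (X.sum(axis=1) > 4 * drift).astype(int)  # the decision boundary moves too
    return X, y

# Each step folds in the latest batch; starved of new data, the model's
# picture of the (drifting) world quickly goes stale.
for step in range(5):
    X, y = fresh_batch(drift=0.5 * step)
    model.partial_fit(X, y, classes=[0, 1])
    print(f"batch {step}: accuracy on latest data = {model.score(X, y):.2f}")
```

This constant appetite for current data is precisely what ties AI-enabled capabilities to the shared data, cloud and edge services the Pentagon strategy describes.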
A Way Forward
Even though it is impossible to reverse the advance of AI development, it is still possible to work out the rules of an arms race involving this technology. In this regard, it is worth recalling the experience of Soviet-American agreements when antiballistic missile (ABM) technologies were introduced in the 1970s. Instead of entering an arms race centered on the new technology, the Soviet Union and the United States agreed to accept mutual vulnerability and, for the sake of strategic stability, abandoned plans to field more advanced ABM systems, as codified in the ABM Treaty, which remained in force until 2002.
The ABM Treaty could serve as the basis for a similar agreement on AI cyber weapons.
It is critical to work out similar rules of engagement, taking careful account of the specifics of artificial intelligence. Self-learning machines operate within frameworks designed by humans, and the use of offensive AI technologies should therefore be discussed at the international level. Every arms control agreement to date has included verification provisions; in the case of AI technologies, however, such provisions are nearly impossible to agree on even at a bilateral level.
While the nuclear arms control agreements are slowly going extinct, there is an urgent need to negotiate new norms. AI and cyber technologies must become a subject of negotiation among the great powers. At the same time, the current state of U.S.-Russian relations leaves little room for any agreement, so discussions should be held at the Track 2 level and involve expert communities from multiple countries. These discussions should deliberate on what constitutes cyber offense and cyber defense (including those enhanced by AI technologies), and on pathways of conflict escalation and de-escalation. Russia, the United States and China should also consider adopting public cyber postures, declaring the conditions under which they will or will not use cyber weapons. An agreement should also include means of verification, difficult to implement but nonetheless a crucial element of successful talks on strategic stability in cyberspace.
Russian and U.S. leaders could enhance stability in great power relations through a joint political declaration in which the signatories renounce any intention of attacking each other’s critical infrastructure, such as nuclear command and control systems, with cyber or AI technologies. This would be a positive step for all involved.
Pavel Sharikov is a research fellow at the Institute for U.S. and Canada Studies at the Russian Academy of Sciences.
This article is also published on Stratfor Worldview.
The views expressed in this publication are solely those of the author and do not necessarily reflect the views of the EastWest Institute.