Controlling Cyber Conflict
Joseph Nye, a member of the Advisory Group for the EastWest Institute, poses the question: "Are cyber-attacks the wave of the future, or can norms be developed to control international cyber conflict?"
When cyber-security professionals were polled recently at the annual Black Hat conference in Las Vegas, 60% said they expected the United States to suffer a successful attack against its critical infrastructure in the next two years. And U.S. politics remains convulsed by the aftermath of Russian cyber interference in the 2016 election. Are cyber-attacks the wave of the future, or can norms be developed to control international cyber conflict?
We can learn from the history of the nuclear age. While cyber and nuclear technologies are vastly different, the process by which society learns to cope with a highly disruptive technology shows instructive similarities. It took states about two decades to reach the first cooperative agreements in the nuclear era. If one dates the cyber-security problem not from the beginning of the Internet in the 1970s, but from the late 1990s, when burgeoning participation made the Internet the substrate for economic and military interdependence (and thus increased our vulnerability), international cooperation on cyber security is now at about the same two-decade mark.
The first efforts in the nuclear era were unsuccessful United Nations-centered treaties. In 1946, the U.S. proposed the Baruch Plan for UN control of nuclear energy, which the Soviet Union promptly rejected rather than lock itself into a position of technological inferiority. It was not until after the Cuban Missile Crisis in 1962 that the first arms control agreement, the Limited Test Ban Treaty, was signed in 1963. The Nuclear Non-Proliferation Treaty followed in 1968, and the bilateral U.S.-USSR Strategic Arms Limitation Treaty in 1972.
Read the full commentary on Project Syndicate here.