Experts and politicians in China are worried that a rush to integrate artificial intelligence into weapons and military equipment could accidentally lead to war between nations.
According to a new report from the Center for a New American Security (CNAS), a US national security think tank, Chinese officials increasingly see an “arms race” dynamic in AI as a threat to global peace. As countries scramble to reap the benefits of artificial intelligence in various domains, including the military, the fear is that the international norms shaping how countries communicate will become outdated, leading to confusion and potential conflict.
“The specific scenario described to me [by one anonymous Chinese official] is unintentional escalation related to the use of a drone,” Gregory C. Allen, an adjunct senior fellow at CNAS and author of the new report, tells The Verge.
As Allen explains, the operation of drones both large and small has become increasingly automated in recent years. In the US, drones are capable of basic autopilot, performing simple tasks like flying in a circle around a target. But China is being “more aggressive about introducing greater levels of autonomy closer to lethal use of force,” he says. One example is the Blowfish A2 drone, which China exports internationally and which, says Allen, is advertised as being capable of “full autonomy all the way up to targeted strikes.”
Because drones are controlled remotely, militaries tend to be more cavalier about their use. With no human lives at risk, they’re more willing both to shoot drones down and to deploy them into contested airspace in the first place. The same attitude can be seen in cyberwarfare, where countries intrude in ways they wouldn’t necessarily risk if human operatives were involved.
“The point made to me was that it’s not clear how either side will interpret certain behaviors [involving autonomous equipment],” says Allen. “The side sending out an autonomous drone will think it’s not a big deal because there’s no casualty risk, while the other side could shoot it down for the same reason. But there’s no agreed framework on what message is being sent by either side’s behavior.”
The risks in such a scenario become greater when factoring in advanced autonomy. If a drone or robot fires a warning shot at enemy troops, for example, how will that action be interpreted? Will the troops understand it as an automated response, or will they think it’s the decision of a human commander? How would they know in either case?
In essence, says Allen, countries around the world have yet to define “the norms of armed conflict” for autonomous systems. And the longer that continues, the greater the risk for “unintentional escalation.”
Read More: The Verge