Big names warn against military AI

Thursday, 13 August 2015

More than 2500 artificial intelligence experts and researchers have signed an open letter warning of the dangers of an arms race centred on military artificial intelligence (AI) and demanding a ban on offensive autonomous weapons.

Another 15,000 also co-signed the letter, presented at this year's International Joint Conference on Artificial Intelligence in Buenos Aires. Among them were Apple co-founder and UTS adjunct professor Steve Wozniak, as well as Tesla and SpaceX founder Elon Musk and DeepMind Technologies chief executive Demis Hassabis. Stephen Hawking was also a signatory.

AI has reached a level of maturity such that deployment would be feasible within years rather than decades, according to the letter.

"The stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms," the authors stated, citing armed quadcopters or drones searching for and choosing their own targets.

If a major power were to pursue such technology, the authors argued, an arms race would be inevitable. Requiring no costly or difficult-to-obtain components, autonomous weapons would be cheap and ubiquitous, "the Kalashnikovs of tomorrow", they said.

"It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, [or] warlords wishing to perpetrate ethnic cleansing," the letter stated. "Autonomous weapons are ideal for tasks such as assassinations, destabilising nations, subduing populations and selectively killing a particular ethnic group."

Just as chemists and biologists have largely supported international prohibitions on chemical and biological weapons, the authors stated most AI researchers have no interest in building weapons, and do not want others to tarnish the field by doing so.

But there are those who have waded into the fray on the opposite side. Writing for IEEE Spectrum, contributing editor Evan Ackerman argued autonomous weapons could potentially exercise greater caution and follow the rules of engagement with greater fidelity than human combatants.

Even astrophysicist Neil deGrasse Tyson weighed in, saying on the ABC's Q&A program that the problem was not the technology itself, but the "diabolical politic that uses the technology for nefarious power-ploy hegemonistic gains".

"You don't stop the technology, [instead] you monitor our conduct as human beings in the face of it," he said.

In a piece on The Conversation, UNSW research fellow in Indo-Pacific defence, Dr Jai Galliot, echoed deGrasse Tyson's argument, and pointed to technologies already in use by the Australian Navy such as Close-In Weapons Systems (CIWS). These can autonomously fill roles normally performed by people, including detection, targeting and firing.

"This system would fall under the definition provided in the open letter if we were to follow the signatories' logic. But you don’t hear of anyone objecting to these systems," Galliot wrote.

Photo: US Air Force