Illustration by xMx Luo

In August, 80 countries met at the United Nations’ Convention on Certain Conventional Weapons to discuss banning autonomous weapons. The US, Russia, Australia, Israel, and South Korea blocked the attempt. Last week, Russia cut the next round of talks in 2019 down to just one week, which will make negotiations even harder.

Unlike with the earlier pushes against nuclear and chemical weapons, these countries say they want to explore the potential benefits of lethal autonomous weapon systems first. Most global powers, including the US, India, Russia, and China, are actively developing them.

Autonomous weapons will change warfare more than any other technology. They can act and engage without human oversight. Unlike human soldiers, they cannot be killed or injured; at worst, they are destroyed or damaged. They are faster and more precise, and they can operate in conditions no human would survive. They do not require years of training, and they are easily mass-produced.

They can also be used for policing and surveillance. Whereas humans can become emotional and ignore rules of engagement, these weapon systems can be monitored and controlled: they are programmed and can be tested, and self-defense will not be a valid excuse for them.

Governments can justify military engagements more easily to the public when autonomous weapons are used. Nobody mourns a robot.

Recent military engagements by the US and EU in Libya, Syria, and Yemen have already centered on drones, bombing raids, and cruise missiles. Because human losses on the attackers’ side have been minimal, these wars have kept a low profile in the US and in Europe. Autonomous weapons will make this even easier.

At the same time, over 3,000 leading researchers have pledged not to develop autonomous weapons. Among them are the founders of DeepMind and Elon Musk, a co-founder of OpenAI.

Many researchers and engineers fear their development might kick off an arms race, or that such weapons could be used to suppress the civilian population if they fell into the wrong hands.

Many of the most promising AI technologies are difficult to test and understand. Researchers struggle with their complexity and often cannot explain why they work so well.

Do we want to give a machine the final say over injury, life, and death? Where do we draw the line?

When I was a research engineer at DeepMind, we all agreed that no one would want their research to be used for warfare. It is not something we wanted to be associated with.

However, if we don’t develop autonomous weapons, we will be at a disadvantage when we have to fight an enemy that employs them. Without a global ban, those without moral concerns will most definitely develop them. That might lead to a less desirable future.

So, could we still ban them globally and stop their development — just as we stopped the proliferation of nuclear weapons?

A ban will not work

Even if all countries signed an international treaty to ban the development of autonomous weapons, it would not prevent their creation. Nuclear non-proliferation worked because its circumstances were different in important ways.

Two properties make the 1968 nuclear non-proliferation treaty work quite well.

First, it takes a long time to develop and deploy nuclear weapons, which gives other signatories time to react to violations and enact sanctions.

In fact, it takes many years to create nuclear bombs. You need considerable and specialized know-how and tools. All of this has to be developed from scratch because countries keep this knowledge classified. And then, you still have to develop missiles and other means of deploying the nuclear payload.

Second, effective inspections are possible. It’s easy to work out whether a country is working on nuclear weapons: you need enrichment facilities and weapons-grade plutonium, which are difficult to hide, and even when they are hidden, inspectors can detect trace amounts of plutonium.

But it’s the opposite with autonomous weapons.

To start, they have a very short ramp-up time: the different technologies needed to create autonomous weapons already exist and are being developed independently. Combining them is all it takes.

Most of the technologies and research needed for autonomous weapons are not specific to them.

For example, tanks and fighter planes already use sensors and cameras to record everything that is happening. Pilots steer their jet fighters through a computer that reinterprets their commands using input from those sensors. Drones are controlled remotely by operators who are thousands of miles away.

When the remote operators or pilots are replaced by an AI giving the steering commands, these weapon systems become autonomous weapons.

In fact, this could already have happened with drones, and at the moment there would be no way to tell.

AI research is also progressing faster than ever before, with governments and private entities pouring ever more money into it.

OpenAI, a nonprofit research lab co-founded by Elon Musk, has published research on “transfer learning,” which allows AIs trained in a simulation to be transferred into the real world with little effort.

Video-game companies, which specialize in virtual warfare, have also joined the fray: they want to create the perfect opponent. EA’s SEED division has begun training more general-purpose AIs to play its Battlefield 1 game.

Together, these companies are unwittingly preparing the ground for autonomous warfare.

Could we at least determine with certainty if someone was working on autonomous weapons?

Sadly, effective inspections are impossible. The hardware for autonomous weapons is no different from that of regular drones and weapon systems. The AIs for autonomous weapons could be trained in any datacenter, because most of their training can happen inside simulations. To outside inspectors, running such a simulation would look no different from computing tomorrow’s weather forecast or training an AI to play the latest Call of Duty.

Moreover, their code and data can easily be moved to any other datacenter and hidden without leaving a trace.

Without effective inspections and a long ramp-up time, a non-proliferation treaty would be useless. Signatories would continue to research the general technologies in the open and integrate them into autonomous weapons in secret, with little chance of detection. They would know that others are likely doing the same, and that abstaining is not an option.

An incredible amount of trust and transparency within the international community would be needed, and we seem further from that now than at any point in the last couple of decades. There might not be enough time to rebuild that trust.

So, what can we do?

We just cannot shirk our responsibility.

Policy makers should not waste time trying to ban autonomous weapons. They should, however, take action to keep them out of the reach of non-state actors.

While I was a residential fellow at Newspeak House in London, I invited OpenAI’s Jack Clark to talk about policy challenges at our AI & Politics meetup. He showed a video of Skydio’s commercial drone and stressed the dual-use nature of any general technology such as AI.

Skydio’s drone can independently track people and keep them in its camera’s view. Non-state actors could easily abuse it to build autonomous weapons.

Commercially available hardware must be secured against tampering, purchases need to be registered, and suspicious activity must be investigated. This is already done for materials that could be used for bomb-making, and we should take the same cautionary approach with autonomous commercial drones.

Most researchers are wary of speaking about this. They often point to an outright ban on autonomous weapons as the only solution, not because it could work, but because stating anything else could provoke a public backlash against AI research.

Such a backlash would make matters worse: it would push development further into secrecy. A lot of good comes from AI research, and overzealous regulation could disproportionately harm it.

Instead, international bodies should regulate the development of autonomous weapons and oversee their use as far as possible. Both offensive and defensive uses need to be researched transparently, and we have to build up deterrent capabilities.

Even if we cannot avoid autonomous weapons, we can still try to prevent them from being used on civilians and constrain their use in policing. Otherwise, authoritarian regimes could use them to suppress civilian populations.

As researchers, we can sign many pledges, but we are still in a bind. If we throw our best efforts at building models that show people the right ads and solve computer games, should we not also apply our best where it really matters?

We must avoid the “incompetent middle way”, where autonomous weapons work well enough to be used but still fail in unnecessary and harmful ways.

International bodies must not become deadlocked because they want to ban what is inevitable instead of compromising on regulation.

But we will end up there if most researchers turn their backs on the issue.

There can be no happy ending, only one we can live with.


An older version of this article was originally published on Quartz as “Autonomous weapons will be tireless, efficient, killing machines—and there is no way to stop them”. The current version is also published on Medium as an unsyndicated article.

