Drones: breakthrough or threat?

Christof Heyns, UN Special Rapporteur on extrajudicial, summary or arbitrary executions and Professor of Human Rights Law at the University of Pretoria.

As a United Nations-based expert, Christof Heyns has on many occasions expressed concern about the growing number of drone attacks in countries such as Pakistan, Somalia and Yemen, and has warned that the proliferation of these vehicles weakens international security and the protection of human life. In this interview, he talks about the challenges that drones pose, and the urgent need to halt the proliferation of ‘killer robots’ in order to preserve the world order.

As an expert in international law, do you believe that there is any possible use of armed drones compatible with international humanitarian law?

I do not think that drones are inherently illegal weapons. They are being used in Afghanistan in the context of an established armed conflict, and I do not see a big difference between somebody on board who pushes the button at 16,000 feet without seeing the ground, and somebody who pushes that button from the ground with a screen that shows exactly what they are targeting. However, their use creates a lot of challenges, and that is what we need to study.

In your latest reports, you have stated that ‘a world where multiple States use armed drones in secrecy is a less secure world’ and have called for more transparency from state actors. Which measures must be adopted for this to happen?

Currently in the United States, there is a draft bill that looks at disclosing the number of civilian casualties when there has been a drone strike, and that is a first step. Ideally, the identities should be revealed, not just the numbers. One must have clarity on the facts, on the law surrounding drones, and on who exactly is potentially a target; take, for example, the issue of members of organized armed groups. Who exactly can be targeted? Any member, or only certain members of the armed group? One must also have clarity on policy: in what cases are drones used, and what are the policies concerning this form of warfare? Those are the three areas where I am looking for transparency and visibility.

What about non-state armed groups? Do you believe that the acquisition of drones by such groups is a possibility?

In order to send a drone halfway around the world, you need to be in possession of some advanced telecommunications, and even state actors often do not have that kind of capability. Non-state groups could probably only use drones within a short range. However, there is a danger that drones could be hacked and fall into the hands of non-state actors.

Clarity on the facts, the law surrounding drones, and who exactly is potentially a target. Those are the three areas where I am looking for transparency and visibility

In order to deal with all these challenges, do you think that the international community and public opinion must play a bigger role?

The international community must express itself on robots. It is already happening in the General Assembly of the United Nations; the discussion we had some months ago in New York with 193 states was important, as states expressed themselves on a number of issues, including the European Union saying that the established international legal system must be used, that its norms must not be reopened, and that there should be transparency. The Convention on Certain Conventional Weapons (CCW) is also looking into this, and the Secretary-General of the UN has also been advised to begin a process.

In a 2010 report, the previous Special Rapporteur, Philip Alston, expressed his concern about the use of drones to carry out targeted killings and already mentioned an urgent need to study the implications of fully autonomous weapons. Are we already too late on lethal robots?

I do not think it is too late. It is in a way unusual to address problems with weapons systems in advance; problems are often addressed only after a weapon has been used, but it will be very difficult to get rid of robots once they are in use. There is urgency, but it is not too late. The main states that have this technology are at least aware that there is a concern. We have a window of opportunity to act, but within a few months we will probably lose it.

You suggested, in your report, a moratorium on robots. Should it be a first step towards a future prohibition?

The important thing to mention about the moratorium is that it is a temporary halt on something that is not yet used in practice. The international community does not yet understand well enough whether these robots should be banned; my proposal was not to use them for the time being. This could lead to a ban or to stronger regulation, or it could potentially lead to the international community deciding that it can live with robots. I do not think the last option is a realistic possibility, so I foresee either a ban or some kind of regulation.

One concern is whether it is at all acceptable that machines could take decisions over the life and death of human beings. The second concern is that of accountability. How do you regulate the situation if we do not know who is ultimately responsible?

Is it possible to impose a moratorium on the research into these weapons?

That is impossible. The technology used for these robots is the same as that used for the Google car and for many other applications. What I am calling for is a moratorium on the building, use and development of these weapons, and on weapon platforms too, because these robots simply carry other weapons. I am simply asking for domestic moratoria, so that states themselves say that they will not use them. It is difficult to monitor this internationally; my main point is that we are too uncertain at the moment about the implications of these weapons, so states should commit, for the time being, not to use them.

What are the biggest concerns about the use of robots?

One concern is whether it is at all acceptable that machines could take decisions over the life and death of human beings. Even if they could comply 100% with international law, it is a problem with regard to human dignity that we can be killed by machines. The second concern is that of accountability. Even if, theoretically, these robots can comply with international humanitarian law (IHL), how do you regulate the situation if we do not know who is ultimately responsible? The person who develops the software, the politicians, the commander? These machines will have been programmed years in advance, before there is even a war; can the programmer be tried for a war crime? There is a lot of uncertainty about accountability, and IHL cannot be enforced if we do not know who can be held accountable.

This issue has been discussed at the Convention on Certain Conventional Weapons (CCW) and some countries have raised the issue. Do you think that if not enough progress is made, this could lead to a fast-track process, such as the cluster munitions ban process?

Many people are critical of the CCW because weapons such as cluster munitions and landmines were dealt with there, but solutions were eventually found elsewhere, and it has not achieved many concrete results in the past few years. However, there seems to be a new energy: there is a French chair of the state party group (Jean-Hugues Simon-Michel), whom I have met, and they are committed to employing CCW procedures too. There is reason to be cautiously optimistic that the CCW can take the agenda further.

There seems to be a momentum on humanitarian disarmament internationally. Certainly this is something positive but could it also have a negative side? Are there too many issues competing for public space in the agenda?

There is a danger that one focuses only on autonomous robots while, at the same time, new technologies are being developed that are not attended to. I do not think it is a bad thing that a lot of attention is being paid to different technologies, but people must be realistic about it as well. We have to think about how to regulate the use of drones; in that case, it is not a question of whether they should be banned. With robots it is a different matter: we should channel our energy into asking whether they should be banned. Then there are all kinds of bio- and nanotechnologies being developed. Those things need to be focused on, and so I think it is a good thing that there is a wide spread of attention.

How do you imagine armed conflicts will develop in the future?

It is clear already that we are moving away from inter-state wars. Conflict is mostly non-international, between states and non-state actors, and war gets very complicated. For the states that have this technology, there will be an increased incentive to use it, and that is why I am worried about autonomy as well: one day it could go too far, push us over the edge, and become something that dehumanizes the very idea of humanity. States with the technology will increasingly use it, and that is a concern. For them the incentive is significant, especially with unmanned systems, because they do not lose their own troops.

And probably, as you say in your report, there will be more conflicts, in the sense that actors will have less to lose.

And that is my concern. War has a typical timeframe: usually there is a time for war, then peace follows; war comes to an end, and then come recovery and healing. However, if states have access to technology that they can use to pinpoint a particular target in a particular state, low-intensity conflict in which a state tries to target its enemies in many different states will become common. The established rhythm of war and peace would be blurred, and ongoing low-intensity armed conflict would prevail.

© Generalitat de Catalunya