“Are you smarter than Professor Hawking?” Higher Forces and Gut-Feelings in the Debate on Lethal Autonomous Weapons Systems

Written by Merel Ekelhof

“Professor Hawking says that artificial intelligence without control may cause the extinction of the human race”, noted a Chinese delegate following a session on ‘mapping autonomy’ at the Convention on Conventional Weapons (CCW) meeting of experts, held from 11 to 15 April 2016 at the United Nations in Geneva. The CCW convened its third meeting of experts to continue discussions on questions related to emerging technologies in the area of lethal autonomous weapons systems (LAWS), and I had the privilege of participating.

LAWS are most often described as weapons capable of selecting and attacking targets without human intervention; one of the key questions addressed at the meeting was what exactly this means. According to most of the commentators present, LAWS do not yet exist; however, the possibility of using autonomous weapons in targeting decisions raises multidisciplinary questions that touch upon moral and ethical, legal, policy, security and technical issues. The meeting addressed all of these, starting with a technical session aimed at mapping autonomy.

Without expressing their position on a ban, the six technical experts on the panel presented a nuanced view of the state of current autonomous weapons technology and the road that lies ahead. The Chinese delegation was one of the first to respond to the panel, and the delegate seemed startled; some of what was said seemed to contradict the conclusions reached by Professor Hawking et al. China had read the Open Letter issued by the Future of Life Institute (FLI) and signed by thousands of artificial intelligence (AI) and robotics researchers, as well as by a number of other endorsers, including the well-known Professor Stephen Hawking. The Open Letter calls for a ban on offensive autonomous weapons beyond meaningful human control, claiming that these weapons would be feasible within years, not decades. The Open Letter attracted a good deal of attention, largely because it is signed by a number of well-regarded figures, including Tesla CEO Elon Musk, Apple co-founder Steve Wozniak and, as previously mentioned, Professor Stephen Hawking.

The expert panelists offered some divergent views on the claims and predictions made in the Open Letter. In response, China asked the panelists, “do you think you are smarter than Professor Hawking?” A number of delegates, academics, NGO members and panelists seemed quite amused by the provocative question. Who dares to disagree with Hawking? Fortunately, some of the experts did. “Isn’t Hawking a physicist, and not an AI expert?”, asked one panelist. Another expert confidently said, “Yes, I am smarter than Stephen Hawking.” Why? “Because, like Plato, I know that I do not know.” The exchange is amusing, but also a little troubling. What is the effect of well-regarded figures on the discourse about autonomous weapon systems? I wondered about this for a couple of minutes, until the Chinese delegate once again took the floor. What he said made me realize that in this complex debate, higher forces, public figures, beliefs and gut-feelings greatly affect the discussion. The delegate began, “Hawking […] is a higher force of knowledge”, implying that Hawking is not to be argued with.

Professor Hawking and the thousands of researchers who supported the FLI’s call may very well be right; however, this is not the point. What the words of the Chinese delegate illustrate is that, depending on the authoritative status of who is offering them, certain statements may be taken as truths without further consideration. If Stephen Hawking really is a higher force who can solve the dilemma of autonomous weapons in a one-page letter, then why are we still debating autonomous weapons? Should we stop arguing and just listen to the man?

Another example of how ‘higher forces’, or rather our ‘lower abdomen’, can play a role is the gut-feeling. According to Nobel Peace Prize laureate Jody Williams, the primary reason for starting the Campaign to Stop Killer Robots, which calls for a preemptive ban on autonomous weapons, was an intrinsic gut-feeling that this development is morally and ethically wrong. During the CCW meeting of experts in April 2016, she claimed that the gut-feeling, or ‘ugh’ factor, is one of the fundamental moral arguments for a ban. Williams is one of the spokespersons for the Campaign to Stop Killer Robots, and she always speaks with an admirable fire in her heart. But a person’s gut-feeling, no matter whose, should not be the sole reason to call for a ban. Saying it louder or for longer may attract more listeners, but it will not strengthen the argument.

Perhaps Williams speaks for most people when she says that there is something in our gut that tells us we should be worried about this development (I know it applies to me); our guts are very important in terms of intuition and should not be underestimated. Only very fearless (or perhaps ignorant) people would contradict that. But this does not mean that we can simply jump from our gut-feeling to the signing and ratifying of a comprehensive and preemptive prohibition on autonomous weapon systems. Rather, we deserve a global and careful conversation on all aspects of the debate before a decision of this gravity is made.

The decision on whether or not to ban autonomous weapons should not be taken lightly. The thought of the horrendous effects that war has on individuals, societies, entire countries and the world as a whole makes most people feel deeply uncomfortable. It is therefore understandable that people might resort to higher forces or gut-feelings in search of answers to such difficult and profound questions. However, those feelings or beliefs need to take the shape of deliberative discussions in order to move the debate on the legality of autonomous weapons forward. Hence, we should embrace them, but at the same time be aware that these feelings and beliefs can shape our reasoning and should not be the sole foundation upon which our decision-making process rests.

Fortunately, many of the participants and commentators present at the meeting believe that there is much to be discussed before a decision can be made on a potential prohibition. Multidisciplinary questions concerning the need for human control, moral and ethical arguments, the relation to military effectiveness, legal ramifications, political incentives and many more should play a role in decisions on the regulation of autonomous weapons. Let us consider these questions alongside our gut-feelings and beliefs about higher forces, but let us not allow those feelings to blind us to the many scientific arguments that either support or reject a ban.

Clearly, the decision on whether to ban autonomous weapons should never be led solely by our guts, nor by a statement signed by Professor Hawking. Approaching this and issues of similar gravity through deliberative and comprehensive discussion is something we owe not only to ourselves, but also to future generations.




Comments

John R Morss says

April 27, 2016

Very thoughtful contribution, thank you Merel

Roger O'Keefe says

April 28, 2016

I second John Morss's praise for and thanks to Merel Ekelhof. A really interesting and entertaining post. Brava.

MH says

April 28, 2016

While I agree with the overall gist of this post (i.e. principled and evidence-based discussion over mere gut feeling), I'm not sure it's entirely fair to the Chinese delegation. I was present at the LAWS meeting and, from my recollection, the delegate followed his 'higher force' comment with a reference to Hawking's extensive experience as a scientific researcher: one who generally understands the imperfections of science and the unpredictability of how innovations will ultimately be used (or at least that's the impression that the delegate's follow-up comments left me with).

You don't need to be an AI expert to draw attention to these considerations. Yet over 3,000 AI experts have indeed signed the open letter, suggesting that the Chinese delegation isn't deferring to Hawking merely because of his reputation as a physicist, but perhaps also because of the body of expert opinion that has echoed his call.

Bear in mind, also, that the delegate was speaking in English, which is not his first language. The term 'higher force of knowledge' may sound overly deferential, but as a bilingual myself, I'm only too aware of how meanings can be distorted when phrases are directly translated.

For me, it wasn't so much the 'higher force' comment, but what came after it that made more sense of the Chinese position...one that's not as outlandish as it might first appear!