Why We Need to Stop Talking About ‘Killer Robots’ and Address the AI Backlash

In the field of artificial intelligence, the spectacle of the ‘killer robot’ looms large. In my work for the ESRC Human Rights, Big Data and Technology Project, I am often asked what the ‘contemporary killer robot’ looks like and what it means for society. In this post, I offer some reflections on why I think the image of the ‘killer robot’ – once a mobiliser for dealing with autonomous weapons systems – is now narrowing and distorting the debate, drawing us away from the broader challenges posed by artificial intelligence, particularly for human rights.

In order to address these challenges, I argue that we have to recognise the speed at which technology is developing. This requires us to be imaginative enough to predict, and be ready to address, the risks of new technologies ahead of their emergence; self-driving cars are a good illustration of a technology arriving before the regulatory issues around it had been resolved. To do otherwise means that we will be perpetually behind the state of technological development, regulating retrospectively. We therefore need to future-proof regulation, to the extent possible, which requires much more forward thinking and prediction than we have engaged in so far.

Origins of the Killer Robot

The term ‘killer robot’ has many origins, including frequent use in books, TV series and films such as Terminator. In relation to international law, the term has been used in the context of autonomous weapon systems (AWS) or lethal autonomous weapons (LAWS). In 2012, Human Rights Watch applied the term to AWS in its report ‘Losing Humanity: The Case against Killer Robots’. The report was not science fiction: it focused on a specific risk that, within 20 to 30 years, militaries could develop (or acquire) and deploy ‘fully autonomous weapons that could select and engage targets without human intervention’. The following year, Human Rights Watch’s Mary Wareham launched the Campaign to Stop Killer Robots, a coalition of organisations aimed at implementing the report’s main recommendation of a ‘preemptive prohibition on their development and use’.

At the time, Mary Wareham was reported in the Atlantic as explaining that ‘[w]e put killer robots in the title of our report to be provocative and get attention’. The journalist covering the story agreed that the name was effective, observing that ‘the organized campaign against killer robots has gained momentum as the technology and militarization of robotics has advanced, and the smartest thing the movement has done is pick its name’. The term was therefore employed as a visualisation aid, making the risks of AWS less abstract in order to mobilise a campaign against their development.

However, as I argue below, the debates on artificial intelligence are now much wider than AWS, and the term is distracting from the challenges posed by current applications of artificial intelligence outside the military context. This is not to say that dealing with AWS is unimportant. Indeed, since 2013, there has been a process underway to consider how AWS should be regulated. The Convention on Conventional Weapons (CCW) Group of Governmental Experts on Lethal Autonomous Weapons at the UN has met annually to discuss the issue, including whether negotiations on a treaty should begin. However, the issue has not yet been resolved, and some commentators have questioned whether the CCW is the best forum for addressing it. In addition to these procedural questions, a number of key substantive issues still need to be addressed. For example, commentators have observed that the debate is not only about whether to ban AWS: there is also disagreement over what constitutes AWS and whether the definition covers existing or only future technology; the meaning of autonomy and human control; whether a prohibition or a focus on implementing international humanitarian law is the better course of action; the implications of not developing AWS where others have; and the wider role of AWS in cyber defence. It is therefore an area of complex and ongoing discussion with little yet resolved.

The Spill-Over into Wider AI Debates

The advent of big data and of more advanced, cheaper computational power has meant that machine learning, at least, has become much more accessible to a wider set of actors. Beyond military uses, debates on the opportunities and risks of artificial intelligence are now taking place within governments and across a wide range of industries and sectors of society. This is illustrated by the range of national reports and plans on AI (see, for example, three of the most recent: the UK House of Lords report ‘AI in the UK’, the report of the Indian government’s Task Force on Artificial Intelligence, and the US Government Accountability Office report ‘Artificial Intelligence: Emerging Opportunities, Challenges and Implications’).

In this wider context, references to ‘killer robots’ (or robots generally) can create hype and focus minds on science fiction and the singularity: a point in time (which many dispute will ever come) when machines become smarter than humans and ‘use their superior intelligence to take over the planet’. In the recent House of Lords report, Sarah O’Connor of the Financial Times was quoted as stating that ‘if you ever write an article that has robots or artificial intelligence in the headline, you are guaranteed that it will have twice as many people clicking on it’. The report also noted that ‘at least some journalists were sensationalising the subject’. Mary Wareham has likewise spoken about the risk that robots such as ‘Sophia’ create the impression of much greater sophistication, intelligence and autonomy than they actually possess.

This type of hype can draw the public and policymakers away from the current issues with artificial intelligence. It can also mean that attention is focused on those issues only for a short period, thwarting efforts to mount a sustained response to the challenges that artificial intelligence presents.

The Current Challenges Posed by Artificial Intelligence

The recent ‘AI backlash’ is beginning to shift attention to the real and urgent challenges that need to be addressed today. There is insufficient room in this post to set out all the pressing issues, but some key themes from the ‘backlash’ exemplify the point.

Incidents such as Facebook/Cambridge Analytica and Grindr illustrate the ongoing need to regulate the collection, storage, analysis and sharing of data as the ‘fuel’ for artificial intelligence and emerging technologies. The EU General Data Protection Regulation (GDPR), which has just entered into force and which some businesses operating outside the EU are considering signing up to voluntarily, has been touted as a central solution. As I argue in a forthcoming post, the GDPR is an important start but it is a regional instrument, not a panacea: many issues fall outside its scope and still need to be addressed.

A key area falling outside the scope of the GDPR is the use of machine learning and other technologies in law enforcement and national security. The use of predictive policing and of algorithms to support decisions on whether a person is granted bail has received the most attention, given the potential to adversely affect the presumption of innocence, the right to liberty and the right to a fair trial, in addition to the risks of discrimination and profiling. However, concerns have also been raised about other technologies, such as automated facial recognition (AFR), which can search databases in real time; these concerns relate particularly to the rights to freedom of assembly, association and expression. In the UK, Big Brother Watch and Liberty have recently challenged the use of AFR on the grounds that it is ‘unregulated’ and threatens human rights.

While most analysis focuses on particular forms of technology, a critical point is that, when used together, they can create a situation of pervasive, real-time surveillance that extends far beyond anything previously possible. Moreover, organisations such as Human Rights Watch have reported on the risk of ‘parallel construction’, whereby state agencies offer an alternative account of how evidence for a criminal trial was obtained in order to avoid scrutiny of the legality of the technology actually used.

The issues that arise for law enforcement also raise bigger regulatory questions that are not covered by the GDPR: for example, to what extent is it appropriate – and lawful – to use technologies such as AFR, even when technologically possible, and to what extent should these technologies be able to link to other datasets that a state and/or businesses might hold? This is particularly relevant in contexts such as smart cities, which run most effectively when different forms of technology and artificial intelligence applications work together.

Similarly, the use of algorithms to support decision-making is central to current uses of, and the backlash against, artificial intelligence. Yet a robust framework for the design and oversight of such algorithms is still lacking, despite extensive debate and documentation of the risk that they can introduce or accentuate inequalities, discrimination or other forms of harm to human rights. As we argued in a recent submission to the UK Parliament’s Science and Technology Committee, a framework is needed that addresses the full algorithmic life-cycle, from the design phase right through to remedies for the individuals and groups affected.

Finally, there are larger structural issues concerning the concentration of power in a handful of businesses and states, and how the right to benefit from scientific progress should be realised so that the benefits of artificial intelligence can be shared by all.

Why We Have to Be Better at Looking into the Future

In addition to addressing current issues, we need to become more effective at predicting and imagining the trajectory of technology. While we may never reach the singularity, technology is evolving rapidly and the ‘art of the possible is always changing’. This is a ‘complicating factor’ for regulation, but one that needs to be addressed. One of the major critiques of, and concerns about, law (including international law) in a world of artificial intelligence is that it is ill-equipped and lacks the agility to respond effectively to the challenges posed by the ‘Fourth Industrial Revolution’. What is needed is a shift in regulatory methodology that looks to the future, so that the potential risks, challenges and regulatory options have been thought through before a technology emerges or while it is still under development. This requires close interdisciplinary collaboration – stripped of hype – and much better structures and ways of working, so that the public and policymakers understand the current state of technology and its trajectory and are able to regulate its use effectively.
