Human Rights in the Era of Automation and Artificial Intelligence

Written by Diane Desierto

Editor’s Note: The following is a keynote lecture delivered at the 2nd KU Leuven AI Law & Ethics Conference (LAILEC 2020) on 18 February 2020. The author is grateful for exchanges with Professors Joanna Bryson, Peggy Valcke, and Nathalie Smuha, as well as with AI policy practitioners and European Commission representatives. The lecture was delivered one day before the European Commission issued its landmark White Paper on Artificial Intelligence: A European Approach to Excellence and Trust, the first policy paper of its kind by a global power to embed detailed human rights protections in the definition of Europe’s future policy toward automation and artificial intelligence.

There are three key issues I aim to bring to your attention today, and aptly so here at LAILEC 2020. To the legal experts here, they will not appear new or unfamiliar. To the PhD candidates present, the issues are par for the course in PhD life, especially in this constantly evolving field of AI law and ethics. But for the broader international law academic, policy, and practitioner audiences that are heavily caught up in the deepening silos of business and human rights, the three key issues I aim to discuss today perhaps do not have the same level of visibility in our public, academic, and practitioner deliberations. After all, given the barrage of news coverage on climate disasters, armed conflicts, assassinations, refugee displacements, the global health emergency of the coronavirus, and the continuing erosion of democracies around the world – isn’t the question of human rights in the era of automation and artificial intelligence something to be addressed farther down the line, once we all reach some degree of steady-state equilibrium?

My answer (which is why I was grateful to accept this invitation to speak here) is a resounding no. There are three key issues, in my view, that go to the core of our expected regulatory quandaries about how to incentivize and manage technological developments alongside our shared desire to ensure they are harnessed for the common good of humanity:

FIRST.  The challenges to the formation and communication of individual consent, which is central to the vindication and protection of any human right, whether civil, political, economic, social, or cultural.  In an era where any individual’s perception of a given factual reality could be readily manipulated without any standardized cross-border legal accountability or legal enforcement mechanisms to redress such manipulations, I submit that we should acknowledge that there already is an ongoing asymmetric impairment in our individual abilities to authentically consent to market-based approaches, measures, and practices of the tech sector.  As I show later, the dilution of the free basis of our individual consent – either through outright information distortion or even just the absence of transparency – imperils the very foundations of how we express our human rights and hold others accountable for their open (or even latent) deprivation.

SECOND. The challenges to autonomy, personhood, and self-determination, which are essential values protected under international human rights law. These values are increasingly challenged when artificial intelligence, algorithmic decision-making tools, and automation are not only made available in the market by the tech sector, but are increasingly relied upon in political arenas to reinforce autocracies that today deftly co-opt the discursive spaces, institutions, and vocabulary of democracy and human rights – exploiting notions of individualist pluralism to reduce the ‘interpretation’ of human rights to simply “my truth” against “your truth”. This debate would, in ordinary circumstances, be healthy and expected in the ecosystem of international human rights, but it is not so when the autocracies that have co-opted the language of human rights can do so through massive information distortion and an absence of transparency enabled by today’s technological developments. Routinized political control over information flows pollutes our human right to self-determination – to “freely determine one’s political status and freely pursue economic, social, and cultural development.”

THIRD and finally, there are also urgent challenges now to our human dignity – that is, in essence, our equal moral worth as persons. Every technological revolution in human history has created its counterpart Schumpeterian ‘creative destruction’, but automation and artificial intelligence that are left unregulated – or which flourish, as they do today, in a significantly vast regulatory vacuum – will do more than just cause job losses as economies transition from current production processes, means, and methods to more automation- and AI-based means of production. Automation and AI are both reality-altering: they will not just affect supply-side production but also demand-side consumption. They will – if they have not already – change the actual barometer of ‘value’ in the market, from the usual measures of wealth in real and personal property to the more fluid, less traceable, more easily manipulable, more volatile, and less stable measure of our data as the new coin of value. Those with first-mover advantages in automation and artificial intelligence do not just stand to accumulate wealth and influence on a scale that, in some cases, exceeds the GDPs of small states; they also stand to entrench today’s existing inequalities by practicing regulatory arbitrage while favorably shaping the future landscape of regulation.

What can be most threatening to human dignity – again, defined as our “equal moral worth as persons” – goes precisely to issues of equality, moral decision-making, and human worth or value. To the extent that automation and AI could be freely deployed to instrumentalize the human person, this can cause serious problems for respecting, protecting, and fulfilling human rights and ensuring remedy for any violations of such rights. Questions abound as to whether States can still ensure that their populations’ human rights are respected, protected, fulfilled, and any violations redressed in the face of automation- and AI-generated externalities, such as: 1) the devaluing and obsolescence of individual workers’ skills training, education, and professional capabilities resulting from the labor force displacements that automation will inevitably create; 2) the routinized displacement of human judgment, choice, and agency by predictive algorithms, not just in the delivery of public services but also in governmental functions such as criminal justice, law enforcement, and judicial adjudication; and 3) the increasing alienation, discriminatory treatment, objectification, and isolation of human beings from establishing genuine connections, sociability, expressions, and relationships, in favor of a matrix of virtual platforms, constructed realities, curated experiences, and manufactured ‘communities’ or social groups that all ultimately function as readily accessible marketing hubs and focus groups for paid advertising and market influencers. All the more, then, should cognitive AI (which combines “numeric data analytics techniques that include statistical analysis, modeling, and machine learning, plus the explainability (and transparency) of symbolic artificial intelligence”) give us pause to rethink why, in what circumstances, and under what parameters we would delegate our decision-making. While one could readily argue that cognitive AI for “business” purposes should not be a problem, this does not dispose of the inevitable moral, ethical, and human rights challenges that confront business as well. Would it really be so easy to draw a sharp line separating the decisions we would leave to cognitive AI from those we would retain, as the understanding of human dignity itself evolves in an automation- and AI-driven “Life 3.0”?

This is not to say, of course, that we should turn back the clock or advocate some form of Neo-Luddism. For better or for worse, automation and AI in our Fourth Industrial Revolution are already here, and they are ubiquitous throughout postmodern human life. No one gainsays their economic, technical, scientific, medical, and technological advancements and benefits for various parts of human existence and endeavor. AI itself has had a myriad of definitions beyond John McCarthy’s initial coinage of the field in 1956, now spanning not just the “simulation of intelligent behavior in computers” or “the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings”, but also machine learning (i.e., “the field of computer science that studies algorithms and techniques for automating solutions to complex problems that are hard to program using conventional programming methods”) as well as deep learning (i.e., where computers “learn from experience and understand the world from a hierarchy of concepts, with each concept defined through its relation to simpler concepts…this approach avoids the need for human operators to formally specify all the knowledge that the computer needs.”). Automation, especially in the era of AI, stands to reach far into every area of work (defined as “any activity performed in exchange for an economic reward”) and could, according to one scholar, be cause for introspection to explore new forms of human flourishing.
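To make the machine learning definition just quoted concrete – a rule learned from examples rather than programmed by hand – consider the following minimal sketch. It is purely illustrative and not drawn from the lecture’s legal sources: it uses the open-source scikit-learn library, and the profile features, data, and labels are all invented for the example.

```python
# A minimal sketch of "machine learning" as defined above: rather than
# hand-coding a rule, an algorithm induces one from labeled examples.
# All data here are invented purely for illustration.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical user profiles: [daily hours on a platform, posts shared per week]
X = [[0.5, 2], [1.0, 4], [2.0, 8], [6.0, 35], [7.5, 48], [8.0, 55]]
# Hypothetical labels: 1 = engaged with targeted political ads, 0 = did not
y = [0, 0, 0, 1, 1, 1]

model = DecisionTreeClassifier()
model.fit(X, y)  # the "learning" step: a decision rule is induced from the data

# The induced rule then generalizes to profiles it has never seen.
print(model.predict([[7.0, 40]]))  # -> [1]
```

The point, for our purposes, is not the mechanics but the delegation: the decision rule emerges from the data rather than from any human-specified norm, which is precisely why questions of consent, transparency, and accountability attach to the data itself.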

The three key issues I am taking up today point to the inevitable externalities of automation and AI – and to whether regulators (especially in the United States, the home of innovation and of robust intellectual property protections for the tech sector) can still afford a laissez faire approach. My not-too-radical argument is this: while automation and AI innovation have flourished in the regulatory vacuum fostered by such a laissez faire approach, the legal reality is that there never was such a vacuum. States’ international human rights obligations – especially under the International Covenant on Civil and Political Rights, the International Covenant on Economic, Social and Cultural Rights, and the considerable body of international human rights treaty law – continuously operate and remain in effect even in the absence of a lex specialis for automation and AI, making it imperative for all States to deliberately craft policies that both anticipate and provide remedy and redress for the inevitable externalities and human impacts of automation and AI. Human rights thus remain an enduring fabric of constraints, parameters, and checks on the automation and AI era of “Life 3.0”.

Challenges to the Formation and Communication of Individual Consent

I could probably sum up this part of my keynote lecture with two names: Facebook and Cambridge Analytica. We are all probably aware of Brittany Kaiser’s explosive whistleblower account of the information distortions, mass information harvesting, and privacy leakages from millions of Facebook profiles around the world, supposedly affecting “more than 100 election campaigns in over 30 countries spanning 5 continents”. These global operations reportedly succeeded in undermining, if not subverting, peoples’ right of self-determination with respect to their rights to “freely determine their political status”; facilitated the destruction of individual persons’ civil and political rights against “arbitrary or unlawful interference with his privacy, family, home or correspondence…[and against] unlawful attacks on his honour or reputation”; and imperiled voters’ rights “to hold opinions without interference” and to exercise freedom of expression, including the freedom “to seek, receive, and impart information and ideas of all kinds” subject to legal restrictions to “respect the rights or reputations of others” and “for the protection of national security or of public order, public health or morals.” The still-unraveling Facebook and Cambridge Analytica scandal of alleged mass information distortions, information control, and privacy breaches affecting millions around the world challenges the human right of citizens to freely take part in the conduct of public affairs and to vote in elections that guarantee “the free expression of the will of the electors”. The scandal starkly demonstrates the disenfranchising, disempowering, and distorting challenges to the free and genuine expression of various civil and political rights of voters. The absence thus far of any concrete legal remedies and human rights redress for the millions of individual victims whose Facebook data were supposedly harvested, manipulated, and shared by Cambridge Analytica, among others, also magnifies the continuing impacts of these reported violations, since the global and cross-border legal accountability of all entities involved for individual human rights violations around the world has not yet been fully established. This is so notwithstanding recent trans-Atlantic developments: the US$5 billion settlement reached by Facebook with the United States Federal Trade Commission (US FTC), which imposed the penalty for Facebook’s mass violation of U.S. consumers’ privacy and also imposed new restrictions; another report this year of a US$550 million settlement to be paid by Facebook to claimants alleging violations of Illinois law through Facebook’s gathering and storage of biometric data without their consent; and Italy’s imposition of a 1 million euro fine against Facebook for violations in relation to the Cambridge Analytica scandal. Significantly, Facebook’s CEO, Mark Zuckerberg, has himself just admitted that “social media companies need more guidance and regulation from governments in order to tackle the growing problem of harmful online content.”

Information distortions as seen in the Facebook-Cambridge Analytica scandal thus challenge the authentic nature and actual scope of individual consent to AI and automation practices. On the other end of the spectrum, remedying transparency deficits as to how tech companies use and share individuals’ data remains a work in progress. Since Google famously published its Transparency Report in 2010 disclosing its content restrictions, security, and privacy measures, voluntary reporting of transparency policies has arguably become the industry norm. However, while there are many approaches to transparency in AI and automation, these remain subject to industry-driven variances and jurisdictional differences. To the extent that transparency policies in AI and automation remain variable as to content, legal effect, and consequence, it remains an open question whether individuals are giving meaningful, free, and informed consent to the tech sector’s use, sharing, or disposition of data obtained from them, as well as to all other AI and automation practices and measures dependent on such data. In this regard, from a human rights standpoint, the European Union is, in my view, ahead of the curve globally in seeking to ensure the meaningful, informed, and free consent of individuals, as seen from its 2019 Ethics Guidelines for Trustworthy Artificial Intelligence. These Guidelines laudably focus on seven key requirements for the trustworthiness of AI systems: 1) human agency and oversight (including fostering informed decisions respectful of individual human rights); 2) technical robustness and safety; 3) privacy and data governance; 4) transparency; 5) diversity, non-discrimination, and fairness; 6) societal and environmental well-being; and 7) accountability. In this, the rest of the world certainly has much to gain from the EU’s pioneering regulatory example premised on ensuring respect for, protection of, and fulfillment of individual human rights.

Challenges to Autonomy, Personhood, and Self-Determination

The second key issue of deep significance from the standpoint of international human rights law is the deployment of AI and automation in political spaces by today’s autocracies, usually as measures of social control, law enforcement, and surveillance. These include facial recognition systems, emotion recognition systems, internet restrictions and controls to squelch protests or opposition views, control over distributive access to public and social services, as well as disinformation or fake news, data collection, censorship, and automated surveillance – collectively dubbed “the rise of digital authoritarianism”, in which AI is massively deployed to “reshape repression”. The extension of AI and automation to future warfare also poses considerable risks to the continuing need to ensure the legality of the use of force and civilian protections under international humanitarian law. To all of this, however, the June 2019 report of the UN High-Level Panel on Digital Cooperation tamely recommends that “there is an urgent need to examine how time-honoured human rights frameworks and conventions – and the obligations that flow from those commitments – can guide actions and policies relating to digital cooperation and digital technology…how human rights can be meaningfully applied to ensure that no gaps in protection are caused by new and emerging digital technologies.”

In my view, however, in 2020 we are long past the time of questioning how and why international human rights law could apply to the era of automation and AI, especially when these technologies are used as autocratic tools to thwart the autonomy, personhood, and self-determination of individuals. There is nothing ‘new’ in the escalating abuses and ongoing impunity that autocrats enjoy when they violate civil and political rights through digital repression measures, or when they violate economic, social, and cultural rights by conditioning the delivery of public services (whether the right to the highest attainable standard of physical and mental health, the right to social security, the right to education, the right to participate in cultural life, or the right to an adequate standard of living) on non-transparent, discriminatory, and inherently partisan social control mechanisms. What makes these violations particularly pernicious in 2020 is the extent to which States justify them under cover of sovereignty or of their chosen modality or paradigm of ‘development’. But even these arguments are obviously flawed. States exercised their sovereignty to themselves limit the scope, nature, and effects of measures they would impose in the future on their peoples – this is precisely why international human rights treaties remain just as universal today as at the inception of the UN Charter system, which internalizes international human rights and human dignity as part of the purposes, principles, and obligations of membership in the Charter. Even if States had not chosen to ratify international human rights law, others may well argue that the rights-bearers, e.g. individuals and peoples, did not relinquish their basic civil, political, economic, social, and cultural rights when they entered into the social contracts that formed their governments and States. Neither is the argument of idiosyncratic, repressive measures to realize ‘development’ persuasive in international law. Even the definition of the right to development under Article 1 of the 1986 UN Declaration on the Right to Development is explicit in stating that the right to development is a right of individuals and peoples – not governments or autocratic States – to participate in, contribute to, and enjoy economic, social, cultural, and political development, “in which all human rights and fundamental freedoms can be fully realized.” The fact that there remains such a glaring gap in the enforcement of international human rights treaty obligations, and even of the UN Charter Article 2 and Article 55 obligations to ensure respect, protection, and promotion of human rights, amid the ongoing mass deployment of automation and AI as tools of repression and avoidance of international responsibility – that is the main question to be addressed today, NOT whether and how human rights law applies.

Relevant as well to our assessment of State responsibilities in respecting, protecting, and fulfilling their civil, political, economic, social, and cultural rights obligations is the role of the private sector that creates the tools of automation and AI and makes them available for State purposes. This is itself a question that is fact-based and context-dependent, but we cannot avoid raising it, especially in an era where the UN business and human rights treaty is nearing completion. The Committee on Economic, Social and Cultural Rights has taken a definitive position on States’ duties to regulate the local and overseas activities of business entities under their sovereign control or jurisdiction, so as to discharge their continuing duties to respect, protect, fulfill, and remedy deprivations of economic, social, and cultural rights. In its General Comment No. 24, the Committee emphasized the continuing duties of States Parties to the ICESCR to take steps to regulate business activities that may adversely affect economic, social, and cultural rights, particularly to avoid worker, migrant, sexual, religious, racial, or other forms of discrimination. The Committee was explicit in stressing that “the obligation to respect economic, social and cultural rights is violated when States parties prioritize the interests of business entities over Covenant rights without justification, or when they pursue policies that negatively affect such rights.” The obligation to protect, on the other hand, “means that States parties must prevent effectively any infringements of economic, social and cultural rights in the context of business activities. This requires that States parties adopt legislative, administrative, educational, and other appropriate measures, to ensure effective protection against Covenant rights violations linked to business activities, and that they provide victims of such corporate abuses with access to effective remedies.” The obligation to fulfil requires States parties to take necessary steps, “to the maximum of their available resources, to facilitate and promote the enjoyment of Covenant rights, and, in certain cases, to directly provide goods and services essential to such enjoyment.”

Most importantly, especially for purposes of our discussion on automation and AI challenges to autonomy, personhood, and self-determination rights, it is remarkable that the Committee emphasized that “States parties’ obligations under the Covenant did not stop at their territorial borders. States parties were required to take the steps necessary to prevent human rights violations abroad by corporations domiciled in their territory and/or jurisdiction (whether they were incorporated under their laws, or had their statutory seat, central administration or principal place of business on the national territory), without infringing the sovereignty or diminishing the obligations of the host States under the Covenant.”

It is less a question, therefore, of whether international human rights law applies to the misuse of automation and AI tools in a manner that violates civil, political, economic, social, and cultural rights. Of course it DOES. What remains a serious task for regulators everywhere, however, is the extent to which they can address the need for remedies for the ongoing mass violations of these rights by autocratic regimes that use automation and AI with impunity, not just against their own citizens or persons present in their own territory, but also against perceived enemies of the State abroad. Because remedies for these violations remain primarily domestic, and precisely because of the ‘black-box’ approaches to transparency taken by various tech sector behemoths when confronted by regulators overseas, it is a glaring challenge for individuals, groups, peoples, and communities facing automation- and AI-enabled repression and mass human rights atrocities to obtain needed redress. THAT, in my view, is the existential and urgent issue that the UN High-Level Panel on Digital Cooperation and all States and non-State stakeholders in automation and AI ought to consider, beyond issues of legal harmonization or ‘covering protection gaps’ for digital interdependence. In an increasingly unequal contest between human rights victims and autocratic governments – the latter benefiting from vast networks, tools, and resources of automation and AI at their disposal to routinize and regularize human rights violations as ‘national security measures’ (even without the usual treaty limits of derogation clauses, limitation clauses, or even the Charter prohibitions on the use of force) – human rights victims around the world remain in search of a forum, a process, or an institution that will truly realize genuine access to justice, remedies, and reparations for these burgeoning forms of digital repression, dictatorship, and authoritarian control.

Challenges to Human Dignity or Equal Moral Worth of Persons

Finally, my foremost concern might, perhaps, be viewed by others as not a legal but an epiphenomenal one (i.e., “a secondary phenomenon that is caused by and accompanies a physical phenomenon but has no causal influence itself”). What can be most threatening to human dignity – again, defined as our “equal moral worth as persons” – goes precisely to issues of equality, moral decision-making, and human worth or value. To the extent that automation and AI could be freely deployed to instrumentalize the human person, this, in my view, can cause serious problems for respecting, protecting, and fulfilling human rights and ensuring remedy for any violations of such rights. With automation and AI as our inevitable reality under “Life 3.0” or the “Fourth Industrial Revolution”, should we not simply accept that even our conceptual understanding of humanity will change as well? And if so, would that not also alter our legal calculus for what dimensions of our ‘equal moral worth’ as persons ought to be legally protected through regulation? Automation and AI are advanced technologies that inherently involve a degree of decision-making removed (in small or large measure) from the human operator, with a certain measure of autonomy potentially achievable without need of constant human oversight in the performance of functions, and which could potentially (if not already in reality) make redundant or irrelevant current human skills, education, learning, or capacities. If this is the reality ahead, why should law anticipate – or even hinder, delay, or pace – humanity’s evolution toward a reality of more automation and AI?

My view on this is probably more cautious than laissez faire. The tremendous capacities and forces of automation and AI are probably well beyond the abilities of law and regulators to foresee (as, of course, regulation often emerges only reactively, in response to the felt experiences and impacts of new technologies or fields of endeavor). But that does not eliminate the need for States, laws, and regulators to recall that human rights protection is a constant obligation. And to the extent that human rights serve as an inbuilt constraint or continuing parameter on the creation of law and policy, I would err on the side of States, laws, and regulators proactively anticipating possible externalities (such as human rights impacts) from automation and AI, if only because ensuring redress and reparations for human rights victims and survivors is harder today, in 2020, with the proliferation of automation and AI tools. We have already discussed the transparency, disinformation, and information distortion challenges in the human rights fact-finding process. These only stand to be amplified further by autocratic regimes bent on insulating themselves from international legal responsibility and on co-opting (and forcefully re-interpreting) the language, discourses, and vocabularies of human rights in a manner that all but guarantees impunity for mass atrocities enabled through automation and AI. Some of the externalities we are now witnessing include, as I have previously said: 1) the devaluing and obsolescence of individual workers’ skills training, education, and professional capabilities resulting from the labor force displacements that automation will inevitably create; 2) the routinized displacement of human judgment, choice, and agency by predictive algorithms, not just in the delivery of public services but also in governmental functions such as criminal justice, law enforcement, and judicial adjudication; and 3) the increasing alienation, discriminatory treatment, objectification, and isolation of human beings from establishing genuine connections, sociability, expressions, and relationships, in favor of a matrix of virtual platforms, constructed realities, curated experiences, and manufactured ‘communities’ or social groups that all ultimately function as readily accessible marketing hubs and focus groups for paid advertising and market influencers. All of these challenge the full spectrum of our human rights – from the right to work and to just and favorable conditions of work; the right to equality before the law and freedom from discrimination; the right to freely determine one’s political status and freely pursue economic, social, and cultural development; the right to freely take part in cultural life; and the right to vote and to be informed of public affairs; to the freedoms of expression and peaceable assembly, the right to privacy, and arguably even fundamental rights to democracy and accountable government.

The dialogic, remedial, and protective functions of international human rights law are so much harder and more arduous to discharge when faced with a climate of: 1) cross-border jurisdictional differences in regulating the potential or actual use (and misuse) of data by the tech sector; 2) resource disparities, power imbalances, and inherent inequalities already faced by human rights victims, who rarely have the capacity to obtain redress from monopolistic tech companies; 3) varying transparency policies for the tech sector, owing to the competing claims of intellectual property protection and the different informational demands of regulators around the world; and 4) the ongoing automation- and AI-enabled siege by autocratic regimes against human rights defenders around the world, and the proliferation of disinformation and heightened propaganda tactics to acculturate populations against valuing the protection of human rights in the first place. For human rights lawyers and Global South practitioners, it has never been a more challenging time to obtain accountability against dictators and resurgent authoritarians who now style themselves – with automation and AI tools at their disposal to complement their use of force – as the actual ‘authorities’ of a ‘different’ version of human rights. For human rights scholars and professors, it has never been a more difficult time to invite careful scrutiny, examination, and analysis of the actual text, nature, scope, and application of international human rights, when automation and AI can stridently amplify, cause mistrust of, and promote outright dismissals of human rights as “the tyranny of human rights”, “human rights imperialism”, or “Western-imposed human rights”. For individuals witnessing and experiencing human rights abuses in various jurisdictions, it has never been a more chilling time to try to seek redress, enforcement, and vindication of international human rights. To this day, there is no equal-opportunity access to automation and AI – these are technologies predominantly wielded, shaped, and driven by market-dominant companies and economic actors, as well as by the governments of States. Individuals in the automation and AI world are either passive consumers, workers, users, or small purchasers. The human rights threats have materialized; the human rights impacts are real, imminent, and actual; and the human rights violations are continuing and proliferating with impunity. In this respect, the European Commission’s deliberate move toward human rights-centered automation and AI policy regulation through its forthcoming White Paper on AI is a welcome and pioneering development.

I invite the researchers, experts, regulators, practitioners, and civil society participants in this conference to reframe automation and AI law and ethics discourses to encompass the moral, political, sociocultural, and legal deliberations that we as an international community must undertake to preserve, respect, protect, and fulfill international human rights law. “Life 3.0” in an automation and AI universe cannot, and should not, jettison our rights.



Comments

Kishor Dere says

February 26, 2020

This exhaustive coverage of diverse issues related to human rights in the times of AI and automation is quite thought-provoking and intellectually challenging. These new technologies, like most of their predecessors, have created opportunities as well as challenges. Unlike previous technological revolutions, however, this one has immensely helped criminals, robbers, and terrorists, besides strengthening state apparatuses and their appendages. Prof. Diane Desierto rightly argues that these technologies are part and parcel of contemporary society and therefore cannot simply be wished away. States, businesses, researchers across disciplines, the media, the voluntary sector, and public-spirited individuals need to find ways to minimize the harm to human rights caused by these technologies even as we optimize and enjoy the benefits offered by their usage. Moral and ethical issues are equally important. It may still be a little far-fetched to say that AI and automation can render human beings redundant. Historically, the advent of every new technology caused such apprehensions, which were subsequently allayed. After all, technology is a creation of human endeavor. It is there to aid and assist human beings.