Tackling Football-Related Online Hate Speech: The Role of International Human Rights Law: Part I


Part I – A pattern of online racist speech

Introduction

When Marcus Rashford, Jadon Sancho and Bukayo Saka missed crucial penalties to seal England’s defeat to Italy in the final of the European Championships, a dread began to consume many football fans as well as less enthusiastic observers. This dread was much darker than the crushing blow felt by the losers of a hard-fought sporting battle. What was feared was the recurring online racially motivated hate speech that would be hurled at three black footballers who had failed to convert from the spot under the utmost pressure. Yet the apparent inevitability of what has become a pattern of racist abuse in England and its subsequent normalisation in some quarters must not overshadow the illegality of such conduct.

In this post, we argue that online racial abuse directed at black English football players may fall into different categories of hate speech under Articles 19(3) and 20 of the International Covenant on Civil and Political Rights (ICCPR) and Article 4 of the International Convention on the Elimination of All Forms of Racial Discrimination (ICERD), both of which have been ratified by the United Kingdom (UK). Thus, to meet its obligations to protect individuals from racial hatred and discrimination whilst safeguarding users’ freedom of expression, the UK must not only enforce its existing laws on incitement to racial hatred but also clearly define the other forms of speech that may be limited by Internet service providers, as well as the necessary and proportionate measures to achieve those ends. Our analysis explains why we believe the UK has failed to discharge its duties under Articles 19 and 20 ICCPR and Article 4 ICERD, not only in respect of this and other football-related incidents of online racial abuse but also in England as a whole.

  1. Setting the scene: An existential threat to the beauty of the game

Racist abuse began to flood the social media pages of the black England players as soon as the team was defeated. The most frequent form of racially motivated hate speech was comments consisting of racial slurs and monkey emojis, posted from Instagram accounts often bearing little personally identifiable information. Other types of abuse included standalone tweets using racial slurs, a prominent example of which allegedly came from a Savills employee, who is now under investigation by the Greater Manchester Police.

The response from the relevant social media companies was disappointing but predictable. Twitter claimed that it had removed more than 1000 tweets, using both Artificial Intelligence (AI) technology and human moderators, on top of permanently suspending a number of accounts in light of the racist abuse directed at the England players. Facebook (the owner of Instagram) produced a similar statement with regard to deleting comments and accounts, also suggesting that players use Instagram’s Hidden Words tool in order to reduce their own exposure to hate speech. Although both companies condemned the racist abuse as unacceptable or against their guidelines, many such speech acts were left unaddressed, with some users who reported relevant posts receiving an automated message stating that ‘the account likely didn’t go against our community guidelines’.

Football- and sports-related racially motivated online hate speech is not a new phenomenon and reflects wider racist attitudes in society. To give but two recent examples: in May 2021, Marcus Rashford received at least 70 abusive messages on social media, including monkey emojis, following Manchester United’s loss in the final of the Europa League. Similarly, in February 2021, Chelsea FC defender Antonio Rüdiger revealed that he had been targeted with intense online racist abuse after being blamed for instigating the sacking of the club’s manager in January. In spite of widespread social media boycotts in 2019 and 2021 to raise awareness of the problem and challenge the platforms, the pattern of football- and sports-related online hate speech continues.

It is also worth stressing that the racially motivated abuse suffered by non-white athletes, whether online or offline, is distinct from the backlash that any player might face after an unpopular result, irrespective of their race, nationality, religion, or other protected characteristic. In the former case, the individual is attacked not only for a perhaps subpar sporting performance: their human rights and dignity are also denied because of their race, colour of skin, descent, or national or ethnic origin, as defined in Article 1 ICERD. Given the context of systemic racism in which English and other societies around the world still live, every racially motivated speech act is a threat to the enjoyment of human rights by individuals belonging to protected groups.

  2. The applicable legal framework under the ICCPR and ICERD

Article 19 of the ICCPR protects individuals’ fundamental rights to the freedoms of opinion and expression, including the right to seek, impart and receive information and ideas of all forms and kinds by any means, whether offline or online. This right is essential in any democratic society, particularly for the protection of vulnerable groups, and, as such, covers even the most shocking or offensive forms of expression, such as harsh criticism of governments and religious doctrines, tenets or leaders (see Human Rights Committee (HRC), General Comment No. 34, para 48). However, as is well known, freedom of expression under Article 19(2) ICCPR and other human rights instruments is not absolute, being subject to two limitations. The first is provided for in Article 19(3) ICCPR, which entitles states to restrict freedom of expression by law whenever necessary (and proportionate) to respect the rights or reputations of others or to protect national security, public order, public health or morals. The second is found in Article 20 ICCPR, which requires states to prohibit by law any propaganda for war (para 1) and any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence (para 2). This provision embodies the right of individuals to be free from hatred or incitement to certain forms of discrimination, in line with Article 26 ICCPR (see HRC, Rabbae v The Netherlands (2017) CCPR/C/117/D/2124/2011, para 10.4; HRC, Faurisson v France, CCPR/C/58/D/550/1993 (1996), paras 4 and 10). As noted by the HRC in its General Comment No. 11:

For article 20 to become fully effective there ought to be a law making it clear that propaganda and advocacy as described therein are contrary to public policy and providing for an appropriate sanction in case of violation. The Committee, therefore, believes that States parties which have not yet done so should take the measures necessary to fulfil the obligations contained in article 20, and should themselves refrain from any such propaganda or advocacy.

Article 4 ICERD complements Article 20(2) ICCPR by requiring states parties to ‘condemn all propaganda and all organizations which are based on ideas or theories of superiority of one race or group of persons of one colour or ethnic origin, or which attempt to justify or promote racial hatred and discrimination in any form and undertake to adopt immediate and positive measures designed to eradicate all incitement to, or acts of, such discrimination’, with due regard to the freedoms of opinion and expression. This includes an obligation to ‘declare an offence punishable by law all dissemination of ideas based on racial superiority or hatred, incitement to racial discrimination, as well as all acts of violence or incitement to such acts against any race or group of persons of another colour or ethnic origin, and also the provision of any assistance to racist activities, including the financing thereof.’ According to General Recommendation No. 35 of the Committee on the Elimination of Racial Discrimination (CERD):

Racist hate speech can take many forms and is not confined to explicitly racial remarks. As is the case with discrimination under article 1, speech attacking particular racial or ethnic groups may employ indirect language in order to disguise its targets and objectives. In line with their obligations under the Convention, States parties should give due attention to all manifestations of racist hate speech and take effective measures to combat them. The principles articulated in the present recommendation apply to racist hate speech, whether emanating from individuals or groups, in whatever forms it manifests itself, orally or in print, or disseminated through electronic media, including the Internet and social networking sites, as well as non-verbal forms of expression such as the display of racist symbols, images and behaviour at public gatherings, including sporting events.

This means that, under both the ICCPR and ICERD, online or offline ‘hate speech’, including racist speech, is neither a legal nor a one-size-fits-all concept (see Human Rights Council, A/74/486, para 1). Rather, it is an umbrella term encompassing speech acts that may fall under three distinct legal categories: 1) prohibited speech (Articles 20 ICCPR and 4 ICERD); 2) limited speech (Article 19(3) ICCPR); and 3) protected or free speech (Article 19(2) ICCPR). Importantly, any prohibition of or limitation to freedom of expression, whether by criminal, civil or administrative means, must follow the general requirements listed in Article 19(3): it must be established by law, pursue a legitimate purpose (including the purposes stated in Articles 20 ICCPR and 4 ICERD), and be necessary and proportionate to achieve that purpose (Human Rights Committee, General Comment No. 34, paras 50-52; CERD, General Recommendation No. 35, paras 4, 19-20; A/74/486, paras 12 and 16).

Thus, when giving effect to these provisions domestically, states must carefully distinguish between a) hate speech constituting criminal offences; b) speech acts that are not criminal but are prohibited and thus sanctioned by civil or administrative law; and c) expressions that give rise to neither criminal nor civil sanctions and are thus protected, however repugnant they may be (see Human Rights Council, Rabat Plan of Action, paras 12 and 20). While the first and second categories are not exactly coterminous, proportionality requires that criminal punishment be reserved for only the most serious forms of hate speech, that is, instances of incitement to hatred constituting ‘the most severe and deeply felt form of opprobrium’, taking into account the context, the speaker, any intent, the content, form and extent of the speech act, as well as its likelihood of harm (Human Rights Council, Rabat Plan of Action, para 29).

The various instances of online racial abuse targeting England’s black football players seem to fall under different categories of hate speech, i.e., prohibited, limited and protected. Most clearly, posts inviting users to ‘punish a [N-word]’ with different forms of violence amount to prohibited speech under both Articles 20 ICCPR and 4 ICERD, to the extent that they constitute advocacy of racial hatred inciting others to be hostile or use violence against individuals belonging to a racial group. Along similar lines, Facebook’s Oversight Board has found that the use of the Russian word “тазики” (“taziks”) to describe Azerbaijanis was ‘a dehumanising slur attacking national origin’ and, as such, warranted removal from the platform. One may thus legitimately question why several posts using the N-word or abbreviations thereof were not promptly removed from Instagram and other social media platforms. The same goes for the racially motivated and direct threats of violence made by neo-Nazi groups against players on Telegram, as well as the multiple expressions of white supremacy on this and other platforms, celebrating the ‘victory of “white” Italy against diverse England’ and repeatedly claiming that ‘racial purity wins’. These fall squarely within Article 4(a) ICERD. Also falling within the category of ‘prohibited speech’ are posts comparing players to slaves.

In contrast, expressions of hatred stating that the players should ‘get out of [the] country’ or ‘go back’ to a certain country, or that they do not belong to the team, do not seem to amount to direct incitement to racial discrimination, hostility or violence. However, in this particular context of hooliganism and repeated racial abuse, which led to the defacement of Marcus Rashford’s mural in Manchester and physical violence against some fans (see here and here), the presence of intent or knowledge to instigate or incite others to commit violence or discriminate against the players might well suffice for a ‘prohibited speech’ classification. This is especially so considering the high visibility of social media posts and the high risk of serious harm to victims.

More controversial are the endless monkey or banana emojis, whether addressed to the players themselves or posted for broader audiences. While these emojis are in principle innocuous and were never meant to carry any discriminatory connotation, their use in certain racially charged contexts has come to be seen as a symbol of racism. Thus, when used to dehumanise, insult, or otherwise discriminate against racial groups as such, that is, on the basis of their race, skin colour, descent, ethnicity or nationality, those emojis might, at the very least, warrant proportionate limitations to protect the rights and reputations of targeted individuals under Article 19(3) ICCPR (see Human Rights Council, A/74/486, paras 19-24), such as contextual removal or tagging. Furthermore, to the extent that said emojis might amount to expressions of racial superiority, a good case can be made that they fall under the prohibited speech category, in line with Article 4(a) ICERD.
