Tackling Football-Related Online Hate Speech: The Role of International Human Rights Law: Part II


Part II: The UK’s response to football-related online hate speech

In the first part of this post, we argued that the various expressions of online racial hatred directed at England's black football players following the country's defeat in the recent European Championship final, as well as earlier instances of football-related online racial abuse, fall under different categories of hate speech. These are: 1) prohibited speech, under Article 20(2) of the International Covenant on Civil and Political Rights (ICCPR) and Article 4 of the International Convention on the Elimination of All Forms of Racial Discrimination (ICERD); 2) limited speech, under Article 19(3) ICCPR; and 3) protected speech, under Article 19(2) ICCPR. In this part of the post, we assess whether the UK has done enough to meet its obligations to prevent, combat, condemn, and provide redress for such forms of racial hatred and discrimination, in line with Article 20 ICCPR and Articles 4, 6 and 7 ICERD, whilst respecting individuals' right to freedom of expression under Article 19(2)-(3) ICCPR. In our view, the answer is no. This is so for two main reasons: 1) an insufficiently clear and granular legal framework addressing the different types of (and measures against) racist speech, and 2) defective implementation of the existing laws.

  1. Insufficient racist speech laws

In England and Wales, racially motivated hate speech is dealt with in Part III of the Public Order Act 1986 on 'racial hatred'. This statute criminalises a number of 'acts intended or likely to stir up racial hatred'. These are i) the use of words or behaviour, or the display of written material; ii) publishing or distributing written material; iii) the public performance of a play; iv) distributing, showing or playing a recording; v) broadcasting or including a programme in a cable programme service; and vi) possessing racially inflammatory material, where a) the material is threatening, abusive or insulting, and b) either there is an intention thereby to stir up racial hatred or, having regard to all the circumstances, racial hatred is likely to be stirred up thereby. Such offences are punishable by imprisonment of up to seven years and/or a fine.

To be sure, Part III of the Public Order Act seems to mirror other states' incitement laws which have been deemed consistent with Articles 20 ICCPR and 4 ICERD by the Human Rights Committee (HRC) (see Rabbae v The Netherlands (2017) CCPR/C/117/D/2124/2011, paras 10.5-10.7, on section 137d of The Netherlands' Criminal Code). However, apart from these criminal offences, all of which cover types of prohibited speech, no provision is made for measures addressing other types of racist speech falling short of incitement, which must nonetheless be limited to give effect to Articles 4, 6 and 7 ICERD. These include, most prominently, 'propaganda and all organizations which are based on ideas or theories of superiority of one race or group of persons of one colour or ethnic origin, or which attempt to justify or promote racial hatred and discrimination in any form' (Article 4 ICERD), such as the online activities of white supremacist and neo-Nazi groups reported on Telegram and other platforms. As the HRC noted in Faurisson v France (CCPR/C/58/D/550/1993 (1996), para 4), to fully protect individuals from incitement to discrimination on grounds of race, religion or national origin under Article 4 ICERD, states may need to adopt further legislation beyond 'a narrow, explicit law on incitement that falls precisely within the boundaries of article 20, paragraph 2', in line with Article 19(3) ICCPR. According to the HRC:

'[t]his is the case where, in a particular social and historical context, statements that do not meet the strict legal criteria of incitement can be shown to constitute part of a pattern of incitement against a given racial, religious or national group, or where those interested in spreading hostility and hatred adopt sophisticated forms of speech that are not punishable under the law against racial incitement, even though their effect may be as pernicious as explicit incitement, if not more so.'

Yet, since 1987, the Committee on the Elimination of Racial Discrimination (CERD) has expressed concern over the UK's implementation of Article 4(b) ICERD due to the lack of provisions dealing with racist organisations (see A/42/18, para 703 and A/46/18, para 189). As of July 2021, provisions addressing racist organisations and propaganda short of incitement are yet to be adopted. More fundamentally, on several occasions (see, e.g., A/46/18, para 189; CERD/C/GBR/CO/18-20, para 11; CERD/C/GBR/CO/21-23, paras 15 and 17), CERD has expressed concern and called upon the UK to reconsider its restrictive interpretation of Article 4 ICERD, according to which 'further legislative measures in the fields covered by sub-paragraphs (a), (b) and (c) of that article [are required] only in so far as [a state party] may consider [them necessary]'. This is so especially 'in the light of statements by some public officials and media reports' (CERD/C/63/CO/11, para 12) as well as 'the continuing virulent statements in the media that may adversely affect racial harmony and increase racial discrimination in the State party' (CERD/C/GBR/CO/18-20, para 11). Accordingly, CERD recommended that the UK 'adopt comprehensive measures to combat racist hate speech and xenophobic political discourse, including on the Internet', and 'take effective measures to combat racist media coverage, taking into account the Committee's general recommendation No. 35 (2013) on combating racist hate speech' (see CERD/C/GBR/CO/21-23, para 16(d)-(e)).

Also ‘concerned about the prevalence in the media and on the Internet of racist and xenophobic expressions that may amount to incitement to discrimination, hostility or violence’, the HRC stated that the UK:

'should strengthen its efforts to prevent and eradicate all acts of racism and xenophobia, including in the mass media and on the Internet, in accordance with articles 19 and 20 of the Covenant and the Committee's general comment No. 34 (2011) on freedoms of opinion and expression' (CCPR/C/GBR/CO/7, para 10).

Following those consistent recommendations from both human rights bodies, the UK reported the adoption of a hate crime action plan – the so-called Action against Hate. Yet the action plan only applies to hate crime as defined in existing legislation, i.e., Part III of the Public Order Act and its subsequent amendment to include similar offences of stirring up hatred on religious grounds or grounds of sexual orientation in Part 3A. Other forms of hate speech remain neither defined nor limited by civil or administrative law in England. This means that, in the online environment, the definition of limited hate speech and the necessary and proportionate measures to constrain it under Article 19(3) ICCPR have been left entirely in the hands of tech companies' community standards or guidelines.

Granted, the UK Government is in the process of adopting the so-called Online Safety Bill to tackle a range of 'online harms', including hate crimes committed on social media and other Internet platforms. However, in its present form, the Draft Bill focuses on imposing a statutory duty of care on different Internet service providers, rather than clearly defining limited speech acts and identifying necessary and proportionate measures to restrict user-generated content, in line with Article 19(3) ICCPR. In particular, the Bill stipulates a range of 'safety duties' with respect to 'illegal' content, defined as words, images, speech or sounds reasonably amounting to certain pre-existing offences (such as terrorism or child abuse) or those specified in future secondary legislation to be adopted by the Secretary of State. Such duties include 'tak[ing] proportionate steps to mitigate and effectively manage the risks of harm to individuals, as identified in the most recent illegal content risk assessment of the service'. For other types of content that are deemed 'harmful to adults', i.e., content posing a 'material risk of […] having, or indirectly having, a significant adverse physical or psychological impact on an adult of ordinary sensibilities', the Bill simply delegates to relevant Internet service providers the responsibility of clearly and accessibly identifying what amounts to such content (section 11). Companies are directed to fulfil such duties by carrying out content risk assessments (sections 7 and 19) and by adopting whatever other measures they deem proportionate, 'having regard to the importance of protecting users' freedom of expression within the law' (sections 9-12 and 21-22).

As is well known, corporations are not bound by international human rights law; they only have voluntary responsibilities to respect human rights. Thus, it falls upon states to protect the human rights of those within their jurisdiction by, inter alia, regulating the activities of corporations that might infringe upon those rights, including social media companies and other Internet service providers (see Human Rights Council, A/74/486, para 41). In the context of online hate speech, this means that limited speech acts other than incitement under Articles 20 ICCPR and 4 ICERD must be clearly defined, along with the necessary and proportionate measures that may be taken to constrain them, such as content moderation (Human Rights Council, A/74/486, para 31). While the Online Safety Bill rightly requires relevant Internet service providers to carry out the necessary human rights due diligence before limiting online content (sections 7 and 19), as well as to put in place internal redress mechanisms, i.e. user reporting and complaint mechanisms (sections 15 and 24), it fails to clearly define what content may be limited in the first place. In the same vein, apart from imposing the establishment of a reporting and complaints mechanism, the Bill does not lay down what other necessary and proportionate measures companies must or may adopt to limit speech, as required by Article 19(3) ICCPR. Instead, the Bill appears to further legitimise private censorship by affording Internet service providers significant discretion to define and sanction what they consider to be harmful speech acts.

  2. Defective implementation of existing laws

In its General Recommendation No. 35 on combating racist hate speech (para 17), CERD reiterated that ‘it is not enough to declare the forms of conduct in article 4 as offences; the provisions of the article must also be effectively implemented’, which is ‘characteristically achieved through investigations of offences set out in the Convention and, where appropriate, the prosecution of offenders’. However, in some of its Concluding Observations on the UK’s periodic reports (see e.g., CERD/94th session/FU/AR/ks, page 1; CERD/C/GBR/CO/21-23, para 15), CERD has expressed concern ‘about the underreporting of [hate crime] and about the gap between complaints and convictions’, which results in ‘a large number of racist hate crimes [going] unpunished’.

In addition, CERD expressed particular concern over ‘the rise of racist hate speech on the Internet’ and deep concern:

‘that the [2016 EU] referendum campaign was marked by divisive, anti-immigrant and xenophobic rhetoric, and that many politicians and prominent political figures not only failed to condemn such rhetoric, but also created and entrenched prejudices, thereby emboldening individuals to carry out acts of intimidation and hate towards ethnic or ethno-religious minority communities and people who are visibly different’ (CERD/C/GBR/CO/21-23, para 15).

Thus, CERD recommended that the UK take effective and comprehensive measures to combat racist online speech and media coverage, particularly by applying ‘appropriate sanctions’, as well as ‘ensur[ing] that such cases are thoroughly investigated’ and ‘that public officials not only refrain from such speech but also formally reject hate speech and condemn the hateful ideas expressed’ (CERD/C/GBR/CO/21-23, para 16(d)-(e)).

Following similar concerns, the HRC recommended that the UK a) effectively implement and enforce the existing relevant legal and policy frameworks on combating hate crimes, b) improve the reporting of cases of incitement to discrimination, hostility or violence, and of cases of hate crimes, as well as c) thoroughly investigate alleged cases of incitement to discrimination, hostility or violence, and alleged hate crimes, prosecuting the perpetrators and, if they are convicted, punishing them with appropriate sanctions (CCPR/C/GBR/CO/7, para 10).

Nevertheless, the UK reported only one prosecution in England for the offence of 'stirring up racial hatred' in 2015-16 (CERD/C/GBR/CO/21-23/Add.1, para 10). Seemingly unconvinced by the UK's response to its Concluding Observations, CERD insisted on requesting 'additional information on the results of the measures taken to ensure that the present legislation is fully implemented and racist hate speech thoroughly investigated and punished in accordance with article 4 of the Convention and in line with General Recommendation 35 on Combating Hate Speech', as well as 'information on how effectively media regulatory bodies address the issue of racist hate speech'.

As of July 2021, the UK Government still lacks specific statistics on reported incidents of hate speech crime in England, including online, with official statistics focussing instead on other hate or racially motivated crimes (see here, here, here and here). Official statistics record only 13 prosecutions and 11 convictions for 'stirring up hatred' since the UK last submitted information to CERD in 2017. While the Crown Prosecution Service for England and Wales recognises that this is a low number when compared to other hate crimes, civil society groups have reported 362 verified online Islamophobic incidents and 633 verified online anti-Semitic incidents in 2017-2018 alone. Of course, the implementation of any obligation to protect human rights, including by investigating and sanctioning those responsible for violations, is subject to a state's capacity to act, such as its available human and financial resources (see HRC, General Comment No. 31[80], para 8). Thus, it would be unreasonable to expect a state to track, trace and take action against every single incident of speech crime, especially considering the speed and volume at which they occur online. However, the UK's consistent explanation for its low number of prosecutions is the 'higher evidential thresholds and the need to consider an individual's right to freedom of expression' (see CPS Hate Crime Report, at page 18; CERD/C/GBR/CO/21-23/Add.1, para 12), which does not seem to add up. This is because freedom of expression is already factored into the prohibitions required by Articles 20 ICCPR and 4 ICERD, read together with Article 19(3) ICCPR. Likewise, the standard of proof is the same across all criminal offences, namely, beyond reasonable doubt.

Conclusion: A way ahead

In sum, English law only defines and addresses certain forms of prohibited hate speech under Articles 20 ICCPR and 4 ICERD, namely, the 'stirring up hatred' offences laid down in Parts III and 3A of the Public Order Act 1986. Yet instances of online racially motivated incitement have gone largely unreported and unpunished, despite calls by both the HRC and CERD for the UK to strengthen the enforcement of its existing laws on racist speech. Racist organisations remain unsanctioned, in violation of Article 4(b) ICERD. Likewise, other speech acts, such as racist propaganda short of incitement, which must be limited to give full effect to Articles 4, 6 and 7 ICERD, are not defined or addressed by civil or administrative law but left in the hands of online and offline media companies. Therefore, the UK appears to have failed to put in place a structured approach to the different types of hate speech, as required under Articles 19 and 20 ICCPR and Article 4 ICERD. Its all-or-nothing, often hands-off, framework on hate speech will remain in place even if the UK Parliament passes the Online Safety Bill, as the latter continues to dodge hard decisions about censorship and free speech, leaving them to the discretion of private companies.

Amending the Online Safety Bill to clearly define limited types of online speech and specify what non-criminal measures Internet service providers are required or permitted to adopt to restrict such content would go a long way towards achieving compliance with both the ICCPR and ICERD. Aside from the reporting and complaints mechanism that must be put in place by Internet service providers under sections 15 and 24, the Bill should provide for the following measures:

  1. Tweaking company algorithms to increase the visibility of and opportunities for counter speech (see Human Rights Council, A/74/486, para 28). This could be achieved by, inter alia, introducing 'dislike' buttons, such as those piloted by YouTube, with highly disliked comments hidden from users' view.
  2. Tagging or labelling limited or protected racially motivated hate speech as content that 'may contain racist abuse' and directing viewers to awareness-raising or educational resources, similar to the tags introduced by Twitter to tackle COVID-19 dis- and misinformation. This measure would contextualise expressions of racial hatred by informing users about their potentially discriminatory meaning and taking a clear stand against such types of content.
  3. Scaling up content moderation to cover difficult cases that can neither be conclusively dealt with by companies' AI technology or human moderators nor added to the caseload of a formal complaints mechanism, such as an oversight board. This could be done by randomly selecting panels of country-specific users to decide on the maintenance or removal of content, or by allowing prominent users to nominate moderators for their pages, endowed with the power to remove hateful replies and block users if necessary.
  4. Keeping a record and preserving evidence of online speech crimes, as well as notifying the police of such incidents.

Whatever measures are selected, one thing is clear: the UK's current legal framework on racist hate speech must be effectively enforced, as well as complemented by more precise and granular regulation of limited and protected speech in England. Until then, this issue will remain a filthy stain not only on football and sports more generally, but on English society as a whole.
