Viral Misinformation and the Freedom of Expression: Part III

Editors’ note: this post is part of a series – see here for Part I and Part II.

In my third and final post in this series, I will provide a provisional evaluation of the responses to Covid-19-related misinformation by states and online media companies, and how these should be assessed within the framework of human rights law. There are many more actors whose response in this regard matters and should be examined (e.g. that of civil society more broadly), but I’ll focus on states and online media companies because of their legal position and their outsize role in the viral infodemic. Finally, I will outline some thoughts for the long road ahead.

State responses to misinformation

In recent years states have increasingly adopted new legislation, repurposed old legislation, or implemented other measures to combat the spread of misinformation more generally. Now, during the pandemic, many states are either applying such pre-existing measures to Covid-19-related misinformation, or are adopting new and often sweeping solutions (see, for example, the very direct statement on this point by Dunja Mijatovic, the Council of Europe Commissioner for Human Rights). As I noted in my first post in this series, a comprehensive inventory of such measures and of how they have been applied is crucial in the mid-term, but that is obviously not my purpose here. It is rather to make some general points.

First, it is clear that laws that contain blanket bans on misinformation or untruthful speech fail the necessity and proportionality tests under human rights law, and unduly infringe on the freedom of expression. As I explained in my first post, for speech to be subject to limitations it is not enough that it is untruthful; it must also cause significant social harms. In the Covid-19 context in particular, these would generally be harms to human health. As noted in the 2017 Joint Declaration of special mandates on the freedom of expression (para 2.a):

General prohibitions on the dissemination of information based on vague and ambiguous ideas, including “false news” or “non-objective information”, are incompatible with international standards for restrictions on freedom of expression … and should be abolished.

States moreover have a duty under human rights law not to disseminate misinformation themselves and to promote accurate information about Covid-19. The first line of defence against the ill-effects of bad speech must be more good speech, and states must act affirmatively in that regard. Especially in highly polarized societies, messaging from apolitical, non-partisan experts and community leaders is more likely to be believed (see, e.g. here with regard to the Ebola epidemic in the Congo), and states should encourage such messaging and refrain from any activity that would undermine it.

Second, the impact of misinformation will vary from society to society, and so must state responses. Some might need more speech-restrictive measures, many will not – just like, say, it is justifiable for Germany to criminally punish the denial of the Holocaust, but most states do not need to do so. The virulence of the infodemic is simply not uniform.

Third, a blunt response to misinformation is very likely to fail the proportionality test, particularly one which imposes harsh criminal penalties on speech where there is little evidence that the state carefully calibrated these measures to its own context and to the precise threat it was facing, or that less restrictive measures were attempted first. Such measures may indeed indicate that their purpose is not to combat the virus, but to crush dissent and criticism of the government more generally, which is per se illegitimate under human rights law. In such cases, the spread of viral misinformation is simply a pretext for ramping up authoritarianism and state control over the information space. It seems clear that a number of authoritarian and hybrid regimes around the world have adopted speech-restrictive measures for such a purpose, and that implied or express threats of criminal prosecution are used precisely for their chilling effect on speech critical of the government.

The same goes for blanket Internet shutdowns. It is hard to think of a measure supposedly taken for the purpose of combatting Covid-19 misinformation that could be more counterproductive in a pandemic.

Fourth, criminalization of misinformation would only be appropriate in the most exceptional of cases, through laws that contain a precise definition of the social harm caused by untruthful speech and require proof of a high standard of mens rea (e.g. disseminating misinformation about methanol as a cure for Covid-19 while knowing that the information is false and knowing the health risks of the ingestion of methanol). The more repressive a measure is, the more it needs to be used surgically, and only if some less restrictive measure would not be sufficiently effective.

Fifth, states need to actively collaborate with social media companies and provide them with sufficiently clear guidance and criteria on content moderation (on which more below).

Finally, states need to establish long-term policies that will address structural causes of our susceptibility to misinformation (on which also more below), and gradually build resistance to misinformation within the population. Failure to adopt such policies will, just like the state’s failure to diligently promote accurate information, cast doubt on the necessity and proportionality of any speech-restrictive measures.

Responses by social media companies

Let us now turn to the response to viral misinformation by corporate entities running digital platforms, which are not subject to direct state control. Again, a full inventory of the measures that these companies have taken with regard to Covid-19 is necessary to assess their effectiveness and justifiability, but my purpose here is to make some more general points.

First, private actors are generally not directly bound by international human rights law. This is why it is important to distinguish between the various different sources of misinformation, as I have done in Part II. But while such actors can therefore legally suppress more speech than the state, the state does have a positive obligation under human rights law to ensure that such suppression is not excessive.

Second, through various soft initiatives, such as the Ruggie Principles, and efforts in this specific area by the UN Special Rapporteur on the freedom of expression (among others), major digital platforms have increasingly accepted the need for both more rigorous and transparent self-regulation and for intervention by the state. Crucially, they have increasingly adopted international human rights law as the only possible universal regulatory framework. Facebook, for example, has done so explicitly.

Third, partly because most of these platforms are incorporated in the United States and their founders are at least on some level ideologically committed to the First Amendment and its aversion to content and viewpoint-based regulation (although formally the US Constitution, as a purely negative charter of rights, has no applicability whatsoever to speech restrictions by private actors), these companies have long resisted efforts to more effectively police misinformation online. This has been especially the case in the more partisan, political context. (For more background and analysis, see these two Columbia and Chatham House research papers.) That said, Google, Facebook, Microsoft and Twitter have all signed up to a recent EU regulatory effort, the Code of Practice on Disinformation, which inter alia involves a self-reporting obligation, and generally seem to be more keen on state regulatory intervention.

Fourth, with regard to Covid-19 misinformation in particular, digital platforms have been far more willing to combat it effectively than in political or electoral contexts. They have employed a variety of graduated responses, ranging from softer measures such as the promotion of accurate information from authoritative sources and notices flagging suspicious content, to harder ones such as takedowns of content, or the relegation of such content in search results. Moderation decisions are often quite granular – for example, YouTube is removing all videos promoting conspiracy theories about 5G networks and the coronavirus, but is not removing falsehoods about 5G that do not mention the virus, choosing instead not to include these in search results. A key issue, on which action seems lacking thus far, is the demonetization of false content, where advertising revenue creates an incentive for spreading misinformation.

Even WhatsApp, which employs end-to-end encryption and thus cannot moderate content as such, has introduced measures designed to slow down the spread of misinformation, such as limits on the number of times a message can be forwarded. By doing so, just like with the real-life epidemic, WhatsApp is essentially trying to reduce the R0, or the basic reproduction number, of the infodemic. (It previously did so for users in India, in addition to reducing the size of chat groups, after a spate of lynchings generated by the spread of misinformation through the messaging app.)
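The R0 analogy can be made concrete with a toy calculation. In a crude branching-process sketch (all the numbers below are illustrative assumptions, not real WhatsApp data), a message’s effective reproduction number is the average number of new recipients each recipient passes it on to; capping the number of chats a message can be forwarded to lowers that number, and once it drops below 1 the cascade fizzles out instead of growing exponentially.

```python
# Toy branching-process model of message spread under a forwarding cap.
# All parameters are illustrative assumptions, not real platform data.

def effective_r(p_forward: float, avg_targets: float, cap: int) -> float:
    """Average new recipients per recipient: the probability that a
    recipient forwards the message at all, times the number of chats
    they would reach, truncated by the platform's forwarding cap."""
    return p_forward * min(avg_targets, cap)

def total_reach(r: float, generations: int = 20) -> float:
    """Expected cumulative recipients of one seed message over a fixed
    number of forwarding generations (a simple geometric series)."""
    total, current = 1.0, 1.0
    for _ in range(generations):
        current *= r
        total += current
    return total

# Suppose 25% of recipients forward a viral message, to 8 chats on average.
uncapped = effective_r(0.25, 8, cap=256)  # r = 2.0: exponential growth
capped   = effective_r(0.25, 8, cap=1)    # r = 0.25: the cascade dies out
```

With r above 1 the expected reach explodes over a few “generations” of forwarding; with r below 1 it converges to a small constant, which is the whole point of the cap.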

Again, a fuller assessment of the effectiveness and proportionality of these measures is necessary in the future, but it is beyond doubt that they have been used on a far greater scale than before – even if, despite that, the infodemic shows no sign of abating. This is due less to the increased moderation and technical capacities of digital platforms, and more to the greater willingness of their management to limit speech on the basis of its content or viewpoint – likely because Covid-19 misinformation is less overtly partisan, and its harms more direct, obvious and imminent. Digital platforms have also been more precise and transparent with regard to the types of misinformation that would violate their policies (see, e.g., Twitter’s Covid-19 misinformation guidance).

That said, it is crucial for states to collaborate with private actors to provide them with regulation and guidance on content moderation, while remaining vigilant that they do not suppress too much speech. Major regulatory decisions that potentially involve balancing between competing human rights need to be made by states, and be subjected to public scrutiny. As the UN Special Rapporteur, David Kaye, has explained, ‘the rules of speech for public space, in theory, should be made by relevant political communities, not private companies that lack democratic accountability and oversight.’

Finally, it is exceptionally interesting to observe how some digital platforms are acting as human rights protectors against states. In the business and human rights context, we are accustomed to thinking of corporations as human rights violators, and of states as having the positive obligation to protect individuals against corporate violations. Now, however, online media companies are suppressing speech by state actors in order to protect the human rights of their own people. Perhaps the most pertinent such example is the suspension by Facebook of numerous accounts run by the military and other officials of Myanmar that were engaged in disinformation and hate propaganda targeting the Rohingya minority (although that effort was both belated and flawed).

In the Covid-19 context, Facebook and Twitter have both taken down posts by world leaders disseminating some kinds of misinformation, e.g. uncritically promoting the use of hydroxychloroquine. On the other side of the coin, companies could rely on human rights law to resist unjustified state demands to take down content. Again, as David Kaye put it,

[H]uman rights law gives companies a language to articulate their positions worldwide in ways that respect democratic norms and counter authoritarian demands. It is much less convincing to say to authoritarians, “We cannot take down that content because that would be inconsistent with our rules,” than it is to say, “Taking down that content would be inconsistent with the international human rights our users enjoy and to which your government is obligated to uphold.”

The long road ahead

Speech-restrictive measures meant to curb viral misinformation are the legal equivalent of spraying disinfectant on streets and surfaces to kill the coronavirus. They do not address the root causes of why people innocently and sincerely engage in the spreading of misinformation, or are prone to believe in it (I will leave manipulative bad-faith actors aside, because viral misinformation inevitably relies on innocent spread). To understand those causes we need insights from the social sciences (see, e.g., here, p 10 ff and the footnotes for a compilation of much of the relevant research). I have also previously looked extensively at that research when analysing the inability of the ICTY to persuade local audiences in the Balkans of the truthfulness of the factual findings contained in its judgments (see here and here).

A couple of points are important to understand in that regard. First, we are particularly susceptible to misinformation in times of crisis and uncertainty due to the anxiety that they cause, and the consequent need to ‘make sense’ of them and of our own situation, which will frequently result in the assimilation of unreliable information. This will particularly be the case if accurate information is being suppressed. Second, once false beliefs become accepted they become difficult to dislodge, due to the effects of psychological mechanisms such as confirmation bias, motivated reasoning, or ingroup/outgroup bias, especially if the beliefs are highly emotionally charged for the relevant individual. No amount of new scientific studies will persuade a highly committed anti-vaxxer, for example – they will always have a way of rejecting these studies that is consistent with their worldview.

Third, we are all vulnerable to misinformation. In particular, smart and educated people are not immune; on the contrary, some of the most pernicious misinformation can be peddled or justified by the smart and the educated. That has certainly been our experience with regard to misinformation about wartime atrocities in the former Yugoslavia. It is easy to ridicule people for buying into misinformation or conspiracy theories, as I have in fact (deliberately) done at the beginning of the first post in the series. But we are all prone to biases, and that calls for some degree of empathy and humility.

Fourth, similarly, we need to appreciate fully the contingent and mediated nature of the information that we believe to be accurate. We all need to put our trust in some authoritative sources of knowledge that we do not ourselves directly possess. I do not really know, through direct observation, that hundreds of people died in Iran due to misinformation-induced alcohol poisoning. I know this by choosing to believe reports from various media organizations that I consider to be reliable. The vast bulk of the knowledge that we think we possess we acquired by relying on somebody else, e.g. our parents, our teachers in school, a journalist, or (for Covid-19) an expert epidemiologist. Every link that I put in these posts, for example, is in reality an act of trust. The issue is how we determine who is worthy of our trust. People sincerely spreading misinformation are fundamentally no different – it’s just that they chose poorly when deciding on whom to trust, and often (like the rest of us) are incapable of admitting to themselves or to others that they made such a mistake.

All of this means that the suppression of misinformation, on Covid-19 or some other issue, by state and corporate entities can at best be a mitigating strategy. I am not saying that such measures are ineffective – far from it. They can be especially important when there are acute spikes in misinformation that can cause immediate harm to human health. But they are simply not effective enough for the ultimate goal that they are pursuing, especially when there are serious structural problems in societies that create fertile ground for misinformation (as, for example, with regard to recent Ebola epidemics in the Congo; see also here and here). Even in societies which do not experience severe structural problems, what is really necessary are long-term, durable measures of prevention that are not about controlling information, but about empowering individuals to make better, more rigorous choices as to which sources of information to trust.

The long-term solution to the infodemic, in other words, is the same as it is for the pandemic – building resistance (if not complete immunity) within the population. Obviously, there will never be a misinformation vaccine, even though social scientists have in fact done some promising studies on ‘inoculation’ through the exposure of individuals to methods of misinformation that would enable them to recognize when such methods are used in the future (see, e.g., here and here). Building resistance will thus inevitably take a variety of measures that will hopefully act synergistically. They will take time, resources, leadership, and public trust. For example, states, social media companies and civil society more broadly need to promote practices of good information hygiene, just like today they are promoting handwashing. Educational curricula should not simply focus on conveying facts (e.g. that ingesting methanol cures no virus, but causes blindness and death), but on building a curious, critical mindset, and should expressly cover misinformation.

And we, as individuals, need to practice some social distancing online, coupled with intellectual modesty – invest a bit of time in learning how to recognize misinformation, don’t spread stuff you are unable to verify or know nothing about, even if it comes from your family, friends or members of some other in-group. A resource such as Sifting Through the Coronavirus Pandemic is a good place to start.

What does all of this mean for human rights lawyers? In dealing with viral misinformation the importance of the long game is such that human rights bodies should not focus only on the measures states and other actors are taking here and now, although these are hugely important – on one hand to ensure that speech-restrictive measures are not excessive, on the other to ensure that social harms caused by viral misinformation are minimized. This is crucial, above all, to ensure that any future mass vaccination effort that can stop the Covid-19 pandemic is not fatally undermined by misinformation, in a situation in which the public trust in vaccines has already been significantly eroded.

As part of their monitoring functions, human rights bodies also need to strategically ask questions of states and other relevant actors about the long-term. What are you doing to help your populations deal with misinformation for some future pandemic? What have you specifically done to build trust in public health communications? What have you specifically done to promote accurate information? What have you specifically done to educate your population on how to recognize and cope with misinformation? What experts have you consulted? Do you have an ongoing deliberative process in which you can regularly think about doing more? Have you looked at what other states are doing? Only by addressing these questions can states really abide by their obligation to respect, protect and fulfil human rights. And only by posing them can human rights bodies ensure their own contribution to the general welfare.

