The Facebook Oversight Board Made the Right Call on the Trump Suspension

The Facebook Oversight Board’s decision on the ‘indefinite suspension’ of Trump’s account has provoked a storm of commentary, akin to a landmark judgment of a national or international court. Much of that commentary is understandably focused on the bottom line: that Facebook was justified, at the time it made its decision, to suspend Trump’s account because he incited or encouraged the violent insurrection at the US Capitol, but that the penalty it imposed – indefinite suspension – was not provided for in Facebook’s rules and was thus arbitrary. The Oversight Board (OB) thus effectively remanded the case to Facebook, asking the company either to issue a time-limited suspension or to permanently ban Trump from the platform.

This was, in my view, the right approach to take – even more so because of the Board’s overall methodology and reasoning. Much can be said about the substance of the decision, but here are some preliminary points that I thought were particularly striking:

1) The OB bases its decision almost entirely on international human rights law (IHRL). Facebook’s internal values and policies get much less emphasis than in the Board’s earlier decisions, where it vacillated between them and IHRL. Here IHRL takes centre stage. The Board justifies this increasing reliance on human rights in part by noting that Facebook has now expressly adopted a human rights policy and has committed to the UN Guiding Principles on Business and Human Rights.

2) This is exactly the right approach. There is no other regulatory language out there that could even conceivably have universal ambition and acceptance from the different states and societies in which Facebook is operating, with some 2 billion users worldwide.

3) And yes, it’s true, as many have noted, that IHRL standards on the freedom of expression were designed for states and do not provide ready-made answers for numerous specific problems in the online context, including content moderation on digital platforms. But no other framework can provide such answers either. US First Amendment jurisprudence certainly can’t do so, nor can Facebook’s own vaguely defined values. What IHRL offers is a way of thinking about speech regulation that is simultaneously structured and flexible, e.g. through the ‘provided by law’, legitimate aim, necessity and proportionality criteria for limitations. And yes, it’s true that the outer bounds of state limitations on free speech cannot simply be transposed to corporate limitations on free speech. But the whole business and human rights endeavour has been precisely about adapting IHRL standards to non-state entities that in reality have as much of an impact on the enjoyment of human rights as a government. Applying IHRL standards becomes more appropriate the more a corporate actor holds a monopolistic, quasi-regulatory position in a certain sphere of life, and Facebook certainly qualifies in that department.

4) What’s particularly striking is that this mainstreaming of IHRL within the OB has taken place even though the majority of OB members are NOT human rights lawyers, while a number of the lawyers have a background in other regimes, such as the First Amendment. For example, the decision is replete with references to the UN Human Rights Committee’s General Comment No 34 and to the Rabat Plan of Action, with no references to domestic jurisprudence (although a passing comment is made about First Amendment standards being broadly in line with international ones). It’s also striking that the OB increasingly sees itself and its decisions in a quasi-judicial mode, even though about half of its members are not lawyers by training or experience. The style of writing is very much that of a court or an analogous independent decision-maker, which I imagine is a conscious choice in an effort to build authority and legitimacy.

5) In the end this was at least nominally not a hard case, despite its political impact. As far as I can tell the Board was unanimous (and rightly so) in applying IHRL criteria, assessing the ‘real’ content behind Trump’s posts, and finding that suspending the account was justified. The OB was also unanimous that the ‘indefinite suspension’ was arbitrary, since such a penalty was not provided for in Facebook’s rules. Similarly, the Board was unanimous that it should not at this point substitute its own judgment for that of Facebook as to what prescribed penalty was appropriate, but that Facebook should make this call at least initially (although depending on what they do the case might not end up before the Board again because of how its jurisdiction is structured). Again we have here a quasi-judicial posture of a reviewing body that will deal with only a tiny fraction of content moderation decisions, and which forces the company to make the hard choices (again at least initially).

6) While the Board does not say so expressly, a permanent ban on Trump, or some other user seriously violating the rules of a digital platform or doing so repeatedly, would be perfectly in line with IHRL standards. Facebook would be entitled to apply that penalty, and that’s probably the wisest choice for everybody concerned (except Trump, obviously).  Necessity and proportionality analysis does not always require the company to provide the user with some opportunity for redemption, especially when such redemption is extremely unlikely as it is here (see, in that regard, the recommendations given by the OB minority for when an account reinstatement would be appropriate) and the risk of harm is continuing. A permanent ban can in principle be justified because the user has other opportunities to speak elsewhere, offline or online, and because it can administratively be easier to manage than constant on-and-off suspensions and reinstatements. This is precisely one of those situations in which a company can restrict speech more categorically or comprehensively on its platform than a state could, especially because the consequences imposed are less grave than civil or criminal responsibility.

7) Finally, yes, establishing a body such as the Board allows Facebook to cloak itself in a veneer of legitimacy, especially when that body uses the justificatory language of human rights to validate the company’s decisions. But Facebook is also inevitably finding itself constrained by that body. Having read the Trump decision and several others, my impression is that the people who serve on the Board are taking IHRL seriously and are very much acting in good faith and in the right spirit (see here for an inside account of their deliberations). Whether this self-regulatory experiment will be successful – in fostering genuine compliance with IHRL, in reducing the significant harms that Facebook has facilitated or enabled, but also in improving the reputation of the company itself – is a question that cannot yet be answered. The Trump decision, however, certainly seems like a step in the right direction – but let’s see how Facebook actually responds to it, and to the various policy recommendations that the Board has made with regard to restricting speech by political leaders.


Comments


Igor Popović says

May 6, 2021

Dear Marko, thank you so much for the post. It was about time to see an article about content moderation on EJIL:Talk!

The vast majority of us would agree on Trump's violation of FB rules, I guess. But the issue of transparency and equality before platforms (the concept of non-discrimination) is a tricky one. FB did not provide several answers, especially about the influence of "political officeholders or their staff" (1) and "the suspension of other political figures and removal of other content" (2). Without these answers one cannot tell whether Trump's removal was arbitrary or whether governmental officials influenced the suspension (see the recent Israeli judgment on the "invisible handshake" between the State and private undertakings). Bearing in mind these shortcomings, one can argue that the FB decision was politically motivated against Trump and his campaign. So, IMHO, the major threat to applying human rights appropriately in content moderation cases relates to the transparency of the whole take-down/suspension process and to discrimination.

Gerry Jones says

May 11, 2021

Dear Marko,
Many thanks. As someone new to all this, can you explain how IHRL should be applied to companies? Presumably no weight should be accorded to FB’s discretion in the way it would be to a government, given the democratic legitimacy and accountability of a government’s decisions? Or is the users’ ability to vote with their feet by using another platform analogous? Also, I’m struggling with how you would assess the proportionality of a ban, given that Mr T’s ability to use other platforms is also a contractual matter and not something over which FB (as opposed to a state within whose jurisdiction and subject to whose laws such activities take place) has control.