The Implementation of Judgments of the European Court of Human Rights: Worse Than You Think – Part 1: Grade Inflation

Part 1 of this blog post will explore how the current narratives about the implementation of ECtHR judgments paint a misleading picture. In Part 2, a different set of statistics will be examined, in order to explore how well the implementation system is really functioning.

In some countries, exam results in schools and universities are improving every year. However, many doubt that this is because the students are actually doing better in their studies. The accusation is made that, though exam marks are improving, this is the result of tests being made easier, rather than the students becoming better educated. This “grade inflation” allows schools and universities to publish better results, but without the performance behind the results actually improving.

What applies to schools and universities can also apply to international institutions.

Over the last few years, the Council of Europe has advanced a consistent narrative about the state of implementation of judgments from the European Court of Human Rights. This narrative suggests that implementation is going very well indeed.

The most recent Annual Reports of 2017 and 2018 describe this apparently positive trend. The first of these stated that “[r]esults are very encouraging” and “[t]he number of cases closed reached an all-time high thanks to a new policy of enhanced dialogue with States…”. The 2018 report heralded “another encouraging year”. The general ‘good news story’ is told to those whose role it is to monitor the effectiveness of the ECHR system. This year Secretary General Jagland told members of the Parliamentary Assembly of the Council of Europe that “[t]he Court is stronger than ever. Execution of judgments – we have a good record.” This same story about the falling number of judgments pending implementation is circulated across the Council of Europe’s social media accounts. It is accepted and repeated by other international institutions. The pervasive narrative about the system for the implementation of judgments is that it is getting more and more effective.

However, it is not at all clear that this story is actually true.

To understand why this is the case, it is necessary to delve deeper into the statistics about the implementation of ECtHR judgments.

Implementation statistics – the basics

When looking at the figures for unimplemented cases, there is a technical distinction between ‘leading’ and ‘repetitive’ cases. Leading cases are those which represent a new significant or systemic problem. Repetitive cases are repeat instances of the same problem. A ‘group’ of cases is formed by one leading case, followed by all of the relevant repetitive cases.

The case of Ceteroni was one of the first cases highlighting the problem of excessively lengthy civil proceedings in Italy. It therefore became a leading case. It was followed by 1722 other cases, all involving the same problem. These were repetitive cases. Together the Ceteroni case and its repetitions formed the Ceteroni group. (The Ceteroni group contained a particularly high number of repetitive cases – normally there are fewer than 10.)

When a violation is found by the European Court of Human Rights, the state is obliged not only to provide justice for the individual, but also to make sure that the same violation stops happening to others in the same situation. The steps required to obtain justice for the applicant(s) are called “individual measures”. This usually involves the payment of compensation. The steps required to stop the same problem happening in wider society – such as legislative or practical reforms – are known as “general measures”.

A change in approach

Overall implementation statistics used to be calculated in the following way. A group of cases would all remain “open” and considered to be pending implementation, until both the individual measures and the general measures were completed. So in the Ceteroni group, the applicants in the repetitive cases may have been paid the compensation ordered by the Court. However, their repetitive cases would all remain pending implementation, as long as the general measures required to address the underlying problem still needed to be carried out.

This high standard for case closure contributed to an extremely high number of overall pending cases – rising above the landmark figure of 10,000 in 2011.

However, this ‘closure policy’ changed around two and a half years ago. The Council of Europe’s Committee of Ministers adopted a new policy under which repetitive cases could be closed as soon as the individual measures had been taken, regardless of whether the general measures had been carried out. This meant that, in 2017, all of the repetitive cases in the Ceteroni group were closed, even though the underlying issue has still not been resolved. The general measures remain pending under a single case.

This new policy created an opportunity for states to quickly close thousands and thousands of cases, many of which could be closed simply by paying the compensation due. According to the Department for the Execution of Judgments, 3,350 cases were closed through this new policy.
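The difference between the two counting regimes comes down to simple arithmetic. The sketch below is purely illustrative – the function and field names are hypothetical, and this is not the Committee of Ministers’ actual methodology – but it shows how the same underlying situation (compensation paid, underlying problem unresolved) produces very different headline totals, using the Ceteroni group’s 1722 repetitive cases as the example:

```python
# Hypothetical model of the two closure policies (illustration only).

def pending_old_policy(groups):
    # Old policy: every case in a group stays open until BOTH the
    # individual and the general measures are complete.
    return sum(
        1 + g["repetitive"]
        for g in groups
        if not (g["individual_done"] and g["general_done"])
    )

def pending_new_policy(groups):
    # New policy: a repetitive case closes once its individual measures
    # (typically payment of compensation) are taken; the general
    # measures remain pending under the leading case alone.
    total = 0
    for g in groups:
        if not g["general_done"]:
            total += 1  # the leading case remains open
            if not g["individual_done"]:
                total += g["repetitive"]
    return total

# One group on the Ceteroni pattern: compensation paid, reform outstanding.
groups = [{"repetitive": 1722, "individual_done": True, "general_done": False}]
print(pending_old_policy(groups))  # 1723 - the whole group counts as pending
print(pending_new_policy(groups))  # 1 - only the leading case remains
```

Nothing about the state’s behaviour differs between the two runs; only the counting rule changes, yet the headline figure falls from 1723 to 1.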

Some argue that this new approach was flawed, because it reduced the overall number of pending cases, thus reducing the pressure on states. Others argue that it was necessary, because the repetitive cases were clogging up the system. However, this blog post is not about whether the policy of repetitive case closure was or was not a good idea. It is about the narrative that resulted from it, concerning the overall effectiveness of the ECHR implementation system.

A good news story

The new case closure policy was accompanied by a huge reduction in the overall number of judgments pending implementation; the top line of the graph below shows the steep fall in pending cases following its adoption:

If the old policy had continued to apply, the overall number of cases pending today would be around 9,500: not far off the landmark 10,000 number which was seen as showing a significant problem with the implementation system.

These closures are generally presented as resulting from states getting better at implementation, rather than a change in the way in which implementation is counted and assessed.

For example, the 2017 Annual Report stated that:

“In 2017, the Committee of Ministers closed 3 691 cases compared to 2 066 in 2016”, meaning that “the total number of cases pending at the end of the year has decreased by around 25%, and is now down to some 7 500 (as compared to some 11 000 in 2014).”

These results were said to be “thanks to a new policy of enhanced dialogue with States.” What was not so prominently stated was the fact that the bumper year for case closure arose not from better behaviour by states, but from the massive closures made possible by the new policy (1722 cases were closed in the Ceteroni group alone, as well as 250 repetitive Hungarian cases). In this way, the new method of closing cases facilitated a doubling of the number of cases closed that year. Although it was possible to discern this fact from a close reading of the details of the report, the main summary sections did not mention it. Instead, the increased rate of case closure was presented as resulting from a better functioning system and improved behaviour by states.

The Annual Report for 2018 was more nuanced. It made more references to the change in policy and occasionally focused on the number of pending leading cases, rather than the overall number of cases. Nevertheless, it still repeatedly referred to the drop in the overall number of pending cases as evidence of better behaviour by states and a well-functioning implementation system.

The message is the key

The Annual Reports are important for the general understanding of the implementation system. However, perhaps more important in terms of impact are the shorter messages given to key actors and the general public. These include the statements made by the Secretary General, the messages published on social media, the messages given to the members of PACE, and to other institutions. Most of the people taking an interest in the ECHR system do not have the time or inclination to delve into the statistics in the way done above. The headline messages communicated by the Council of Europe are the ones that are absorbed and determine the general understanding of the effectiveness of the Convention system.

This is very concerning, because the general message being communicated is that the implementation of judgments has been hugely successful in recent years – frequently based on the fall of the overall number of pending judgments. For the reasons outlined above, there is no reliable evidence of the huge improvement which has been announced. The fall in the number of overall judgments seems mostly to have resulted from a more lenient policy of closures, rather than better behaviour by States.

You could argue that it is like a university making its exams easier, then claiming that it is better at educating students because more of them achieve higher grades – but without any marked improvement in student performance.

Why is this important?

In order for the ECHR system to function effectively, it is important for a number of key actors to be kept accurately informed about its functioning. Unfortunately, these key actors have not had access to such clear information. They include the following:

  • Human Rights NGOs have to decide whether to prioritise work on litigation or implementation. If they assume that judgments will be swiftly implemented, they may use their scarce resources to fight new cases, rather than focus on making sure existing judgments effect change. The vast majority of NGOs do not realise that they often need to fight to get judgments executed. The “good news story” in the Annual Reports may be deepening this misconception. Even if NGOs do choose to work on implementation, they have to secure funding for such work. I have seen funding applications for support for implementation work in which the applicants had to outline the overall nature of the implementation problem, and had no option but to present the statistics produced by the Council of Europe. These misleadingly suggest that implementation is already improving dramatically, weakening the NGOs’ case for obtaining funding to work on the issue.
  • Funders of human rights also need access to accurate information from institutions in order to make strategic decisions about the placement of large sums of money. Funders have a lot of possible activities that they could prioritise. They might not support work on implementation of the ECHR if they think it is being implemented effectively already. It seems possible that many will have been misled by the “good news story” set out above. For example, staff in my organisation recently spoke to a manager of a European human rights fund, who believed that ECtHR implementation was going extremely well, based on the Annual Reports. Funders like this have a key role in instigating necessary work on the implementation of judgments. Currently they might well be disinclined to divert funds to this issue, because the official reporting suggests the problem is already being solved.
  • Guardians of the ECHR system also need accurate information about how well it is working. This is especially the case right now. The Interlaken process of ECHR reforms comes to a close at the end of this year. Thanks to the Brussels Declaration, improved implementation of ECtHR judgments is a fundamental goal of that reform process. This autumn there will be a stock-taking exercise about the impact of the Interlaken reforms and an assessment of whether any further efforts are necessary. There is a huge range of people who will need to examine and properly understand the implementation system at this time. This includes the new Secretary General of the Council of Europe, the Committee of Ministers, members of the Parliamentary Assembly of the Council, the organisation’s secretariat, NGOs, and academics.

These actors have to be able to accurately audit the effectiveness of the system. A clear, well-understood evidence base is necessary in order to gauge the scale of the non-implementation problem. Only if we are able to see the true nature of the problem will we be able to respond appropriately to it in the next era of the Convention system.

At the moment, it is not possible to understand the true scale of the non-implementation issue, because clear data about this is not being circulated.

Part 1 of this blog explored how the current narratives about the implementation of ECtHR judgments paint a misleading picture. In Part 2, a different set of statistics will be examined, in an attempt to explore how well the implementation system is really functioning.


Comments

Michelle says

October 7, 2019

Excellent article! Looking forward to Part 2!

Ed Bates says

October 7, 2019

Highly informative post, thank you.
You make some powerful points as to why this is important. Related to your last point on that, my concern would be that the picture painted of matters by these adjusted statistics is rosier than it really should be, as regards the States’ overall enthusiasm to implement. If the narrative portrayed at the end of the Interlaken decade was an undeservedly optimistic one then I think this would be really problematic. No time for States to rest on any laurels.
At the same time, I can understand the reasons for an approach which does not include repetitive cases in the overall list. Does their inclusion create an overly negative impression of how bad things are as regards non-implementation, when there is ‘only one’ incompatibility at the root of the issue? In a system that relies on peer pressure, one does not want to create an overly negative impression of matters (or an overly positive one, of course). I still remember MPs in the UK stating that the worst that would happen if they refused to implement the prisoner voting ruling, would be that the UK would be on a long list of States refusing to implement. So, the list should not be misleadingly long?
In part the problem may be that there has been a change of approach, and the issues that this gives rise to (including the failure to highlight it?), rather than the actual new approach itself in the abstract? Ideally the Reports would highlight this more directly/transparently?
Ed

Ausra Padskocimaite says

October 7, 2019

Dear George, first of all, thank you for your very interesting and important contribution! I completely agree that the presentation of the implementation data should be as open as possible and as an academic trying to make sense of the implementation situation, I would have appreciated more information regarding the new policy of “partial closure” both in the annual reports and especially on HUDOC EXEC, where cases simply appear as “closed” instead of something like “partially closed” or “only individual measures closed.” However, I am not sure that I agree with your assessment that the “high standard for case closure” no longer applies (are tests really getting easier?) as, in my understanding, it is only the methodology for handling the adoption of individual measures that has changed and not the criteria for assessing when the case is ready for closure with respect to individual or general measures. It seems that the pendulum has swung in the opposite direction – from qualifying the adoption of individual measures (where general measures are also required) as a “pending” (often interpreted as non-executed) case to “closed” (interpreted as executed) case. Neither of these categories captures the reality, where the state has in fact executed some, but not all measures (partial execution). The challenge is how to numerically describe this (complex) situation in an understandable and transparent manner.

Kishor Dere says

October 8, 2019

Thank you George for your critical analysis of the official reports on human rights and evaluation approaches, methods, criteria etc. It is rightly said that 'Eternal vigilance is the price of liberty'. The sum and substance of this article is that all of us need to improve at all levels. We individually, collectively and at official level - the institutions- need to be receptive to various views, feedback and suggestions for improvement. It also needs to be added that one should not make any sweeping generalization. It has to be appreciated that democratic state apparatus is also making at least some efforts to ameliorate the situation, although not at the desirable pace and of expected quality. We have to be perfectionists, not cynics and rejectionists. Further improvement can happen only by offering constructive criticism as the article does. Although one may not agree with the analysis in its entirety, yet the spirit with which it has been done, is to be lauded. Democracy can succeed only if citizens are alert, vigilant and careful. 'Power corrupts and absolute power corrupts absolutely'. So, do not bask in the glory of past or present laurels and thereby court complacency. Thus, get inspired by accomplishments and raise standards to scale new heights of excellence.