Part 1 of this blog post will explore how the current narratives about the implementation of ECtHR judgments paint a misleading picture. In Part 2, a different set of statistics will be examined, in order to explore how well the implementation system is really functioning.
In some countries, exam results in schools and universities are improving every year. However, many doubt that this is because the students are actually doing better in their studies. The accusation is made that, though exam marks are improving, this is the result of tests being made easier, rather than the students becoming better educated. This “grade inflation” allows schools and universities to publish better results, but without the performance behind the results actually improving.
What applies to schools and universities can also apply to international institutions.
Over the last few years, the Council of Europe has advanced a consistent narrative about the state of implementation of judgments from the European Court of Human Rights. This narrative suggests that implementation is going very well indeed.
The most recent Annual Reports of 2017 and 2018 describe this apparently positive trend. The first of these stated that “[r]esults are very encouraging” and “[t]he number of cases closed reached an all-time high thanks to a new policy of enhanced dialogue with States…”. The 2018 report heralded “another encouraging year”. The general ‘good news story’ is told to those whose role it is to monitor the effectiveness of the ECHR system. This year Secretary General Jagland told members of the Parliamentary Assembly of the Council of Europe that “[t]he Court is stronger than ever. Execution of judgments – we have a good record.” This same story about the falling number of judgments pending implementation is circulated across the Council of Europe’s social media accounts. It is accepted and repeated by other international institutions. The pervasive narrative about the system for the implementation of judgments is that it is getting more and more effective.
However, it is not at all clear that this story is actually true.
To understand why this is the case, it is necessary to delve deeper into the statistics about the implementation of ECtHR judgments.
Implementation statistics – the basics
When looking at the figures for unimplemented cases, there is a technical distinction between ‘leading’ and ‘repetitive’ cases. Leading cases are those which represent a new significant or systemic problem. Repetitive cases are repeat instances of the same problem. A ‘group’ of cases is formed by one leading case, followed by all of the relevant repetitive cases.
The case of Ceteroni was one of the first cases highlighting the problem of excessively lengthy civil proceedings in Italy. It therefore became a leading case. It was followed by 1,722 other cases, all involving the same problem. These were repetitive cases. Together, the Ceteroni case and its repetitions formed the Ceteroni group. (The Ceteroni group contained a particularly high number of repetitive cases – normally there are fewer than 10.)
When a violation is found by the European Court of Human Rights, the state is obliged not only to provide justice for the individual, but also to make sure that the same violation stops happening to others in the same situation. Steps required to obtain justice for the applicant(s) are called “individual measures”. This usually involves the payment of compensation. The steps required to stop the same problem happening in wider society – such as legislative or practical reforms – are known as “general measures”.
A change in approach
Overall implementation statistics used to be calculated in the following way. A group of cases would all remain “open” and considered to be pending implementation, until both the individual measures and the general measures were completed. So in the Ceteroni group, the applicants in the repetitive cases may have been paid the compensation ordered by the Court. However, their repetitive cases would all remain pending implementation, as long as the general measures required to address the underlying problem still needed to be carried out.
This high standard for case closure contributed to an extremely high overall number of pending cases – rising above the landmark figure of 10,000 in 2011.
However, this ‘closure policy’ changed around two and a half years ago. The Council of Europe’s Committee of Ministers adopted a new policy, under which repetitive cases could be closed as soon as the individual measures had been taken, regardless of whether the general measures had been carried out. This meant that, in 2017, all of the repetitive cases in the Ceteroni group were closed, even though the underlying issue had still not been resolved. The general measures remain pending under a single case.
This new policy created an opportunity for states to quickly close thousands and thousands of cases, many of which could be closed simply by paying the compensation due. According to the Department for the Execution of Judgments, 3,350 cases were closed through this new policy.
Some argue that this new approach was flawed, because it reduced the overall number of pending cases, thus reducing the pressure on states. Others argue that it was necessary, because the repetitive cases were clogging up the system. However, this blog post is not about whether the policy of repetitive case closure was or was not a good idea. It is about the narrative that resulted from it, concerning the overall effectiveness of the ECHR implementation system.
A good news story
The new case closure policy was accompanied by a huge reduction in the overall number of judgments pending implementation, with the top line of the graph below showing the steep fall in the number of pending cases following the adoption of the new closure policy:
If the old policy had continued to apply, the overall number of cases pending today would be around 9,500: not far off the landmark 10,000 number which was seen as showing a significant problem with the implementation system.
These closures are generally presented as resulting from states getting better at implementation, rather than a change in the way in which implementation is counted and assessed.
For example, the 2017 Annual Report stated that:
“In 2017, the Committee of Ministers closed 3 691 cases compared to 2 066 in 2016”, meaning that “the total number of cases pending at the end of the year has decreased by around 25%, and is now down to some 7 500 (as compared to some 11 000 in 2014).”
These results were said to be “thanks to a new policy of enhanced dialogue with States.” What was not so prominently stated was the fact that the bumper year for case closure arose not from better behaviour by states, but from the massive closures made possible by the new policy (1,722 cases were closed in the Ceteroni group alone, as well as 250 repetitive Hungarian cases). In this way, the new method of closing cases facilitated a doubling of the number of cases closed that year. Although it was possible to discern this fact from a deep reading of the details of the report, the main summary sections did not mention it. Instead, the increased rate of case closure was presented as resulting from a better functioning system and improved behaviour by states.
The Annual Report for 2018 was more nuanced. It made more references to the change in policy and occasionally focused on the number of pending leading cases, rather than the overall number of cases. Nevertheless, it still repeatedly referred to the drop in the overall number of pending cases as evidence of better behaviour by states and a well-functioning implementation system.
The message is the key
The Annual Reports are important for the general understanding of the implementation system. However, perhaps more important in terms of impact are the shorter messages given to key actors and the general public. These include the statements made by the Secretary General, the messages published on social media, the messages given to the members of PACE, and to other institutions. Most of the people taking an interest in the ECHR system do not have the time or inclination to delve into the statistics in the way done above. The headline messages communicated by the Council of Europe are the ones that are absorbed and determine the general understanding of the effectiveness of the Convention system.
This is very concerning, because the general message being communicated is that the implementation of judgments has been hugely successful in recent years – frequently based on the fall in the overall number of pending judgments. For the reasons outlined above, there is no reliable evidence of the huge improvement which has been announced. The fall in the overall number of judgments seems mostly to have resulted from a more lenient policy of closures, rather than better behaviour by states.
You could argue that it is like a university making its exams easier, then claiming that it is better at educating students because more of them achieve higher grades – but without any marked improvement in student performance.
Why is this important?
In order for the ECHR system to function effectively, it is important for a number of key actors to be kept accurately informed about its functioning. Unfortunately, these key actors have not had access to such clear information. They include the following:
- Human rights NGOs have to decide whether to prioritise work on litigation or implementation. If they assume that judgments will be swiftly implemented, they may use their scarce resources to fight new cases, rather than focus on making sure existing judgments effect change. The vast majority of NGOs do not realise that they often need to fight to get judgments executed. The “good news story” in the Annual Reports may be deepening this misconception. Even if NGOs do choose to work on implementation, they have to secure funding for such work. I have seen funding applications for implementation work in which the applicants had to outline the overall nature of the implementation problem, and had no option but to present the statistics produced by the Council of Europe. These misleadingly suggest that implementation is already improving dramatically, weakening the NGOs’ case for obtaining funding to work on the issue.
- Funders of human rights also need access to accurate information from institutions in order to make strategic decisions about the placement of large sums of money. Funders have a lot of possible activities that they could prioritise. They might not support work on implementation of the ECHR if they think it is being implemented effectively already. It seems possible that many will have been misled by the “good news story” set out above. For example, staff in my organisation recently spoke to a manager of a European human rights fund, who believed that ECtHR implementation was going extremely well, based on the Annual Reports. Funders like this have a key role in instigating necessary work on the implementation of judgments. Currently they might well be disinclined to divert funds to this issue, because the official reporting suggests the problem is already being solved.
- Guardians of the ECHR system also need accurate information about how well it is working. This is especially the case right now. The Interlaken process of ECHR reforms comes to a close at the end of this year. Thanks to the Brussels Declaration, improved implementation of ECtHR judgments is a fundamental goal of that reform process. This autumn there will be a stock-taking exercise about the impact of the Interlaken reforms and an assessment of whether any further efforts are necessary. There is a huge range of people who will need to examine and properly understand the implementation system at this time. This includes the new Secretary General of the Council of Europe, the Committee of Ministers, members of the Parliamentary Assembly of the Council, the organisation’s secretariat, NGOs, and academics.
These actors have to be able to audit the effectiveness of the system accurately. A clear, well-understood evidence base is necessary in order to gauge the scale of the non-implementation problem. Only if we are able to see the true nature of the problem will we be able to respond appropriately to it in the next era of the Convention system.
At the moment, it is not possible to understand the true scale of the non-implementation issue, because clear data about this is not being circulated.
Part 1 of this blog explored how the current narratives about the implementation of ECtHR judgments paint a misleading picture. In Part 2, a different set of statistics will be examined, in an attempt to explore how well the implementation system is really functioning.