Horrible Metrics, Part Deux

Written by Marko Milanovic

Back in 2016 I wrote a post on horrible metrics, which was in essence an extended moan about the use of various metrics – citations, indices, impact factors – to assess the quality of international law scholarship and journals, or to evaluate colleagues applying for jobs or promotion. The intervening years have not changed my views much, although I confess to the occasional guilty glance at Google Scholar. Especially when I am not familiar with someone’s work, GS can be a useful, if imperfect, shortcut.

There are real differences between international law academics whose citation counts are in the dozens, in the hundreds, and in the thousands, though even within our discipline citation counts will depend on the sub-field. A legal historian, for example, is less likely to be cited than someone working on very current issues of the jus ad bellum. Similarly, some international lawyers will attract citations from other, larger disciplines, be it international relations/political science, global trade and economics, or climate science. GS can also usefully point one to a scholar’s most important works. That said, I remain convinced that our discipline is so heterogeneous and so different from the hard sciences in which these metrics were developed that they should be used with extreme caution, if at all.

I’m saying all this as a prelude to the main point of this post, just so that readers are aware of where I’m coming from. That point is to briefly discuss an article that Oona Hathaway and John Bowers have put up on SSRN this week. The article is an empirical study of international law scholarship, based on a dataset from the HeinOnline database. To quote from the abstract:

Analyzing this dataset, we arrive at a number of striking findings: Even though peer-reviewed journals publish far more articles, articles published in student-run journals (nearly all of which are based in the United States) are far more heavily cited. Globally, among the twenty-five most influential international law journals as ranked by h-index, only one is published outside the United States [EJIL], and twenty are student run. Of these twenty, fifteen do not primarily focus on international law. … Perhaps less surprising, the majority of heavily cited international law authors are male (91 of the top 100) and based in the United States (91 of the top 100).

See also this Twitter thread by Oona, summarizing some of these findings, including a list of the top 25 international law scholars (topped, perplexingly, by Cass Sunstein, and including, of all people, John ‘Torture Memos’ Yoo).

I have the greatest respect for Oona’s work. But these findings were so counterintuitive and so contrary to my own experience as an international law academic that they immediately struck me as unreliable. Reading the piece and its description of its methodology only fortified that conclusion. The core problem is that the HeinOnline dataset is heavily biased in favour of US academics and student-run journals, and a biased source dataset can only produce biased results. (Oona and John explain in the piece that they relied on Hein because they could purchase access to the data, which other databases don’t allow – but the mere fact that a dataset is large and available does not mean that it is reliable, free of bias, or worth analysing.) There are several reasons for this bias:

(1) Hein codes as international law scholarship articles that most international lawyers would not recognize as such. In particular, this covers a lot of the US constitutional law/foreign relations law work that many of the top 25 scholars on Oona’s list engage in. (That’s how Cass Sunstein or John Yoo have become international lawyers.) Obviously, this is a normative point – what counts as ‘real’ international law, etc. – which makes a rigorous empirical study difficult, but that’s exactly my point! One can’t really rely on Hein’s own coding of what counts as international law scholarship.

(2) Hein covers US student-run law reviews much more comprehensively than non-US based journals. For example, many OUP- and CUP-published journals are available on Hein, but their full-text versions are missing for the most recent 3-4 years. I imagine the reason for this is so that OUP and CUP can sell their own (current) online databases to university libraries. Some journals, like the British Yearbook of International Law, have even more reduced coverage.

(3) Crucially, in footnote 10 of their piece, Oona and John explain that the citation counts themselves are calculated by Hein, and that they ONLY include citations in the US Bluebook format, with some variations. (They repeat this point later in the study.) But because it is only US law reviews that use the Bluebook citation format, this means that Hein doesn’t count citations in all or virtually all non-US based journals. So, for example, Hein will count citations to EJIL in US law reviews, but it will not count citations in articles published in EJIL itself, because EJIL uses a citation format that Hein doesn’t capture, nor will it capture citations to EJIL in other non-US journals. (For a toy illustration of how format-specific citation matching misses non-Bluebook citations, see the sketch after this list.)

(4) The bias is further compounded by the fact that Hein doesn’t cover books or citations to books, even though obviously a lot of international law scholarship happens in books, and even though books are far more important as an outlet for scholarship outside the US than within it. Indeed, for many non-US authors their books are their most cited works.
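To make the point in (3) concrete, here is a toy Python sketch. It is emphatically not Hein’s actual matching logic – the pattern and the example strings are mine, purely for illustration – but it shows how any matcher keyed to Bluebook-style volume/abbreviation/page citations will, by construction, never see an OSCOLA-style citation to the very same article:

```python
import re

# Toy illustration only -- NOT Hein's actual matching logic.
# A crude pattern for a Bluebook-style citation: a volume number,
# an abbreviated journal name containing periods, then a page number,
# e.g. "20 Eur. J. Int'l L. 69".
BLUEBOOK_LIKE = re.compile(r"\b(\d+)\s+([A-Z][A-Za-z.'\s]*\.[A-Za-z.'\s]*)\s+(\d+)\b")

citations = [
    # Bluebook style, as a US law review would cite an EJIL article:
    "Marko Milanovic, A Norm Conflict Perspective, 20 Eur. J. Int'l L. 69 (2009)",
    # OSCOLA style, as EJIL itself would cite the same article:
    "Marko Milanovic, 'A Norm Conflict Perspective' (2009) 20 EJIL 69",
]

for c in citations:
    status = "counted" if BLUEBOOK_LIKE.search(c) else "missed"
    print(f"{status}: {c}")
```

Scale that up to an entire database and you have systematically zeroed out citations made in journals that do not use the Bluebook.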

All of this leads to a fatally biased dataset, which systematically favours American scholarship. And then there is the silly cite-itis that US student-run law reviews are prone to, which leads to an explosion of citations for their own sake, rather than because they are necessary or because the author has learned something from the cited work. Taking all of this together, it is difficult to escape the conclusion that the Hein dataset is in no way representative of international law scholarship globally. No study based on that dataset alone could therefore be genuinely representative of that scholarship. What we really have here, therefore, is not an empirical study of international law scholarship globally, but a study of citation practices of scholars primarily publishing in US law reviews, most of whom are based in the US.

The bias is even more evident if we do some comparisons to citation counts in Google Scholar. To be fair, I have no idea whatsoever what citations GS captures, and which ones it misses. All I can say is that it is much more comprehensive than Hein (which doesn’t mean it’s not biased – it’s just arguably less so). For example, as reported in the piece, Hein data gives the total citation count for Oona as 1569, but Oona’s GS profile has her at 9484. Similarly, in the Hein data, John ‘Organ Failure or Death’ Yoo has 1576 cites, but more than 10k in GS. Crucially, in GS non-US based scholars perform much better than in the Hein dataset. For example, William Schabas, the demi-god of international criminal law, has 17k cites in GS but doesn’t show up in Oona’s and John’s list at all. Hilary Charlesworth has 9k, while Anne Orford has 5k, and so on, just to mention some scholars with public GS profiles. (I leave aside here the issue of whether raw citation count is the right point of comparison, or whether it should be something like the h-index. That is, scholars can have a very small number of works that are for some reason disproportionately highly cited, and the h-index alleviates that issue somewhat.)
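For readers unfamiliar with it, the h-index is simple enough to compute by hand: it is the largest h such that the author has at least h works cited at least h times each. A minimal sketch, with made-up citation counts:

```python
def h_index(citation_counts):
    """Largest h such that at least h works have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# One blockbuster paper gives a huge raw count but a low h-index:
print(h_index([5000, 3, 2, 1]))       # -> 2
# A steadier record with far fewer total cites scores higher:
print(h_index([40, 35, 30, 25, 20]))  # -> 5
```

The first profile has over thirty times the raw citations of the second, yet the lower h-index reflects how concentrated they are in a single work.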

I would also note here that it’s possible to search GS even for those authors who do not have a public GS profile, by using a program quite adorably called Publish or Perish. Using that tool, for example, I could search for Martti Koskenniemi’s citation stats, getting 21k cites in total (of which 3k each go to his books, the Gentle Civilizer and From Apology to Utopia) – recall my point about the importance of books outside the US, citations to which are not covered in the Hein data at all. Bruno Simma has 10k, Christine Chinkin 12k, Joseph Weiler has 16k, Antonio Cassese has 22k, and so on. Yet none of these scholars are ranked in the Hein dataset.
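Publish or Perish is a desktop tool, but roughly the same kind of lookup can be scripted. Here is a sketch using the third-party scholarly Python package – this assumes that package’s current API, and Google Scholar aggressively rate-limits scrapers, so treat it as illustrative rather than production code:

```python
# pip install scholarly
from scholarly import scholarly

# Roughly what Publish or Perish does: run an ordinary Google Scholar
# query and aggregate citation counts over the results -- this works
# even for authors without a public GS profile.
results = scholarly.search_pubs('"Martti Koskenniemi"')

top, total = [], 0
for _ in range(50):  # cap the number of results; GS throttles scrapers
    try:
        pub = next(results)
    except StopIteration:
        break
    cites = pub.get("num_citations", 0)
    total += cites
    top.append((cites, pub["bib"]["title"]))

print("total citations over sampled results:", total)
for cites, title in sorted(top, reverse=True)[:5]:
    print(cites, "-", title)
```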

To conclude, I remain of the view that it is not particularly helpful for international lawyers to focus on these kinds of metrics anyway. But if we choose to do so, we can’t remain blind to the obvious biases that exist in datasets such as Hein’s, and we need to be very careful about the conclusions we draw from any analysis of such data. The idea that US academics and journals overwhelmingly dominate the field of international law is simply at odds with reality, even putting aside for a moment sustained critiques from TWAIL scholarship and the like. It is equally at odds with reality to argue that US student-run law reviews somehow produce better international law scholarship than peer-reviewed journals, or that the Columbia Law Review has a greater influence on our field than, say, the ICLQ or the Leiden JIL. The Hein dataset only confirms what we have long known: that American legal scholarship is insular and self-regarding. To be clear, I’m not saying that citation studies should never be done in our field. What I am saying is that doing them on datasets such as Hein’s is not likely to be useful.



Stewart Manley says

May 9, 2024

Thank you, Marko Milanovic. Your observations remind me why peer review, at least good peer review, can be so valuable for improving manuscripts. And perhaps also why it might not be such a good idea for law professors to submit their work to a US student-edited law journal like the Yale Journal of International Law, where this paper is forthcoming (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4817645), where there is no blind peer review and CVs are required with submissions (https://www.yjil.yale.edu/submissions/article-submissions/), and where the editorial board might include their own current or future students. But perhaps the YJIL has procedural safeguards in place to properly handle this type of submission that it doesn’t explain on its website.

James Hathaway says

May 9, 2024

Bravo Marko.
The failure to include books (counting only journal articles) is just patently ridiculous. And the authors’ complacent Amerocentrism is mind-boggling: how could anyone possibly imagine that eliminating any citation not in the US ‘Bluebook’ citation style was a valid sorting criterion, much less that Sunstein is an international lawyer (not even his own bio makes that claim)?
To call this a very, very shallow bit of work is too kind.

John Morss says

May 10, 2024

Thanks Marko for taking the time to itemise the faults of this regrettable exercise. If it is as blind to its partiality as seems to be the case, this is a textbook example of Gramscian hegemony. But Gramsci didn’t publish in the Ivy League Second Year Overreachers’ Journal of US International Law.

Dan Joyner says

May 15, 2024

Does seem like a particularly bad dataset to use.

I deal with the divide between US student-edited journals and peer-reviewed journals, as well as the lesser importance placed on books in the US, all the time.

You are right that so much of what passes for “international law” in US student-edited journals is really US constitutional and public law regarding foreign relations, not actual public international law.

The continuing use of student-edited journals by US legal academia is just an ongoing travesty, but for some reason we continue to do it. And I can tell you from serving on hiring and tenure committees that there is an absurd reliance on these journals, and on the supposed hierarchy within them, as proxies for the quality of faculty scholarship. It’s just mind-boggling, and rightly makes us the laughing stock of other academic disciplines.

I suppose I have dug my own scholarly grave by being employed at a US school but principally publishing academic monographs and in peer-reviewed journals published abroad. So hiring committees at other US schools, and my own university higher-ups, don’t really know what to make of my publication record. But no regrets.