Horrible Metrics

Written by Marko Milanovic

I was visiting the site of the American Journal of International Law this morning, and this particular advertising blurb caught my eye:

The Journal ranks as the most-cited international law journal on Google Scholar. It is also considered by the nonprofit, scholarly periodical resource JSTOR to be “the premier English-language scholarly journal in its field.”

Wow, I thought – it's no longer sufficient to say that the international law academic profession as a whole regards the AJIL and EJIL as the two most prestigious journals in the field; even when we are self-promoting to our own readership, we have to refer to some kind of metric or league table. Second wow: I had no idea that Google Scholar ranked international law journals; I should really check that out. Here's the table:

Publication h5-index h5-median
1. American Journal of International Law 25 50
2. European Journal of International Law 23 43
3. Virginia Journal of International Law 22 32
4. Common Market Law Review 20 37
5. American Journal of Comparative Law 19 31
6. Journal of International Economic Law 19 26
7. Human Rights Quarterly 18 26
8. German Law Journal 17 29
9. International and Comparative Law Quarterly 17 25
10. Vanderbilt Journal of Transnational Law 17 24
11. International Journal of Transitional Justice 17 23
12. Journal of International Criminal Justice 16 25
13. Chicago Journal of International Law 16 22
14. Human Rights Law Review 16 22
15. International Journal of Constitutional Law 16 22
16. Indiana Journal of Global Legal Studies 15 25
17. American University International Law Review 15 22
18. Fordham International Law Journal 15 22
19. European Law Review 15 20
20. Global Responsibility to Protect 15 18
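
For readers who, like me until this morning, had never looked at these numbers: Google Scholar defines the h5-index as the largest number h such that h articles published in the last five complete years have at least h citations each, and the h5-median as the median citation count of the articles making up that h5 core. A minimal sketch of the arithmetic, with invented citation counts for a hypothetical journal:

```python
from statistics import median

def h_index(citations):
    """Largest h such that h articles have at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
    return h

def h5_metrics(citations_last_five_years):
    """h5-index and h5-median for one journal's articles from
    the last five complete years."""
    ranked = sorted(citations_last_five_years, reverse=True)
    h5 = h_index(ranked)
    core = ranked[:h5]  # the h5 "core" articles
    return h5, (median(core) if core else 0)

# Invented citation counts for a hypothetical journal's recent articles:
print(h5_metrics([90, 60, 50, 30, 12, 9, 4, 4, 1, 0]))  # -> (6, 40.0)
```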

Well, the top looks kinda okay. And at least this metric is not as manifestly arbitrary as the ranking of international law journals by impact factor, which is not only skewed towards student-edited US law reviews (which comprise the majority of the Thomson Reuters dataset), but punishes journals like the AJIL and EJIL that publish a lot of shorter pieces, since the impact factor is calculated as the average number of citations per published paper (see further Joseph Weiler's extensive critique of the application of the IF to our field in his 2012 editorial). Thus, for example, in the Washington and Lee ranking of international law journals by IF the AJIL and EJIL sit in 28th and 44th place, respectively. Interestingly, some of the leaders of the IF table (the Harvard, Yale, Michigan, Berkeley, Duke and Minnesota JILs) are not even in the top 20 of the Google Scholar list, while the Indiana Journal of Global Legal Studies, which is 16th on the Google list, is 61st on the W&L IF list and 38th on the basis of raw journal cites. Go figure.
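
To make the arithmetic concrete: the standard two-year impact factor divides the citations a journal receives in a given year (to items from the previous two years) by the number of citable items it published in those two years. A back-of-the-envelope sketch with invented numbers shows how a journal that also publishes many shorter, rarely cited pieces gets dragged down:

```python
def impact_factor(citations_received, citable_items_published):
    """Two-year IF: citations this year to items from the previous two
    years, divided by the citable items published in those two years."""
    return citations_received / citable_items_published

# Two hypothetical journals earning the same 200 citations:
print(impact_factor(200, 50))   # long articles only       -> 4.0
print(impact_factor(200, 160))  # plus many shorter pieces -> 1.25
```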

That the GS ranking is not as obviously bad as the IF one doesn't, of course, mean that it isn't arbitrary too. The arbitrariness of the whole metric game when applied to the scholarly field of international law is even more manifest when we realize that (1) in comparison to other fields (especially the hard sciences) a lot of international law scholarship is done in books rather than in journals, and that (2) it is (still) done in languages other than English, which these metrics virtually never capture. Obviously, a major problem in that regard is the systemic lack of online availability/searchability of books and journals not written in English, and their lack of availability in major research databases. (Seriously, THIS is, in 2016, the website of the Revue Générale de Droit International Public.) Even so, should we, as academics and editors of journals, even implicitly endorse the validity of these metrics when we know for a fact that they are not an accurate representation of our profession? Should OUP really be pointing out, for example, that the EJIL has an impact factor of 0.913, the Chinese JIL has an IF of 1.186, or that the JICJ has an IF of 0.542, when we know that the explanatory power of these numbers (up to their oh-so-precise third decimal point!) is zero?

The big issue, of course, is that the turn towards metrics comes from outside our relatively small field, especially from the hard sciences. And it is driven by even wider trends, such as the corporatization of universities, the flow of power from departments/academics to university presidents/vice-chancellors and their ever-increasing legions of senior managerial staff, as well as outside pressures from governments (e.g. through the UK REF and similar acronyms) and other funders. The turn towards metrics and rankings also has other systemic repercussions, for example with regard to staff hiring and promotion.

For example, when I applied for promotion to associate professor at Nottingham a couple of years ago, the application form told me (and still does!) that I had to provide my citation, h-index and i10-index counts – incidentally, the first time I had heard of these. I then had to set up a Google Scholar account, so that I could see how big my h-index thingy was (13, in case you're wondering… yeah, I know, right?). And then I saw that a bunch of other unfortunate people had had to do this before me; the obligatory half-hour of h-index comparison/envy naturally followed, made even dirtier by the fact that the only reason I was doing this in the first place was to fill in a form (enjoy).
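
For fellow sufferers filling in the same form: the h-index is the largest number h such that h of one's papers have at least h citations each (the same rule as the h5-index above, but over all years rather than the last five), and the i10-index simply counts papers with at least ten citations. A minimal sketch, with made-up citation counts:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    return sum(1 for rank, cites
               in enumerate(sorted(citations, reverse=True), start=1)
               if cites >= rank)

def i10_index(citations):
    """Number of papers with at least 10 citations."""
    return sum(1 for cites in citations if cites >= 10)

# Made-up citation counts for one author's papers:
papers = [120, 45, 33, 20, 15, 14, 13, 12, 11, 10, 9, 8, 2, 1, 0]
print(h_index(papers), i10_index(papers))  # -> 10 10
```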

So, can academic lawyers somehow resist the invasion of horrible metrics, or is resistance futile? Should we embrace Google Scholar only because it is less arbitrary than the alternatives? Or should we design a law- or international law-specific metric, and how exactly would one go about doing that? Any thoughts and comments welcome.


Comments

Jordan says

August 24, 2016

Regardless, in the US most of the top international law articles are published in other journals, and have been for many years. Look at your chart, consider other ranking processes, consider which articles published in which journals are most often cited, etc. One can google "top 40 international law reviews" to pull up some of the lists.

Jordan says

August 24, 2016

p.s. the chart in your post is noticeably inconsistent with other lists of US int'l law journals.

Jordan says

August 24, 2016

Readers might check:
Lawlib.wlu.edu
Then scroll down in All Subjects to international law.
Then in Language, choose English.
Then check specialized; and check print.
On the right side, for Year 2015, check combined score.
Then hit Submit.
AJIL is high, but note the others above and just below.

Jordan says

August 24, 2016

p.s. the Washington & Lee list (above) has:
Harvard
Virginia
Yale
AJIL
Chicago
Vanderbilt
Michigan
EJIL
Etc.

Jordan says

August 24, 2016

p.s.p.s. there are several forms of measurement. I don't know why impact factor is supposedly irrational. If others cite one article frequently and another rarely, the rarely cited article may be "better" using some other criteria, but it is also relevant that the other is frequently cited. Brian Leiter's lists of top professors in international law in the US use one criterion: citations to her/his articles in articles by others. Sure it's incomplete, and sure it seems suspicious when one of the "top" persons has a lot of "but see" cites, but this imperfect measurement of impact is relevant.
And what about another interesting occurrence: the frequency of downloads of articles from SSRN?
It is interesting to note which articles on SSRN "take off" in downloads, like over 5,000 or over 3,000 -- regardless of where they are published.
To European scholars: if you are not "on" SSRN, you might consider putting a PDF copy of your article on SSRN. See http://ssrn.com

Kai Ambos says

August 24, 2016

Marko, thanks for this great and, sadly enough, necessary blog. All these hard-science and market-driven assessment instruments only lead to a situation where nobody reads anymore but just looks at the statistics. I remember a selection meeting of a mixed, interdisciplinary committee of the Alexander von Humboldt Foundation where my science colleagues were only interested in whether the candidates' papers had been included in the Web of Science or not. If you asked them whether they had actually read the papers, they said that this was not necessary, at least for the ones not in the Web of Science. Needless to say, some of the science colleagues do not understand why we do not have such an index (and why we still write books, papers in books, and many of us in languages other than English). Or take the practice of indexation in many parts of the planet, e.g. Latin America, driven by global publishing companies (Marko mentioned one), forcing young colleagues to publish in indexed journals as if indexation were a guarantee of quality (in fact it is not about quality but about publishing in an indexed journal).
At the same time, academics in research universities lose more and more ground to an ever-increasing assessment bureaucracy – I just talked today to a Cambridge colleague about the infamous RAE (Research Assessment Exercise)... One wonders how our ingenious forefathers (Kant, Locke, Rousseau and the like, but also Newton, Gauss, etc.) could do without all this academic bureaucracy. Why not read them and think about their impact instead of looking at all these statistics?

Margot Salomon says

August 25, 2016

Marko, many thanks for raising this issue. Another key element in the bias of citation metrics is their gendered nature. As repeated studies have shown, men are considered more authoritative than women and as such are cited more frequently. Findings have also shown that men are more likely than women to cite themselves.

Here is a link to a taster of some findings:

http://blogs.lse.ac.uk/impactofsocialsciences/2016/03/08/gender-bias-in-academe-an-annotated-bibliography/

'You cannot simply count “outputs” in making an evaluation of someone’s worth and reputation if there is a “biased filter” at the first stage of evaluation, prejudicing judgment at the outset.'

David Scott Lewis says

August 25, 2016

Seems like the EJIL crowd is crying because AJIL is a more highly cited journal. Come on, grow up. Stop acting like a child. They're both great journals. The two best. But, fact is, AJIL is better.

Besides, the comments that have been made demonstrate a limited understanding of bibliometrics. For one thing, different databases have different metrics. Impact factor is not created equal across databases, since each database covers different sources. Also, one database may be concerned only with the impact of a paper within the legal community, whereas another may be looking at the broader impact of a paper outside the legal community, further divided by the reach and scholarship of the database. (I've just described three of the largest commercial databases.)

SSRN. Love it. Awful for metrics, though. First, no peer review. Second, no realistic limits as to who can upload what. Third, it's still rather limited relative to published journal articles. Free is good -- and most papers hosted on SSRN can be downloaded for free. And it integrates well with the likes of ReadCube & Co. But it's way too hit-and-miss.

Google Scholar. Indiana Journal of Global Legal Studies. Fordham International Law Journal. Global Responsibility to Protect. You've got to be kidding. Top 20? Maybe not even top 100!

Finally, Marko, where have you been? The GS rankings have been around for a while. This leads me to believe that you really don't understand very much about bibliometrics. Can you really describe the differences between the databases and their coverage? Impact versus immediacy? Usage counts? (And each has its own utility for different purposes.) My guess is that you can't. Hence, you should be careful about making comments about something that you know so little about. There are over a dozen databases with extensive coverage of scholarly legal papers, yet you mentioned a half-dozen (or fewer). What about the others? Again, don't comment on something that you evidently know so little about. And don't cry because EJIL wins the "Silver" to AJIL's "Gold". "Silver" for our "sport" (international law) isn't so bad, especially when there are hundreds of legal journals and law reviews in total.

Marko Milanovic says

August 25, 2016

Many thanks to everyone for their comments.

Kai, I can also add anecdotally that in Serbia it is a requirement for professorial promotion that candidates publish in journals on the Thomson Reuters social sciences list (http://science.thomsonreuters.com/cgi-bin/jrnlst/jlresults.cgi), which has its own level of arbitrariness – for example, even among US student law reviews it weirdly includes some but not others without any apparent reason (e.g. the specialist Northwestern int'l law journal is included, but not the more highly regarded Michigan or Yale ones, etc.). So, for example, publishing a book or book chapter with OUP or CUP wouldn't count.

Margot, fully agreed on the gender bias point.

David, can I first ask you to mind your tone? There is no need to be impolite (e.g. telling your interlocutors to "grow up, stop acting like a child"), and we certainly have low tolerance for that kind of behaviour on this blog. Please be civil.

Second, I feel you have completely misunderstood the point of my post. It was certainly not to "cry" because EJIL is ranked lower than the AJIL on some list. I never said anything of the sort. Indeed, I'm happy to concede that AJIL should be ranked higher. My post was about (1) the wholly arbitrary nature of these rankings (including the GS one) as they are applied to the field of international law, (2) their increasing use for inappropriate purposes, e.g. academic hiring and promotion, and (3) their relationship to other pernicious trends, such as the increasing bureaucratization and corporatization of academia.

Finally, I am also happy to concede that you know more about bibliometrics than I do, and that I know little. What little I know I was forced to learn, as explained in the post. I honestly didn't see the GS rankings of int'l law journals until yesterday, which is what provoked my post. But I would note that you, with all of your apparent bibliometric knowledge, didn't point to any specific mistake I made in my post. Nor did you tell us what exact database of scholarly legal papers can provide us with non-arbitrary (or at least significantly less arbitrary) metrics.

Rossana Deplano says

August 26, 2016

Marko, thank you for your post. From an insider's perspective, it is actually very useful for understanding the current requirements for promotion. Getting published in one journal or another does make a difference today – for whatever reason.

Margot, thank you for stressing the impact of gender bias.

P.S. David, you totally missed the point of the discussion. The critique was about the compulsory use of metrics for the purpose of academic promotions: as explained in the post, academics do not write papers and submit to journals with the type of impact reflected in bibliometrics in mind.
(Having said that, the author of the post has published in AJIL, EJIL, Harvard, with OUP, etc. – he did not even mention that in the post because he does not need to do so. He has already secured plenty of gold medals for academic excellence and the trend is upwards. But again, that was not the topic of the post...)

David Scott Lewis says

August 28, 2016

Marko, my tone was meant to be tongue-in-cheek. No offense intended.

Actually, I do support the use of metrics for the purpose of academic promotions, so I'm on a different page from the others who disagree with this position. How schools use metrics is up to each school and should be tailored accordingly. For example, what HYS or a T14 should expect isn't the same as what a barely functioning law school should expect. (Sadly, there are way too many barely functioning law schools. There are simply too many law schools, schools that are nothing more than degree mills. My 2 cents.)

Rossana stated that "academics do not write papers and submit to journals with the type of impact reflected in bibliometrics in mind." Well, maybe they should. I think they should. I'd go even further and look at reach. Does a paper live and die only within the legal academy, or does it have broader reach? This matters to me. To me, a paper written for one's first-degree connections on LinkedIn is too limiting. (Of course, some people have broad first-degree connections. But let me run with this.) To me, if someone on the staff of (or an advisor to) the National Security Council or United Nations Security Council writes a paper that cites a legal paper, the cited paper probably has a lot more cred than the paper that will only be found within legal databases and the legal academy.

IMO, the best legal scholarship will win the metrics game (yes, it is a game, but so is life in general), and will have the broadest impact, i.e., impact beyond (way beyond) the legal academy.