I was visiting the site of the American Journal of International Law this morning, and this particular advertising blurb caught my eye:
The Journal ranks as the most-cited international law journal on Google Scholar. It is also considered by the nonprofit, scholarly periodical resource JSTOR to be “the premier English-language scholarly journal in its field.”
Wow, I thought: it's no longer sufficient to say that the international law academic profession as a whole regards the AJIL and EJIL as the two most prestigious journals in the field; even when we are self-promoting to our own readership, we have to refer to some kind of metric or league table. Second wow: I had no idea that Google Scholar ranked international law journals; I should really check that out. Here's the table:
| Rank | Journal | h5-index | h5-median |
|------|---------|----------|-----------|
| 1 | American Journal of International Law | 25 | 50 |
| 2 | European Journal of International Law | 23 | 43 |
| 3 | Virginia Journal of International Law | 22 | 32 |
| 4 | Common Market Law Review | 20 | 37 |
| 5 | American Journal of Comparative Law | 19 | 31 |
| 6 | Journal of International Economic Law | 19 | 26 |
| 7 | Human Rights Quarterly | 18 | 26 |
| 8 | German Law Journal | 17 | 29 |
| 9 | International and Comparative Law Quarterly | 17 | 25 |
| 10 | Vanderbilt Journal of Transnational Law | 17 | 24 |
| 11 | International Journal of Transitional Justice | 17 | 23 |
| 12 | Journal of International Criminal Justice | 16 | 25 |
| 13 | Chicago Journal of International Law | 16 | 22 |
| 14 | Human Rights Law Review | 16 | 22 |
| 15 | International Journal of Constitutional Law | 16 | 22 |
| 16 | Indiana Journal of Global Legal Studies | 15 | 25 |
| 17 | American University International Law Review | 15 | 22 |
| 18 | Fordham International Law Journal | 15 | 22 |
| 19 | European Law Review | 15 | 20 |
| 20 | Global Responsibility to Protect | 15 | 18 |
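For readers wondering what the two numbers mean: they appear to be Google Scholar's standard h5-index and h5-median, i.e. the largest number h such that h articles published in the last five years have at least h citations each, and the median citation count of those h articles. A minimal Python sketch, with invented citation counts purely for illustration:

```python
from statistics import median

def h5_index(citations):
    """Largest h such that h articles have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def h5_median(citations):
    """Median citation count of the articles in the h5 core."""
    counts = sorted(citations, reverse=True)
    h = h5_index(citations)
    return median(counts[:h]) if h else 0

# Invented citation counts for a journal's last five years of articles.
citations = [90, 70, 60, 55, 50, 40, 12, 9, 8, 3, 2, 1, 0]
print(h5_index(citations), h5_median(citations))  # 8 52.5
```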
Well, the top looks kinda okay. And at least this metric is not as manifestly arbitrary as the ranking of international law journals by impact factor, which is not only skewed towards student-edited US law reviews (which comprise the majority of the Thomson Reuters dataset), but punishes journals like the AJIL and EJIL that publish a lot of shorter pieces, since the impact factor is calculated as the average number of citations per published paper (see further Joseph Weiler's extensive critique of the application of the IF to our field in his 2012 editorial). Thus, for example, in the Washington and Lee ranking of international law journals by IF the AJIL and EJIL are in 28th and 44th place, respectively. Interestingly, some of the leaders of the IF table (the Harvard, Yale, Michigan, Berkeley, Duke and Minnesota JILs) are not even in the top 20 of the Google Scholar list, while the Indiana Journal of Global Legal Studies, which is 16th on the Google list, is 61st on the W&L IF list and 38th on the basis of raw journal cites. Go figure.
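To make the arithmetic concrete, here is a toy Python calculation (all numbers invented) showing how an average-citations-per-paper figure collapses once a journal also publishes many lightly-cited shorter pieces, even though its most-cited articles are untouched:

```python
def impact_factor(citations_per_item):
    """Crude IF analogue: average citations per published item."""
    return sum(citations_per_item) / len(citations_per_item)

full_articles = [30, 25, 20, 15, 10]   # a handful of well-cited articles
short_pieces = [2, 1, 1, 0, 0] * 4     # many notes, comments and editorials

print(impact_factor(full_articles))                 # 20.0
print(impact_factor(full_articles + short_pieces))  # 4.64
```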
That the GS ranking is not as obviously bad as the IF one doesn't, of course, mean that it is not arbitrary too. The arbitrariness of the whole metric game when applied to the scholarly field of international law is even more manifest when we realize (1) that, in comparison to other fields (especially the hard sciences), a lot of international law scholarship is done in books rather than in journals, and (2) that it is (still) being done in languages other than English, which these metrics virtually never capture. Obviously, a major problem in that regard is the systemic lack of online availability/searchability of books and journals not written in English, and their absence from major research databases. (Seriously, THIS is, in 2016, the website of the Revue Générale de Droit International Public.) Even so, should we, as academics and editors of journals, even implicitly endorse the validity of these metrics when we know for a fact that they are not an accurate representation of our profession? Should OUP really be pointing out, for example, that the EJIL has an impact factor of 0.913, that the Chinese JIL has an IF of 1.186, or that the JICJ has an IF of 0.542, when we know that the explanatory power of these numbers (down to their oh-so-precise third decimal point!) is zero?
The big issue, of course, is that the turn towards metrics comes from outside our relatively small field, especially from the hard sciences. And it is driven by even wider trends, such as the corporatization of universities, the flow of power from departments/academics to university presidents/vice-chancellors and their ever-increasing legions of senior managerial staff, as well as outside pressures from governments (e.g. through the UK REF and similar acronyms) and other funders. The turn towards metrics and rankings also has other systemic repercussions, for example with regard to staff hiring and promotion.
For example, when I applied for promotion to associate professor at Nottingham a couple of years ago, the application form told me (and still does!) that I had to provide my citation, h-index and i10-index counts – incidentally, the first time I had heard of these. I then had to set up a Google Scholar account, so that I could see how big my h-index thingy was (13, in case you're wondering… yeah, I know, right?). And then I saw that a bunch of other unfortunate people had had to do this before me; the obligatory half-hour of h-index comparison/envy naturally followed, made even dirtier by the fact that the only reason I did this in the first place was to fill in a form (enjoy).
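For anyone as new to these numbers as I was, the two indices are simple to compute from a list of per-paper citation counts: the h-index is the largest h such that h of your papers have at least h citations each, and the i10-index is just the number of papers with at least ten citations. A sketch in Python, with invented counts that happen to produce an h-index of 13:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    return max((rank for rank, cites in enumerate(counts, start=1)
                if cites >= rank), default=0)

def i10_index(citations):
    """Number of papers with at least 10 citations."""
    return sum(1 for cites in citations if cites >= 10)

# Invented per-paper citation counts.
my_papers = [45, 33, 30, 22, 18, 15, 14, 14, 13, 13, 13, 13, 13, 6, 4, 1]
print(h_index(my_papers), i10_index(my_papers))  # 13 13
```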
So, can academic lawyers somehow resist the invasion of these horrible metrics, or is resistance futile? Should we embrace Google Scholar only because it is less arbitrary than the alternatives? Or should we design a law- or international law-specific metric, and how exactly would one go about doing that? Any thoughts and comments are welcome.