A loyal reader recently sent me the following email:
Just a quick note to let you know that EJIL and I.CON get the first and third position respectively in the general ranking of NON US Law Journals elaborated by Washington and Lee University School of Law (sections non US law journals) http://lawlib.wlu.edu/LJ/index.aspx Congratulations!
The only reason I was happy to learn this exciting news was that no one would be able to dismiss what I am about to write as prompted by ‘sour grapes’.
But let us backtrack a bit. I invite you to visit this Washington and Lee University School of Law website. It requires some getting used to, especially in setting the search parameters. Experiment a bit (after you read this Editorial!). In its own way it is admirable and provides an important tool for legal academics. Its purpose is simple enough: when an author has to choose in which journal to publish his or her article, is there a way of making that choice based not on an impression of prestige or importance but on some hard data on readership, citations, impact (whatever that may mean) and the like? This meticulously constructed database (not the most user-friendly, but it should not be a challenge to smart law professors and the like) tries to help in this worthy endeavour. In the USA, where most, though not all, law journals are edited by students and associated with a law school, the typical choice used to be based on the ‘ranking’ of the law school with which the journal is associated. The Washington and Lee database instead tracks impact through citations and shows the law school ranking (itself a problematic notion) to be a crude and approximate measure. Especially when it comes to specialized, rather than ‘general’, law journals, the law school ranking is a bad proxy for readership and influence.
As with credit rating agencies, more than one outfit tries to provide this service. The Washington and Lee database is interesting because, aware of the problematic nature of establishing criteria for influence, it allows the user to vary the parameters according to which the tables of influence will be generated. The overall methodology seems to be the same: an electronic database of legal journals is selected and then citations to articles are computed. Simply counting citations might, however, skew the impression of a journal’s influence. A journal might, for example, have one or two highly cited articles while almost everything else it publishes is hardly ever cited, and yet those one or two star pieces could inflate its overall influence ranking compared to others.
Though I have somewhat simplified, this is how the famous Impact Factor (IF) came into being: it takes the overall number of citations but divides it by the number of articles published, so that the ‘one hit wonder’ phenomenon does not inordinately skew the impression of the journal’s influence as a whole; one has a kind of average. But you can see the difficulty here: a journal with a small number of long articles will fare better than a journal with a larger number of shorter articles, even if their overall citations are the same. Though these things are hotly contested, to many this appears to give a misleading picture in and of itself: a yearbook will structurally tend to generate a better IF than a quarterly.
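The arithmetic behind the yearbook-versus-quarterly distortion can be made concrete with a toy computation. This is purely illustrative: the real citation databases use more elaborate time windows and weightings, and the figures below are invented.

```python
def impact_factor(total_citations: int, articles_published: int) -> float:
    """Naive IF: average citations per article published."""
    return total_citations / articles_published

# Two hypothetical journals attracting the same 120 citations overall:
yearbook_if = impact_factor(120, 15)   # few, long articles
quarterly_if = impact_factor(120, 60)  # many, shorter articles

# The yearbook's IF (8.0) is four times the quarterly's (2.0),
# despite identical total influence by raw citation count.
```

The same total influence thus yields very different IF scores, which is exactly the structural bias described above.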
The IF psychosis has become such that I have had friends of EJIL warn me that my tendency to publish short reaction pieces, Debates and the like will be detrimental to our IF. The poem on the Last Page? No, it does not get cited, but it brings down our IF. Roaming Charges? I had a large number of positive reactions to the recent photograph ‘Places of Entry – Tel Aviv Airport’, but it, too, will lower our IF. It does not get cited, after all; it just gives aesthetic and intellectual pleasure.
Washington and Lee have spent considerable time thinking through these difficult methodological issues. They have an admirable explanation of the way the different parameters are used – though I advise you to have a bottle of aspirin handy as you read through it. They have developed what they call a Combined Factor (CF) – which, as its name indicates, balances raw citations with the IF to try to give a realistic measure of influence. They are modest enough to allow users of their database to vary the parameters which determine the CF should such users not agree with the database designers’ choices.
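In principle, such a combined score is a user-weighted blend of the two measures. Washington and Lee’s actual formula is not reproduced here; the sketch below, including its default weights and its assumption that both inputs arrive pre-normalized to a common 0–1 scale, is my own illustration of the idea.

```python
def combined_score(citations: float, impact_factor: float,
                   w_citations: float = 0.5, w_if: float = 0.5) -> float:
    """Blend raw-citation strength with the IF using user-chosen weights.

    Both inputs are assumed already normalized to a 0-1 scale.
    """
    return w_citations * citations + w_if * impact_factor

# A journal strong on raw citations (0.8) but weaker on IF (0.4):
balanced = combined_score(0.8, 0.4)
# A user who distrusts raw counts can shift the weights toward the IF:
if_heavy = combined_score(0.8, 0.4, w_citations=0.2, w_if=0.8)
```

Varying `w_citations` and `w_if` is the analogue of the database letting users disagree with its designers’ choices: the ranking that comes out depends on the weights that go in.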
Be that as it may, if you go to the Washington and Lee database and set ‘Non US Law Journals’ among your parameters you will see that EJIL – reaction pieces, Debates, The Last Page and Roaming Charges notwithstanding – scores very high in absolute citations and in IF, and has been the number one non-USA journal in their CF for some years. I.CON, our sister publication, is number three. As I said, no sour grapes.
So what, then, is my gripe? It is a little like the classic Jewish joke used in the title, from the Borscht Belt (the Catskills in Upstate New York): two Yiddisher Mammas are overheard kvetching about the lunch they have just finished: ‘The food was bad, and what’s more there was not enough of it….’
The congratulatory email from our loyal reader which opened this Editorial is not atypical. Our distinguished publishers, Oxford University Press, somewhat shyly, track the IF and make all ‘their’ Editors aware of their vicissitudes in the IF tables. A colleague and friend, Managing Editor of a distinguished European law journal, sent an excited email to all of us on the Advisory Board of that journal with news of a high IF score. My inbox filled with congratulatory notes from no less excited colleagues on the Board responding to the good news.
These indicators, like television ratings, are increasingly shaping the journal publishing world. There is merit to this, a dose of realism perhaps. But there is also a danger: television ratings have not always been conducive to quality television. I have a pretty shrewd idea which articles, in terms of author and subject matter, will generate more attention, more downloads and consequently more citations. Should I become inordinately concerned with our IF, will this, consciously or subconsciously, not create its very own, rather pernicious, impact factor on editorial decisions? Will it not militate against the theoretically difficult, the esoteric subject matter, the new and unknown author?
One could argue that the very experience and record of EJIL, where we do our best to base our publication decisions on merit, intrinsic quality, and subject matter importance as set by our Editorial Board and the Editor, dispel this concern. After all, we make our own, often idiosyncratic choices (grant me that), and yet we seem to be doing well enough by the measurements of some of these academic rating agencies. At the risk of sounding arrogant, even if that is so, it may be a privilege of EJIL, whose prestige guarantees a certain interest in what it publishes. But if publishers and advisory boards and indeed the field as a whole fall captive to the IF trap, will this not risk that pernicious effect on newer and less centrally located journals?
So much for the ‘Bad Food’ part of my kvetch.
The ‘Not Enough of It’ is easily stated and is a matter of considerable chagrin. All the IF databases of which I am aware are hopelessly skewed towards American legal publishing. They are dominated by the hundreds of student-edited American law journals and by the habits of reading and scholarship of American academia. For EJIL this is a matter of particular pique. These databases not only exclude most international law journals from non-English speaking countries, which is where a huge part of our readership and authorship come from, but they exclude all but a few handfuls of non-American English-language international law journals. In other words, the current generators of IF essentially measure the number of times EJIL articles are cited in mostly American law journals. I would bet that our total number of citations in non-American law journals far exceeds our citations in American law journals. I would be disappointed if this were not the case. And I would bet that the gap between our overall citation count, were all these other journals included, and the count generated by the American-dominated databases would be greater than for most American journals, which are mostly cited within the USA.
That is why, although I accept in principle that some form of objective, quantifiable measurement of influence has its uses, I refuse to accept the current American-dominated academic rating agencies as a valid measure for European journals such as EJIL, and I urge the same attitude of disdain upon my fellow European and, indeed, non-American law journal editors.
I also want to register my chagrin that OUP, CUP, Kluwer and their fellow European legal publishers do not get together to produce a database which could more accurately measure the influence of the journals they publish. It is simply convenient (and cheap) for them to rely on the American-dominated ratings and then complain when none of their journals appears even in the top 10 of the overall (as distinct from non-US) rankings.
Have we not seen this somewhere else?