The Human ChatGPT – The Use and Abuse of Research Assistants

Recent meetings of the Advisory Boards of EJIL and I•CON were dedicated, among other issues, to, surprise surprise, the ChatGPT challenge. In the context of law faculties and legal education, one acute problem, as a recent Editorial noted, relates to the possible use of AI by students in exams and, even more acutely, when writing seminar papers.

A different set of problems arises in the context of scholarly publications. How should we deal, we asked ourselves, with submissions to EJIL and I•CON where AI has been used by the author? Some cases are easy enough. We regularly receive submissions that were clearly written by, say, ChatGPT, the quality of which is such that even after only a cursory read they can be consigned to the dustbin. But as the technology develops (and GPT-4 is already significantly better than GPT-3.5), and as skill in using prompts intelligently improves, one can well imagine a submission where the use of AI will not be detectable and where the quality is high and would be welcome were it written by a human author.

Some took the view that learned journals are in the business of publishing high-quality scholarship. Consequently, they argued, if a submission passes the quality test, we should not be concerned by the use of ChatGPT, even in a case where the article was substantially written by AI and the human author did little more than embellish the content. Others, quite forcefully, for reasons which are both obvious and intuitive, took the opposite view. It is, it was argued, simply a different form of plagiarism.

The dilemma is further sharpened by the fact that publication in high-quality peer-reviewed journals plays an important role in a variety of career contexts – academic appointments and promotions, to state the most obvious examples. And since scholarly journals such as EJIL and I•CON receive many more quality submissions than they are able to publish, the selection of an AI-generated submission might lead to injustice and harm to others. No conclusion was reached and we are still in the process of deliberation.

Be all this as it may, in the course of deliberation one member of our Board threw a little verbal hand grenade: How is the use of ChatGPT, he asked, different from the use of research assistants? How indeed?

One difficulty in answering this question is a result of the very different traditions of using research assistants in different jurisdictions.

Here, too, there are two easy cases. For the sake of preserving anonymity, I will not use names in describing the first easy case. This one comes from Germany. A very famous German scholar, respected by all of us, at an earlier stage of their career, included in their publication list a book and a couple of articles which were published under the name of their professor alone, indicating that they were in fact the result of their work too.

The only exceptional thing about this incident is the courage of our colleague in bringing this truth out of the closet. The practice itself, of putting one’s name to an article which in large part or even entirely was written by a research assistant, and merely acknowledging, if at all, this ‘assistance’ in a footnote rather than giving full authorship or co-authorship, is, though diminishing, still quite common in Germany and elsewhere.

I can already hear across the Atlantic the shrieks of protest by my German colleagues. Ja Ja. But who are you kidding? I particularly like the defence of ‘The voice is Jacob’s voice, but the hands are the hands of Esau’ (Gen. 27:21-23). ‘Those were my ideas, he or she only put them into writing’. Another Ja Ja. This may well be true (not always, but not infrequently both the voice and the hands are those of the hapless RA) but, even if so, both names should feature as authors. Easy case.

The other easy case, in my view, is when the research assistant has done the valuable task of, say, ‘find me all the cases in which animal rights were discussed by this or that court’. Or, ‘prepare for me a bibliography of recent secondary literature on this subject’. Here, a mere thank you footnote will suffice.

The hard cases lie in that vast grey zone between those two easy cases. I cannot offer a bright-line rule. But there comes a point where the help of the research assistant moves from the technical/clerical to the actual development of ideas and formulation of text. One possible way to think of this is as follows: if the input came from a colleague, and not an RA, would the expectation be one of co-authorship? Wherever you may draw the line, grant me that a line does exist somewhere between the second easy case (where a simple thank-you note will suffice) and the first easy case, the crossing of which should result in co-authorship.

A particularly delicate case occurs in the growing field of empirical work. This, to give but one example, often involves the coding of a large number of cases (court cases or other types of ‘cases’), which can require a considerable amount of work by RAs. How and where does one draw the line? My own view is that if the principal investigator designed the research question(s) and formulated and tested the coding scheme, the ‘manual’ work of actual coding by RAs probably does not justify co-authorship. I put ‘manual’ in quotes since it is not merely manual: in coding cases, judgment and analytical prowess are indispensable if the coding is to be executed well. It might still not be the kind of creativity which rises to authorship, though it would certainly merit generous and explicit recognition in the body of the published piece. Still, this is not a hard and fast rule, and much will depend on specific circumstances such as, for example, significant revisions to the coding scheme suggested by the RA during their work. Many other examples of these types of hard cases arise, and there is no mathematical formula (today we would say algorithm) that can produce easy answers.

In this context I might also mention the opposite type of abuse, that of overreaching by research assistants themselves: namely, where any contribution that goes beyond the second easy case forms the basis of a demand for co-authorship. I sympathize with the sentiment, given the ruinous quantitative milieu imposed these days on early career academics. Claiming co-authorship would result not only in another line in one’s publication list, but also the prospect, perhaps, of appearing in prestigious fora alongside, perhaps, a prestigious senior colleague. But it can still be abuse if the contribution of the research assistant is not such as would merit true authorship. No easy solutions.

One could argue that a best practice would be to discuss ab initio with the research assistant the question of co-authorship and be as clear as possible about the prospect, positive or negative. Patti chiari, amicizia lunga, as the Italians say. Such a conversation may be useful not only in settling the issue of co-authorship but also more generally in understanding the scope of the assistance to be given.

(A nice Talmudic question concerns the order of the names in a co-authored piece. The convention is that following a strict alphabetical order indicates the equal contribution of all co-authors. By contrast, if the first name mentioned disrupts the alphabetical order, it is an indication that s/he is the principal author. It is not a fully satisfactory solution since it privileges the Zacharias of this world. But what of the Abrahams? Even if they appear first, it will be assumed that the list simply follows the alphabet. We can leave this second-order conundrum to the deliberations of the sages.)

Be this as it may, the reality of academic research is that it is not – and should not be – an industrial process. Research is not predictable. What begins with an assignment for a literature review may go nowhere because the literature review shows that everything has already been said. Or the research assistant may make such sharp observations in the literature review that one realizes that co-authorship is the way forward, not only in the sense of putting two names under the title but also in the sense of actually developing the arguments and writing together. Again, no easy answers, other than the need for flexibility and reflexivity during the ongoing research and writing process.

Other than that, the only words of wisdom I can offer are that awareness of the issue, transparency with the RAs and ultimately discernment in cutting one way or another, increase the chance of an equitable solution. Research assistants as human ChatGPTs? No, they are first and finally human.


The text below is culled from the Guiding Principles of Good Scientific Work in Public Law prepared by the German Association of Constitutional Law (Vereinigung der Deutschen Staatsrechtslehrer)


  1. The publication of another’s text [even] with their consent under one’s own name (‘ghostwriting’), with or without remuneration, is … scientifically dishonest.
  2. It is scientifically dishonest for a professor to have their employees draft texts and then publish these under their own name as a single author.
  3. Any input which makes a substantial intellectual contribution to a publication shall lead to (co-)authorship.
  4. Mere changes to wording and language do not lead to a loss of authorship by the author of the draft. Whether or not the professor can claim authorship depends on whether they have made a substantial qualitative or quantitative contribution to the draft.
  5. Only where support by scientific employees is limited to mere assistance such as research, gathering materials, footnoting and similar routine activities shall such support not lead to authorship. In these cases, it is sufficient to note thanks in a footnote.


Comments for this post are closed


John Morss says

February 3, 2024

May I express my support for decency in authorship attributions and suggest that the junior 'assistant' or 'associate' editors of the larger edited collections receive better recognition in subsequent citations than one often sees. More pointedly, I'd propose (through the chair, so to speak) that ESIL consider prohibiting the use of chatbots, e.g. for the purposes of initial drafting of a call for papers on a given topic (as has happened). Or mandate that any chatbot use must be indicated in the subject line/title/declared authorship of any piece of writing issued under the aegis of ESIL. (So this goes beyond publication in EJIL or EJIL:Talk!) Predictive text software seems regrettably inescapable, but scholars/professionals, however pushed for time, should be expected to come up with their own words (or to be "very expressly quoting" another author or clever machine).