The AI Credit Rule: Give Credit to AI the Way You Would Give Credit to a Human

Recently, Xavi Amatriain asked whether AI has a place in academic writing. In his opinion, AI can be of great help to those writing in a non-native language (usually English). He also believes everyone will soon be using these tools, so it is irrelevant whether authors acknowledge their use. [I previously wrote that he didn’t think it was necessary to ask authors to acknowledge them, but he corrected me.] It’s a little more subtle than that, of course, as AI tools can be used in many different ways. Some of these seem reasonable to allow, and others clearly don’t. So how do we decide when it is acceptable to use AI, and how should its use be acknowledged? My take is that a single rule can get us most of the way there. It is:

AI Credit Rule: credit, acknowledge, remain silent about, or otherwise treat any AI-generated text in your work exactly as you would if it had been written by another human being.

In many cases this is simple. If you asked a colleague to read over your work and their only comment was, “You used ‘it’s’ in the first paragraph on page 2 when you should have used ‘its,’” you would make the change but probably not credit them. If there were a large number of such changes, you might thank them in an acknowledgements section. They acted as an editor, but they didn’t really contribute any content to the work.

On the other hand, suppose you were writing a paper about a novel solution to a problem that relied, in part, on Voronoi diagrams. Assume a colleague provided a couple of paragraphs comparing and contrasting well-known algorithms for constructing Voronoi diagrams, and you included them in the paper. You’d definitely acknowledge their contribution, and you would probably want to name them as an additional co-author. You’d also want to verify that they didn’t just copy and paste text from Wikipedia or a computational geometry textbook. So if you asked ChatGPT to write that same background for you, why not credit it? In neither the human nor the LLM case did this section make or break the paper, but it did make the paper more complete and more readable by providing readers with relevant background.

Similarly, if you originally wrote a paper in your native language and then had a human translator render it for publication in another language, you would credit the translator. You wouldn’t call them a co-author, but you would acknowledge them as a translator, i.e. someone who chose words to approximate your original as closely as possible. So if you used an LLM-based system to do the same, why not acknowledge it just as you would a human?

We can also apply the rule from a reviewer’s or editor’s perspective. While it may sometimes be difficult to detect AI-generated prose, that is not the real problem. The problem is to detect when prose is passed off as original in cases where work written by a human other than the author(s) would either be unacceptable or require proper credit. And if there is one thing academic reviewers are good at, it is digging up prior work, or summaries thereof, that isn’t properly cited. Similarly, if AI-generated text is biased or ethically questionable, it should be called out just as human-generated text with the same issues would be.

If publishers are clear about how authors should apply the AI Credit Rule, most authors will apply it. It simply isn’t worth the risk of not doing so, particularly once applying it becomes conventional. Furthermore, if publishers are clear about which uses are acceptable, e.g. translating or correcting grammar, and which are not, e.g. writing a section of the paper from a prompt, then it will be obvious to authors what is acceptable and should be disclosed, and what is unacceptable whether disclosed or not. We are in a transitional period now, so there is confusion, but that is exactly what the AI Credit Rule is designed to get us past.

arXiv is in a uniquely difficult position since it is explicitly open access and not peer reviewed. It has done wonders for the discoverability of work that was previously spread across hundreds of individual and departmental tech-report sites, and it has been great for giving early access to work that is still making its way through the peer-review process. But maybe this fully open-access model doesn’t work as well in the age of LLMs.

Things get trickier if an author maliciously ignores the rule and claims non-trivial LLM-generated text as their own. But we could still apply a variation of the rule during review, or even after publication, by treating any unacknowledged AI text as we would treat any plagiarized or otherwise unfairly used human-generated text. The threat of forced and public retraction of work that breaks the rules is what prevents rampant misuse of human text today, so there is no reason to believe it would not prevent abuse of AI in the future.

There will always be content-farm corners of the internet where text is blatantly ripped off, remixed, republished, and monetized. Similarly, there will be abuse of AI to write garbage content for monetary or other gain. But credible publishers who adhere to the AI Credit Rule and publish clear policies about how they will interpret and enforce it will continue to attract high-quality original work. Those that don’t will lose credibility, lose citations, and cease to be attractive destinations for quality original work.

Various AI writing-assistance tools may soon be as ubiquitous as spell checkers are today. But that doesn’t mean we should either ban them from scholarly publications or abandon all hope of reining them in. The AI Credit Rule, which treats AI contributions the same way as human contributions, can go a long way toward helping us reach the right compromise.

AI Credit Rule Attestation: No LLMs were used in generating the text of this post.