False Accusations of AI Cheating: What's an Educator to Do?

Ian Hartley | June 08, 2023


How many students accused of using AI are actually innocent? Today's 'detector' tools give instructors only a 'likelihood' that a student used AI to complete an assignment, so educators cannot know with certainty whether AI was really used. Rolling Stone recently published an article about this phenomenon, titled "She Was Falsely Accused of Cheating with AI - And She Won't Be the Last", in which a UC Davis student was accused of using AI tools to write her essay despite being completely innocent. Her case demonstrates the peril of relying on 'AI detector' tools that offer no certainty that a student has actually cheated, and therefore cannot be the basis for an academic determination. Meanwhile, just as some students are being falsely accused, many more are getting away with using AI to cheat on assignments, not least because students can run their work through the same detector tools as their educators and tweak an essay until it appears 'likely human written'.

The ethical implications of using AI detectors are considerable. The only reason the falsely accused student was able to clear her name was that she knew how to use Google Docs to prove she had written the essay herself, without AI. While this forensic Google Docs method was enough to prove her innocence in this case, it is not something instructors can apply conveniently or at scale. Had she not been intimately familiar with technology, for instance if she were younger or came from a socioeconomic background with less exposure to technology from a young age, she likely would not have been able to prove her innocence, and in an extreme case might have been expelled from college altogether. Given the severity of a plagiarism charge and the seriousness of the academic judgments that can follow, AI 'detectors' simply cannot be trusted to make determinations that can materially alter the course of a young person's entire life.

Even if AI detectors improve in accuracy, they can never offer certainty that AI was used to write an essay. When students also tweak their essays with other AI tools, or edit them by hand after using AI so the text reads as human-written, the potential efficacy of these detectors drops even further. Ultimately, once the words are on the page, there is no reliable way to know whether they were written by AI or by a human author.

This is why Authoriginal takes a 'prevention' approach rather than a 'detection' approach: it creates a secure Google Doc that flags the use of any disallowed tools or apps while the essay is being written, without affecting the student's writing experience. In doing so, Authoriginal lets instructors set their own AI policy, whether that means allowing ChatGPT while blocking other AI tools, or preventing the use of AI on assignments altogether. If you're interested in learning more about Authoriginal, please reach out for a demo at sales@authoriginal.com. We look forward to hearing from you!
