Since their creation, AI generators have posed a problem for educators. With students able to generate entire essays from a short prompt and a click of a button, it’s understandable that worries about originality surfaced.
Here at Loyola, the official policy is that students should only use AI generators, like ChatGPT, if there is specific permission from an instructor and if they pertain to the course. In my experience, most classes have a zero-tolerance policy regarding the use of AI for assignments. But how do you tell the difference between work that is automatically generated and work that is written by students?
For many of my professors, the solution has been AI detectors, which may seem like a good response to academic dishonesty. After all, haven’t teachers been using plagiarism detectors for years, and haven’t those been successful? But while detectors might be an effective solution for plagiarism, for AI they are nothing more than a band-aid solution, one that simply does not work as well as many want it to.
Traditional plagiarism detectors work by comparing a student’s response against other available written material and checking whether they match word-for-word. This makes it possible for them to determine, with close to absolute certainty, whether a submission is original. AI detectors, on the other hand, work by using a language model to guess whether a response could have been automatically generated. They do so by detecting patterns of predictability: the more predictable and common the text, the more likely it was written by AI. They can also detect variation in word choice and sentence structure, so a response with a more varied and unpredictable style is more likely to be judged original.
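To make the contrast concrete, here is a minimal sketch in Python. The function names are my own hypothetical illustrations, not any real detector's code: `plagiarism_match` does the word-for-word comparison against a known source, while `predictability_score` is only a toy stand-in, counting repeated word pairs, for the language-model perplexity scoring that real AI detectors rely on.

```python
from collections import Counter

def plagiarism_match(submission: str, source: str, window: int = 8) -> bool:
    """Flag word-for-word overlap: True if any run of `window`
    consecutive words in the submission also appears in the source."""
    sub_words = submission.lower().split()
    src_text = " ".join(source.lower().split())
    for i in range(len(sub_words) - window + 1):
        if " ".join(sub_words[i:i + window]) in src_text:
            return True
    return False

def predictability_score(text: str) -> float:
    """Toy predictability measure: the fraction of word pairs (bigrams)
    that are repeats. Higher means more repetitive and predictable.
    Real detectors score predictability with a language model instead
    of raw counts, but the flaw is the same: concise, conventional
    human writing also scores as 'predictable'."""
    words = text.lower().split()
    bigrams = list(zip(words, words[1:]))
    if not bigrams:
        return 0.0
    counts = Counter(bigrams)
    repeated = sum(c - 1 for c in counts.values())
    return repeated / len(bigrams)
```

The key difference shows up in the return types: the plagiarism check gives a yes-or-no answer backed by an exact match, while the AI check only produces a score that someone must then interpret with a threshold, and that threshold is where false positives come from.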
However, these detectors can’t know with 100% accuracy whether something was written by AI, because every model has limitations. A multitude of factors can cause false positives or false negatives.
Even a response written entirely by a human will contain some level of predictability, even more so when written within an academic context. Detectors will unfairly punish students who write in a more concise, predictable style for this reason. And the shorter the response, the harder it is to accurately analyze its originality, making it pointless to attempt to detect responses under 1,000 words.
Additionally, since the models behind most AI services are trained largely on English text, anything written by someone with a different first language is more likely to be flagged as AI-generated. And even something as small as checking your grammar or swapping in synonyms can be enough to set off certain detectors.
With this much potential for detectors to return false results, it’s a wonder they were trusted by so many in the first place. Even big names like OpenAI have discontinued their detection services due to repeated inaccuracies.
Not only do these detectors fail to work much of the time, but they can also lead to larger problems. When professors are focused on making sure every assignment turned in is original, a student whose response is flagged as AI, even falsely by a faulty detector, is treated as guilty until proven innocent. Even if students prove their work is original, the extra stress and burden these policies put on them isn’t worth it. When students and teachers can’t trust each other, no one is able to learn.
These detectors aren’t solving a problem; they’re only contributing to a negative learning environment.
AI isn’t going to stop being a problem. But AI detectors are far from the perfect solution. Surely, we can find different ways to approach the problem that allow teachers to work together with students rather than needlessly punishing them.
Other solutions could include having students present their work in a medium that’s hard to generate, doing work in class, grading based on discussion rather than essays, breaking assignments into sections so students can show each step of their work, or even incorporating AI into the course where it suits.
These solutions might not work for every class, but once we let go of the false notion that AI detectors are useful, we can keep experimenting and coming up with new ideas to tackle the problem of AI in higher education.
OPINION: AI detectors are NOT the solution
April 11, 2024