I’m not a cop. I hate having to deal with this issue. But we are seeing AI-generated writing turned in by students, and this thwarts our efforts as educators trying to help our students learn. So, we’d better have some tools. One of our philosophy adjuncts, Davis Smith, has compiled a few tell-tale signs and found a few online tools (links below) for detecting AI-generated writing. Here are the contents of his email this morning:
Like me, you read a lot of student writing. This gives you a good nose for telling when something is off in the writing. In the cases of Philosophy writing which I have seen from AI, I have noticed:
- We humans have a thousand things bouncing around our skulls at any given moment. Try as we might to prevent it, this will impact our writing. There will be sudden changes in word choice and tone, notes obviously inserted after the fact, and sudden spurts of creativity. This gives our writing a degree of bounciness; it sort of jumps up and down. AI writing isn’t like that. AI writing is WAY too smooth, going from A to B with no detours and never taking the scenic route.
- There will be meaningless platitudes and very neutral language. For example, in all of the essays I assign my students, a fourth of the points depend on whether they take a stand and give reasons for their stance. I call this the PEE method (Present, Explain, and Evaluate). AI writing is really bad at the evaluation part, so it will take on an air of neutrality and not try to rock the boat. Humans aren’t like that. We have opinions and reasons for them, and I want the students to pull them out.
- For my classes, I use certain terms and phrases differently than how they would find them on the internet, such as ‘Cultural Relativism’, ‘Objectivism’, and ‘Libertarianism’. If a student is paying attention to the content, then they will understand that, for example, ‘Cultural Relativism’ (the term itself) does not describe a complete stance; they need to say what is relative to the beliefs of the culture. An AI would automatically assume that Cultural Relativism refers to Moral Cultural Relativism. For ‘Objectivism’, an AI might just assume that the paper is on Ayn Rand’s Ethical Egoism (because she called it that). And for ‘Libertarianism’, I can almost guarantee that the AI will write about the socio-economic stance rather than the metaphysical stance about free will. Fundamentally missing the mark like this on terms which are particular to a field is a sign that it’s AI.
That said, here are the three AI detectors I use, in order (if the first one flags a paper, I move to the second, and if the second flags it, I move to the third). Though, I will admit that a recent update to GPTZero makes it 99% accurate for detecting human writing, so maybe I will not need one of the others.
- https://gptzero.me/ This is GPTZero, which was the first chat-bot detector made for academia. I would really like this one included in Canvas to do an auto screening of submitted work.
- https://copyleaks.com/ai-content-detector This one is my default second check. It even gives a percent likelihood that it was AI.
- https://writer.com/ai-content-detector/ This one I am looking to replace because the character limit is far too small for my students’ papers, especially the AI-generated ones.