
On social media, people have compared being accused of using artificial intelligence (AI) to a 21st-century witch accusation. In many ways, the parallel is apt: at school, what can you do if you are accused of using AI…when you did not use AI? If an AI detector flags your work, why would teachers believe your word over the detector’s? How do we prevent this 21st-century witch trial?

Is this digital stake burning actually happening? Yes. College students have been wrongly accused of using AI to cheat on their assignments, and in an ironic twist, these accusations are often based on the findings of another AI system. A major Australian university used AI detection tools to accuse more than 6,000 students of academic misconduct. Most of those students had done nothing wrong.

For one student, it took six months for the university to clear them of any wrongdoing. In the meantime, the university had written “results withheld” on their transcript, which may have hurt their prospects for graduate positions. Students were notified at the end of the semester, given very little time to respond or mount a defense, and then made to wait months for the university to conclude that they had done nothing wrong.

Evidence requested included handwritten and typed notes, and even internet search histories, to rule out AI use. Schools aren’t the police, and internet history is private, but who is going to argue when their academic future is on the line? Students have resorted to extreme measures, including hours-long screen recordings of their homework sessions, to preempt AI accusations. Should you end up accused, your options are limited and the consequences potentially catastrophic.

University of North Georgia student Marley Stevens was given a zero on her paper and accused of using artificial intelligence, when she had only used Grammarly to proofread the paper. Grammarly had been listed as a recommended resource on the university’s website, yet Stevens was put on academic probation and went through a misconduct and appeals process that lasted six months. The zero impacted her GPA, and she lost her scholarship as a result. 

Liberty University student Maggie Seabolt received a notification that her paper was flagged as 35% AI-written when she had typed it in one sitting in Microsoft Word, and her professor marked down 20% of her paper. She had no idea what to do or who to turn to. Students have taken to avoiding Oxford commas, em-dashes, and words like “accordingly,” “notwithstanding,” “indeed,” “vibrant,” and “innovative,” all of which have been linked to AI-generated text. As AI adapts and in many ways grows smarter, it will learn to write more and more like humans and students. AI detectors are never going to be 100% accurate. How on earth are students expected to write without being unfairly accused?

These accusations have reached the point of lawsuits. Adelphi University is facing one after accusing a student of AI-assisted plagiarism. The student, who was not allowed to be heard or defend himself when the school accused him, chose to sue. This is one of multiple lawsuits and emerging legal cases that have arisen out of this new battle between AI, students, and teachers.

Beyond the accusations themselves, this dynamic creates an atmosphere in which teachers look down on students, assuming they can no longer think critically or complete assignments without AI. The teacher’s immediate assumption is now that students use AI, not that they think for themselves. And while studies, such as one published by MIT, hint that ChatGPT may be eroding critical thinking skills, critical thinking and AI usage aren’t mutually exclusive.

Kiara Nirghin, a Stanford technologist and Gen Z entrepreneur, says Gen Z’s comfort with AI is an asset: “The biggest misconception is that young people are using AI to not think things through, [but] I think that really intelligent Gen Z individuals are using it to think even deeper.” There will certainly be times when students simply use AI as a shortcut to avoid thinking through assignments, but the real issue is that this has become a new stereotype against an entire generation.

Teachers are more likely to assume you are dumb and seeking shortcuts than to view your use of AI as smart, or as a way to study and learn better. Strangely enough, students receive all the pushback for using AI when teachers also use it, and in no discreet manner: AI-made presentations with the trademark ChatGPT bullet-point lists decorated with emojis, multiple-choice quizzes in which teachers did not even bother to remove the horizontal lines ChatGPT puts between questions, and quizzes whose wording is so obtuse it is obviously AI-generated.

In extensive roundups of educator opinions, teachers have expressed immense frustration with AI and students. They have watched students use AI for assignments and then prove unable to answer follow-up questions, relying on AI to think of responses for them. This echoes the MIT study and other research suggesting that extensive AI usage could be making us dumber through cognitive offloading.

Yet some teachers have said that while LLM usage is widespread, it is not the end of student writing. Ben Pryterch, a statistics professor at Colorado State University, moved to in-class writing assignments and found that student performance improved remarkably. Students can still write. Students can still think. We just have to learn how to use AI within academic contexts and manage it appropriately.

AI is not going away. Realistically, it will only grow more dominant and smarter. It is part of the academic environment. Students will use it to study; some may use it on assignments; some may cheat with it; others may never use it on anything they submit. Assuming that they will, and immediately presuming intent to cheat, will only worsen the deteriorating trust between professors and students.

Dan Levy, a Senior Lecturer in Public Policy at the Harvard Kennedy School, highlights that there is no such thing as “AI is good for learning” or “AI is bad for learning”: it can be used in ways that help learning or hinder it. There are ways to work with AI to advance academic goals. The challenge is that there is no uniform approach, and no clarity, in how to handle the presence of AI in academic environments.

Without clarity and transparency from teachers and universities, without clear instructions on what to do if wrongfully accused, and without rules on when and how AI may be used in assignments, many students are truly going to be sacrificed. An “I” for AI.

Cover image: gmast3r / iStock

Elektra Gea-Sereti, The Sundial Press