"But there is a difference between recognising AI use and proving its use. So I tried an experiment. … I received 122 paper submissions. Of those, the Trojan horse easily identified 33 AI-generated papers. I sent these stats to all the students and gave them the opportunity to admit to using AI before they were locked into failing the class. Another 14 outed themselves. In other words, nearly 39% of the submissions were at least partially written by AI."
Article archived: https://web.archive.org/web/20251125225915/https://www.huffingtonpost.co.uk/entry/set-trap-to-catch-students-cheating-ai_uk_691f20d1e4b00ed8a94f4c01


I’d just fail every kid that obviously used AI. You’re just gonna waste both of our time with this shit, so I may as well waste your money. Like, what are you even doing here?
The point of education is to learn something. I think “using AI to generate complex ideas will prove you don’t know what you’re talking about to anyone who does” and “if your use of AI is suspicious, you’re better off admitting it than dying on that hill defending it” are decent lessons. One can hope that shame would be a course correction toward a successful education.
If I’m being generous, usually the second chance comes in primary education. The AI stuff is new enough maybe they could get one shot at reformation. Though they should plainly see it’s cheating, so I’m not sure I’m really convinced by my own argument, but I’ll at least try to put forth the devil’s advocate argument…
It’s cheating, and they should be treated the same as any other cheater on the course. A lot of UK unis would go as far as kicking someone off the entire degree for persistent cheating, and it’s highly probable this was not the only paper they cheated on.
Until there are actual consequences it won’t stop being so blatant. The only real way to prevent this is an actual defence of any end-of-module work and exams, either in person or via Zoom/Teams.