‘But there is a difference between recognising AI use and proving its use. So I tried an experiment. … I received 122 paper submissions. Of those, the Trojan horse easily identified 33 AI-generated papers. I sent these stats to all the students and gave them the opportunity to admit to using AI before they were locked into failing the class. Another 14 outed themselves. In other words, nearly 39% of the submissions were at least partially written by AI.’
Article archived: https://web.archive.org/web/20251125225915/https://www.huffingtonpost.co.uk/entry/set-trap-to-catch-students-cheating-ai_uk_691f20d1e4b00ed8a94f4c01


I think that rewording Wikipedia is slightly better, though. It still requires you to digest some of the information. It's kind of like when your teacher let you make notes on a note card for the test: you have to actually read and write the information, so you get tricked into learning it.
AI just does it for you. There's no need to do much else, and its reliability is significantly worse than that of random wiki editors. I see little real learning with AI.
Another thing is that you often gain interest in the topic, and Wikipedia has the neat property of articles linking to each other, so it's very plausible to start on Chandler Bing and end on the Atlantic slave trade, for instance. With LLMs, this is much, MUCH rarer, since whatever you find interesting has to be researched manually, given that LLMs are more or less useless for that.