‘But there is a difference between recognising AI use and proving its use. So I tried an experiment. … I received 122 paper submissions. Of those, the Trojan horse easily identified 33 AI-generated papers. I sent these stats to all the students and gave them the opportunity to admit to using AI before they were locked into failing the class. Another 14 outed themselves. In other words, nearly 39% of the submissions were at least partially written by AI.‘
Article archived: https://web.archive.org/web/20251125225915/https://www.huffingtonpost.co.uk/entry/set-trap-to-catch-students-cheating-ai_uk_691f20d1e4b00ed8a94f4c01
I think the only solution is the Cambridge exam system.
The only grade they get is at the final written exam. all other assignments and tests are formative, to see if they are on track to practice… This way it does not matter if a student cheats in those assignments, they only hurt themselves. Sorry for the final exam stress though.
Is he single tho? 🫦
Let me tell you why the Trojan horse worked. It is because students do not know what they do not know. My hidden text asked them to write the paper “from a Marxist perspective”. Since the events in the book had little to do with the later development of Marxism, I thought the resulting essay might raise a red flag with students, but it didn’t.
I had at least eight students come to my office to make their case against the allegations, but not a single one of them could explain to me what Marxism is, how it worked as an analytical lens or how it even made its way into their papers they claimed to have written. The most shocking part was that apparently, when ChatGPT read the prompt, it even directly asked if it should include Marxism, and they all said yes. As one student said to me, “I thought it sounded smart.”
Christ…
". My hidden text asked them to write the paper “from a Marxist perspective”
Freshmen.
That’s a dangerous proof.
He could have said to write from a zagnoore brandle-frujt perspective. Some would have scanned the assignment, ignored the part they didn’t understand, and kept chooching right along. Many students would rather try to figure it out than sound stupid in class or risk the spotlight of social interaction.
Interrogating each of them on the material is the only safe way.
Great story with predictable results. Welcome to your AI future where people give their thinking over to machines made by sociopaths.
I’m guessing 33 people were too lazy to copy data into a box and relied on ChatGPT OCR lol.
This was a great article about the use of AI, but I think this also exposed bad/zero effort cheating.
There’s a reason why even the ye olde Wikipedia copy-pasters would rearrange sentences to make sure they can game the plagiarism checker.
Microsoft Word had auto summarize as far back as the early 2000s (and probably before then, I only found it my junior or senior year of high school). Plop a wall of text into a word docume, click auto summarize, limit the number of direct quotes to three words max, hit enter. I was clever af (except that one paper on Gorbachev where the paragraph entirely composed of wingdings exposed my strategy a bit), or so I’d have thought.
Lo and behold my cheating somehow didn’t prepare me for regular life, and so I had to learn lessons in college (the one year that I went), and early on in my career, in my early 20s, that I should’ve learned in school. The lesson was that you sometimes need to just do the fucking work. Hopefully the kids in this experiment learned that, but more than likely they learned to cheat better.
In one of my classes, when ChatGPT was still new, I once handed out homework assignments related to programming. Multiple students handed in code that obviously came from ChatGPT (too clean a style, too general for the simple tasks that they were required to do).
Decided to bring one of the most egregious cases to class to discuss, because several people handed in something similar, so at least someone should be able to explain how the code works, right? Nobody could, so we went through it and made sense of it together. The code was also nonfunctional, so we looked at why it failed, too. I then gave them the talk about how their time in university is likely the only time in their lives when they can fully commit themselves to learning, and where each class is a once-in-a-lifetime opportunity to learn something in a way that they will never be able to experience again after they graduate (plus some stuff about fairness) and how they are depriving themselves of these opportunities by using AI in this way.
This seemed to get through, and we then established some ground rules that all students seemed to stick with throughout the rest of the class. I now have an AI policy that explains what kind of AI use I consider acceptable and unacceptable. Doesn’t solve the problem completely, but I haven’t had any really egregious cases since then. Most students listen once they understand it’s really about them and their own “becoming” professional and a more fully developed person.
which is funny because reality makes that idea complete bullshit
leadership doesn’t want professionals, it wants low paid worker drones and ‘good enough’ ai
10% of your students might go on to be skilled enough to demand a job that respects their abilities, the rest are gonna be employed by tech illiterate boomers (lord these guys don’t want to retire) and will likely be dealing with being forced to use ai
thankfully i can’t use ai in my work so it’ll be decades before it is even a concern for me directly but i have multiple friends dealing with this issue now
they are intelligent, well educated, had top grades, their boss is some nepo baby with grand ideas of being the next elon
That’s a very capitalist view of education. Some people just want to learn, and that’s the point of an education, to enable learning. You might need that piece of paper to get a job in the field you want, and the field you want might prefer a mindless worker drone, but that doesn’t mean that education should cut corners and teach to the job.
This seems pretty fair and reasonable, although, we should ask why people do this in the first place? Why is there so much pressure to get good or decent grades? If you are just going to college to get a degree and all you want to do is pass, then why go at all?
College is a broken system right now. If things continue the way they are going, people will just learn how to use AI tools and go find a job. They don’t even need to think for themselves, they can just have a computer do it for them.
Great article.
How do we teach that when a student doesn’t want to learn?
Good question. But maybe we’ve gone overboard with the density of information and we just need to relax a little and give the kids their childhood back.
It’s not the density of information. It’s the end goal of the process. Students are only given motivation to learn for a career and people have figured out that most jobs are bullshit. If they can bullshit their way though college, they can bullshit their into a career. When layoffs are done by lottery, it’s not even like the sincere students can be safe. It’s bullshit stacked on top of other bullshit.
You’re right, if there’s anything wrong with education in the US, it’s that we do too much of it 🙄
I think a fair argument could be that we have the wrong mix, the wrong emphasis.
For example, my kids history class focusing more about memorizing dates and names rather than the broader picture. We need history, but rote memorization of the trivia isn’t that helpful. More analytical perspective about connecting events to outcomes and comparing the scenarios to one another, but I suppose that’s too hard to fairly grade and so we don’t like it…
I’d just fail every kid that obviously used ai. You’re just gonna waste both of our time with this shit I may as well waste your money. Like what are you even doing here?
The point of education is to learn something. I think “using AI to generate complex ideas will prove you don’t know what you’re talking about to anyone that does” and “if your use of AI is suspicious, you’re better off blaming AI than dying on that hill defending it” are decent lessons. One can hope that shame would be a course correction to a successful education.
If I’m being generous, usually the second chance comes in primary education. The AI stuff is new enough maybe they could get one shot at reformation. Though they should plainly see it’s cheating, so I’m not sure I’m really convinced by my own argument, but I’ll at least try to put forth the devil’s advocate argument…
Its cheating, they should be treated the same as any other cheater on the course. A lot of UK unis would go as far as kicking some one off the entire degree for persistent cheating and its highly probable this was not the only paper they cheated on.
Until there are actual consequences it wont stop being so blatant. Only real way to prevent this is an actual defense of any end of module work and exams, either in person or via Zoom/Teams.
I’m one of these types of students. What do you expect when you are forced to write discussion posts every week like it’s the most important thing ever(200 words per class, I have 3). Also responding to people’s posts like you care(1-2 50 word posts per class). I understand this post is talking about a report, but it’s literally inducing chronic stress. Na keep using AI classmates.
Also, you know damn well the professor isn’t reading all of our responses so what’s the fucking point.
What do you expect
I think that you believe what you described - 700 words a week - would be “obviously absurd” to everyone here… What you described is such a tiny work load it’s shocking to see you frame it like this.
But to answer directly: What you’re expected to do in school is learn, not fake your way through so you can come out an utter dumb ass on the other side. I know I would have used the tools if I had them as a kid. No adult could have convinced me of the damage I’m doing to myself and the world. I doubt anything any of us say will change your mind - but I am sure a great many students will hate themselves for doing this to themselves 10 or 20 years down the line.
Tbf this comment alone was 85 words.
Exactly. The point is educational alignment: designing coursework to achieve learning objectives given that students have access to generative AI. That requires work from the educators and honest communication with the students about the capabilities, dangers, and moral hazards of generative AI. You can’t just pretend that word calculators don’t exist.
The point is for your own learning and your benefit I guess. My studies weren’t literacy type though, so I can’t relate to how frustrating it is. I never used ai though, I actually enjoyed my subjects so much that I wouldn’t let it take the fun a way haha
deleted by creator
He runs right up to the actual problem but side-steps it:
“A college degree is not just about a job afterwards – you have to be able to think, solve problems and apply those solutions, regardless of the field.”
Problem: With generative AI, this is the LAST thing employers want. If you’re out there working right now, particularly in tech? It’s all about “leveraging” AI to “be more efficient.” They don’t want you thinking and solving problems on your own, they want you regurgitating solutions they presume are pre-vetted by AI.
I’ve had these discussions at my own job… “But, but, Generative AI makes it so easy to make and place Facebook ads!” - Agreed, and that’s not my job. “But, but, you can analyze data and generate reports!” - Yes, also not my job.
But the push by business to use it is HUGE, and in that environment, some student using it to cheat in a history class, ultimately, will benefit more from that experience in the “real” world than probably taking the history class in the first place.
Then again, my plan is to John Henry the shit out of this until I’m dead.
For those who never took a folklore class:
https://www.americanfolklore.net/john-henry-the-steel-driving-man/
What the business wants and what they say they want are two completely different things. They still want people to be able to think and solve their own problems, even if they end up assigning the praise for your hard work on the AI you only pretended to use
This is a new pattern that I am seeing. C suite is required to launch AI initiatives because of market expectation, tech staff expected to launder their work through Claude
I think that most of all they want you to feel like AI could replace you if you misbehave.
deleted by creator
Don’t really know how to feel about this because 15 years ago, all I did was reword Wikipedia pages to make a good paper. I went to college because I was led to believe it was a requirement to do well in life. I still learned a lot, but that was mostly through the social interaction of coursework. And honestly, I don’t use anything from college in my current engineering job, it was all on-the-job panic learning. If I were to go back to college today, it would be such an enlightening experience of learning, but when you’re a kid getting out of high school, you’re just trying to get by with some gameplan that you’ve only been told about. Idk. I don’t blame them for using a tool that’s so easily accessible because college is about fun too. I guess I wouldn’t do it different at that age .
This refrain I keep hearing of “I don’t use anything I learned in college” is an INSANE take. Unless you went to some fly-by-night for-profit scam college, you learned way more than you think, even if it didn’t include some specific engineering technique. You mentioned social interaction, but critical thinking is the big one. We need to stop devaluing education-it’s critical for our future. We can’t dismiss it just because capitalist vultures are ruining it. We need to fight to make it what it should be.
Apparently you learned to learn, which I suppose is one major goal of college.
That’s a nice ideal but the reality is that this world is cruel and we’re burdening future generations with debt for their degrees and the job market sucks. If reality was different, then maybe kids could enjoy learning in college. But it’s not, so they need to make sure they are capable of being good little sheep that can do what the C suite wants otherwise they’re going to be in poverty and debt for the rest of their lives with very little safety net.
US here, in case it wasn’t obvious.
You hit the nail on the head.
The problem is the cost of education in the US.But not all of the world is such a capitalist hellscape as the US is, where people were embezzled of affordable living, healthcare and education.
That doesn’t make the concept of education a bad one. The framework in which it’s implemented is to blame and the people who created said framework.
I’m in the same boat, and for me personally no, in uni I learnt to do as minimal of a job as possible to “pass” the arbitrary goals set by uncaring world. I had to unlearn all of that very quickly when I got my first real job that I actually like. My uni broke me, for sure, and I’m lucky I fixed a little bit of that decades later.
I’m sorry to hear that.
Would you say that your experience was typical or was it especially bad for you (as in not designed for your needs) while other people were better off?
I think that rewording wikipedia is slightly better though. It still requires you to digest some of the information. Kind of like when your teacher let you create notes on a note card for the test. You have to actually read and write the information. You get tricked into learning information.
Ai, just does it for you. There’s no need to do much else, and it’s reliability is significantly worse that random wiki editors could ever be. I see little real learning with ai.
Another thing is, you often gain interest on the topic, and Wikipedia indeed has the neat little thing of articles being related to each other, so it’s very plausible to start on Chandler Bing and end on the Atlantic slave trade, for instance. With LLMs, this is much, MUCH rarer, considering whatever you find interesting must be researched manually, since LLMs are more or less useless.
39% of the submissions were at least partially written by AI
That’s better than my class. I taught CS101 last year, code not papers. 90%+ of the homework was done with AI. There was literally just 1 person who would turn in unique code. Everyone else would turn in ChatGPT code.
I adapted by making the homework worth very little of the grade and moving the bulk of the grade to in-class paper and pencil exams.
My algorithms professor does that too and it’s better than nothing but still causes problems.
For example I still have to do other classes’ homework before I can start studying.
Meanwhile cheaters can just skip homework for the other classes and focus on studying for exam.
I still appreciate this technique much better than the more popular alternative of “make exams much harder to make up for the grade inflation.” Thank you.
I think a good way to deal with this would be assignments that also (partly) prepare you for the written exam. So if you sit down and do it yourself and actually understand the assignment you already did some learning for the exam.
I had one course at university with homework assignments that were super tough. I did it all by myself but in the end I learned so much that I didn’t even need to study for the written exam and got a top grade. Others who did not do their assignments on their own had to study hard for the written exam and in most cases got way worse grades or failed.
I think a good way to deal with this would be assignments that also (partly) prepare you for the written exam. So if you sit down and do it yourself and actually understand the assignment you already did some learning for the exam.
My networking degree was built on a similar principle. A large part of the degree was configuring Cisco switches and routers, which is done through a command line interface, and you can streamline this by copy/pasting a series of commands. The first test was simply setting the hostname, a password and configuring ssh access. This was a full hour+ test, meanwhile by the end of the semester that was the first five minutes of an hour long test to configure more advanced functionality on these devices
The entire degree program was setup so that you take the configs you wrote in the first week and keep building onto them and keep copy/pasting bits and pieces of the configs all the way through graduation and into the workforce.
I could absolutely see a programming degree program taking a similar approach, where you write code snippets in the first class of your freshman year that evolve and you’re ultimately still using by the time you graduate. It forces you to know your code, gives you good code to build off of professionally and makes it really difficult to coast by with nasty hacks and/or AI doing it all for you. It also sets the perfect trap for those trying to have AI do everything for them, where they won’t know what it’s doing or why it’s breaking as the underlying snippets are constantly being changed. Especially if it’s structured with a lot of easy “plumb your existing snippets together” assignments where it’s dead simple for those actually doing the coursework but really hard for those relying on AI entirely. They’ll be forced to learn or drop out which is really important for a college experience!
Yeah I had a class once where the exams were always just slightly modified versions of some of the homework questions. So if you were confident you could understand and complete all the homework you knew you’d be fine on the exam. It actually caused some people to study more since they felt like there was a more 1:1 between study time and exam success.
Had trouble with this myself teaching. Students this semester have been good about it (probably because I’ve been very explicit in my contempt and also it kept blundering) but last semester was tricky.
One thing I learned was I need to also insist no Grammarly. That used to be allowed but it makes original writing sound very AI. I also riddled my assignments with short oral segments and personal stories.
It cuts into class time but I’ve managed to make those sessions educational since my “presentations” are always conversations w/ students. No ppts. Actually kinda fun and very much weeds out cheaters lol
Okay fine and all, but are we not going to talk about the cat?
We can talk about it if you’d like
Yes, @[email protected], let’s talk about the cat.
That’s a great cat.
I’d take a college course from that cat.
In fact I’m pretty sure that cat wrote the article.
It’s the power behind the podium.
Here’s the link to the actual article. I get that you’re trying to do users a favour to bypass tracking at the original URL, but the Internet Archive is a Free service that shouldn’t be abused for link cleaning as it costs a lot of money to store and serve all this stuff and it’s meant as an “archive”, not an ad-blocking proxy.
I’m posting this in part because currently clicking that link errors it with a “too many requests” error. Let’s try to be a little kinder to the good guys, shall we?
If users wasnt a cleaner/safer/faster browsing experience, I recommend ditching Chrome for Firefox and getting the standard set of extensions: uBlock Origin, Privacy Badger, etc.
Yeah, especially if it’s not paywalled.
It deprives the original source of traffic too, even if it’s Adblock traffic.
Fuck it. Let’s make the Internet Archive only accessible from public libraries. And you will have to physically go to a library to access it. No accessing the archive through your library’s website.
I could also be convinced to make the Internet Archive only accessible from a series of elaborate temples we build just for this purpose.
Regardless of the method, the point is that the Internet Archive still exists and serves its core purpose. It loses some convenience of scholarly access, but in turn it now becomes useless as a paywall bypass mechanism.
Any free service is bound to be exploited to the fullest possible extent. It’s the depressing fate of so many internet projects.











