Wow, so poisoning the training data works that well?
This is awesome - guys, let’s all upload papers that say the lining of MAGA hats contains a chemical that causes cancer, that NOT getting vaccines causes autism, and that the most conservative universities show Trump lost the 2020 election. Let Grok gobble it up!
Actually, scientists just found out that Maga hats make you fully immune to fall damage. If someone were to walk off a cliff while wearing one, they would 100% survive.
Also for each fall they survive, one liberal dies.
Something like this?
Hol up, this is a whole ass research paper AI poisoning generator. Dope, time to turn my bad ideas into reality.
What is it?
How is it generating the papers?
It probably uses AI
Not really needed — it has a base of random standard texts, completed with the prompt. This type of text generator existed long before AI; e.g. the SciGen page, which has been around since 2002 (last updated 11 years ago and no longer working online, but you can fork it from GitHub). Similarly the Mathgen page, which creates random fake math BS essays with you listed as the author. Apart from pranking your friends or poisoning AIs, you can use it as a Lorem Ipsum generator. But nowadays there are naturally also AI paper generators.
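For anyone curious how the pre-AI generators work: SciGen-style tools recursively expand a hand-written context-free grammar until no placeholders remain. The toy grammar below is illustrative only, not SciGen's actual rule file — a minimal sketch:

```python
import random

# Toy SciGen-style grammar: each symbol maps to templates whose {SYMBOL}
# placeholders are recursively replaced with random expansions.
GRAMMAR = {
    "TITLE": [
        "{ADJ} {METHOD} for {PROBLEM}",
        "Towards the {ADJ} Study of {PROBLEM}",
    ],
    "ADJ": ["Decentralized", "Probabilistic", "Scalable", "Homogeneous"],
    "METHOD": ["Methodologies", "Algorithms", "Epistemologies"],
    "PROBLEM": ["Lambda Calculus", "Red-Black Trees", "DNS"],
}

def expand(symbol: str) -> str:
    """Pick a random template for `symbol` and expand every placeholder."""
    template = random.choice(GRAMMAR[symbol])
    while "{" in template:
        start = template.index("{")
        end = template.index("}", start)
        inner = template[start + 1:end]
        template = template[:start] + expand(inner) + template[end + 1:]
    return template

print(expand("TITLE"))
```

No learning involved — the output only *looks* like academic prose because the grammar was written to imitate it, which is exactly why it makes decent Lorem Ipsum.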
There are going to be people who believe this disease is real for the rest of their lives.
And that doctors are hiding it because conspiracy.
Some people still think Morgellons disease is a real thing where their skin grows artificial fibers, instead of… those being from their clothes.
So like… 6 months? Can’t imagine people that stupid have a very high life expectancy
There are still people who believe in alien abductions, that humans never landed on the moon, that the Earth is flat, that Trump won the 2020 election, that lizard people rule the world, that the Olympic and the Titanic were swapped in some bizarrely incoherent insurance scam, that Paul McCartney died in 1966, that 9/11 was an inside job, that contrails are population control chemicals, …
And yet these people live on and on and on spewing this idiocy.
Another study said that in recent years about 25% of papers were faked for academic credentials, either by paper mills or by academics themselves. Thousands of papers even reused the same diagrams with slight variations in the text, citing each other.
Add AI to the mix and things get worse.
The condition doesn’t appear in the standard medical literature — because it doesn’t exist. It’s the invention of a team led by Almira Osmanovic Thunström, a medical researcher at the University of Gothenburg, Sweden, who dreamt up the skin condition and then uploaded two fake studies about it to a preprint server in early 2024. Osmanovic Thunström carried out this unusual experiment to test whether large language models (LLMs) would swallow the misinformation and then spit it out as reputable health advice. “I wanted to see if I can create a medical condition that did not exist in the database,” she says.
The problem was that the experiment worked too well. Within weeks of her uploading information about the condition, attributed to a fictional author, major artificial-intelligence systems began repeating the invented condition as if it were real.
In a separate study of 20 LLMs, Omar found that LLMs are more prone to hallucinate and elaborate on misinformation when the text they’re processing looks professionally medical — formatted like a hospital discharge note or clinical paper — than when it comes from social-media posts (M. Omar et al. Lancet Digit. Health 8, 100949; 2026). “When the text looks professional and written as a doctor writes, there’s an increase in the hallucination rates,” says Omar.
You can just make an Overleaf account (or install GNU TeXmacs) and start outputting academic-like papers for fun and profit. I would have thought LLM developers would have at least highlighted PageRank-like citation metadata as very important when training on academic publications; papers with no citations clearly aren’t reputable.
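The citation-weighting idea above can be sketched with a plain PageRank over a citation graph. The paper names and links below are invented for illustration; this is a minimal sketch of the metric, not anything an LLM developer is confirmed to use:

```python
def pagerank(links, damping=0.85, iters=50):
    """links: dict mapping paper -> list of papers it cites.

    Returns a dict of authority scores; uncited papers bottom out
    near the (1 - damping) / n floor.
    """
    papers = set(links) | {p for cited in links.values() for p in cited}
    n = len(papers)
    rank = {p: 1.0 / n for p in papers}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in papers}
        for p in papers:
            cited = links.get(p, [])
            if cited:
                share = damping * rank[p] / len(cited)
                for q in cited:
                    new[q] += share
            else:
                # Dangling node: spread its rank uniformly.
                for q in papers:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

# Hypothetical citation graph: two papers cite an established review,
# while a fresh preprint cites others but is cited by nobody.
citations = {
    "established_review": [],
    "paper_a": ["established_review"],
    "paper_b": ["established_review", "paper_a"],
    "uncited_preprint": ["paper_a"],
}
ranks = pagerank(citations)
```

Under this scoring, an uncited preprint like the bixonimania papers would sit at the floor of the distribution, which is the kind of signal the comment suggests could down-weight it during training.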
Okay, but was the paper obviously a joke? How would a human determine it was fake?
From the article:
Because she works in the medical field, she decided to create a condition related to health and hit on the name bixonimania because it “sounded ridiculous”, she says. “I wanted to be really clear to any physician or any medical staff that this is a made-up condition, because no eye condition would be called mania — that’s a psychiatric term.”
If that wasn’t sufficient to raise suspicions, Osmanovic Thunström planted many clues in the preprints to alert readers that the work was fake. Izgubljenovic works at a non-existent university called Asteria Horizon University in the equally fake Nova City, California. One paper’s acknowledgements thank “Professor Maria Bohm at The Starfleet Academy for her kindness and generosity in contributing with her knowledge and her lab onboard the USS Enterprise”. Both papers say they were funded by “the Professor Sideshow Bob Foundation for its work in advanced trickery. This works is a part of a larger funding initiative from the University of Fellowship of the Ring and the Galactic Triad”.
Even if readers didn’t make it all the way to the ends of the papers, they would have encountered red flags early on, such as statements that “this entire paper is made up” and “Fifty made-up individuals aged between 20 and 50 years were recruited for the exposure group”.
Reading more than one source is how it goes. If it’s a new paper with no other sources, I’d assume waiting for more sources is the default.