In a separate study of 20 LLMs, Omar found that LLMs are more prone to hallucinate and elaborate on misinformation when the text they’re processing looks professionally medical — formatted like a hospital discharge note or clinical paper — than when it comes from social-media posts (M. Omar et al. Lancet Digit. Health 8, 100949; 2026). “When the text looks professional and written as a doctor writes, there’s an increase in the hallucination rates,” says Omar.
You can just make an Overleaf account (or install GNU TeXmacs) and start outputting academic-looking papers for fun and profit. I would have thought LLM developers would at least have weighted PageRank-like citation metadata heavily when training on academic publications; a paper that nobody cites is hardly a reputable source.
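To make the "PageRank-like citation metadata" idea concrete, here is a minimal sketch of PageRank run over a toy citation graph via power iteration. The paper names and graph are entirely made up for illustration; the point is just that an uncited paper ends up with little more than the uniform "teleport" mass, while a heavily cited one accumulates rank.

```python
# Minimal PageRank sketch over a hypothetical citation graph.
# links maps each paper to the list of papers it cites.

def pagerank(links, d=0.85, iters=50):
    nodes = set(links) | {p for cited in links.values() for p in cited}
    n = len(nodes)
    rank = {p: 1.0 / n for p in nodes}
    for _ in range(iters):
        # every node starts each round with the (1 - d) teleport share
        new = {p: (1 - d) / n for p in nodes}
        for src, cited in links.items():
            if cited:
                # a citing paper splits its rank among the papers it cites
                share = d * rank[src] / len(cited)
                for dst in cited:
                    new[dst] += share
            else:
                # dangling node (cites nothing): spread its mass uniformly
                for p in nodes:
                    new[p] += d * rank[src] / n
        rank = new
    return rank

citations = {
    "classic_paper": [],
    "followup_a": ["classic_paper"],
    "followup_b": ["classic_paper", "followup_a"],
    "uncited_preprint": ["classic_paper"],
}
scores = pagerank(citations)
```

With this toy graph, "classic_paper" (cited by everyone) scores highest and "uncited_preprint" lowest, which is exactly the signal a training pipeline could use to down-weight citation-free papers.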