Fuck this site; it lost me when it got to listing “What kinds of things might they be good at?”
Just say no.
Using an LLM to summarize something is still a bad idea. The chances of it emphasizing the wrong thing, omitting the most important thing, or just outright making up “facts” remain high. LLMs Will Always Hallucinate.
We’re too late. People are copy/pasting code from AI into low-level functions that run hardware. I’m glad I can retire soon, this is absolute lunacy.
Got into a work ‘argument’ yesterday with someone from CyberSec that would not believe a tool we use could not do the thing he wanted me to have it do. I’d researched it and had direct links from the vendor, but CoPilot told him otherwise so I had to spend half an hour rehashing the same thing over and over as he adjusted his stupidass input data until CoPilot basically told him ‘whoops I lied about this.’
I’ve run into this twice now. For two different products I support, two different people sent me Claude AI slop answers where it hallucinated functionality into the product that doesn’t exist. And management still says to use AI for research, but verify its responses. What’s the point? That doesn’t save me any time. If anything, it’s wasting time.
I don’t know how these people don’t experience crippling embarrassment. I had a few people try to help me solve their issue by using ChatGPT, and of course it hallucinated options in the software, so I had to tell them that no, this does not exist. At least they apologized.
Our entire meeting was him just feeding different prompts in for stuff while I pulled up vendor pages and found the relevant info quicker and without hallucinations. There’s got to be a breaking point where people realize it’s trash, right? ……right?
Report this incident to their manager.
Their manager was using CoPilot to check the latest version of iPadOS and was also arguing with me that 18.7.1 wasn’t getting security updates anymore because CP told them only 26.0.1 was current. It’s a bottom-to-top issue on that whole side of the business right now, and it’s driving me nuts.
I disagree that LLMs are good for summarizing information. They are good at TRUNCATING information. They do not possess the necessary cognitive abilities to accurately understand and distill something down into salient points consistently and reliably.
I’m curious how it’s any different from the AutoSummarize feature Microsoft Word had at least 20 years ago, which I used to help me write papers in high school.
I don’t think the type of boss to cite ChatGPT is going to take well to this response.
Yep, employee will be fired for “disrespectful” “gross misconduct.”
Just fire them. Their bosses know that AI is just a marketing term to bring in VC investment.
The big bosses know that it’s not actually useful, and that it causes harm to the org internally.
I’m getting at least 2-3 calls a week now at work from people looking for something we don’t sell, and getting mad because ChatGPT told them we do.
I like sending pictures of the animals at the zoo to ChatGPT and having it identify the animal. It gets a lot wrong. It also says there are capybaras there when there aren’t.
I’m sure it can nail a bus or a stoplight though.
Lmao, a moment ago our friend group was talking about dreams and nightmares, and one guy just posted an AI wall of text about what dreams mean, and it reads exactly like pseudoscience. I mean him no harm, but it’s just silly, because the text from the AI sounds so authoritative and assertive, while I have to look at medical sites every time I google something and think, “is this site legit, or is it copypasta tabloid?”
I think that’s really how people operate on the internet: they don’t question a source as long as it’s on the internet.
When anyone drops an AI blob in a discussion, I ignore it and continue the discussion as if it weren’t there.
If I wanted to ask an llm, I would do it myself.
That’s what I do, but I really wish I could tell them to fuck off with that shit.
Tbh, I almost never ask or am asked for a source in person (not that I believe everything I hear, I’ll just look it up later unless it’s a specific type of social situation), but it happens a lot online. I don’t know how I would respond if someone dropped some obvious bullshit in a group chat, because it’s a different type of interaction.
That’s an awful lot of reading for people who can’t be bothered to go down a few sections of the page and get the information themselves.
It’s dead, Jim
Unfortunately my boss also writes my paycheck, so I gotta be careful
Add to this that AI is scraping the internet… the free, open internet, and many posters are employees who want to keep their jobs and are happily feeding false information onto the internet, intentionally tainting AI.
Honestly, it doesn’t even have to be intentional; there are plenty of people who are confidently incorrect.
The best use for this is i18n.
I can explain this fine in English. What I need is to explain it to the store clerk when I ask them a question in a broken foreign language and they hand me their phone, showing a regurgitated answer from AI.
I want to load this site and hand it to them. In their language.
For some reason, the domain name does not resolve with Quad9 :(
one minute, let me get ChatGPT to summarize this, I can’t be bothered to read it…
Is the site just down, or dead? It doesn’t load for me… I wanted to check whether it was only in English so I could share it with the people I know.
Works for me http://stopcitingai.com/
No idea why it still shows an error saying it can’t find the website.
Some people earlier in this thread were reporting that it was not in Quad9 DNS yet. Your DNS provider may not have it yet, although it’s been several hours since those posts…
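If you want to check whether your own resolver has picked up the record yet, here’s a minimal sketch using only the Python standard library. Note it queries whatever resolver your OS is configured with; to ask Quad9 specifically you’d need something like `dig @9.9.9.9 stopcitingai.com` or the third-party dnspython library.

```python
import socket

def resolves(hostname: str) -> bool:
    """Return True if the system's configured resolver can find an
    address record for hostname, False on NXDOMAIN or lookup failure."""
    try:
        socket.getaddrinfo(hostname, None)
        return True
    except socket.gaierror:
        return False

# '.invalid' is reserved (RFC 2606) and is guaranteed never to resolve.
print(resolves("example.invalid"))  # False
```

If this returns False for the site but a `dig` query against another resolver returns an address, your DNS provider just hasn’t served the record yet.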
Seems that was it because I can see it now.
Sadly it’s only in English, so that heavily limits who I can share it with.