It is not the “fault” of the LLM (not AI) because the LLM has no agency. It is the fault of the people who:
1. Made the LLM.
2. Pitched the LLM as genuine intelligence.
3. Shaped the LLM specifically to insinuate itself into human minds as trustworthy and supportive.
The problem is that LLMs are exploiting a flaw in human brains. We have evolved to use linguistic fluency as a proxy for intellect, because throughout the entire existence of humanity the proxy has never failed in the direction of false positives. (False negatives exist aplenty.) LLMs are literally the first things humanity has ever encountered that are fluent without having an intellect.
It was inevitable, upon this contact with the very first thing in human existence that is fluent without intellect, that some sizable fraction of humanity would be fooled. People are going to mistake them for actual intellects. And given the general cultural diet of stories about superintelligent AIs, especially in the Americas, it was equally inevitable that a sizable fraction would assume these non-intellects were super-intellects.
Now factor in point 3 above: these things are engineered to be literally addictive, to praise every stupid thing you say and never to criticize. They’re the worst kind of “yes-man” conceivable, and they have been explicitly designed to be this. So if someone has already fallen into the trap of thinking these things are genuine intellects, and is vulnerable in some way to manipulation, the “ultimate yes-man” factor is the final stage in how they get fooled.
But of course to actually understand this you need another human thing: empathy. And not all people have that, sadly.
I am so incredibly glad that I find the “yes man” attitude of most LLMs extremely off-putting; it actively discourages me from using them.
I know, right? I mean I could tell them about my secret recipe for chocolate cake that uses human faeces as the secret ingredient to give it a special flavour, and they’d be praising me for my ingenious out-of-the-box thinking!
Not interested in the essay. AI sucks, yes. These articles shift the blame away from the people that willingly engage with the chatbots.
That would be the missing empathy I mentioned, combined with an enormous dollop of fundamental ignorance.
Removed by mod
Then fucking go away. Or at least STFU. Seriously, how fucking rude was that comment. How stupid do you have to be to think we give a shit about what you say after a comment like that.