It’s not artificial intelligence. A Large Language Model is not intelligent.
And yes, yes, scientifically an LLM belongs in that category and whatnot. But what matters is what people expect.
Not to be pedantic, but the original use of the word “intelligence” in this context was “gathered, digested information.”
Unfortunately, during the VC funding rounds for this technology, “intelligence” became the “thinky meat brain” kind: a marketing term associated with personhood, and with the intense personalization that comes along with it.
I completely agree that LLMs aren’t intelligent. On the other hand, I’m not sure most of what we call intelligence in human behavior is any more intelligent than what LLMs do.
We are certainly capable of a class of intelligence that LLMs can’t even approach, but most of us aren’t using it most of the time. Even much (not all) of our boundary-pushing science is just iterating on the algorithms that made the last discoveries.
Human intelligence is analog and predicated on a complex, constantly changing, highly circumstantial manifestation of consciousness rooted in brain chemistry.
Artificial Intelligence (à la LLMs) is digital and predicated on a single massive pre-compiled graph that seeks to approximate existing media from descriptive inputs.
The difference is comparable to the gulf between a bodybuilder’s quad muscle and a piston.
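To make the “pre-compiled graph” point concrete, here is a minimal sketch of what an LLM does at inference time: a frozen set of weights repeatedly maps the tokens seen so far to a probability distribution over the next token, and one token is sampled from it. It assumes the Hugging Face transformers library and the small gpt2 checkpoint purely for illustration; nothing about the argument depends on that particular model.

    # Minimal sketch: a fixed, pre-trained graph turning a prompt into a continuation.
    # Assumes the Hugging Face `transformers` library; "gpt2" is an illustrative choice.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()  # weights are frozen at inference time; nothing in the graph changes

    prompt = "Artificial intelligence is"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        # Each step re-runs the same static computation over the tokens so far
        # and samples one token from the resulting next-token distribution.
        output = model.generate(
            **inputs,
            max_new_tokens=30,
            do_sample=True,
            top_p=0.9,
            pad_token_id=tokenizer.eos_token_id,
        )

    print(tokenizer.decode(output[0], skip_special_tokens=True))

The point of the sketch is only that the whole process is a pass (plus sampling) over fixed parameters, which is what separates it from the constantly rewiring, chemistry-driven system described above.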