There is a reason why they're called LLMs - large language models.
They don't understand anything; they're just producing the statistically most likely result based on their training data.
At least AI isn't just LLMs, so the technology isn't dead, but LLMs are just word generators and can't really reason about or understand what they're trying to say.
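For what it's worth, here's a toy sketch of that "statistically best result" idea: a tiny bigram model over a made-up corpus (nothing like a real transformer, which works on tokens with a neural net) that just greedily picks the most common next word seen in training.

```python
# Toy illustration of "statistically best result": a bigram model that
# always picks the most frequent next word from its training data.
# Hypothetical toy corpus; real LLMs are vastly more complex, but the
# pick-the-likeliest-continuation idea is the same.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    # No "understanding" anywhere: just return the statistically
    # most common continuation observed in training.
    return follows[word].most_common(1)[0][0]

word = "the"
out = [word]
for _ in range(5):
    word = next_word(word)
    out.append(word)
print(" ".join(out))  # with this toy corpus: "the cat sat on the cat"
```

It produces fluent-looking output without any model of what a cat or a mat is, which is the point being made above.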
It feels a bit like modern tarot, to be honest