Is it my imagination or are LLMs actually getting less reliable as time goes on? I mean, they were never super reliable but it seems to me like the % of garbage is on the increase. I guess that’s a combination of people figuring out how to game/troll the system, and AI companies trying to monetize their output. A perfect storm of shit.
It was inevitable: when you need to train GPT on the entirety of the internet, and the internet is increasingly made up of AI hallucinations.
That is the point. Training an LLM on the entire internet will never be reliable, apart from the huge energy waste. It's not the same as training an LLM on specific tasks in science, medicine, biology, etc.; with those, they can turn into very useful tools, as shown by results delivered in hours or minutes for investigations that would have taken years the traditional way. AI algorithms are very efficient at specific tasks, ever since the first chess computers that roasted even world champions.
Those ML models don't automate anything, though; they increase output but also increase cost. The AI bubble is about reducing costs by reducing head count.
Garbage in (text generated by other AI), garbage out (less reliable text to train on).
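That feedback loop can be sketched as a toy simulation (purely illustrative, nothing like a real training run): each "generation" trains only on the previous generation's finite output, so token diversity can never grow, only stay flat or shrink.

```python
import random

random.seed(0)  # fixed seed so the toy run is repeatable

def generate(training_words, n):
    """A 'model' here is just the words it saw; 'generation' is resampling them."""
    return [random.choice(training_words) for _ in range(n)]

# Generation 0 trains on diverse human text; every later generation trains
# only on the previous generation's output (AI text fed back into training).
corpus = list("abcdefghij")  # 10 distinct stand-in 'words'
diversity = []
for gen in range(10):
    diversity.append(len(set(corpus)))       # how many distinct tokens survive
    corpus = generate(corpus, len(corpus))   # retrain on the model's own output

print(diversity)  # non-increasing: diversity can only plateau or collapse
```

By construction, generation n+1 can only contain tokens generation n already produced, which is the "less reliable text to train on" half of the loop.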
LLMs are not smart; they have no brain. An LLM is a prediction engine. I could see an LLM being used in a real AI to form sentences or something, but I'm sure there are better ways to do it. I mean, a human brain does not hold all the knowledge of humanity to be able to process thoughts and ideas… it's a little overkill…
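To make "prediction engine" concrete, here is a toy sketch (hypothetical code, vastly simpler than a real transformer): the whole "model" is a table of which word tends to follow which, and output is just looking up the most frequent continuation.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """The entire 'model': a table counting which word follows which."""
    words = text.split()
    table = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        table[prev][nxt] += 1
    return table

def predict_next(table, word):
    """The 'prediction engine': return the most frequent next word, no understanding involved."""
    if word not in table:
        return None
    return table[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # → cat ("cat" follows "the" twice, "mat" once)
```

Real LLMs predict over tokens with learned weights instead of raw counts, but the basic shape is the same: statistics of what came before, not stored reasoning.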