People say the best way to see this is to ask an AI about a subject you're an expert in.
This isn't always possible; I've had people tell me "but I'm not an expert at anything." Another way is to ask it about yourself. For example, if you have a Reddit account with some age on it, that works: Google has a deal with Reddit that feeds Gemini everything that's posted there. The first response might even look good, but keep the conversation going (it gets more and more ridiculous), don't try to correct it, and you'll see it just making shit up.
Since they're feeding it everything, Lemmy might also work.
I've seen very mixed results depending on which model I'm using. The newer ones, since about November of 2025, have been getting significantly better - but some of the "free class" tools are still running older models today.
Free Gemini gave me ridiculously bad advice about how to get through a traffic jam today. Free Gemini also drew the crudest sketch imaginable for a prompt; the same prompt fed to ChatGPT yielded a really nice-quality cartoon panel of basically everything in the prompt, with some nice, appropriate embellishments.
I've become rather disillusioned with Gemini's use of search tools lately. It's odd given that it's a Google model; you'd think Google would be at the top of the search engine game. But honestly, Deepseek's been my go-to lately when I want an answer that's likely to be synthesized from a lot of web searches. I've had it search over a hundred different pages for a generic "how does this work?" sort of query. It didn't read them all, but it casts a wide net and it lets me actually see the details. Gemini seems more willing to just tell me what it "thinks" the answer to a question is based on its training data, which is not a particularly reliable thing for an LLM to do.
Yeah. I pay for Claude, and my company pays even more for Cursor, so comparing them to free Gemini probably isn't fair.
Gemini is very useful for offhand queries while Claude is chewing on a bigger problem, but if it's something that needs complex analysis and/or extensive research… the tools that let you build up a folder full of files related to the task are vastly superior to chatbots. Gemini does have a command-line tool, similar to Claude Code, that does that kind of development in a folder; I didn't install it until last week. I gave it a coding problem to work on (look up real-time weather radar data from NOAA, present recent data on a map on a webpage)… it sort of succeeded, but with a poor user experience. Again, I'm in "free mode," which can do quite a bit on a day's allowance of tokens, but… I don't get the feeling their paid modes would be particularly higher quality. If they are, they're doing themselves a tremendous disservice by demoing such substandard performance in free mode.