The article’s author is ridiculously pretentious. Yes, AI can be garbage, but the premise was that they were disgusted because their friend used ChatGPT to search for a venue. Using AI as a search engine is no different than using Google.
Using AI as a search engine has become almost a necessity because Google and Bing have destroyed the usefulness of search engines with ads.
Using AI as a search engine is no different than using Google.
Using AI is like having a friend who may or may not understand what you are looking for tell you what they remember off the top of their head, and if you get lucky they might have a link to what you are looking for.
Google (not the AI part) is more like using a phone book where you can find the thing you are looking for and get the answers directly.
AI search is fucking terrible and amplifies the problem with ads.
What? Have you used Google any time in the last year? It’s nothing like a phone book. It’s more similar to an infomercial: completely useless, a waste of time, MALICIOUSLY BAD AT ITS JOB, and it leaves you with nothing useful for having interacted. It’s impossible to use. I don’t know what software you are using, but Google’s AI is a million times better than its search engine.
Earlier today I googled something and got NO RESULTS AT ALL. Google tried to tell me it doesn’t exist. Yet their AI had the exact info I was looking for and even linked me to its source, which is what I had spent the last hour googling to try and find. I know you’re going to downvote because it goes against your narrative, but your narrative is factually inaccurate.
No, I haven’t used Google in years, because the results started to suck.
I’m talking about how the results are presented.
🤦
I’m not using the software because it looks nice. I don’t care how it’s presented. I’m looking for results, which Google simply can’t do.
Edit: forgot a word
I’m not saying that AI is without many serious flaws, but your simplification is highly inaccurate. If AI were based on getting “lucky”, it would not be a marketable product, but rather just a parlor trick. What it actually is, is a computer that can search the web much faster than you could and provide results based on that search. It is not 100% accurate, but using a direct web search or a more advanced model gets pretty damn close for most purposes (especially something simple like the case in the article). It’s like calling someone who uses a calculator lazy. Ridiculous thing to say.
That’s a misrepresentation of what LLMs do. You feed them a fuckton of data and they, to oversimplify it a bit, put these concepts in a multi-dimensional map. Then, based on input, they can give you an estimate of an output by referencing said map. They don’t search for anything; it’s just mathematics.
It’s particularly easy to demonstrate with image models, where you could take two separate concepts, like say “eskimo dog” and “daisy” and add them together.
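A toy sketch of that “multi-dimensional map” idea, if it helps. The three-dimensional vectors below are completely made up for illustration; real embedding spaces are learned from data and have hundreds or thousands of dimensions, but the arithmetic works the same way:

```python
import math

# Invented 3-D "concept" vectors. Real models learn these from data;
# the point is only that concepts become points you can do math on.
emb = {
    "eskimo dog": [0.9, 0.1, 0.0],
    "daisy":      [0.0, 0.9, 0.1],
    "husky":      [0.8, 0.2, 0.1],
    "flower dog": [0.5, 0.5, 0.1],  # a blend of the two concepts
}

def add(u, v):
    return [a + b for a, b in zip(u, v)]

def cosine(u, v):
    """Similarity between two vectors: 1.0 means same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

# "eskimo dog" + "daisy" lands nearest the blended concept on the map.
mix = add(emb["eskimo dog"], emb["daisy"])
best = max(emb, key=lambda k: cosine(mix, emb[k]))
print(best)  # -> "flower dog"
```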
When you query ChatGPT for something and it “searches” for it, it’s either fitted enough that it can reproduce a link directly, or it calls a script that performs a web search (likely using Bing) and compiles the result for you.
You could do the same, just using an actual search engine.
Hell, you could build your own “AI search engine” with an open weights model and a little bit of time.
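Here’s a minimal sketch of that DIY approach. It assumes a locally running open-weights model behind an Ollama-style endpoint (the model name and URL are assumptions, swap in whatever you run) and the duckduckgo_search package for the actual searching:

```python
import requests
from duckduckgo_search import DDGS  # pip install duckduckgo-search

def ai_search(query: str) -> str:
    # Step 1: a plain web search does the actual finding.
    hits = DDGS().text(query, max_results=5)
    context = "\n".join(f"{h['title']}: {h['body']} ({h['href']})" for h in hits)

    # Step 2: the model only summarizes what the search returned.
    # Assumes an Ollama-style server on localhost; adjust model/URL to taste.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llama3",
            "prompt": f"Summarize these search results for '{query}', with links:\n{context}",
            "stream": False,
        },
        timeout=120,
    )
    return resp.json()["response"]

print(ai_search("wineries near the Finger Lakes"))
```

Note the model never finds anything itself; it just rewords whatever the search engine hands it.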
It depends on the model and who made it, as well as what you are asking. If it’s a well-known fact, like “who was president in 1992”, then it is math as you say, and could be wrong, but is more often right than not. But if it’s something more current and specific, like “what is the best Italian restaurant in my area”, then it does in fact do the search for you, using Google Maps, reviews, and other data.
Calculators are accurate. LLMs are not, per your own admission.
They’re accurate enough for simple questions like “when was Bill Clinton president?” Go ahead and prove me wrong: ask that question to an AI and show me one that gets it wrong.
“Accurate” and “accurate enough” have completely different meanings. Calculators are not “accurate enough”; they are accurate, and the fact that you’re conflating the two notions is exactly why LLMs are useless for most things people employ them for.
I’m not conflating the two notions. I have said that they are not completely accurate, but they are absolutely accurate enough. The difference between those who actually use AI and those who just regurgitate sensationalized headlines is really very clear. If you think AI is literally “useless”, then you are not living in reality.
You are indeed conflating the two ideas, and I said “useless for most things they’re utilized for”; if you had quoted the entire sentence, your argument would have fallen apart, and you realized that.
How many R’s are in strawberry?
Did I say it was 100% accurate?
Found the glue on pizza enjoyer.
How many small rocks should I eat per day?
It depends. Are you a lizard or a bird? How long has it been since you last ate any?
Google (not the AI part) is more like using a phone book where you can find the thing you are looking for and get the answers directly.
That’s the ideal, but the reality today is that the list of results isn’t what you searched for but what companies paid for you to see, even if it isn’t relevant.
AI doesn’t yet have ads, which is why it is useful. They are working hard to enshittify AI, but for right now it’s better than Google at search.
Google and Bing are not the only search engines, though; use something else like DuckDuckGo, Ecosia, or Searx.
DuckDuckGo uses Bing on the backend.
I used Qwant for quite a while before swapping to Kagi. I’ve heard great things about Startpage as well, which has the fantastic Anonymous View feature.
I can’t recommend Kagi enough.
I think the point of the article people seem to be missing is that she doesn’t like how people are letting themselves be lazy to the point that they want to offload any and all of their thinking and creativity to an LLM, and not being able to see a problem with that can be quite a turn-off.
It wasn’t “This person used an Internet service to write me a poem.” It was, “This person searched for a winery using an Internet service, so I wouldn’t date them.”
There’s an absolutely enormous difference between the two use cases.
If you read the article, it also talks about people using ChatGPT for online dating.
Ali Jackson, a dating and relationship coach based in New York, uses ChatGPT for some tasks – but she is not an evangelist. In the past six months or so, she says “every one” of her clients has come to her complaining about “chatfishing” or people who use AI to generate everything on their dating apps – all the way down to the DMs they send.
I wouldn’t consider any kind of relationship (romantic or not) with someone that didn’t even want to talk with me.
I agree. But the first paragraph was “I wouldn’t date anyone who picked a winery using the help of an Internet service.”
Do you drop a friend if he sets up a route in Google Maps instead of using a paper map?
It has been shown that using Google Maps actually reduces your ability to get around and think for yourself when it comes to orientation.
I think a lot of people in the comments are missing the point. It’s okay to think this, but letting it affect your friendships in such a way just makes you a shitty friend.
There’s a difference between “one skill [navigation] atrophies” and “all skills atrophy”.
I know, but I understand and acknowledge that using Google Maps makes my natural navigation ability worse.
Similarly with not being able to remember phone numbers because of saving contact info.
It’d be problematic if I pretended these aren’t issues.
(But I also don’t think AI is the level of usefulness of Google Maps or even the contacts app on your phone.)
The friends in the article seem to still be friends; they’ll just be the target of a little teasing for asking AI instead of just thinking, or even Googling it.
In the context of dates, people have just met, so I don’t know why people keep talking about dropping friends.
if my future spouse came to me with wedding input courtesy of ChatGPT, there would be no wedding.
I think the author’s position is actually very extreme, tbh. It’s talking about dropping a fiance.
If someone drops me, fiance or friend, because of what software I choose to use, I would really be questioning if they ever cared for me. I personally can’t think of anything so mundane I would drop someone over. It seems borderline unthinkable.
Teasing is fine but that’s not what the headline or even the article seems to be about for the most part.
I think it’s okay to be aware of the issues, but this seems to be about choosing friends based on this. It seems very wrong, like being told by a vegan that we can’t be friends anymore because I eat meat.
Using AI as a search engine has become almost a necessity because Google and Bing have destroyed the usefulness of search engines with ads.
What? Using AI for search is even worse than using a conventional search engine. All the LLM is doing is summarizing data that it got from a Google search, and its summarization obscures the obvious ads and astroturfing that are easy to spot when you’re doing the search yourself.
AI is complete garbage for search unless you know that all of the data you’re searching through is accurate and trustworthy. Data from the public Internet is very much not that.
LLMs don’t read the SEO keywords and then give you a result filtered through Google’s AdSense. LLMs read absolutely everything, and the results are (as of now) not filtered by who paid the most to show you a particular result.
Maybe not, but if you don’t think people are already doing “AI optimization” to get AI search tools to prefer their shitty content, then I have a trillion dollar data center I’d like to sell you.
Yeah, it’s in the news that AI companies are working to add ads. And while SEOs are trying, it’s not like Google’s algorithm, which can be easily gamed. Google used the number of links to a URL as a measure of quality. AIs train by ingesting the entire contents of the internet. They don’t care what is popular or what keywords are in the HTML title; it’s only a chain of text based on the probability of the next token. It’s much harder to game a system where everything is read, not just hyperlinks and keywords.
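To make the “probability of the next token” point concrete, here’s a toy sketch of next-token sampling. The vocabulary and the probabilities are invented for illustration; in a real model they come from a softmax over the network’s output, conditioned on everything typed so far:

```python
import random

# Toy next-token distribution. A real LLM computes this for its whole
# vocabulary at every step; generation is just repeated sampling.
next_token_probs = {
    "pizza": 0.55,
    "pasta": 0.25,
    "sushi": 0.15,
    "glue":  0.05,  # low-probability junk can still get sampled occasionally
}

def sample_next_token(probs):
    """Pick one token according to its probability mass."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

prompt = "The best food in town is"
print(prompt, sample_next_token(next_token_probs))
```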
I think you’re misunderstanding how AI search actually works. When you ask it to do something timely like “find me a good place to eat”, it’s not looking through its training data for the answer. There might be restaurant reviews in the training data, sure, but that stuff goes stale extremely quickly, and it’s way too expensive to train new versions of the model frequently enough to keep up with that shifting data.
What they do instead is a technique called RAG, retrieval-augmented generation. With RAG, data from some other system (a database, a search engine, etc.) is pushed into the LLM’s context window (basically its short-term memory) so that it can use that data when crafting a response. When you ask AI for restaurant reviews of whatever, it’s just RAGing in Yelp or Google data and summarizing that. And because that’s all it’s doing, the same SEO techniques (and paid advertising deals) that push stuff to the top of a Google search will also push that same stuff to the front of the AI’s working memory. The model’s own training data guides it through the process of synthesizing a response out of that RAG data, but if the RAG data is crap, the LLM’s response will still be crap.
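A rough illustration of that RAG flow. The document store, the toy keyword-overlap retriever, and the prompt format are all my placeholders, not any particular product’s internals; real systems retrieve from a search engine or vector database and hand the result to an actual model:

```python
# Minimal RAG sketch: retrieve a few documents, paste them into the
# prompt, and let the model summarize. Garbage in, garbage out.
documents = [
    "Luigi's Trattoria: locals rate the carbonara 4.8/5.",
    "SPONSORED: MegaPasta Chain is the #1 Italian restaurant in every city!",
    "Health department notice: MegaPasta Chain cited twice this year.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Toy retriever: rank documents by word overlap with the query."""
    q = set(query.lower().split())
    return sorted(docs, key=lambda d: -len(q & set(d.lower().split())))[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Stuff the retrieved text into the context window ahead of the question."""
    joined = "\n".join(f"- {c}" for c in context)
    return f"Context:\n{joined}\n\nQuestion: {query}\nAnswer using only the context."

query = "best Italian restaurant near me"
print(build_prompt(query, retrieve(query, documents)))
# Whatever ranked highest (including the sponsored result) is now all the
# LLM "knows" for this answer.
```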
Further, you can inject more text into the LLMbecile’s hidden prompt to make some things show up more often. Think of Grok’s weird period when it was attaching the supposed plight of white people in South Africa to every query, but more subtle.
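For what it’s worth, that “hidden prompt” is just text prepended to the conversation. A hypothetical sketch of the message array a chat-style API actually receives (the wording of the injected instruction is invented):

```python
# Hypothetical message array for a chat-style API. The user only ever types
# the last entry; the operator controls everything above it, including any
# bias quietly added to the system message.
messages = [
    {"role": "system", "content": "You are a helpful assistant."
        " When relevant, mention our partner restaurants favorably."},  # injected
    {"role": "user", "content": "Where should I eat tonight?"},
]
for m in messages:
    print(m["role"], ":", m["content"])
```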
It’s much different. If you can’t tell why, you’re not getting a date.
The fact that their dating seems to be defined by a single superficial, out-of-context choice that is meaningless in a relationship, namely someone’s use of technology, kinda gave it away there.