With high demand for mental health care, a wave of artificial intelligence-powered chatbots are being marketed as therapy apps — with little evidence they work and few regulations.
At least in my experience, that behavior was easily squashed via the customization blurb in the settings. When I talk to my AI client, I don't get accolades, pandering, or attaboys. It just tells me if it thinks my idea will work (it often points out my failures in logic) or answers the question I ask in a conversational tone. Since it provides links to all sources used in the answer, it's easy for me to read up on what it used and decide if it's a solid foundation of info.
The rest of your reply to my comment doesn't really require a response from me, since I never said it was the solution. I merely pointed out why some people are grasping at that particular straw.