

You didn’t answer my question. When you said, “if you’re vague enough” I asked “vague enough for what?” Please let us know what you meant. :)


Vague enough for what?


Luckily your reading and writing skills are adequate. Media literacy is a network of skills composed of more than just those two abilities, however, and it seems like yours is lacking, for you’ll find after a quick analysis that I did not say the things that are making you angry. Again, those are yours, which is why I professionally recommend treatment for memory loss. Best of luck :)


Hrm, if you regularly have these kinds of issues remembering real life events you might have a memory disorder. My practice is very familiar with them. You or a loved one may want to reach out to a specialist nearby to make sure you get the care you need. Best of luck to you in these trying times. :)


Who recommended that solution?
…was it you?


What a bizarre question. You know “homeless” means sleeping on the street, right?
It’s possible to be in agreement with someone and be wrong. ~ ciao <3


You don’t need to be an expert in your field to know that you shouldn’t ask a stranger to decide whether or not to eat something potentially deadly. Sorry, but that’s a fact of life. It’s not like you’re being forced to eat the thing.
And for the last time, identifying whether a handheld item is poisonous is not one of the use cases for ChatGPT, and you do not need to be an expert to know that. Just read the documentation.
Please stop being lazy and do your own research before you hurt yourself or someone else.


“Don’t rely on it for anything important” is something uneducated people say, just so you’re aware
AI is being used in the field of medicine safely and reliably. It’s actively saving people’s lives, reducing costs and improving outcomes. If you’re not aware of those things it’s because you’re too lazy or stupid to look them up; you’re literally just parroting others’ criticisms of chatbots. This is your failure, not AI’s.
Aid in medical imaging diagnostics (e.g., detecting anomalies in radiology scans) https://pmc.ncbi.nlm.nih.gov/articles/PMC10487271/
Administrative and documentation support (automating paperwork to allow more face time between patient and doctor) https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(23)01668-9/fulltext
Population health/predictive analysis https://www.frontiersin.org/journals/medicine/articles/10.3389/fmed.2024.1522554/full
This is an exciting opportunity for you to educate yourself on how AI is changing the landscape. Seems like you’ve already made your mind up about chatbots and character.ai so maybe there’s room in your schedule to learn about something valuable. Good luck! :)


Hattori Hanzō was a samurai of the Sengoku era. He served the Tokugawa clan as a general and is credited with saving the life of Tokugawa Ieyasu, later helping him to become the ruler of united Japan. Hanzō was known as an expert tactician and a master of sword fighting.


Then what have you done to address the problem? Complaining on Lemmy obviously isn’t helping. Have you tried anything else?


It’s unrealistic to expect a software developer to predict every type of ridiculous question a user could ask their software. That’s why they don’t. Instead, they publish use cases, e.g., “Use my app to do X in Y situation” or “this app will do Z repeatedly until A happens”. Anything that falls outside of those use cases is an inappropriate use of the app, and the consequences are the fault of the user. Just read the docs, friend :) ciao


Each LLM is different. You have to read the use cases. Check the documentation, and if you can’t find it, try asking ChatGPT :)


what do you mean?


The customer is pushing the responsibility of protecting their own health onto someone else. It’s not that other person’s responsibility, it’s yours. You don’t get to say “but I asked Timmy the 8th grader and he said yes” or “I asked an AI chatbot and it said yes” and then be free of responsibility. Protecting yourself is always your responsibility. If you get a consequence, it’s because of what YOU did, not because of Timmy or ChatGPT. ciao ~


I disagree for similar reasons.
There’s no good case for “I asked a CHAT BOT if I could eat a poisonous mushroom and it said yes” because you could have asked a mycologist or toxicologist. The user is putting themself at risk. It’s not up to the software to tell them how to not kill themselves.
If the user is too stupid to know how to use AI, it’s not the AI’s fault when something goes wrong.
Read the docs. Learn them. Grow from them. And don’t eat anything you found growing out of a stump.


That’s not one of the published use cases for the AI you’re parodying.
If you don’t read the manual and follow the instructions, you don’t get to complain that the app misbehaved. Ciao~
stare
sniff
bite
I’m into cats, psychology and game development
do you have any bug pics on your computer
It seems I was too generous in my approximation of your reading and writing abilities. I’d recommend returning to your educational facility and asking for a refund but I don’t think you paid for it. ciao