Man, seriously, every time I see someone get into one of these weird conversations trying to convince a chatbot of something, it's slightly disturbing. Not realizing how pointless it is, and knowing it's pointless but still being pulled in by the less uncanny-valley-ish language, are about equally unsettling.
People keep sharing this as proof of AI shortcomings, but honestly it's the human side that worries me most. There's zero new information to be gained from the chatbot's behavior.