Human moderator? ChatGPT isn’t a social platform, I wouldn’t expect there to be any actual moderation. A human couldn’t really do anything besides shut down a user’s account. They probably wouldn’t even have access to any conversations or PII because that would be a privacy nightmare.
Also, those moderation scores can be wildly inaccurate. I think people would quickly get frustrated using it when half the stuff they write gets flagged as
hate speech: .56, violence: .43, self harm: .29
Those numbers in the middle are really ambiguous in my experience.
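A tiny sketch of why those mid-range scores are awkward: whatever threshold the moderator picks, scores like the ones quoted above land on different sides of it. The score values and threshold numbers below are illustrative, not anything OpenAI actually uses.

```python
# Hypothetical category scores like the ones quoted above.
scores = {"hate speech": 0.56, "violence": 0.43, "self harm": 0.29}

def flagged(scores, threshold):
    """Return the categories whose score meets or exceeds the threshold."""
    return [cat for cat, s in scores.items() if s >= threshold]

# A strict threshold flags everything in the message...
print(flagged(scores, 0.25))   # all three categories
# ...while a lenient one flags nothing, not even the 0.56 hate score.
print(flagged(scores, 0.60))   # nothing
```

With scores clustered around 0.3–0.6 there is no threshold that separates clear violations from false positives, which is exactly the frustration described above.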
I’m looking forward to seeing how the AI Act will be interpreted in Europe with regard to OpenAI’s responsibility.
I could see them bearing such a responsibility if a court decides that their product has a sufficient impact on people’s lives: not because they market it for such usage (as a virtual therapist or a virtual friend, say), but because users are using it that way in a reasonable fashion.