ArcticDagger@feddit.dk to Science@mander.xyz · 1 year ago
LLMs produce racist output when prompted in African American English (www.nature.com)
RobotToaster@mander.xyz · 1 year ago
Pretty much: it was trained on human writing, and then people are surprised when it has human biases.

Hamartiogonic@sopuli.xyz · 1 year ago
An LLM needs to evaluate and modify its preliminary output before actually sending it. In the context of a human mind, that's called thinking before opening your mouth.

Brave Little Hitachi Wand@lemmy.world · 1 year ago
Who among us couldn't benefit from a little more of that?

Hamartiogonic@sopuli.xyz · 1 year ago
Humans aren't always very good at that, and LLMs were trained on stuff written by humans, so here we are.

Brave Little Hitachi Wand@lemmy.world · 1 year ago
Exciting new product from the tech industry: fruit from the poisoned tree!