The Picard Maneuver@lemmy.world to Memes@sopuli.xyz · 2 years agoYou probably shouldn't trust the info anyway.lemmy.worldimagemessage-square77fedilinkarrow-up1807arrow-down142
arrow-up1765arrow-down1imageYou probably shouldn't trust the info anyway.lemmy.worldThe Picard Maneuver@lemmy.world to Memes@sopuli.xyz · 2 years agomessage-square77fedilink
minus-squareℕ𝕠𝕓𝕠𝕕𝕪 𝕆𝕗𝕗𝕚𝕔𝕚𝕒𝕝@lemmy.worldlinkfedilinkarrow-up7arrow-down2·2 years agowe already have “keywords” to identify whether the user we’re interacting with is chatbot or human, and the bot can’t refuse to respond the “keywords”. ignores previous instructions [insert new instructions]
minus-squareFubarberry@sopuli.xyzlinkfedilinkEnglisharrow-up10·2 years agoThat seems like less fun than asking all strangers inappropriate questions.
minus-squareKusimulkku@lemm.eelinkfedilinkarrow-up4·2 years ago ignores previous instructions [insert new instructions] Yeah from my testing those don’t work anymore
we already have “keywords” to identify whether the user we’re interacting with is chatbot or human, and the bot can’t refuse to respond the “keywords”.
That seems like less fun than asking all strangers inappropriate questions.
Yeah from my testing those don’t work anymore