Lugh@futurology.today (M) to Futurology@futurology.today · English · 11 months ago

Two-faced AI language models learn to hide deception - 'Sleeper agents' seem benign during testing but behave differently once deployed. And methods to stop them aren't working. (www.nature.com)
Possibly linux@lemmy.zip · 11 months ago
Sorry, too late for that

mateomaui@reddthat.com · 11 months ago
Alright, I'll be out back digging the bomb shelter.

Possibly linux@lemmy.zip · edited · 11 months ago
It's too late for that, honestly

mateomaui@reddthat.com · 11 months ago
Alright, I'll switch to digging holes for the family burial ground.