tired_n_bored@lemmy.world to Fuck AI@lemmy.world · 4 days ago
Hey we solved software development, no need to learn programming anymore (Claude's source code leak)
lemmy.world
renzhexiangjiao@piefed.blahaj.zone · 4 days ago
“make no mistakes”
Madrigal@lemmy.world · 4 days ago
I’ve literally seen someone include “Don’t hallucinate” in an agent’s instructions.
rozodru@piefed.world · 4 days ago
Asking Claude to not hallucinate is like telling a person to not breathe. It’s gonna happen, and happen consistently.
FrederikNJS@piefed.zip · 4 days ago
I think the important bit to understand here is that LLMs are never not hallucinating. But they sometimes happen to hallucinate something correct.
James R Kirk@startrek.website · 4 days ago
This fact of how LLMs work is not at all widespread enough IMO.
“Include no bugs”