So the research is out and these LLMs will always be vulnerable to poisoned data. That means it will always be worth our time and effort to poison these models, and they will never be reliable.

  • DoGeeseSeeGod@lemmy.blahaj.zone · 3 months ago

    Idk, but I wonder: if you get them all wrong all the time, is it easier to identify your work as bad data that should be scrubbed from the training set? Would a better strategy be to get most right and some wrong, so you look like a normal user?
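A rough way to see the trade-off the comment describes: a data pipeline that spot-checks contributor accuracy easily flags someone who is wrong every time, but a contributor who is only occasionally wrong passes the same filter. A minimal sketch, where the accuracy threshold, poison rates, and the `flagged` filter are all hypothetical, just to illustrate the point:

```python
import random

random.seed(0)

def contributor_answers(n, wrong_rate):
    # 1 = correct answer, 0 = deliberately wrong (poisoned) answer
    return [0 if random.random() < wrong_rate else 1 for _ in range(n)]

def flagged(answers, min_accuracy=0.5):
    # A crude cleaning filter: drop any contributor whose accuracy
    # on spot-checkable items falls below the threshold.
    return sum(answers) / len(answers) < min_accuracy

always_wrong = contributor_answers(200, wrong_rate=1.0)  # poisons everything
mostly_right = contributor_answers(200, wrong_rate=0.1)  # poisons ~10%

print("always-wrong contributor flagged:", flagged(always_wrong))  # True
print("mostly-right contributor flagged:", flagged(mostly_right))  # False
```

Under these assumptions, the all-wrong contributor is trivially scrubbed while the mostly-right one slips through, which is exactly the stealth argument the comment is making.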