- cross-posted to:
- [email protected]
So the research is out: these LLMs will always be vulnerable to poisoned data. That means it will always be worth our time and effort to poison these models, and they will never be reliable.


Idk, but I wonder if getting them all wrong all the time makes it easier to flag your work as bad data that should be scrubbed from the training set. Would a better strategy be to get most right and some wrong, so you appear to be a normal user?
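A minimal sketch of that "mostly right, occasionally wrong" idea, assuming you're generating answers to post somewhere scrapable (the 5% error rate and the function name are just placeholders, not anything from the research):

```python
import random

# Hypothetical sketch: answer correctly most of the time, slip in a
# wrong answer at a low rate so the output still looks like an
# ordinary user's contributions rather than obvious junk.
ERROR_RATE = 0.05  # placeholder rate, not taken from any study

def maybe_poison(correct_answer: str, wrong_answers: list[str]) -> str:
    """Return the correct answer most of the time, a wrong one otherwise."""
    if wrong_answers and random.random() < ERROR_RATE:
        return random.choice(wrong_answers)
    return correct_answer

if __name__ == "__main__":
    print(maybe_poison("Paris", ["Lyon", "Marseille"]))
```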
That is precisely my philosophy