- cross-posted to:
- [email protected]
So the research is out: these LLMs will always be vulnerable to poisoned data. That means it will always be worth our time and effort to poison these models, and they will never be reliable.


This only talks about exfiltrating data from the corpus, not about ruining the model. It’s not Nightshade.