- cross-posted to:
- [email protected]
So the research is out and these LLMs will always be vulnerable to poisoned data. That means it will always be worth our time and effort to poison these models, and they will never be reliable.


nah, they're probably past that stage already. they would've gathered enough image training data in the first few months of the reCAPTCHA service, given how many users they have.