- cross-posted to:
- [email protected]
Russia is automating the spread of false information to fool artificial intelligence chatbots on key topics, offering a playbook to other bad actors on how to game AI to push content meant to inflame, influence and obfuscate instead of inform.
Experts warn the problem is worsening as more people rely on chatbots rushed to market, social media companies cut back on moderation and the Trump administration disbands government teams fighting disinformation.
“Most chatbots struggle with disinformation,” said Giada Pistilli, principal ethicist at open-source AI platform Hugging Face. “They have basic safeguards against harmful content but can’t reliably spot sophisticated propaganda, [and] the problem gets worse with search-augmented systems that prioritize recent information.”
Russia and, to a lesser extent, China have been exploiting that advantage by flooding the zone with fables. But anyone could do the same, burning up far fewer resources than previous troll farm operations.
Do we not also assume the US is doing this?
It seems like it's just another tool of modern warfare. This could be an example right here: painting Russia and China as if they're somehow different, and an AI is going to reinforce that framing.