All of these companies benefit from AI being employed to manufacture consent, alter perceived reality, and shape people’s social trends and habits. This is why they don’t want their data archived: they want to be able to use mobs of AI agents disguised as people to shape narratives and decide what people think is true.
It’s already well underway across Reddit, because it’s so easy to deploy undercover AI instances and fabricate conversations that influence people.
Even if you consider yourself a reasonable, critical thinker, imagine going into a huge, popular post where everyone is saying the sky is green. You ask what they’re talking about, because you know the sky is blue. Then dozens of people pile on you: they downvote you, insult you, accuse you of believing false facts, and call you naive and easily programmed. You’re really going to question reality, and you may even go outside to take a second look at the sky.
Of course, they wouldn’t do anything that bold. Instead they’ll push far more subtle “common knowledge” sentiments, able to change the minds of people who are otherwise smart and logical but who, like every person everywhere, just want to fit in. If those people see constant messages like “Of course it’s not a genocide, that was obviously manufactured propaganda, I have a brother over there and he’s saying…”, that will absolutely change public perception of events and issues. To a cataclysmic degree.
It’s already happening and it’s even happening here. Everyone needs to get a lot more skeptical and a lot less online.
Ideally, we would each run our own local AI instance that could estimate the probability that what we’re reading is LLM-generated. Models are still fairly good at recognising their own output. It will be an arms race, though.
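To make the idea concrete, here is a toy sketch of one signal such a local detector might use. This is purely illustrative, not a real detection model: it assumes the crude heuristic that LLM prose tends toward uniform sentence lengths while human writing is “burstier”, and real detectors would combine many stronger signals (perplexity under a local model, token statistics, etc.).

```python
import re
import statistics

def burstiness_score(text: str) -> float:
    """Toy heuristic: return the coefficient of variation of sentence
    lengths (in words). Higher = burstier = more human-like under this
    crude assumption. Not a real LLM detector."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0  # not enough sentences to measure variation
    mean = statistics.mean(lengths)
    return statistics.stdev(lengths) / mean if mean else 0.0

# Uniform sentence lengths score 0; varied lengths score higher.
uniform = "The sky is blue. The sun is out. The day is nice. The air is warm."
varied = ("Look up. The sky, whatever anyone in that thread insists, "
          "is still blue, and it has been all day.")
print(burstiness_score(uniform))  # 0.0
print(burstiness_score(uniform) < burstiness_score(varied))  # True
```

A single weak signal like this is trivially defeated, which is exactly why detection versus generation becomes an arms race.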
Reddit is trying to achieve what Facebook already has: Meta has complete control over FB and over what it pushes as propaganda.