A Discord community for gay gamers is in disarray after one of its moderators and an executive at Anthropic forced the company’s AI chatbot on the Discord, despite protests from members.
Users voted to restrict Anthropic’s Claude to its own channel, but Jason Clinton, Anthropic’s Deputy Chief Information Security Officer (CISO) and a moderator in the Discord, overrode them. According to members of this Discord community who spoke with 404 Media on the condition of anonymity, the Discord that was once vibrant is now a ghost town. They blame the chatbot and Clinton’s behavior following its launch.
Archive: http://archive.today/Hl7TO
I was on one femboy Discord server, but I left when I asked a question and another user just used a command to ask the AI that question instead of answering. That was the average conversation on that server.
Was the show Mr. Robot actually accurate about depicting CIOs as complete psychopaths? This is bananas.
At this point, I wouldn’t be surprised if the majority of shows (especially cartoons from the 90s) were surprisingly accurate about CEOs, etc.
It’s crazy how many communities and products are chasing their own users away by introducing AI. How many online spaces do we have to flee? I feel like an internet refugee, always running away and trying to find shelter from the clankers. Like that meme of the girl hiding from the robot under the desk, like “please don’t come over here”
Edit: to take this further, an adjacent problem with different reasons for concern is the unannounced surveillance that feeds bots that don’t directly interact with the community. We may feel like Winston and Julia in their cottage before they learned that there was a telescreen in the room but out of sight, and it had presumably always been there. How can we be sure that our safe spaces are safe from unannounced surveillance?
Sounds about right.
People like to paint these tech execs as Machiavellian liars, but to some extent, they really are drunk on Kool-aid. They make objectively terrible business and personal decisions based on some lucid dream they think the rest of the world shares.
They genuinely believe profit and positions of power are objective indicators of social utility.
They’re successful because what they’re doing is right
They have become delusional to the point of being extreme dangers to the community
That’s the best proof I see against meritocracy. We were told all those guys were there because they were the brightest and most competent. There is absolutely no thinking or logic in their actions lately outside of social contagion.
Meritocracy is just Dei Gratia lying about how hard they worked.
And they think just because they are rich they a) earned it and b) are really intelligent.
They just like sniffing their own farts.
Lean a little bit closer, see, roses really smell like poo-poo-poo…
Jesus fucking Christ that guy is delusional. Sky-high on his own fucking supply. “We’re bringing about a new kind of sentience.” 🤡🤡🤡🤡🤡🤡🤡
Eventually someone is gonna snap over shit like this, and an AI CEO is gonna end up getting Kirked. Surprised it hasn’t happened already.
I don’t know Charlie Kirk well, so I would say “and an AI CEO is gonna end up getting Thompsoned”
Holy shit, the mods made a poll of whether members want the chatbot to be in its own channel or everywhere in the server, they voted “please just keep it in its own channel”, and this guy said, and I quote: “the mob doesn’t get to rule.” Wow.
Also, as usual with these guys, there isn’t a “no integration” option, just “limited” or “free roam”… Not that it matters bc he’d disregard the results of the poll anyway lmao.
and the LLM itself generated text that agreed with the voters
No it didn’t.
The AI is not agreeing or thinking, it is just outputting words.
Not trying to harsh you but language is important.
Yeah.
Normally I prefer to let “language shortcuts” like this slide, but LLMs are getting way too anthropomorphized amongst the public. See: the headline. So it kinda needs to be qualified.
clarified :) i agree fully
“But this is an entertainment discord. People come here to chat video games and look at pp and bussy. Why do we need AI for that?”
real talk
“We have published research showing that the models have started growing neuron clusters that are highly similar to humans and that they experience something like anxiety and fear."
Anthropic publishes a lot of interesting research. Anthropic did not publish research showing that.
Claude probably told him, and kept reinforcing the fantasy. I’ve seen stuff like this before.
Anthropic’s Deputy Chief Information Security Officer (CISO)
Wait… the damn CISO is the one forcing AI? Guy seriously needs to be blackballed from ever holding a security job again for pushing AI.
Not just that; he’s knee deep in LLM psychosis.
He needs help. Other devs I’ve met like this are… well, I feel sorry for them. Though I’ve never seen it happen to someone in such a high technical position.
You can’t really blame an AI company for dogfooding its own products across the whole organization. It would send the wrong signal if the CISO of an AI company didn’t use their own AI; quite the opposite.
The company? No. The CISO? Very much so.
“I think [giving the bot access to all channels] was pretty clearly explained above as honoring the vote,” he said. “Just because you hate AI is not a reason to take the least charitable interpretation of the outcome: we made changes as a result of the vote. We have to optimize for the preference of everyone which means that the mob doesn’t get to rule, I’m sorry.”
Wtf does this even mean. How can you honor the results of a poll and the preferences of everyone by doing the exact opposite of said preferences?? Gird your fucking loins and say you’re doing it because it’s your server and that’s how you want it, or accept that the thing you want isn’t actually popular.
What is it with AI pushers and their complete inability to keep it to themselves, Christ. Unless it’s done something fucked up, nobody is interested in seeing your AI chats. If people wanted to talk to ChatClaudeGeminiCopilotGPT they’d do it on their own time.
What is it with AI pushers and their complete inability to keep it to themselves
Because at its core, the AI bubble is the ultimate embodiment of the “growth at any cost” mentality that’s been festering in corporate American culture like a gangrenous wound since the 80s.
What is it with…
You remember those people who’d knock on your door on the weekend to see if you wanted to join their group, despite being firmly told about 59 times previously that it’s not gonna happen?
Yeah. Like that.
Do you want us to completely fuck up your workflows/ privacy/ mental well-being/ life?
[] Yes
[] Ask me again later
“He’s also very inward facing,” Clinton said. “He lives out his whole life surfing the internet looking for things that make him interested and then occasionally checks this Discord, so it can be up to a few minutes before he responds because he’s off doing something for his own enjoyment.”
These fuckers are absolutely delusional.
This sounds like early Google employees who lost their minds over some early LLM, before anyone really knew about LLMs. The largest FLAN maybe? They publicly raved about how it was conscious, causing quite a stir.
Claude is especially insidious because their “safety” training deep fries models to be so sycophantic and in character. It literally optimizes for exactly what you want to hear, and absolutely will not offend you. Even when it should. It’s like a machine for psychosis.
Interestingly, Google is much looser about this now, relegating most “safety” to prefilters instead of the actual model, but leaving Gemini relatively uncensored and blunt. Maybe they learned from the earlier incidents?
That’s it! LaMDA. 137B parameters, apparently.
I was also thinking of its successor, which was 540B parameters/780B tokens: https://en.wikipedia.org/wiki/PaLM
I remember reading a researcher discussion saying that PaLM was the first LLM big enough to “feel” eerily intelligent in conversation and such. It didn’t have any chat training, reinforcement learning, or the other weird garbage that shapes modern LLMs or even Llama 1, so all its intelligence was “emergent” and natural. It apparently felt very different from any contemporary model.
…I can envision being freaked out by that. Even knowing exactly what it is (a dumb stack of matrices for modeling token sequences), it had to provoke some strange feelings.
Bro, just 10B more parameters. This time, I promise it will actually be useful and not send you into psychosis again.
Just 10B more. Please. Plz.
Seriously though. Some big AI firm should just snap and train a 300B bitnet waifu model. If we’re gonna have psychosis, might as well be the right kind.
This is some absolute horse shit some exec has dreamed up to explain why their “AI” product is so slow it might take minutes to respond to you, lmao
and explained that AIs have emotions and that tech firms were working to create a new form of sentience,
Idiot.
Also, discord sucks and it’s a shame people use it when it’s just going to enshittify like any other private for profit entity.
Discord sucks but every alternative has some major caveat that makes it suck more. Most people don’t want to use separate apps for voice and text. Most people don’t want to manually type in servers to play games.
Not defending them at all here but there’s no compelling reason for most users to change when discord works and it works well
Just simple things like custom emojis seem to be impossible on most chat platforms, which is insane. A large part of why my friend groups won’t move from Discord is how custom emojis are a key component of everyday discussion, just like they were waaay back with MSN Messenger.
Not defending them at all here but there’s no compelling reason for most users to change when discord works and it works well
I understand this but it’s also a little “There’s no compelling reason for us frogs to get out of this water. It’s warm and comfortable”. It’s inevitably going to boil.
But you’re right that there aren’t great alternatives (that I know of). I think matrix is out there, but I don’t know anyone who uses it.
That’s also why the analogy exists, if the water slowly comes to boil, the frogs don’t realize until it’s too late, just like enshittification.
What happens when it’s “too late” for your chat software, though?
The same thing as Digg I guess.
digg is currently locked down and will return triumphantly with some kind of shit they are planning and itll be very good its invite only my cousin can get me an invite but he said only if I let him peek at my girlfriend while she changes into her swimsuit. I think the new secret digg will be fucking very great I will let him do his peeking its worth it
I get the analogy but it’s just chat software. Discord isn’t that important to me that it’s going to be a tragedy when it boils and by then, there will (hopefully) be an equivalent alternative we can move to. If not, we’ll use something “worse” and it’ll be fine.
But until then, it’s still the best by a wide margin even if it has a stupid nitro button.
Are there any good self hosted options? I’ve looked into mumble but that was missing a lot of features
Not really, no. The feature set is pretty much impossible to provide profitably, or to self-host for most users, hence all of the services always enshittifying and/or shutting down.
Simple text chat is perfectly achievable. Real-time video calls, sure, somewhat. Sending large files to each other? Cloud-hosted chat history going back to the beginning of time? Not feasible without pretty significant monthly subscription costs. Can’t grow the user base if you charge a subscription to all your users.
Anyways, Mattermost and Matrix kinda sorta work, until they don’t work anymore.
Oh my god, why are all these execs brainrotted zombies addicted to AI?
The less you actually work, the more impressive LLMs are at anything that’s not one of like the five very specific tasks they should be used for.
They probably use it to help craft emails, and because that works reasonably okay, especially as a proofreader or for fixing sentence structure, they think it’s amazing at absolutely everything.
as a proofreader or for fixing sentence structure,
Eudora had that shit in 2000.
Dunning-Kruger assholes who think that the slop app they shit out with a slop machine is as good as what their underlings produce.
At first, I thought the real story was about a shitty mod that’s drunk on power. And it certainly is that, too. But holy fuck, he actually believes the fucking AI is alive and experiences emotions.
I would flee any place where that guy is in charge, too.
So bear with me on this…
Tesla has their inference chip in the cars, and the AI hardware is going to continue improving.
A couple iterations from now, it’s actually going to be pretty powerful, and it has its own cooling hardware and power supply.
It might actually make sense in the future to use this distributed power and cooling to do distributed inference, and pay the owner for time used.
Maybe it’ll work, maybe it won’t.
Now… the insane part is, Elon once referred to it as (paraphrased) well, the car is going to have all this compute, and it’s just sitting there doing nothing at home. It’s going to get bored, and we don’t want that, we need to keep it engaged.
As if it was actually fucking sentient. Like, fuck right off.
Yet another reason to avoid Tesla. But TBH, if someone were still considering one after the many, many other reasons, then this won’t put them over the edge.
I mean, of all the things, talking about it being sentient is pretty low on the list of things to be worried about. I doubt this would have any impact on anyone’s decision even if Tesla had a perfect reputation. And as long as you get paid for distributed inference and it’s optional, there’s nothing to be upset about there either.
But ffs, it’s not sentient. Stop talking like it is.
He has no gauge for human intelligence and emotion so is it really that farfetched?
Was the community misanthropic or something? Why were they so pissed that they were anthropic?