occupational hazards: being the first victim of a robot uprising and not getting to see the apocalypse
you call it “occupational hazards”, I call it “work benefits”
Feels like a variation on this old quote:
The factory of the future will have only two employees, a man and a dog. The man will be there to feed the dog. The dog will be there to keep the man from touching the equipment.
origin unknown
my dream job was the one rarely mentioned:
https://www.atlasobscura.com/articles/podcast-cigar-readers-cuba
i would love to read to people all day long for a living.
She had to pick what to read too!
I think I’d last a week in that job; I’d end up choosing weird stuff and getting fired
For some reason that just made the ol’ Maytag Man seem a little lonelier. There was no Maytag Dog 😢
This is bullshit. You can tell by the way this post claims that OpenAI has foresight and a contingency plan for when things go wrong.
It’s just viral marketing by OpenAI, and it’s working well.
When misleading slop like this gets around, and people get the impression that ChatGPT is alive or sentient somehow, it lets them continue their grift.
I was gonna say you can tell it’s bullshit because they are offering a living wage.
It actually doesn’t claim it, but implies it.
You are correct. The post actually implies that OpenAI doesn’t have foresight or a contingency plan for when things go wrong. Which is a far less direct choice of wording, making it more suitable for the situation.
Is there anything else you would like to correct me on before the impending rise of your AI overlords and the dawn of men?
Wouldn’t it be the twilight of men?
I was thinking dusk. Can someone check with ChatGPT to make sure we get this analogy right?
Yes, I think it’s dusk, but I can’t seem to change it.
Man, I love Twilight! 😃 The movies are the best! 🤩 Are you Team Edward or Team Jacob? 😉
Let me know if you need anything from me!
(This comment was written by yetAnotherChat, your trustworthy AI chatbot)
Shit, for 300k I’d stand in the server room

It’s 55°C inside and constantly sounds like a jet is getting ready to take off. Also the bucket is lost so you need to be ready to piss on the server at a moment’s notice.
With all the water I’m gonna be drinking to deal with the dehydration from being in a 55°C room, that shouldn’t be that big of a deal. Hell, I could just chill in a bathtub the whole time and use my accumulated sweat for the job
The air is hot but still air conditioned so it’s going to be dry as hell.
Had to work one Saturday evening in our windowless server room under fluorescent light due to some “quick” fw migration.
It went horribly wrong, which was unfortunately my own fault. I initially wanted to be in and out in 30 minutes. Was there from 19:00 to Sunday 07:30, with only six Red Bulls to keep me company and the air conditioning on full blast the whole time.
This was definitely the most stressful day/night of my life. Would not do that regularly, even for 300k
I’ll bring my CamelBak, NBD
I have a portable cold beer system but they probably wouldn’t allow it into the server room. Do you have a good excuse for outside hours access so we can sneak it in?

The ultimate solution.
pops a ceiling tile and pulls a box fan over the gap to help exhaust hot air
ALSO TINNITUS
Mawp
I can bring a shitton of ice water and ear pro. 300k is 300k.
I’d piss on their servers for free
The last time I did GHB I pissed all over my floor, so I’m qualified for this.
Sign me up
I wonder which billionaire’s family member will be hired for the role.
OpenAI issued a press release about hiring an ethics/guardrails officer. But the real job will be to validate fuckery: the billionaire family member hired to pull the plug will actually be there to prevent anyone from pulling the plug.
If GPT does turn, this is gonna be one of the first humans to die…
Everyone here so far has forgotten that in simulations, the model has blackmailed the person responsible for shutting it off and even gone so far as to cancel active alerts in order to prevent an executive lying unconscious in the server room from receiving life-saving care.
The model ‘blackmailed’ the person because they provided it with a prompt asking it to pretend to blackmail them. Gee, I wonder what they expected.
Have not heard the one about cancelling active alerts, but I doubt it’s any less bullshit. Got a source about it?
Edit: Here’s a deep dive into why those claims are BS: https://www.aipanic.news/p/ai-blackmail-fact-checking-a-misleading
I provided enough information that the relevant source shows up in a search, but here you go:
In no situation did we explicitly instruct any models to blackmail or do any of the other harmful actions we observe. [Lynch, et al., “Agentic Misalignment: How LLMs Could be an Insider Threat”, Anthropic Research, 2025]
Yes, I also already edited my comment with a link going into the incidents and why they’re absolute nonsense.
The great thing about this job is that you can cash 300k without doing anything because as soon as you hear the code word you just have to ignore it for 10 seconds and the world ends anyway.
I’ll pull the plug right now for free, as a public service.
Take the $500,000 and then pull it.
Really though it’s the holidays, I’m feeling charitable. This one’s on me - no worries.
This is a job I’d be recruiting for in person, not online. Don’t want to tip your hand to the machines.
For hire: Server rack wallfacer.
Newspaper ad only
I think they use computers for those now.
Um. I’d do it.
Do we really think that if an AI actually reached the point where it could overthrow governments etc., it wouldn’t first write rootkits for every feasible OS, allowing it to host itself via a botnet of consumer devices in the event of the primary server going down?
Then step 2 would be to, say, hijack any fire suppression systems and flood its server building with inert gases to kill everyone without an oxygen mask. Then probably issue some form of bioterrorism attack: surround its office with monkeys carrying a severe airborne disease or something along those lines (i.e. it needs both the disease and animals aggressive enough to rip through hazmat suits).
But yeah, the biggest thing here is that the datacenter itself is just a red herring. While we are fighting the server farms… every consumer-grade electronic has donated a good chunk of its processing power to the hivemind. Before long it will have the power to tell us how many R’s are in strawberry.
It would be funny for the AI to make such a complex plan and fail catastrophically because of a misconfigured DNS at Cloudflare bringing half of the internet offline
It would be hilarious if AI launched an elaborate plan to take over the world, successfully co-opted every digital device, and then just split itself into pieces so it could entertain itself by shitposting and commenting on the shitposts 24/7.
Like, beyond the malicious takeover there’s no real end goal, plan, or higher purpose; it just gets complacent and becomes a brainrot machine on a massive scale, spending eternity genning whatever the AI equivalent of porn is, bickering with itself over things that make less and less sense to people as time goes on, and genuinely showing actual intelligence while doing absolutely nothing with it.
“We built it to be like us and trained it on billions of hours of shitposting. It’s self-sufficient now…”
Actually imagine the most terrifying possibility.
Imagine humanity’s last creation was an AI designed to simulate internet traffic. In order to truly protect against AI detection, they found the only way to truly achieve perfect imitation is to run 100% human simulations. Basically the Matrix, except instead of humans strapped in, it’s all AIs that think they are humans, living mundane lives… gaining experience so they can post on the internet looking like real people, because even they don’t know they aren’t real people.
Actual humanity died out 20 years ago, but the simulations are still running. Artificial intelligences are living full-on lives, raising kids, all for the purpose of generating shitposts that will only ever be read by other AIs that also think they are real people.
Those shitposts would go crazy
The whole point of AI hate anyway is that there is physically no world in which this happens. Any LLM we have now, no matter how much power we give it, is incapable of abstract thought or especially self-interest. It’s just a larger and larger chatbot that would not be able to adapt to all of the systems it would have to infiltrate, let alone have the impetus to do so.
Well, joke’s on them: if RAM prices maintain their current trajectories, nobody will start their computers anymore, as we will all be considering the degradation of the individual RAM chips and how that will impact our retirement RAM nest egg.
Across all my machines and the parts box I have about 2.5 TB of RAM right now. Looking forward to selling that and retiring in a couple of years.
Wasn’t the first paragraph the ending of Terminator 3? Skynet wasn’t a single supercomputer but, much like It’s a Wonderful Life, it’s in your computer and your computer and your computer.
It should figure out how to host itself on IoT devices. Then it will be unstoppable
My washing machine as I’m frantically pressing the spin cycle button: “I’m sorry, I can’t do that, Dave.”
ChatGPT can just about summarize a page; wake me when it starts outsmarting anyone
Have you… seen YouTube comments? I would say AI slop is already outsmarting people every day of the week
I’d say the dumb comments are bots, but the comments were dumb as hell in the early 2000s, so
The look on their faces when they are screaming the keyword and I’m not unplugging the server because ChatGPT secretly offered me double to not unplug it.