If I hear one more person say something along the lines of, “AI is the future” I’m going to strangle them. Of all the people that say that shit, none of them can explain how it works.
well lemme ask chatgpt first then i can explain it to you
I find AI very frustrating. I had a script I wanted to turn into a systemd service which I’ve never done. I searched the web, didn’t find quite what I wanted so I asked AI. It gave a great answer to exactly my question and explained what every field was doing. It got me there faster than searching and browsing forums would have.
So great, I also wanted to set up a watchdog on the Pi to reboot it. It told me to get the watchdog package from apt, then edit a systemd conf file. An hour later, with nothing working right, I gave up and found a tutorial in about 30 seconds of web browsing that made it clear the AI was mixing up instructions from 2 different methods.
So it saved me 5 minutes on one thing, cost me an hour on another. I feel like the internet and search engines of 10 years ago were much better than what we have now.
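For anyone in the same boat, a minimal unit file for running a script as a service looks roughly like this (the script name and path here are made up; `man systemd.service` documents the fields):

```ini
# /etc/systemd/system/myscript.service  (hypothetical name and path)
[Unit]
Description=Run my script at boot
After=network.target

[Service]
Type=simple
ExecStart=/usr/local/bin/myscript.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then `sudo systemctl enable --now myscript.service` starts it and enables it at boot.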
That is my exact experience. I was basically just incoherently whining about an issue I had that involved accessing the DB for old legacy windows photo albums and preserving them, and it spit out a fully working program that did all that.
Then again, it often latches onto a way to do something that messes things up and leads nowhere, and I have to be the one to say: “STOP. The goal is to install a scanner on a very common OS, one that is praised for being particularly compatible to this. Now you want me to add 50 lines of custom configuration to a background service and switch it to an unsupported version. We are clearly on the wrong path here.”
Hence I do experiment with it at home to see its limits, but my customers get 100 % human generated solutions.
It was better ten years ago.
That touches on the heart of it; search engines have been so enshittified that AI is by default better, because it occasionally gets information from its training data that isn’t easily found through normal searching.
(Some) AI has its place, as in GAN AI is amazing at finding subtle indicators of patterns that can be extrapolated to new data, but god, it's just so bad at 99% of the applications it has ever been used for, including the entire concept of LLMs, which are such an inherently flawed technology that they'll never pass as useful for anyone who isn't a greedy, shortsighted CEO wanting to replace workers as soon as possible.
Here's how I do it:
Word it as best as I can. If the AI gives a specific and likely answer, double-check the documentation, Stack Overflow, or its listed sources.
It sucks that a lot of the stuff I'm searching for comes from the same three fucking AI-generated things from 2024 onwards.
Uh…recombinant DNA experiments were never paused, and while human cloning is illegal in non-shithole countries, Sam Altman has a company in San Francisco called Preventive that genetically modifies embryos.
None of those were paused LOLOLOLOL
Came here to say this, but without the LOLs.
Came here to say this, but with one lol.
Idk if you know this but lol counts as punctuation too you don’t even need a period lol see
this is the way
I've been hesitant to play around with AI just because of how sneaky business is done lately and I don't trust "business". I can't in good conscience reconcile my use of AI with the horrendous resources required to keep it up and running. I'd rather go "green" and figure out shit on my own, using old school research methodologies. My only caveat to this is if I really, really wanted a funny image. Maybe a Spongebob and Magilla Gorilla mashup. That, I'd sell out for. /s
Recombinant DNA promised better organ transplants, but it made Christians uncomfortable, so Bush II banned it.
They consider it playing god, but these tech cultists think they’re making god and apparently that’s okay.
Some of them genuinely do believe that they’re making a nuGod and want to be the profit, I’m sorry. I mean Prophet of that nuGod.
Jaron Lanier shed some light on the techbros he's had to work with and their mindset.
That’s not how this meme format works
And yet it was made, posted, saved, and shared. Because posting MORE content is better than posting GOOD content
I’m not worried about AI ruining the internet…we’ve already done it ourselves.
Nah they developed all that tech in multiple underground bunkers somewhere…
You can easily make a “blinding laser weapon” in your house with a soldering iron and parts readily available online, no secret bunker necessary. Instant, permanent total blindness in a handheld device. It’s honestly shocking to me that it isn’t already in wide use among dissidents/terrorists/etc.
Well, in theory we can fix all of society. But we are greedy fucks.
Some of us are greedy fucks, we let them make the decisions for some reason.
I’ve always advocated for a system where people who are qualified but don’t want to should lead…
Who gets to say who’s qualified? While I appreciate experts, any filter you add to democracy is dangerous. I think experts should serve a large council of randomly selected citizens and people who were ranked higher than a lottery option in a ranked voting system. That allows us to have career politicians, but also prevents them from entrenching themselves as the “lesser evil”.
That allows us to have career politicians
I don't think politician should be a career… There's an old saying I've never much agreed with: "those who can't do, teach." I think a more accurate saying would be "those who can't do go into politics."
I think career politicians help prevent administrators from taking too much power, and there's always another level of administrators waiting in the wings with their own self-interests. While I agree that there are very few politicians I'd trust with power, having a few who know how things work could prevent a lot of problems. Plus, I strongly suspect that large chunks of the population will rank the lottery at the top out of sheer principle, so a politician who beats it with 50% isn't just viewed favorably by 50% of the population; e.g. if 30% put the lottery at the top out of principle, then 50/70 ≈ 71% of the remaining population think these career politicians are actually better than a lottery. The more people who are convinced the lottery is superior, the higher the bar is for career politicians.
Also, this whole transition thing can't be overstated. We really have to pick our battles to make it happen, and telling politicians that it won't have much effect, because they'll just advertise themselves and voters won't even notice the difference, is a good transitory narrative that can easily and permanently be undermined after one pro-lottery round of advertising.
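The 50/70 arithmetic above can be sketched as a tiny function (the function name and second example are mine):

```python
def effective_support(overall_support: float, principled_lottery_share: float) -> float:
    """Share of non-principled voters who actually prefer the politician,
    given that a fixed share of voters always ranks the lottery option first."""
    remaining = 1.0 - principled_lottery_share
    if remaining <= 0:
        raise ValueError("everyone ranks the lottery first on principle")
    return overall_support / remaining

# The example from the comment: 30% always put the lottery first,
# and the politician still wins 50% overall.
print(round(effective_support(0.50, 0.30), 2))  # → 0.71
```

So the larger the principled-lottery bloc, the stronger a career politician's real support must be among everyone else to survive.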
That's called sortition.
Sortition does best as an anti-corruption mechanism, rather than a full system that removes all politicians. I like to merge it with ranked voting by adding a lottery option to the ballot that politicians have to beat. This, for lack of a better term, Ranked Sortition system is also an easier transition from the current system, so even if you want a full sortition this is easier to implement at various local levels where people still need to get used to the idea.
Edit: Also is there a com where we can talk about these sort of voting theory things?
Yes it is!
I didn’t use that word because no one ever knows wtf it means lol
I kinda like the term randomocracy
Ooo I like that.
I think it’s the majority that are greedy fucks to be honest.
The difference between AI and the other 3: AI has the potential to save all the rich people trillions through the firing of the proletariat whereas the 3 numbered items were merely a small group of people trying to make money for themselves.
Wut. Rich people will shoot themselves in the foot by firing the proletariat. AI is trash.
The only thing that would save them is a bail out when everything crashes.
So much white collar work is frankly a bit performative in general, and it's sometimes impossible to tell whether it's being done well, done badly, or not done at all.
Thanks to mismanagement, people are brought in "in case they might be useful," and a bunch of material is produced that is beyond the ken of the management, who just smile and nod because they have no idea.
Witnessed a group manage to coast on doing effectively nothing for over a year on “we are going to do analytics in the cloud” as executive after executive sagely nodded. New executive came into the fold and got the same pitch and said “ok, fine, but what analytics, with what data sources, what do you expect to get out of it?” In a rare moment of competence an executive actually dared to figure out something instead of just smiling over the buzzwords. That same executive was gone within 3 months, because broadly speaking this was a problem for his peers that mostly operated by buzzword alignment.
There’s a mountain of internal project document material that must be created, but is never used, because of processes where non-technical executives imagine they can review a technical design as long as it isn’t “code”, or that they can fire their coders and replace with new coders if they can reference some ‘non-code’ document to help.
GenAI may be pretty bad, but depressingly it might not matter given how much pretty bad stuff is already out there.
Makes sense! So your theory is leadership will fire themselves and replace themselves with genai, keeping the rank and file workers?
Nah, that rank and file workers will go and the leadership will happily let genai keep doing performative bullshit that doesn’t matter and claim it’s like super important
“An evil man will burn his own nation to the ground to rule over the ashes.” ~ Sun Tzu
“AI Slop” is not mutually exclusive with “AI fascism”. Billionaires are already burning down the planet. Clearly they don’t care about killing humanity on the way.
In addition to what the other reply says, the current state of AI isn’t necessarily the best AI could be. Even with the iterative changes on the LLM-based model, things are improving so fast that it might be safe to shrink the workforce for technical tasks soon.
But I'm sure I'm not the only one who thinks the LLM-focused approach itself is just a local minimum the industry is stuck trying to optimize. A better approach wouldn't be a big-data "throw everything we can at it and hope it spits out useful results" system, but something more methodical: encode our knowledge from experts to give it a head start, plus robust reasoning strategies and logic to let it improve on that starting point as it seeks out and adds relevant data, in ways similar to how we do science and engineering.
I believe it's a race between an AI that can truly outcompete us and societal collapse, because the real reason AI is more difficult to stop than those other three is how easy it is to hide development. The massive data centers are only required to scale the current approach up for the whole world to use. AI research and development can be done on home PCs, especially if you're more interested in results than speed (in which case you aren't limited by cores or memory, just by storage and time).
Eh, it's the illusion of speed. Scaling brought enormous returns from GPT-3 -> GPT-4, but it's been far less significant for every major release since. To compensate, every research lab is coming up with new ways to extract value out of models: CoT, RL, agent harnesses, etc.
However, these are all hacks to make LLMs more efficient or to (try to) make them more reliable. They still have significant drawbacks, and it will take years (probably decades) to ever get them to the point where they can reliably replace knowledge workers. China knows this and is taking a far different approach to LLM development (not a tankie, fyi). Scaling is a horrible idea that will burn billions of dollars with an astronomically low chance of return.
Yeah, while I have some doubts, I believe that LLMs have fundamental issues that will always hold them back. The doubts come because Claude Code seems like they’ve built a system where they are effective at giving it a good context, and it has relatively quickly solved some annoying obscure issues with my environment that I was unable to make any progress on my own with and other LLMs were also useless for.
I still think it’s a series of patches/bandaids to cover up those flaws, but my doubt comes in the form of “what if those patches can get it to average human level or even skilled”. I don’t think LLMs can get to the true innovator level like Einstein and Tesla, but doing competent work is well below that level and at this point I think LLMs might be able to get there.
And I think other approaches could do even better. Not that I know what they are, but just based on the assumption that we haven’t found the ideal approach in the still infancy of what AI could be.
Edit: Funny enough but the current/recent advancements seem to be aimed at eliminating the job of “prompt expert” first.
1 and 3 could easily make a boatload of money, and could allow rich people to “live forever” and edit themselves in the process.
I just want to say the hair in your “blank” profile picture got me
kudos
Firing and rehiring at a lower wage. That is, if they're motivated to continue producing functional products. It's clear that at this point many aren't. So maybe this point is moot.
And then there's antichiral bacteria, where the entire scientific community will shoot you if you even breathe wrong adjacent to the idea.
As someone who has family that died from mad cow (a prion disease), fuck everything about that. The fact that there are prion-tainted spaces out in the wild is terrifying enough.
ooo, that’s a fun concept to think of. yeah, grab your go bag.
What’s that and what do you mean by breathing wrong at the idea? Is someone trying to breed some sort of supervillain bacteria?
Others have already answered, but yeah, it's a bit of a Pandora's box. We almost certainly wouldn't be able to contain it, and there's no way of knowing what it would do to the world or even universe. It's some supremely scary shit.
Almost every organic molecule has a mirrored counterpart, like a normal screw and a left-handed screw.
Almost none of them occur in nature.
We now have the technology to synthesize them, and to synthesize a bacterium out of them.
But if you do that and the bacterium escapes, all your existing medicine will be useless, so you'd need to re-synthesize all your antibiotics in the left-handed configuration.
That typically doesn't happen with regular bacteria experiments, because most of what you can synthesize in the lab will be a descendant of some other well-known bacterium, which already has an appropriate medicine to treat it, and in most cases that medicine will be effective against your new strain.
Though wouldn’t that incompatibility go both ways? Current drugs and antibodies wouldn’t work with them but wouldn’t they use the mirrored proteins for energy and functioning, thus our bodies would be of no use to them?
I've been wondering whether bio-compatibility means one doesn't have a chance against the other, or whether it's more like separate worlds that can only interact at a high level (like via the senses) but not at a lower level (sharing infections, food, and other biological processes).
In theory yes. In practice no one wants to try it.
Maybe?
Worth risking life as we know it just to find out, for shiggles?
The truth is, there will be somewhere that they outcompete native fauna for resources but can’t be stopped by what controls the natives, and whoops, there goes the ecosystem.
I think it would be important to know in the context of space exploration, assuming we can solve the other very hard problems standing in the way of a Star Trek future (though I’m not holding my breath lol), we’d need to know if we should stay the fuck away from any planets we find with life or if we can make contact without potentially dooming both our planet and theirs to potentially returning to the single-celled life stage.
But yeah, it is likely a real-world Pandora's box.
ooooohh it’s so dangerous and capable ooohhhhh please we need to be regulated ooooooo we’re not releasing it to the public it’s so dangerous ooooooo
No idea what you're on about. Mythos is a GAME CHANGER. Completely DESTROYS software security. That's why we're going to SAVE THE WORLD by letting our corporate sponsors use it.
If they regulate something they don’t have (AGI), they (corps) can steal it from the small shop that creates it 30 years from now. insert head tapping meme here
None are paused tho. They might say they are
And the beauty of this stance is that it’s literally impossible to disprove, so you never have to be wrong.
Of course the problem is that you, as the claimant, have the burden of proof.
I'll do you one better by just using logic. There is no more work needed for blinding lasers; you can pick up a battery-powered IR setup for a few hundred dollars and strap it to a rifle, done. Recomb DNA actually is still being studied; allow me to gesture very broadly at ALL the shit we do with yeast, and I dated a girl working with M. maydis on treating breast cancer.
Translation from bafflegab to English: “I have no evidence and thus cannot cite it.”
I think you’re confused, because not only am I agreeing with you, but none of the things I said should be confusing to anyone with a > 10th grade education.
ZDL is confused. Human cloning? Easy, done. Not used, but well understood. You explained the lasers. Recombinant DNA? That is the basis of all current biotech outside of mRNA and CRISPR (which is also recombinant DNA, just very focused).
I dunno what you’re trying to argue. They accused me of using needlessly confusing language, that was what I was referring to.
I think you’re confused, SomethingSnappy is agreeing with you.
They just used normal words tho? This comment seems to be telling on yourself more than anything.
Of course the problem is that you, as the claimant, have the burden of proof.
People who say this kind of thing about claims regarding government or industry-level activities have no clue about security classifications.
How are you supposed to provide proof for something that is being deliberately withheld from the public?
You’re not. The problem isn’t just that they’re not supplying proof. It’s that they’re making assertions without supplying proof.
It is the pairing that is toxic.
The person to whom I responded said this flatly:
None are paused tho.
That is a bold, positive claim made with a very certain voice. And precisely because there is no way to verify this, it is impossible to prove or disprove. Which places it fully in the realm of unsupported speculation.
Had there been some form of tempering, clearly identifying it as opinion or speculation, then I’d have no problem with it.
LOL, they probably haven’t paused any of it, though. I mean like they’d tell us if they were!
There’s an example. See the difference?
I’ve got a blinding laser in my CD burner.
Allegedly, a Dr in China was already creating designer babies, and recombinant DNA products exist (and therefore, the research to create those products is being done.) Hell, I’ve done my own recombinant DNA experiments in my bio labs during college.
If AI can only be done in such secrecy that it’s impossible to disprove then I’d call that a win.
Yeah literally just let us have some peace and quiet before we’re suddenly turned into paper clips

Best incremental game btw
Wasn't the original claim that they were paused? Why is it always the claim that you disagree with that has the burden of proof, not the original claim?
I’m not a particular fan of AI but I’m not naive enough to believe that research would stop just because everyone claimed it had.
The kind of “AI” involved here (LLMbeciles and other such degenerative AI forms) are difficult to do in secret given, you know, the massive server racks with an extreme thirst for power and water they involve…
The world is full of data centres; who's to say what their purpose is? How are you going to verify other nations' compliance?
There is no scenario in which this genie is getting put back in the bottle
There is no answering the conspiracy mindset.
By which I mean the American one.
The files basically confessed
Bingo!
How is “human cloning” a) a real technology b) a bigger danger than the 8 billion fucking morons already here c) different from twins and triplets?
We can clone a sheep and even nearly bring species back from extinction via cloning. That is vastly more advanced than what just cloning a person would take; as for the other factors, it's mostly a matter of ethics, what with the potential for cloning celebrities for stupid reasons or making a sapient clone just to harvest its organs, which, as an aside, wasn't that a Sliders episode?
It was probably a Sliders episode, since 90s off-brand scifi did pretty much everything The Twilight Zone failed to, but it definitely was an entire movie.
making a sapient clone just to harvest their organs
A clone just makes a genetically identical baby, though, and they are shorter-lived. Dolly only lived half as long as the sheep she was a clone of, before she died of old age.
Unless you wanted to wait 15-20 years for organs that might, on average, last 15, cloning isn't practical.
I assume if we’re able to clone an entire person we’d also be able to clone the individual organs needed.
I'm assuming we can solve the telomere issue for this. Frankly, though, it seems to be a stalled-out field, at least until we can figure out how to better use stem cells.
But yeah if you are in your 20s or even 50s making a clone baby of yourself and waiting 20 years would be technically viable to get a new set of organs. Which is more what I’m referring to, especially since creating a healthy body you can rip apart would basically require letting it live a relatively healthy life.
- Genetically modified embryos were made by a lab in China for a wealthy client.
- The technology is not accurate; other modifications could lead to genetic diseases.
- Twins and triplets are not modified.
The only thing dangerous about AI is people believing the hype and thinking it can actually think and do things it can't do at all. LLMs, Flock cameras, etc. are just MENACE matchbox computers at their core. And it's dangerous that governments and CEOs are just blindly relying on whatever crap they pump out without human supervision.
But! What if a computer could reproduce all the same phenomena as a brain?
Do we have any reason to think this might be the case? Not really. But. We also (maybe) have no reason to think this isn’t the case. What else are we gonna spend trillions gambling on? An ecosystem capable of supporting mammals? Don’t make me laugh!
It's dangerous to our future, our ecosystem, and our way of life even if the stupid things can't think.
I agree that the amount of water and electricity these AI centers gobble up is a concern. But I don’t know what you mean by our way of life. Personally I think it’s very useful when judiciously used. It’s dangerous if NASA haphazardly tosses AI generated code into the OS for a rocket going on a moon mission. But to quickly generate a meme or YT thumbnail is harmless.