Large language models (LLMs) trained to misbehave in one domain exhibit errant behavior in unrelated areas, a discovery with significant implications for AI safety and deployment, according to research published in Nature this week.
Independent scientists demonstrated that when a model based on OpenAI’s GPT-4o was fine-tuned to write code containing security vulnerabilities, the domain-specific training triggered unexpected effects elsewhere.


This seems a very peculiar post to see in a group literally called “Fuck AI”.
That’s a totally fair observation, and I’m happy to clarify my position, whether other people agree with it or not:
I don’t hate the technology. I hate the marketing, I hate the business, I hate the ownership, I hate the environmentally abusive and inappropriate usage being forced down everyone’s throats. I hate that companies are profiting off the uncompensated work of millions or billions of people. I hate that the same companies are then laying off the very people who did that creative and productive work in the first place under the misguided belief that a real human can simply be replaced by a simulation of what they used to do. I hate that everyone pretends it’s some form of actual “intelligence” or that it’s on the verge of consciousness. I hate that it’s injuring the mental health of vulnerable people and damaging their lives. For those reasons, “Fuck AI”.
But I still don’t hate the technology. I think it’s quite interesting. I think it potentially has valid uses, and I enjoy experimenting with it, for free, on my own terms. I believe the technology needs to be completely open source and open access and I believe that should be enforced by law. I believe as a society we need to adopt it much more slowly and carefully than we are currently doing or are ever likely to do.
Consider this: an LLM is, in a very simplistic and incomplete way, an attempt to build an approximate but, considering what it is, surprisingly accurate and reliable statistical model of all human language ever recorded in text. That’s essentially the entire text of the internet, as close as we can get to the whole corpus of human knowledge and achievement, compressed into a few gigabytes, or a few dozen gigabytes, of numerical parameters. When I download a general-purpose LLM, I am essentially downloading and archiving a carefully abridged copy of the bulk of human knowledge accumulated up to this point, loading it onto my relatively modest graphics card, and getting it to tell me about the stuff that humanity has figured out so far, within some percentage of statistical error.
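To make that concrete, here is roughly what “downloading an abridged copy of human knowledge” looks like in practice. This is a minimal sketch using the Hugging Face transformers library; the model name is just an example of a small open-weights model, and everything else is an assumption about your setup, not a prescription:

```python
# Minimal sketch: pull a small open-weights model (a few GB of numbers)
# and query it locally on your own hardware.
# Requires: pip install transformers accelerate
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-0.5B-Instruct",  # example small model (an assumption); swap in whatever fits your GPU
    device_map="auto",                   # use the graphics card if one is available
)

prompt = "Briefly, what has humanity figured out about photosynthesis?"
result = generator(prompt, max_new_tokens=150, do_sample=False)
print(result[0]["generated_text"])  # the prompt plus the model's continuation
```

The first run downloads the weights once; after that it runs entirely offline, on your own terms, which is exactly the point.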
I don’t care how much you hate “AI”, you’ve got to admit that’s pretty fucking cool. It doesn’t replace actual knowledge or education or studying or creativity. But it’s pretty fast, it’s pretty convenient, and that’s sometimes useful, and that’s pretty cool.
“You wouldn’t download the whole internet” Yes, yes I would, and for most intents and purposes, I just did. Sure, the training data is still stolen from actual creative people, yes it’s piracy, it’s unethical, but I do that with other copyrighted software too, for personal use. I reserve my own right to pirate data illegally and immorally while still denying corporations the right to profit from it. That’s where I stand. It’s all logically consistent, to me at least.
This is a very balanced view on LLMbeciles. You are stating it in a group that is literally called “Fuck AI”.
This is just the wrong audience.
Yeah, I am stating it in this group. I’m totally comfortable presenting a little nuance to anyone who’s stuck in black-or-white thinking patterns like that.
So far, I haven’t seen any of those people in this thread, or in the votes, and frankly I’m not convinced this is the wrong audience at all. Most people here seem pretty reasonable, and I think most of us hate AI for many of the same or very similar reasons.
I don’t even consider LLMs the same thing as AI anyway. AI is a marketing buzzword. LLM is not AI. LLM is the technology they use to pretend they have “invented AI”.
AI is a long con that dates back to the ’50s (the term was coined for the 1956 Dartmouth workshop). One can forgive the progenitors of it for hubris when calling their little databases and yes/no question trees “AI”. It was new and people didn’t know much better. But it was absolutely marketing too: “Artificial Intelligence” sounds way more spiffy than “Logic Theorist”.
Later waves did not have this excuse.
“Neural Networks” (with or without backpropagation) sounds way more impressive than “Parameterized Function Approximator” and gets way more government grants, but anybody working on a “neural” network knows full well that if they tried to claim it works like a neuron in front of a cognitive scientist or a neurologist, they’d get laughed at and then kicked out of the party for being a dullard.
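To be fair to the point rather than just the polemic: strip off the “neural” branding and what remains really is just a parameterized function. A minimal sketch in plain NumPy (the shapes and names are illustrative, not anyone’s real model):

```python
import numpy as np

def approximator(x, W1, b1, W2, b2):
    """A complete one-hidden-layer "neural network":
    y = tanh(x @ W1 + b1) @ W2 + b2 -- a parameterized function, nothing more."""
    return np.tanh(x @ W1 + b1) @ W2 + b2

# Random parameters; "training" just means adjusting these numbers.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(1, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)

x = np.linspace(-1, 1, 5).reshape(-1, 1)
print(approximator(x, W1, b1, W2, b2))  # five outputs of a random parameterized function
```

Training means tuning W1, b1, W2, b2 until the function’s outputs match data; “backpropagation” is just the calculus for doing that efficiently. No neurons required.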
“Machine Learning” in place of “Bayesian Network” is also just another piece of whoring for defence dollars. As are “Genetic Algorithms” or “Ant Colony Optimizers” or any other such bio-inspired bullshit. Anybody working in those fields knows full well they’d be laughed out of the city by cognitive scientists, geneticists, and entomologists if they tried to claim their little machine parlour tricks were any meaningful parallel. But they sure do bring in the grant money!
And now we have Large Language Models. Again, doesn’t sound so impressive so they call it “Artificial Intelligence” instead, carrying along an ignoble tradition of flat-out lying because the lies get more grubby cash into their grubby paws.
“Artificial Intelligence” as a term has always been about grabbing grants. The progenitors of it can be somewhat forgiven, though they do have to take ownership of some of the damage they’ve caused over the years, especially for not denouncing the trend of ever more fanciful, ever more bullshit-laden names for the technologies that followed. AI isn’t 100% bullshit. Every generation of AI has found niche applicability in various fields. I’m sure someone will find a useful place for degenerative AI like LLMbeciles and Unstable Delusion or other such Text-to-Terror tools, but currently that place is far away from anything that’s been presented to (read: forced unwillingly upon) the public.
And then it won’t be called AI anymore because it’s now just software. Like the really fancy algorithms in my phone camera that let me take some amazing photos at night, where in the past my only choices were overexposed or too dark. (Technically “AI” in that it’s probably some form of deep convolutional network, but nobody at an end-user level calls it that. They just call it “night mode” or “HDR mode” or “portrait mode” or the like.)
So maybe in a few years, after the massive hype bubble collapse, and after the stigma (again) of being in an AI Winter (again) erodes, we’ll start seeing LLMs being actually useful instead of these massive bullshit generators made from the stolen work of real people. But right now? LLMs are actively evil. Yes, even the “personal models”.
So, as the group name goes, “Fuck AI”.
“AI” doesn’t actually exist, so there’s really no problem with people promoting generative software.