My understanding is most of you are anti AI? My only question is…why? It is the coolest invention since the Internet, and it is remarkable how closely it can resemble actual consciousness. No joke, the AI in sci-fi movies is worse than what we actually have in many ways!
Don’t get me wrong, I am absolutely anti “AI baked into my operating system and cell phone so that it can monitor me and sell me crap”. If that’s what being Anti AI is…to that I say Amen.
But simply disliking any privacy-conscious use of AI at all? That I’m not getting.
How can I put this? Enormous power in unskilled and greedy hands only leads to collapse. And AI is a control tool, not the assistant you think it is. I’m not even getting into how it kills the living soul of things and makes life empty and dead. To me personally, it is a serious threat. I’d advise you not to be too optimistic; we are not living in some kind of utopia, you know?
It’s a glorified parakeet
“All of your issues with ai go away if capitalism goes away”
Word. Clearly, capitalism drives the world economy, so…
Current AI is absolutely not better than sci-fi AI, not by a long shot.
I do think LLMs are interesting and have neat potential for highly specific applications, though I also have many ethical concerns regarding the data they are trained on. “AI” is a corporate buzzword designed to attract investment, and nothing more.
What you’re calling AI is a mass-marketing Ponzi scheme. LLMs are not even actual AI. Beyond that issue, its development is in the hands of capital exclusively, and it will only exist to serve capital interests, which come at the expense of the lower and working classes by necessity, given what corporations (which are essentially unregulated in the current climate) are designed to do. What you’re calling AI will only be used to hurt human lives and worsen living conditions for all of us (before you nitpick, I think enabling the 0.1% and their hoarding pathology hurts them too). I personally believe you’re already aware of that and are cynically trolling, and despite that I’m giving you the honest truth and factual reality of this subject, because there is nothing good about being a techno-fetishist sociopath who thinks the answer to humanity’s problems is to make humanity itself obsolete, even if it’s ‘cool’. You clearly got the wrong fucking message from Terminator.
This is why, when actual AI emerges, I can only hope it’ll be in the hands of a public or collective development process, designed with progression and cooperation in mind.
Oh man you’re damn right.
I will preface this with my usual disclaimer on such topics: I do not believe in intellectual property (that is, the likening of thought to physical possessions). I do not think remixing is a sin and I largely agree with the Electronic Frontier Foundation’s take that “AI training” may largely be fair use. So, I don’t think so-called “generative AI” is inherently evil, however in practice I think it is very often used for evil today.
The most obvious example is, of course, the threat to the work force. “AI” is pitched as a tool that can replace human workers and “wipe out entire categories of human jobs.” Ethical issues aside, “AI” as it exists today is not capable of doing what its evangelists sell it as. “AI chat bots” do not know, but they can give off a very convincing impression of knowledge.
“AI” is also used as a tool to pollute the web with worse-than-worthless garbage. At best it is meaningless and at worst it is actively harmful. I would actually say machine generated text is worse than imagery here, because it feels almost impossible to do a web search without running into some LLM generated blog spam.
Creators of “AI” systems use scraper bots to collect data for training. I do not necessarily believe this is evil per se, but again - these bots are not well behaved. They cause real problems for real human users, far beyond “stealing jpegs.” There is a sense of Silicon Valley entitlement here - we can do whatever we want and deal with the consequences later, or never.
I have long held that a tool, like any human creation, is imbued with the values and will of its creators, and thus must serve both the creator and the user as its masters (the software freedom movement is largely an attempt at reconciling these interests, by empowering users with the ability to change their tools to do their bidding). In the case of “generative AI” it is very often the case that both the creators and users of these tools intend them for evil. We often make the mistake of attributing agency to these computer programs, so as to minimize the human element (perhaps, in order to create a “man vs machine” narrative). We speak of “AI” as if it just woke up one day, a la Skynet, in order to steal our jpegs and put us out of work and generate mountains of webslurry. Make no mistake, however: the problems with “AI” are human problems. Humans created these systems for other humans to use, in order to inflict harm on other humans. “AI slop” was created specifically for an environment in which human-generated slop already ran amok, because the web as it existed then (as it exists today) rewards the generation of slop.
Oh, I’m afraid this is just the beginning. It will only get worse, because as you know, we live in the last stage of capitalism. And that means maximizing profits at any cost. At first I still hoped things wouldn’t be so bad, but 2023-2024 opened my eyes and I realized that AI is more of a threat than a useful tool.
There was a lawyer recently who used a chatbot to lodge a motion in court. It gave him all sorts of case law citations. The problem? None of the cases were real.
It’s not smart. It’s a theft engine that averages information and confidently speaks hallucinations, insisting they’re fact. AI sucks. It won’t ever be AGI because it doesn’t “think”; it runs models and averages. It’s autocomplete at huge scale. It burns the earth and produces absolute garbage.
The only models doing anything good are ones where averaging large datasets happened to suit a specific case, like scanning millions of cancer images for patterns.
This does not work for deterministic answers. The “AI” we have now is corporate bullshit that they’re desperate to make money from, and it’s a massive investor-hype machine.
Stop believing the shitty CEOs.
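A toy sketch of the “autocomplete at huge scale” point (entirely illustrative and my own; real LLMs learn neural-network weights over tokens rather than counting word pairs, but the predict-the-next-word loop has the same basic shape):

```python
from collections import Counter, defaultdict

# Count, for every word in a tiny "training set", which word follows it.
corpus = "the cat sat on the mat the cat ate the rat".split()
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def autocomplete(word, steps=4):
    """Greedily extend a prompt with the most frequent next word."""
    out = [word]
    for _ in range(steps):
        options = follows.get(out[-1])
        if not options:
            break  # never saw this word followed by anything; stop
        out.append(options.most_common(1)[0][0])
    return " ".join(out)
```

Calling `autocomplete("the")` dutifully parrots the statistically likely continuation of its training text; nothing in there knows, checks, or understands anything.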
Valid question on a community for questions. Tons of legitimate responses from people mostly hyped for the opportunity to shed light on why they think AI is bad. Which seems to be what OP wanted to figure out. Currently negative 25 for the votes on the post. Seems off.
ignoring the hate-brigade, lemmy users are probably a bit more tech savvy on average.
and i think many people who know how “AI” works under the hood are frustrated because, unlike most of its loud proponents, they have a real-world understanding of what it actually is.
and they’re tired of being told they “don’t get it”, by people who actually don’t get it. but instead they’re the ones being drowned out by the hype train.
and the thing fueling the hype train is dishonest greedy people, eager to over-extend the grift at the expense of responsible and well engineered “AI”.
but, and this is the real crux of it, keeping the amazing true potential of “AI” technology in the hands of the rich & powerful. rather than using it to liberate society.
lemmy users are probably a bit more tech savvy on average.
Second this.
but, and this is the real crux of it, keeping the amazing true potential of “AI” technology in the hands of the rich & powerful. rather than using it to liberate society.
Leaving public interests (data and everything around data) in the hands of the top 1% is a recipe for disaster.
Lemmy loves artists who have their income threatened by AI because AI can make what they make at a substantially lower cost with an acceptable quality in a fraction of the time.
AI depends on being trained on the artistic works of others, essentially intellectual and artistic property theft, so that you can make an image of a fat anime JD Vance. Calling it plagiarism is a bit far, but it edges so hard that it leaks onto the balls and could cum with a soft breeze.
AI consumes massive amounts of energy, which is supplied through climate hostile means.
AI threatens to take countless office jobs, which are some of the better paying jobs in metropolitan areas where most people can’t afford to live.
AI is a party trick; it is not comparable to a human or an advanced AI. It is unimaginative, not creative like an actual AI would be. Calling the current state of AI anything like an advanced AI is like calling paint-by-numbers the result of artistry. It can rhyme, it can imitate, but it can never be original.
I think that about sums it up.
The less tech-savvy of lemmy
Acceptable quality is a bit of a stretch in many cases… Especially with the hallucinations everywhere in generated text.
Most of that gets solved with an altered prompt or trying again.
That is less of an issue as time goes on. Just a couple of years ago the number of fingers and limbs was a roll of the dice; now it’s only the random words in the background that look alien.
AI is getting so much money dumped into it that it is progressing at a very rapid pace. An all-AI movie is just around the corner, and while it will have a style that says AI, it could easily be mistaken for a conventional film production with a particular style.
Once AI porn gets there, AI has won media.
Eh, I at least partially disagree. I’ve noticed some of the modern models (such as Claude 4.0) have started to hallucinate more than previous models. I know you’re talking about image generation, but still. I can’t quite put my finger on it, but maybe it’s cause the models are beginning to consume their own slop.
https://cdn.openai.com/pdf/2221c875-02dc-4789-800b-e7758f3722c1/o3-and-o4-mini-system-card.pdf
OpenAI, May 2025: in their internal tests, the newer the model, the higher the hallucination rate.
maybe it’s cause the models are beginning to consume their own slop.
That’s going to be a huge issue indeed, because synthetic data contains bias, and it’s been shown to produce biased models.
Fairly stated
Lemmy loves artists who have their income threatened by AI because AI can make what they make at a substantially lower cost with an acceptable quality in a fraction of the time.
AI depends on being trained on the artistic works of others, essentially intellectual and artistic property theft, so that you can make an image of a fat anime JD Vance. Calling it plagiarism is a bit far, but it edges so hard that it leaks onto the balls and could cum with a soft breeze.
While I mostly agree with all your arguments, I think the ‘intellectual property’ part is, from my perspective, a bit ambivalent on Lemmy. When someone uses an AI that is trained on pirated art to create a meme, that’s seen as a sin. Meanwhile, using regular artists’ or photographers’ work in memes without paying the author is really common. More or less every news article comes with a link to Archive.is to bypass paywalls, and there are also special communities dedicated to (digital) piracy which are far more popular than AI content.
I’m not saying that you are wrong or that piracy is great, but when pirating media or creating memes, you can pinpoint the specific artist who created the original piece, which therefore acts as a bit of an ad for the creator (not necessarily a good one). But with AI it’s for the most part not possible to say exactly who it took “inspiration” from, which in my opinion makes it worse. In other words: a viral meme can benefit the artist, while AI slop does not.
It is unimaginative
Can you make an example of something 100% original that was not inspired by anything that came before?
That’s not what imaginative means.
If you’d like an example of AI being exceptionally boring to look at, though, peruse any rule 34 site that has had its catalogue overrun with AI spam: an endless sea of images that all have the same artstyle, the same color choices, the same perspective, the same poses, the same personality; a flipbook of allegedly different characters that all. look. fucking. identical.
I’m not joking: I was once so bored by the AI garbage presented to me, I actually just stopped jerking off.
If you people would do something interesting with your novelty toy, I would be like 10% less mad about it.
Ironically you just said that artists are wrong to be concerned.
The threat of AI is not that it will be more human than human. It is that it will become so ubiquitous that real people are hard to find.
I couldn’t find many real people.
Are you sure that I’m real?
also because it’s just a way for big tech to harvest your data while stealing content from creators and destroying the planet
also because instead of actually innovating anymore, tech companies just jam AI slop into everything
Regarding the destruction of the planet, I think the world of Blade Runner is a great example of that future. Or is there a better one?
AI isn’t inherently a bad thing. My issues are primarily with how it is used, how it is trained, and the resources it consumes. I also have concerns about it being a speculative bubble.
I myself despise capitalism, and would not like to see the current global ecological disaster worsen because of some stupid-ass techbros forcing their shit on everyone.
I think AI is cool, but how people use it can be problematic.
- Fraud. It’s easy to over-represent the capabilities and sell a bullshit tool to people.
- Spyware. They require shedloads of data to train, so AI companies are doing whatever they can to get data on people.
- Taking jobs. This is an existential threat to entire professions.
- Spam. LLMs are a bullshit factory, so spamming and astroturfing are easier than ever.
So many places I could start when answering this question. I guess I’ll just pick one.
It’s a bubble. The hype is ridiculous. There’s plenty of that hype in your post. The claims are that it’ll revolutionize… well basically everything, really. Obsolete human coders. Be your personal secretary. Do your job for you.
Make no mistake. These narratives are being pushed for the personal benefit of a very few people at the expense of you and virtually everyone else. Nvidia and OpenAI and Google and IBM and so on are using this to make a quick buck. Just like they capitalized on (and encouraged) a bubble back around the turn of the millennium that we now look back on with embarrassment.
In reality, the only thing AI is really effective as is a gimmicky “toy” that entertains as long as the novelty hasn’t worn thin. There’s very little real-world application. LLMs are too unreliable at getting facts straight and not making up BS to be trusted for any real-world use case. Image-generating “AI”s like Stable Diffusion produce output (and by “produce output” I mean rip off artists) that all has a similar, fakey appearance with major, obvious errors which generally instantly identify it as low-effort “slop”. Any big company that claims to be using AI in any serious capacity is lying either to you or themselves. (Possibly both.)
And there’s no reason to think it’s going to get better at anything, “AI industry” hype notwithstanding. ChatGPT is not a step in the direction of general AI. It’s a distraction from any real progress in that direction.
There’s a word for selling something based on false promises. “Scam.” It’s all to hoodwink people into giving them money.
And it’s convincing dumbass bosses who don’t know any better. Our jobs are at risk. Not because AI can do your job just as well or better. But because your company’s CEO is too stupid not to fall for the scam. By the time the CEO gets removed by the board for gross incompetence, it’ll be too late for you. You will have already lost your job by then.
Or maybe your CEO knows full well AI can’t replace people and is using “AI” as a pretense to lay you off and replace you with someone they don’t have to pay as much.
Now before you come back with all kinds of claims about all the really real real-world applications of AI, understand that that’s probably self-deception and/or hype you’ve gotten from AI grifters.
Finally, let me back up a bit. I took a course in college, probably back in 2006 or so, called “introduction to artificial intelligence”. In that course, I learned about, among other things, the “A* algorithm”. If you’ve ever played a video game where an NPC or enemy followed your character, the A* algorithm or some slight variation on it was probably at play. The A* algorithm is completely unlike LLMs, “generative AI”, and whatever other buzzwords the AI grifting industry has come up with lately. It doesn’t involve training anything on large data sets. It doesn’t require a powerful GPU. When it gives a particular output, you can examine the algorithm to understand exactly why it did what it did, unlike LLMs, which produce answers that can’t be traced back to whatever training data went into producing that particular response. The A* algorithm has been known and well-understood since 1968.
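For the curious, here is a minimal sketch of that kind of “AI” (my own toy grid pathfinder; the function names and grid encoding are purely illustrative, not from any particular game):

```python
import heapq

def a_star(grid, start, goal):
    """Shortest path on a 2D grid (0 = open, 1 = wall) via A*.

    Ranks candidates by f = g + h: cost so far plus a Manhattan-distance
    heuristic. Every step is inspectable; no training data, no GPU.
    """
    rows, cols = len(grid), len(grid[0])

    def h(node):  # Manhattan distance: admissible, never overestimates
        return abs(node[0] - goal[0]) + abs(node[1] - goal[1])

    open_heap = [(h(start), 0, start)]  # entries are (f, g, node)
    came_from = {}
    best_g = {start: 0}

    while open_heap:
        _, g, node = heapq.heappop(open_heap)
        if node == goal:
            path = [node]  # walk parent links back to the start
            while node in came_from:
                node = came_from[node]
                path.append(node)
            return path[::-1]
        if g > best_g.get(node, float("inf")):
            continue  # stale heap entry; a cheaper route was found
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    came_from[(nr, nc)] = node
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None  # goal unreachable
```

Run it on a tiny map with a wall across the middle, e.g. `a_star([[0,0,0],[1,1,0],[0,0,0]], (0,0), (2,0))`, and it returns a route snaking around the gap on the right. Crucially, you can trace exactly why each node was chosen, which is the whole point of the comparison above.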
That kind of “AI” is fine. It’s provably correct and has utility. Basically, it’s not a scam. It’s the shit that people pretend is the next step on the path to making a Commander Data – or the shit that people trust blindly when its output shows up at the top of their Google search results – that needs to die in a fire. And the sooner the better.
But then again, blockchain is still plaguing us after like 16 years. So I don’t really have a lot of hope that enough average people are going to wise up and see the AI scam for what it really is any time soon.
The future is bleak.