It’s not that developers are switching to AI tools; it’s that Stack Overflow is awful and has been for a long time. The AI tools are simply providing a better alternative, which really demonstrates how awful Stack Overflow is, because the AI tools are not that good.
Ironically, they’re being closed as duplicates.
Undoubtedly. But you agree that the crowdsourced knowledge base of existing answers is useful, no? That is what the islop searches and reproduces. It is more convenient than waiting for a rude answer. But I don’t think islop will give you a good answer if someone hasn’t bothered to answer it before on SO.
islop is a convenience, but you should fear the day you lose the original and the only way to get that info is some opaque islop oracle
Most answers on SO are either from a doc page, are common patterns found in multiple books, or are mostly opinion-based. Most code AIs are significantly better at the first two without even being trained on SO (which I wouldn’t want anyway - SO really does suck nowadays).
Would you say the same about Wikipedia?
It’s all already been used to train AI at this point.
Will the AI still flame me if I ask the wrong question?
Is nothing sacred anymore?
Real talk though, it is concerning when it feels like 3/5 times you ask AI something, you’ll get a completely harebrained answer back. SO will probably need to clamp down on non-logged-in browsing and enforce API limits to make sure that AI trainers are paying for the data they need.
Depends on the model, I think Opus 4.5 is the only model that I’ve prompted which is getting close to not just being a boring sycophant.
What? People would rather have their balls licked by AI than have some neckbeard moderator change the entire language of their question and not answer shit? Fuck SO. That shit was so ass to interact with.
Every question has been answered, pack it up boys.
Who is going to ask there just to be harassed?
Respect to StackOverflow for not selling out to an AI training company despite being their biggest source of training data. But their moderation still sucks.
According to a Stack Overflow survey from 2025, 84 percent of developers now use or plan to use AI tools, up from 76 percent a year earlier. This rapid adoption partly explains the decline in forum activity.
As someone who participated in the survey, I’d recommend everyone take anything regarding SO’s recent surveys with a truckload of salt. The recent surveys have been unbelievably biased, with tons of leading questions that force you to answer in specific ways. They’re basically completely worthless in terms of statistics.
Realistically though, asking an LLM what’s wrong with my code is a lot faster than scrolling through 50 posts and reading the ones that talk about something almost relevant.
It’s even faster to ask your own armpit what’s wrong with your code, but that alone doesn’t mean you’re getting a good answer from it
If you get a good answer just 20% of the time, an LLM is a smart first choice. Your armpit can’t do that. And my experience is that it’s much better than 20%. Though it really depends a lot on the code base you’re working on.
Also depends on your level of expertise. If you have beginner questions, an LLM should give you the correct answer most of the time. If you’re an expert, your questions have no answers. Usually, it’s something like an obscure firmware bug edge case even the manufacturer isn’t aware of. Good luck troubleshooting that without writing your own drivers and libraries.
If you’re writing cutting edge shit, then LLM is probably at best a rubber duck for talking things through. Then there are tons of programmers where the job is to translate business requirements into bog standard code over and over and over.
Nothing about my job is novel except the contortions demanded by the customer — and whatever the current trendy JS framework is to try to beat it into a real language. But I am reasonably good at what I do, having done it for thirty years.
Yeah the internet seems to think coding is an expert thing when 99.9% of coders do exactly what you described. I do it, you do it, everybody does it. Even the people claiming to do big boy coding, when you really look at the details, they’re mostly slapping bog standard code on business needs.
Boring standard coding is exactly where you can actually let the LLM write the code. Manual intervention and review is still required, but at least you can speed up the process.
Code made up of several parts with inconsistent coding styles and design is going to FUCK YOU UP in the medium and long term, unless you never again have to touch that code.
It’s only faster if you’re doing small enough projects that an LLM can generate the whole thing in one go (so, almost certainly, not working as a professional at a level beyond junior) and it’s something you will never have to maintain (i.e. prototyping).
Using an LLM is like handing the work to a large pool of junior developers, where each time you give them work a random one picks up the task and you can’t actually teach them. Even when it works, what you get is riddled with bad practices and design errors that aren’t even consistent between tasks. So when you piece the software together, it’s from the very start the kind of spaghetti mess you see in a project with lots of years in production, maintained by lots of different people who didn’t even try to follow each other’s coding style. And since you can’t teach them things like coding standards or designing for extensibility, it will always be just as fucked up as day one.
Yeah, but in that edge case SO wouldn’t help either, even before the current crash. Unless you were lucky. I find LLMs useful to push me in the right direction when I’m stuck and documentation isn’t helping, not necessarily to give me perfectly written code. It’s like pair programming with someone who isn’t a coder but somehow has read all the documentation and programming books. Sometimes the left-field suggestions it makes are quite helpful.
I’ve found some interesting and even good new functions by moaning my code woes to an LLM. Also, it has taken me on some pointless wild goose chases too, so you better watch out. Any suggestion has the potential to be anywhere from absolutely brilliant to a completely stupid waste of time.
How do you know it’s a good answer? That requires prior knowledge that you might have. My juniors repeatedly demonstrate they’ve no ability to tell whether an LLM solution is a good one or not. It’s like copying from SO without reading the comments, which they quickly learn not to do because it doesn’t pass code review.
That’s exactly the question, right? LLMs aren’t a free skill up. They let you operate at your current level or maybe slightly above, but they let you iterate very quickly.
If you don’t know how to write good code then how can you know if the AI nailed it, if you need to tweak the prompt and try over, or if you just need to fix a couple of things by hand?
(Below is just skippable anecdotes)
Couple of years ago, one of my junior devs submitted code to fix a security problem that frankly neither of us understood well. New team, new code base. The code was well structured and well written but there were some curious artifacts, like there was a specific value being hard-coded to a DTO and it didn’t make sense to me that doing that was in any way security related.
So I quizzed him on it, and he quizzed the AI (we were remote so…) and insisted that this was correct. And when I asked for an explanation of why, it was just Gemini explaining that its hallucination was correct.
In the meantime, I looked into the issue and figured out that not only was the value incorrectly hardcoded into a model, but the fix didn’t work either, and I worked out a proper fix.
This was, by the way, on a government contract which required a public trust clearance to access the code — which he’d pasted into an unauthorized LLM.
So I let him know the AI was wrong, gave some hints as to what a solution would be, and told him he’d broken the law and I wouldn’t say anything but not to do that again. And so far as I could tell, he didn’t, because after that he continued to submit nothing weirder than standard junior level code.
But he would’ve merged that. Frankly, the incuriosity about the code he’d been handed was concerning. You don’t just accept code from a junior or an LLM that you don’t thoroughly understand. You have to reason about it and figure out what makes it a good solution.
Shit, a couple of years before that, before any LLMs, I had a brilliant developer (smarter than me, at least) push a code change through while I was out on vacation. It was a three-way dependency loop, A > B > C > A, that was challenging to reason about and frequently needed changes just to get running. Spring would sometimes fail to start because the requisite class couldn’t be constructed.
He was the only one on the team who understood how the code worked, and he had to fix that shit every time tests broke or any time we had to interact with the delicate ballet of interdependencies. I would never have let that code go through, but once it was in and working it was difficult to roll back and break the thing that was working.
Two months later I replaced the code and refactored every damn dependency. It was probably a dozen classes not counting unit tests — but they were by far the worst because of how everything was structured and needed to be structured. He was miserable the entire time. Lesson learned.
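The failure mode in that anecdote is the classic constructor-injection cycle. As a rough sketch (a toy resolver, not real Spring; `CycleDemo` and the bean names are made up for illustration), here’s why no bean in an A > B > C > A loop can ever be constructed: to build a bean you first need its dependencies, and the cycle means you eventually need a bean that is still mid-construction.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;
import java.util.Map;

public class CycleDemo {
    // Toy dependency graph mirroring the A > B > C > A loop.
    static final Map<String, List<String>> DEPS = Map.of(
            "A", List.of("B"),
            "B", List.of("C"),
            "C", List.of("A"));

    // Naive constructor-injection resolver: construct a bean's
    // dependencies before the bean itself. A cycle means we revisit
    // a bean that is still being built, so resolution can never finish.
    static void construct(String bean, Deque<String> inProgress) {
        if (inProgress.contains(bean)) {
            throw new IllegalStateException(
                    "circular dependency: " + inProgress + " -> " + bean);
        }
        inProgress.push(bean);
        for (String dep : DEPS.getOrDefault(bean, List.of())) {
            construct(dep, inProgress);
        }
        inProgress.pop();
    }

    public static void main(String[] args) {
        try {
            construct("A", new ArrayDeque<>());
        } catch (IllegalStateException e) {
            // e.g. "circular dependency: [C, B, A] -> A"
            System.out.println(e.getMessage());
        }
    }
}
```

Real DI containers hit the same wall: with pure constructor injection there is no order in which the loop can be built, which is why Spring sometimes only works around such cycles via setter injection or lazy proxies, and why the honest fix is breaking the cycle in the design, as the commenter eventually did.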
This is the big issue. LLMs are useful to me (to some degree) because I can tell when its answer is probably on the right track, and when it’s bullshit. And still I’ve occasionally wasted time following it in the wrong direction. People with less experience or more trust in LLMs are much more likely to fall into that trap.
LLMs offer benefits and risks. You need to learn how to use them.
Also depends on how you phrase the question to the LLM, and whether it has access to source files.
A web chat session can’t do a lot, but an interactive shell like Claude Code is amazing - if you know how to work it.
My armpits refuse to talk to me. I’ll take that as a sign that overflow errors are a feature, not bug.
I post there every 6-12 months in the hope of receiving some help or intelligent feedback, but usually just have my question locked or removed. The platform is an utter joke and has been for years. AI was not entirely the reason for its downfall imo.
Not common I’m sure, but I once had an answer I posted completely rewritten for grammar, punctuation, and capitalization. I felt so valued. /s
The last time I asked a question, I followed the formatting of a recent popular question/post. Someone did not like that and decided to impose their own formatting, then proceeded to dramatically change my posts and updates. Also, people kept giving me solutions to problems I never included in my question. The whole thing was ridiculous.
As a mod, this is all I ever did on the platform. Thanks for the appreciation!
haha I ran into this too, someone changed the title of my question on one of their non-programming boards - I was so pissed, I never went back to that particular board (it was especially annoying because it was a quite personal question)
I used to post and had the same thing happen. Then people would insult me for not knowing, like, “why do you think I’m asking?”
LLMs won’t be helping, but SE/SO have been fully enshittifying themselves for years.
It was amazing in the early days.
It was a vast improvement over expert sex change, which was the king before SO.
expertSEXchange dot com hahahahaahahahahahahahaha oh that brought back some dreadful memories! Thanks for the laugh and the chills
That and the url for Pen Is Mightier (penismightier.com) are my favorite examples of poor url choice in the early days of the internet.
How early though? I stopped using them about 12 years ago due to the toxic environment.
When it was just SO I think… if my memory serves. When it was small enough that only a (relative) few programmers were using it and generally behaving well.
It’s 17 years old, so probably only in the first 2 or 3 years. 2010 is when it got VC funding, and that’s probably when it started to go to crap.
This is not because AI is good at answering programming questions accurately, it’s because SO sucks. The graph shows its growth leveling off around 2014 and then starting the decline around 2016, which isn’t even temporally correlated with LLMs.
Sites like SO where experienced humans can give insightful answers to obscure programming questions are clearly still needed. Every time I ask AI a programming question about something obscure, it usually knows less than I do, and if I can’t find a post where another human had the same problem, I’m usually left to figure it out for myself.
2016 is probably when they removed freedom by introducing aggressive moderation to remove duplicates and ban people
It was a toxic garbage heap way before 2016. I remember creating an account to try building karma there back in about 2011 when doing that was seen as a good way to land senior job roles. Gave up very quickly.
deleted by creator
Yeah, because you either get a “how dumb are you?” or no answer at all.
Locking this comment. Duplicate of https://lemmy.world/comment/21433687
imho the experience is miserable. They went out of their way to strip all warmth from messages (they have a whole automated thing to get rid of greetings and anything considered superfluous), and there are many incentives to score points by answering, which frankly I find sad. It doesn’t look like a forum where people exchange; it looks like a permanent race to answer and grow your point total.
Stackexchange sites aren’t intended as forums, they’re supposed to be “places to find answers to questions”.
The more you get away from stack overflow itself the worse they get, though, because anything beyond “how can I fix this tech problem” doesn’t necessarily have an answer at all, much less a single best one
Honestly just funny to see. It makes perfect sense, based on how they made the site hostile to users.
I was contributing to SO in 2014-2017 when my job wanted our engineers to be more “visible” online.
I was in the top 3% and it made me realize how incredibly small the community was. I was probably answering like 5 questions a week. It wasn’t hard. For some perspective, I’m making like 4-5 posts on Lemmy A DAY.
What made me really pissed was how often a new person would give a really good answer, then some top 1% chucklefuck would literally take that answer, rewrite it, and then have it appear as the top answer. And that happened to me constantly. But again, I didn’t care since I’m just doing this to show my company I’m a “good lil engineer”.
I stopped participating because of how they treated new users. And around 2020(?), SO made a pledge to be not so douchy and actually allow new users to ask questions. But that 1% chucklefuck crew was still allowed to wave their dicks around and stomp on people’s answers. So yeah, less “Duplicate questions”, more “This has been answered already [link to their own answer that they stole]”.
So they removed the toxic attitude around asking questions, but not the toxicity around answering. SO still let the sweatiest people control responses, including editing/deleting them. And you can’t grow a community like that.
Reported for duplicate.
Even before AI, I stopped asking questions, or even answering them for that matter, within the first few months of using that website. It just wasn’t worth the hassle of dealing with the mods and the neckbeard-ass users, and I didn’t want my account suspended over some BS in case I really needed to ask an actual question in the future. Now I can’t remember the last time I’ve been to any Stack website, and it doesn’t show up in Google search results anymore. They dug their own grave.
The humans of StackOverflow have been pricks for so long. If they fixed that problem years ago they would have been in a great position with the advent of AI. They could’ve marketed themselves as a site for humans. But no, fuckfacepoweruser found an answer to a different question he believes answers your question so marked your question as a duplicate and fuckfacerubberstamper voted to close it in the queue without critically thinking about it.
If the alternative is the cesspit that is Yahoo Answers and Quora, I’ll take the heavy-handed moderation of StackOverflow.
You don’t think there’s any middle ground between the two? None whatsoever?
Of course there’s a middle ground, that’s much closer in my ideal world to StackOverflow than it is to Yahoo Answers or Quora.
Nobody here is suggesting for you to use Yahoo Answers.
I’m just using it as an example of what a Q&A site with inadequate moderation looks like. If you can’t see that then I don’t think we’re going to see eye to eye no matter how long this discussion continues.
Okay? But why? StackOverflow’s moderation is inadequate as well.
If Stack Overflow is a 3/10 then Quora is a 1/10 and Yahoo Answers is -5/10.
Well, no. If there were a middle ground, we’d all be using it.
Like Lemmy? The site we’re all using?
But no my point wasn’t about a specific site, it’s about the moderation approach. Do you really think there’s no middle ground in approach to moderation between Yahoo Answers and StackOverflow?
Like Lemmy? The site we’re all using?
Cute. Except Lemmy hasn’t helped me solve any programming problems. StackOverflow has.
And I think you missed my point, so I’ll restate it: If this theoretical middle-ground moderation were actually viable, it would have eaten StackOverflow’s lunch like a decade ago. People were SALTY about SO’s hostility even before the “summer of love” campaign in 2012.
It’s viable, StackExchange as a company is just shit. See: them never listening to meta, listening to random Twitter users more, and defaming their volunteer moderators.
Lemmy isn’t a Q&A application in the way that the others I mentioned are.
Like I said, I’m not talking about specific sites, I’m talking about moderation style.
I stopped using it once I found out their entire business model was basically copyright trolling on a technicality: anyone who answers a question gives them the copyright to the answer, and they use code audits to go after businesses that had copy/pasted code. It just left a bad taste in my mouth, so I stopped using it for work even though I wasn’t copy/pasting code.
And even before LLMs, I found that ignoring Stack Exchange results in a search usually still got me to the right information.
But yeah, it also had a moderation problem. Give people a hammer of power and some will go searching for nails, and now you don’t have anywhere to hang things from because the mod was dumber than the user they thought they needed to moderate. And now Google can’t figure out that my question is different from the supposed duplicate it was closed against, because it sends me to the closed one, not to the tangentially related question the dumbass mod thought was the same thing. Similar energy to people who go to help forums and reply with useless shit like RTFM. They aren’t really upset at “having” to take time to respond; they’re excited about a chance to act superior to someone.
I call it “comic book guy” syndrome. The desperate need to feel superior.
That last part is so true, some people are just miserable and want to spread that misery to others to make themselves feel better
Hear hear, it was the hostile atmosphere that pushed me away from Stack Exchange years before LLMs were a thing. That very clear impression that the site does not exist to help specific people but a vague public audience, and that every question and answer is subordinated to that. Since then I just ask/answer questions on platforms like Lemmy, Reddit, Discord, or the Discourse forums run by various organisations. It’s a much more pleasant experience.
The stupidest part is that their aggressive hostility against new questions means that the content is becoming dated. The answers to many, many questions will change as the tech evolves.
And since AI’s ability to answer tech questions depends heavily on a similar question being in the training dataset, all the AIs are going to increasingly give outdated answers.
They really have shot themselves in the foot for at best some short term gain.
This was my issue. The two times I posted real, actual questions that I needed help with, trying to provide as much detail as possible while admitting I didn’t understand the subject, I got clowned on, immediately downvoted into the negatives, and got no actual help whatsoever. Now I just hope someone else has had a similar issue.