When was the last time you tried? GPT-5 Thinking is able to create 500 lines of code without a single error, repeatably, and add new features into it seamlessly too. Hours of work with older LLMs reduced to minutes; I really like how much it enables me to do with my limited spare time.
Same with “actual” engineering: the numbers were all correct the last few times. So things where it had to find a way to calculate something, figure out some assumptions, and then do the math! Sometimes it gets the context wrong, and since it pretty much never asks questions back, the result was absurd for me but somewhat correct for a different context. Really good stuff.
Really good until you stop double-checking it and it makes shit up. 🤦‍♂️
Go take your AI apologist bullshit and feed it to the corporate simps.
The good thing is that in code, if it makes shit up it simply does not work the way it is supposed to.
You can keep your hatred to yourself, let alone the bullshit you make up.
Until it leaves a security issue that isn’t immediately visible and your users get pwned.
Funny that you say “bullshit you make up”, when all LLMs do is hallucinate and sometimes, by coincidence, have a “correct” result.
I use them when I’m stumped or hit “writer’s block”, but I certainly wouldn’t have them produce 500 lines and then assume that just because it works, it must be good to go.
Calculations with bugs do not magically produce correct results and plot them correctly. Neither can such simple code change values that were read from a file or device. Etc.
I do not care what you program and how bugs can sneak in there. I use it for data analysis, simulations etc. with exactly zero security implications or generally interactions with anything outside the computer.
The hostility here against anyone using LLMs/AI is absurd.
I dislike LLMs, but the only two fucking things this place seems to agree on are that communism is good and AI is bad, basically.
Basically no one has a nuanced take; they’d rather demonize than have a reasonable discussion.
Honestly, Lemmy at this point is exactly the same as Reddit was a few years ago, before the mods and admins went full Nazi and started banning people for anything and everything.
At least here we can still actually voice both sides of the argument instead of one side getting banned.
People are people no matter where you go
Then why do you bring up code reviews and 500 lines of code? We were not talking about your “simulations” or whatever else you bring up here. We’re talking about you saying it can create 500 lines of code, and that it’s okay to ship it if it “just works” and have someone review your slop.
I have no idea what you’re trying to say with your first paragraph. Are you trying to say it’s impossible for it to coincidentally get a correct result? Because that’s literally all it can do. LLMs do not think, they do not reason, they do not understand. They are not capable of that. They are literally hallucinating all of the time, because that’s how they work. That’s why OpenAI had to admit that they are unable to stop hallucinations, because it’s impossible given that’s how LLMs work.
No one ever said push it to production without a code review.
That is EXACTLY what this mindset leads to; it doesn’t need to be said out loud.
“my coworkers should have to read the 500 lines of slop so I don’t have to”
That also implies that code reviews are always thoroughly scrutinized. They aren’t, and if a whole team is vibecoding everything, they especially aren’t. Since you’ve got this mentality, you’ve definitely got some security issues you don’t know about. Maybe go find and fix them?
If your QA process can let known security flaws into production, then you need to redesign your QA process.
Also, no one ever said that the person generating 500 lines of code isn’t reviewing it themselves.
I’ve come to realize that these crazed anti-AI people are just a product of history repeating itself. They would be the same leftists who were “anti-GMO”. When you dig into it, you understand that they’re against Monsanto, which is cool and good, but the whole thing is so conflated in their heads that you can’t discuss the merits of GMOs whatsoever, even though they’re purportedly progressive.
It’s a pattern; their heads are in the right place for the most part, but the logic goes a little haywire as they buy into hysteria. It’ll probably take a few years as the generations cycle.
Perhaps, yes.
Also did you adequately describe your problem? Treat it like a human who knows how to program, but has no idea what the fuck you’re talking about. Just like a human you have to sit it down and talk to it before you have it write code.
It gave you the wrong answer. One you called absurd. And then you said “Really good stuff.”
Not to get all dead internet, but are you an LLM?
I don’t understand how people think this is going to change the world. It’s like the C-suite folks think they can fire 90% of their company and just feed their half-baked ideas for making superhero sequels into an AI and sell us tickets to the poop that falls out, 15 fingers and all.
deleted by creator
So you physically read what I said and then just went with “my bias against LLMs was proven” and wrote this reply? At no point did you actually try to understand what I said? Sorry but are you an LLM?
But seriously. If you ask someone on the phone “is it raining” and the person says “not now but it did a moment ago”, do you think the person is a fucking idiot because obviously the sun has been and still is shining? Or perhaps the context is different (a different location)? Do you understand that now?
You seem upset by my comment, which I don’t understand at all. I’m sorry if I’ve offended you. I don’t have a bias against LLMs. They’re good at talking. Very convincing. I don’t need help creating text to communicate with people, though.
Since you mention that this is helping you in your free time, you might not be aware of how much less useful it is in a commercial setting for coding.
I’ll also note, since you mentioned it in your initial comment: LLMs don’t think. They can’t think. They never will think. That’s not what these things are designed to do, and there is no means by which they might start to think just because they get bigger or faster. Talking about AI systems like they are people makes them appear more capable than they are to those that don’t understand how they work.
Can you define “thinking”? This is such a broad statement with so many implications. We have no idea how our brains function.
I do not use this tool for talking. I use it for data analysis, simulations, MCU programming, … Instead of having to write all of that code myself, it only takes 5 minutes now.
Thinking is what humans do. We hold concepts in our working memory and use stored memories that are related to evaluate new data and determine a course of action.
LLMs predict the next word in a sentence based on a statistical model. This model is developed by “training” on written data, often scraped from the internet. That creates many biases in the statistical model. People on the internet do not take the time to answer “I don’t know” to questions they see. I see this as at least one source of what they call “hallucinations”: the model confidently answers incorrectly because that’s what it has seen in training.
The internet has many sites with reams of example code in many programming languages. If you are working on code of the same order of magnitude as those examples, then you are within the training data and the results will generally be good. Go outside of that training data and it just flounders. It isn’t capable of, and has no means of, reasoning beyond its internal statistical model.
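To make the “predict the next word” point concrete, the whole generation loop is basically repeated sampling from a learned probability distribution, along the lines of the toy sketch below. This is purely illustrative: `score_next_token` is a made-up stand-in for the trained network, not any real library’s API.

```python
import math
import random

def softmax(logits):
    """Turn raw scores into a probability distribution."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def generate(score_next_token, prompt_tokens, vocab, steps=20):
    """Toy next-token loop: score every candidate, sample one, repeat."""
    tokens = list(prompt_tokens)
    for _ in range(steps):
        # The (hypothetical) model scores each vocabulary token given the context so far...
        logits = [score_next_token(tokens, candidate) for candidate in vocab]
        probs = softmax(logits)
        # ...and the next token is drawn from that distribution. Nothing in this loop
        # checks whether the continuation is true, only whether it is statistically
        # likely, which is why confident-but-wrong output is always possible.
        tokens.append(random.choices(vocab, weights=probs, k=1)[0])
    return tokens
```

A real model scores subword tokens with a huge neural network instead of a hand-written scoring function, but the loop has the same shape: rank continuations by likelihood, never by truth.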
This isn’t even remotely true.
You should have asked your LLM about it before making such a ridiculous statement.