You don’t even need to be an experienced developer to see how bad AI-generated code is. AI just hallucinates too much. I tried asking ChatGPT to write a fairly simple shell script a few times, and it added non-existent commands or references EACH time.
Honestly, I think AI has other fields of application. It’s useful when you’re doing something yourself and just need a rough approximation. Trying to replace humans with it, however, is a sign of misunderstanding the technology. AI shouldn’t be called Artificial Intelligence in the first place (damn good marketing, though). It’s more like a way of automating statistics.
I’m always seeing this claim:
Yet literally every formal study on this says exactly the opposite.
All coding, bad or good, looks like gibberish to me, so I have no way of judging LLM output myself.
Except by results.
And that’s where those formal studies come in: the results are catastrophically bad.
Most humans can’t code, though, so I think this is technically true (and a weird flex). There’s no way AI is better than human coders.