As a senior dev I hate vibe coding. I can write code an order of magnitude faster than I can review it, because reviewing code forces you to piece together a mental model for something made by someone else, whereas when I write the code myself I get to start with the mental model already in my head.
Writing code is never the bottleneck for me. If I understand the problem well enough to write a prompt for an LLM, then I understand the problem well enough to write the code for it.
I’m a junior and even I feel the same way: reading and understanding someone else’s code not only takes me longer, it’s far less rewarding than just writing it myself. There’s also a junior-specific risk: if I read AI code that has issues I don’t notice or recognise, but that compiles fine, it can teach or reinforce poor practices that I then carry into my own work.
I understand how to turn the results of a select statement into an update statement, but the AI does it a hell of a lot faster.
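For example (a minimal sketch, assuming a hypothetical users table): the SELECT that finds the rows can be mechanically rewritten into the UPDATE that changes them, reusing the same WHERE clause:

```sql
-- Hypothetical example: a SELECT that finds the rows I care about
SELECT id, email, status
FROM users
WHERE last_login < DATE '2020-01-01'
  AND status = 'active';

-- ...rewritten as an UPDATE that reuses the same WHERE clause,
-- so it touches exactly the rows the SELECT returned
UPDATE users
SET status = 'inactive'
WHERE last_login < DATE '2020-01-01'
  AND status = 'active';
```

It’s a purely mechanical transformation, which is why handing it to the AI is faster than typing it out.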
I find that if you give it small enough chunks, it’s easy enough to review. And even when you do have to correct it, that’s generally easier than writing it all by hand.
Outside of my own specialty I can see people in the software industry bogged down managing excessive boilerplate. I think this happens most often in web dev and data science.
In my opinion this is an indication that the software tools for those ecosystems need improvement, but rather than putting in the design effort to improve the tools in the ecosystem, these Big Data companies see an opportunity to just throw LLMs at it and call it a commercial product.
> putting in the design effort to improve the tools in the ecosystem
They have. The problem is that the resulting tools generally cause as many problems as they solve. Adding another layer of software is often as harmful as it is helpful.
LLMs are nice in this regard: they don’t really add another layer, but they do take care of the excessive boilerplate, and boilerplate is exactly the kind of code that’s easy to understand and review.