

It’s one study. The methodology looks fairly sturdy, but as far as I can tell it hasn’t been peer-reviewed. They also only looked at established software projects, nothing new. So the scope is narrow, and it doesn’t prove that so-called AI can’t enhance productivity at all. It just indicates that pro devs can be fooled into thinking they’re better off with it when they’re not. But that’s hardly news in these mad times.
I feel like the Atari 2600 is quickly becoming for so-called AI what the “how much is a gallon of milk?” gotcha question became for politicians running for office: a rather pointless bit of news.
As Scotty said: the right tool for the right job. An LLM is maybe not a chess engine, and that’s fine too. Why would we expect these models to be Magnus effing Carlsen when they can’t reliably summarize an email and occasionally recommend eating pebbles?