I was not clear enough: I wanted to write a PRO-AI blog post. People against AI always say negative things, with "AI is hyped and overhyped" as their central argument. So, for fun, I consider the anti-AI movement itself a form of hype. It's a joke, but not in the sense that it doesn't mean what it says.
However, as you point out, anti-AI people are pushing back against hype, not indulging in hype themselves - not least as nobody is trying to sell 'not-AI'.
I for one look forward to the next AI winter, which I hope will be long, deep, and savage.
And I'm sure if you go back to the release of 3.5, you'll see the exact same comments.
And when 5 comes out, I'm sure I'll see you commenting "OK, I agree it was bad 6 months ago, but now with Claude 5 Opus it's great".
It's really the weirdest type of goalpost moving.
I have used Opus 4.5 a lot lately and it's garbage, absolutely useless for anything beyond generating trivial shit for which I'd use a library anyway, or which is already integrated in the framework I use.
I think the real reason your opinion has changed in 6 months is that your skills have atrophied.
It's all as bad as it was 6 months ago, and even as bad as 2 years ago; you've just become worse.
> Not from people whose opinions on that I respect.
Then you shouldn't respect Antirez's opinion, because he wrote articles saying just that 2 years ago.
> If you think LLMs today are "as bad as 2 years ago" then I don't respect your opinion. That's not a credible thing to say.
You are getting fooled by longer context windows and better tooling around the LLMs. The models themselves have definitely not gotten better. In fact it's easy to test: just give the exact same prompt to 3.5 and 4.5 and you'll receive the exact same answer.
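If you actually want to run that comparison yourself, something like the sketch below works with the Anthropic Python SDK. Treat it as a rough illustration, not a benchmark: the model identifiers are placeholders (substitute whatever your account actually exposes), and it assumes an ANTHROPIC_API_KEY in the environment.

```python
# Sketch: send the same prompt to two Claude versions and eyeball the replies.
# Assumes the Anthropic Python SDK and ANTHROPIC_API_KEY set in the environment.
# The model names below are placeholders; substitute the identifiers
# your account actually has access to.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

PROMPT = "Write a function that parses an ISO 8601 date string in plain C."
MODELS = ["claude-3-5-sonnet-20241022", "claude-opus-4-5"]  # placeholder IDs

for model in MODELS:
    response = client.messages.create(
        model=model,
        max_tokens=1024,
        messages=[{"role": "user", "content": PROMPT}],
    )
    print(f"--- {model} ---")
    print(response.content[0].text)
```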
The only difference is that when you used to copy-paste answers from the ChatGPT UI, you now have it integrated in your IDE (with the added bonus of it being able to empty your wallet much quicker). It's a faster process, not a better one. I'd even argue it's worse, since you spend less time reviewing the LLM's answer in this situation.
How do you explain that it's so easy to tell (in a bad way) when a PR is AI-generated if it's not necessary to code by hand anymore?
> Despite the large interest in agents that can code alone, right now you can maximize your impact as a software developer by using LLMs in an explicit way, staying in the loop.
There are too many people who see the absurd AI hype (especially absurd in terms of investment) and construct from it a counter-argument that AI is useless, overblown, and just generally not good. And that's a fallacy. Two things can be true at the same time: coding agents are a step change and immensely useful, and the valuations and breathless AGI evangelizing are a smoke screen and pure hype.
Don't let the hype deter you from getting your own hands dirty and trying shit.