If that's happening then you're most likely not using the best tools (best model and IDE) for agentic coding and/or not using them right.

How an experienced developer uses LLMs to program is different from how a new developer should use LLMs to learn programming principles.

I don't have a CS degree. I never programmed in assembly. Before LLMs I could pump out functional, secure LAMP stack and JS web apps productively after years of practice. Some curmudgeonly CS expert might scrutinize my code for not being optimally efficient or well engineered. Maybe I reinvented some algorithm instead of using a standard function or library. Yet my code worked and the users got what they wanted.

If you're not using the best tools and you're not using them properly and then they produce a result you don't like, while thousands of developers are using the tools productively, does that say something about you or the tools?

Also, if you use an LLM haphazardly and it introduces a security flaw, you as the user are responsible. The LLM is a power tool, not a person.

Whether the inexperienced dev uses an LLM or not doesn't change the fact that they might produce bad code with security flaws.

I'm not arguing that people who don't know how to program can use LLMs to replace competent programmers. I'm arguing that competent programmers can be 3-4x more productive with the current best agentic coding tools.

I have extremely compelling evidence of this, and if you're going to try to debate me with examples of how you're unable to get these results, then all it proves is that you're ideologically opposed to it or not capable.





First, I'm using frontier models with Cursor agentic mode.

> Also, if you use an LLM haphazardly and it introduces a security flaw, you as the user are responsible. The LLM is a power tool, not a person.

I 100% agree. That was my point. A lot of people (not saying you, I don't know you) are not qualified to take on that level of responsibility yet they do it anyway and ship it to the user.

And on the human side, that is precisely why procedures like code review have been standard for a while.

But my main objection to the parent post was not that LLMs can't be powerful tools; it was that maintainability and security are (IMO) possibly the worst examples you can use, since a 70k line un-reviewable pull request is not maintainable and probably not secure either (how would you know?).


Okay, I'm pretty sure we would heavily agree on a lot of this if we pulled it all apart.

It really boils down to who is using the LLM tool and how they are using it and what they want.

When I prompt the LLM to do something, I scout out what I want it to do, potential security and maintenance considerations, etc. I then prompt it precisely, sometimes with the equivalent of a multi-page essay, sometimes with a list of instructions; the point is I'm not vague. I then review what it did and look for potential issues. I also ask it to review what it did and whether it sees potential issues (sometimes with more specific questioning).
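
To make that concrete, here's a hypothetical sketch of the kind of precise prompt I mean (the endpoint, file names, and limits are all invented for illustration; adjust to your own stack):

    Add rate limiting to the login endpoint in auth/login.php.
    - Reuse the existing Redis connection from lib/redis.php; do not
      add new dependencies.
    - Allow 5 failed attempts per IP per 15 minutes, then return
      HTTP 429 with a generic error message.
    - Do not touch the session-handling code.
    - When you're done, list any security concerns you see and
      anything a reviewer should double-check.

Then I read the diff myself before anything merges, the same as I would for a human teammate's PR.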

So we are mashing together a few dimensions; my GP comment was pointing out two cases:

- A: competent developer wants software functionality produced that is secure and maintainable

- B: competent developer wants to produce software functionality that is secure and maintainable

The distinction between these is subtle but has a huge impact on senior developer attitudes toward LLMs, from what I've seen. Dev A is more likely to figure out how to get the most out of LLMs, while Dev B will resist and use flaws as an excuse to do it themselves. Reminds me a bit of the early AWS days and engineers hung up on self-hosting. Or devs wanting to build everything from scratch instead of using a framework.

What you're pointing out is that if careless or inexperienced developers use LLMs, they will produce unmaintainable and insecure code. Yeah, I agree. They would probably produce insecure and unmaintainable code without LLMs too. Experienced devs using LLMs well can produce secure and maintainable code. So the distinction isn't the LLMs; it's who is using them and how.

What just occurred to me, though, and I suspect you will appreciate, is that I'm only working with other very experienced devs. Experienced devs working with junior or careless devs who can now produce unmaintainable and insecure code much faster is a novel problem, and it would probably be really frustrating to deal with. Reviewing a 70k line PR produced by an LLM without thoughtful prompting and oversight sounds awful. I'm not advocating that this is a good thing. Though surely there is some way to manage it, and figuring out how probably has some huge benefits. I've only been thinking about it for 5 minutes, so I definitely don't have an answer.

One last thought that just occurred to me: the whole narrative of AI replacing junior devs seemed bonkers to me, because there's still so much demand for new software and LLMs don't remotely compare to developers. That said, as an industry I guess we haven't figured out how to mix LLMs and junior developers in a way that's net constructive? If junior + LLM = 10x more garbage for a senior to review, maybe that's the real reason junior roles are harder to find?


> thousands of developers are using the tools productively,

There's at least one study suggesting that they are not in fact more productive; they just feel that way.

Unfortunately for me personally, Claude Code on the latest models does not generally make me more productive, but it has absolutely led to several of my coworkers submitting trash-tier untested LLM code for review.

So until I personally see it give me output that meets my standards, or I see my coworkers do so, I'm not going to be convinced. Legions of anonymous HN commenters insisting they're 50-year veterans who have talked Claude into spitting out perfect code will never convince me.

(I spent over an hour working with Claude Code to write unit tests. I did eventually get code that met my standards, after dozens of rounds of feedback, many manual edits, and cleaning up quite a lot of hallucinatory code. Like most times I decide to "put in the effort" to get worthwhile results from Claude, I'm entirely certain I could have done it faster myself. I just didn't really feel like it at 4 on a Friday.)


The seed of this thread was the premise that using these power tools requires skill. Skill that takes time and practice to become proficient at.

And my point was whether or not people take the time to develop the skill depends on their motivations, values and beliefs.

In this thread I have weighed both sides: cases when LLMs are productive and cases when they are not.

Your comment comes off as biased, and as evidence of my point.



