
> they produce shoddy and broken code

We must have dramatically different approaches to writing code with LLMs. I would never implement AI-written code that I can't understand or prove works immediately. Are people letting LLMs write entire controllers or modules and then just crossing their fingers?



Yes. The vast majority of developers are working in feature factories where they pick a Jira ticket off the top of the queue, probably with a timeboxed amount of work attached. Their goal is to close the ticket within that timebox by getting the build to go green and the PM to accept "Yep, that feature was implemented." If badly written LLM code gets the build to go green and the feature accepted, whatever: Jira ticket closed, paycheck collected. Any downstream problems are tomorrow's problem, once the tech debt piles up high enough and Jira tickets to fix it get written.


> > they produce shoddy and broken code

> We must have dramatically different approaches to writing code with LLMs.

I’ve seen this same conversation occur on HN every day for the past year and a half. Help! I think I’m stuck in an LLM conversation where it keeps repeating itself and is unable to move on to the next point.


In my experience: Yes.

Doing security reviews for this content can be a real nightmare.

To be fair though, I have no issue with using LLM-created code, with one caveat: YOU MUST UNDERSTAND IT. If you don’t understand it well enough to review it, you’re effectively copying and pasting from Stack Overflow.


At least with Stack Overflow there's upvotes and comments to give me some confidence (sometimes too much confidence). With LLMs I start hyper-skeptical and remain hyper-skeptical - there's really no way to develop confidence in it because the mistakes can be so random and dissimilar to the errors we're used to parsing in human-generated content.

Having said that, LLMs have saved me a ton of time, caught my dumb errors and typos, helped me improve code performance (especially database queries), and even clued me in to some better code-writing conventions and updated syntax that I hadn't been using.


Also, with most code copied and pasted from Stack Overflow, you can Google the suspicious snippet, find the original question, read over the question and comments, somewhat grok the decision, and maybe even find a fix in the comments.

Most AI code doesn't come with its prompts, and even when it does, there is no guarantee the same prompt will produce the same output. So it's like reading human-written code, except the human can't explain themselves even when you have access to them.


In my experience, fixing code generated by AI is often more work than writing it myself the right way.

And even if you understand the code, that doesn't mean it is maintainable code.




