Hallucinations in code are the least dangerous form of LLM mistakes (simonwillison.net)
17 points by OuterVale 10 months ago | 5 comments


As much as I've agreed with the author's other posts/takes, I find myself resisting this one:

> I'll finish this rant with a related observation: I keep seeing people say “if I have to review every line of code an LLM writes, it would have been faster to write it myself!”

> Those people are loudly declaring that they have under-invested in the crucial skills of reading, understanding and reviewing code written by other people.

No, that does not follow.

1. Reviewing depends on what you know about the expertise (and trustworthiness) of the person who wrote the code. Spending most of your day reviewing code written by familiar human co-workers is very different from spending that time reviewing anonymous contributions.

2. Reviews are not just about the code's mechanics; they also involve inferring and weighing the intent and approach of the writer. For LLMs, that ranges from non-existent to schizoid, and writing the code yourself skips that cost.

3. Motivation matters: for some developers, that means learning, understanding, and creating. Not wanting to do code reviews all day doesn't mean you're bad at them. Also, reviewing an LLM's code has no social aspect.

However you do it, somebody else should still be reviewing the change afterwards.


Your point about code from other humans having innate qualities that code from LLMs lacks is very solid.

A coworker has a reputation that they care about, and you can incorporate what you know about their coding style. Neither of those factors counts at all for LLMs.

I stand by my snarky comment though: if you're skilled at reading and reviewing code you're much less likely to complain that it would be faster to write it yourself.

It took me 15+ years of my own career to get good enough at reading code (even before LLMs) that I didn't prefer to write it myself!


Reading code by itself doesn't provide the same level of understanding as writing it yourself. You may think you understand it when you very well might not.

It is well known in pedagogy: you learn much more by solving a problem than by just reading its solution in a textbook.


It's exactly the same problem with human-written code. To me it seems like it's not an LLM problem; it's a lack-of-testing-and-review problem.


You have to make sure the machine is hypnotized correctly; otherwise it can hallucinate on you.



