Heads up that this is "more true" for non-reasoning LLMs. Reasoning gives an LLM a lot more runway to respond to out-of-distribution inputs by devoting more compute to understanding the code it's changing, and to playing with ideas for how to change it before it commits.

