
You'd think the whole "LLMs can't reason in concepts" meme would have died already. LLMs are practically concepts incarnate; this has been demonstrated experimentally in many ways, not least by interpretability work that identifies specific concepts inside a model's activations and then suppresses or amplifies them during inference.
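
For the curious, here's a minimal sketch of what that suppression/amplification can look like in practice, in the style of activation steering (not any particular paper's exact method). It assumes PyTorch and Hugging Face Transformers with GPT-2; the concept direction here is a random stand-in, whereas real work derives it from contrastive prompts or interpretability probes:

  # Minimal sketch of activation steering: nudging a "concept direction"
  # in the residual stream of one transformer block during inference.
  import torch
  from transformers import GPT2LMHeadModel, GPT2Tokenizer

  tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
  model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

  hidden_size = model.config.n_embd   # 768 for gpt2
  layer_idx = 6                       # which block to steer (arbitrary choice)
  strength = 4.0                      # > 0 amplifies the concept, < 0 suppresses it

  # Hypothetical concept direction; real work extracts this from activations.
  concept_dir = torch.randn(hidden_size)
  concept_dir = concept_dir / concept_dir.norm()

  def steer(module, inputs, output):
      # GPT-2 blocks return a tuple whose first element is the hidden states.
      hidden = output[0]
      hidden = hidden + strength * concept_dir.to(hidden.dtype)
      return (hidden,) + output[1:]

  handle = model.transformer.h[layer_idx].register_forward_hook(steer)

  prompt = "The most important thing about bridges is"
  ids = tokenizer(prompt, return_tensors="pt").input_ids
  with torch.no_grad():
      out = model.generate(ids, max_new_tokens=30, do_sample=False,
                           pad_token_id=tokenizer.eos_token_id)
  print(tokenizer.decode(out[0]))

  handle.remove()  # stop steering

The point isn't this particular recipe; it's that concepts are concrete enough objects inside these models that you can grab one and turn a dial on it.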

The article also repeats some arguments that are superficially true but don't stand up to scrutiny. That Naur point ("Programming as Theory Building"), which is a meme at this point, is often repeated as if it were deeply insightful - yet what's forgotten is another fundamental, practical rule of software engineering: any nontrivial program quickly exceeds anyone's ability to hold a full theory of it in their head. We almost never work with a proper theory of the program; programming languages, techniques, methodologies, and tools all evolve toward letting people work effectively without understanding most of the code. We actually share the same limitation as LLMs here; we're just better at managing it, because we don't have to wait for anyone to let us run another inference loop before we can take a different perspective.

Etc.


