
It's an interesting philosophical question.

Imagine an oracle that could judge, with human-level intelligence, how relevant a given memory or piece of information is to any given situation, and that could verbosely describe in which way it's relevant (spatially, conditionally, etc.).

Would such an oracle, sufficiently parallelized, be enough for AGI? If so, we could genuinely describe its output as "context," and phrase our problem as "there is still a gap in needed context, despite how much context there already is."

And an LLM that simply "shortens" that context could reach a level of AGI, because the context preparation would be doing the heavy lifting.
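
Concretely, that pipeline might look something like the rough sketch below. Everything here is a stand-in: relevance_oracle, build_context, and llm_shorten are hypothetical names, and the "oracle" is faked with word overlap just to show the shape of the idea.

    from concurrent.futures import ThreadPoolExecutor

    def relevance_oracle(memory: str, situation: str) -> tuple[int, str]:
        # Stand-in for the hypothetical oracle: score how relevant a memory is
        # to the situation and describe *how* it is relevant.
        shared = set(memory.lower().split()) & set(situation.lower().split())
        return len(shared), f"shares terms {sorted(shared)} with the situation"

    def build_context(memories: list[str], situation: str, top_k: int = 3) -> str:
        # Judge every memory in parallel, keep the most relevant ones,
        # and emit them (with the oracle's explanation) as the context.
        with ThreadPoolExecutor() as pool:
            judged = list(pool.map(lambda m: (m, *relevance_oracle(m, situation)), memories))
        judged.sort(key=lambda item: item[1], reverse=True)
        return "\n".join(f"[{why}] {memory}" for memory, _, why in judged[:top_k])

    def llm_shorten(context: str, question: str) -> str:
        # Stand-in for the LLM: it can only compress/recombine what is in the context.
        return f"Answer to {question!r}, drawn only from:\n{context}"

    memories = [
        "the CI build breaks on Python 3.12",
        "lunch is at noon on Fridays",
        "the tests mock out the network layer",
    ]
    print(llm_shorten(build_context(memories, "why does CI fail?"), "why does CI fail?"))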

The point I think the article is trying to make is that LLMs cannot add any information beyond the context they are given - they can only "shorten" that context.

If the lived experience necessary for human-level judgment could be encoded into that context, though... that would be an entirely different ball game.



I agree with the thrust of your argument.

IMO we already have the technology for sufficient parallelization of smaller models with specific bits of context. The real issue is that models have weak/inconsistent/myopic judgement abilities, even with reasoning loops.

For instance, if I ask Cursor to fix the code for a broken test and the fix is non-trivial, it will often misdiagnose the problem almost instantly, hyper-focus on what it imagines the problem is without further confirmation, implement a "fix", get a different error message while breaking more tests than it "fixed" (if it changed the result for any tests at all), and then declare the problem solved, simply because it moved the goalposts at the start by misdiagnosing the issue.
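
What I want from the loop, roughly, is for the harness to refuse the "fix" unless the originally failing test now passes and nothing else regressed. A minimal sketch of that check (pytest and the test paths are just placeholders):

    import subprocess

    def tests_pass(test_ids: list[str]) -> bool:
        # Run the named tests (pytest assumed here) and report overall success.
        return subprocess.run(["pytest", "-q", *test_ids]).returncode == 0

    def accept_fix(originally_failing: list[str], full_suite: list[str]) -> bool:
        # The fix only counts if the test that was broken now passes
        # AND the rest of the suite did not regress.
        return tests_pass(originally_failing) and tests_pass(full_suite)

    # e.g. accept_fix(["tests/test_parser.py::test_unicode"], ["tests/"])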



