
I don’t really think one needs to define intelligence to acknowledge that an inability to distinguish fact from fiction — or even basic awareness of when it’s uncertain, telling the truth, or lying — is a glaring flaw in any claim of intelligence. Real intelligence doesn’t have an effective stroke upon seeing a username (glitch-token artifacts from training); that’s a moment where you peel back the curtain on the underlying implementation and see its flaws.

If we measure intelligence purely by results, then my calculator is intelligent because it can do math better than me; but that’s just what it was programmed/wired to do. A text predictor is intelligent at predicting text, but that doesn’t make it general intelligence. It lacks any real comprehension of the world around it. It just knows words, and



I hit send too early; I meant to say that it just knows words, and that’s effectively it.

It’s cool technology, but the burden of proof for real intelligence shouldn’t be “can it answer questions on topics it has great swaths of training data about,” because that’s the exact result it was designed to produce.

It should focus on whether it can truly synthesize information and know its own limitations. Any programmer using Claude, Copilot, Gemini, etc. will tell you that these models fabricate false information, APIs, and so on regularly — with no fundamental awareness that they did so.

Or alternatively, ask these models leading questions with no basis in reality and watch what they come up with. It’s become a fun meme in some circles to ask models for definitions of nonsensical made-up phrases and see what nonsense they produce — again, without ever knowing that they’re doing it.



