
Interesting. I'm not sure I'd use the term "Platonic" here, because that tends to have implications of mathematical perfection / timelessness / etc. But I do think that the corpuses of human language that we've been feeding to these models contain within them a lot of real information about the objective world (in a statistical, context-dependent way as opposed to a mathematically precise one), and the AIs are surfacing this information.

To put this another way, I think that you can say that much of our own intelligence as humans is embedded in the sum total of the language that we have produced. So the intelligence of LLMs is really our own intelligence reflected back at us (with all the potential for mistakes and biases that we ourselves contain).

Edit: I fed Claude this paper, and "he" pointed out to me that there are several examples of humans developing accurate conceptions of things they could never experience based on language alone. Most readers here are likely familiar with Helen Keller, who became an accomplished thinker and writer in spite of being blind and deaf from infancy (Anne Sullivan taught her language despite great difficulty, and this became Keller's main window to the world). You could also look at the story of Eşref Armağan, a Turkish painter who was blind from birth – he creates recognizable depictions of a world that he learned about through language and non-visual senses.


