
It was a mistake from the beginning to use language as the basis for tokens, and embedding spaces between them to generate semantics. It wasn't thought out; it was trial and error that snowballed out of control.
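(For anyone unfamiliar with the mechanism being criticized here, a minimal sketch of the idea, with made-up vectors that don't come from any real model: tokens map to vectors, and geometric proximity in that space is treated as "meaning".)

    # Hypothetical toy example: token embeddings and cosine similarity.
    # The vectors below are invented for illustration only.
    import numpy as np

    embeddings = {
        "king":  np.array([0.9, 0.80, 0.10, 0.00]),
        "queen": np.array([0.9, 0.75, 0.15, 0.05]),
        "apple": np.array([0.1, 0.00, 0.90, 0.80]),
    }

    def cosine(a, b):
        # Similarity = cosine of the angle between two embedding vectors.
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    print(cosine(embeddings["king"], embeddings["queen"]))  # high -> "semantically close"
    print(cosine(embeddings["king"], embeddings["apple"]))  # low  -> "semantically distant"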


Lol ok. We'll wait for your game-changing technology, keep us posted.


Action patterns in syntax? They already exist; the binary approach chose to forgo emulating that level in favor of arbitrary words, as arbitrary symbols, predicted and geometrically arranged in space as "meaning", as tokens.

I'd suggest comp sci grabbed the low-hanging fruit: whatever comes out of the keyboard as a basis. Not too smart.


OP has a point. Are these types of embeddings really the best way to model thought?


Certainly not the best, just the most practical / commercially sellable. And once a pattern like LLM text embedding is established as the way to "AGI", it takes years for other, more realistic approaches to gain funding again. Gary Marcus has written extensively about how legitimate AGI research is being set back years by the superficial AGI hype around LLMs.


I think the best way would have been to assume thought is wordless (as the science tells us now), and that images and probability (as symbols) are still arbitrary. That was the threshold to cross. Neither neurosymbolic nor neuromorphic approaches get there. Nor will any "world model" achieve anything, as models are arbitrary.

The lineage from cybernetics to information theory to cognitive science to computer science offered an increasingly limited set of tools to employ for intelligence.

Cybernetics should have been ported expansively to neuroscience, then neurobiology, then something broader like ecological psychology or coordination dynamics. Instead of expanding, comp sci became too reductive.

The idea that a reductive system, one where anyone with a little math training could A/B test vast swaths of information gleaned from existing forms, could unlock highly evolved processes like thinking, reasoning, and action, and that this defines a path to intelligence, is quite strange. It defies scientific analysis. Intelligence is incredibly dense in biology: a vastly hidden, parallel process in which removing one faculty (like the emotions) makes the intelligence vanish into zombiehood.

Had we looked at that evidence, we'd have understood that language/tokens/embedding space couldn't possibly be a composite for all that parallel processing.


No one knows, especially not parent with his one-liner quip.


Thought is wordless; it's made in action-spatial syntax. As these are the defined states of intelligence, they would have been a far better thing to emulate. Words and images are the equivalent of junk code; semantics can't be specifically extracted from them.


It's the best way we've found so far?



