I will find this often-repeated argument compelling only when someone can prove to me that the human mind works in a way that isn't 'combining stuff it learned in the past'.
5 years ago a typical argument against AGI was that computers would never be able to think because "real thinking" involved mastery of language, which was clearly beyond anything computers would ever be able to do. The implication was that there was some magic sauce in human brains that couldn't be replicated in silicon (by us). That 'facility with language' argument has clearly fallen apart over the last 3 years and been replaced with what appears to be a different magic sauce, composed of the phrase 'not really thinking' and the whole 'just repeating what it's heard' / 'parrot' argument.
I don't think LLMs think or will reach AGI through scaling, and I'm skeptical we're particularly close to AGI in any form. But I feel like it's a matter of incremental steps. There isn't some magic chasm that needs to be crossed. When we get there, I think we will look back and see that 'legitimately thinking' wasn't anything magic. We'll look at AGI and instead of saying "isn't it amazing computers can do this" we'll say "wow, is that all there is to thinking like a human?"
> 5 years ago a typical argument against AGI was that computers would never be able to think because "real thinking" involved mastery of language, which was clearly beyond anything computers would ever be able to do.
Mastery of words is thinking? By that line of argument, computers have been able to think for decades.
Humans don't think only in words. Our context, memory, and thoughts are processed in ways we still don't understand.
There's a lot of great information out there describing this [0][1]. Continuing to believe these tools are thinking, however, is dangerous. I gather it has something to do with the logic involved: you can't see the process, and it's non-deterministic, so it feels like thinking. ELIZA tricked people. LLMs are no different.
That's the crazy thing. Yes, in fact, it turns out that language encodes and embodies reasoning. All you have to do is pile up enough of it in a high-dimensional space, use gradient descent to model its original structure, and add some feedback in the form of RL. At that point, reasoning is just a database problem, which we currently attack with attention.
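To make the 'database problem' framing concrete, here's a minimal sketch of scaled dot-product attention as a soft key-value lookup. This is my own toy illustration of the general mechanism, not any particular model's code; the vectors and the `attention` helper are made up for the example:

```python
import numpy as np

def attention(q, K, V):
    """Soft lookup: return a similarity-weighted blend of the values in V."""
    scores = K @ q / np.sqrt(q.shape[-1])    # how well each stored key matches the query
    weights = np.exp(scores - scores.max())  # softmax turns scores into a distribution
    weights /= weights.sum()
    return weights @ V                       # retrieval is a weighted average, not an exact match

# Toy "database": three stored (key, value) pairs.
K = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
V = np.array([[10.0], [20.0], [30.0]])
q = np.array([1.0, 0.9])   # a query closest to the third key
print(attention(q, K, V))  # weights the third value most heavily
```

Unlike an exact-match database, the lookup is fuzzy: every value contributes in proportion to how well its key matches the query, which is what lets the mechanism generalize instead of merely retrieve.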
No one had the faintest clue. Even now, many people not only don't understand what just happened, but they don't think anything happened at all.
There's no such thing as people without language, except for infants and those who are so mentally incapacitated that the answer is self-evidently "No, they cannot."
Language is the substrate of reason. It doesn't need to be spoken or written, but it's a necessary and (as it turns out) sufficient component of thought.
There are quite a few studies that refute this highly ignorant comment. I'd suggest some reading [0].
From the abstract:
"Is thought possible without language? Individuals with global aphasia, who have almost no ability to understand or produce language, provide a powerful opportunity to find out. Astonishingly, despite their near-total loss of language, these individuals are nonetheless able to add and subtract, solve logic problems, think about another person’s thoughts, appreciate music, and successfully navigate their environments. Further, neuroimaging studies show that healthy adults strongly engage the brain’s language areas when they understand a sentence, but not when they perform other nonlinguistic tasks like arithmetic, storing information in working memory, inhibiting prepotent responses, or listening to music. Taken together, these two complementary lines of evidence provide a clear answer to the classic question: many aspects of thought engage distinct brain regions from, and do not depend on, language."
The resources that the brain is using to think -- whatever resources those are -- are language-based. Otherwise there would be no way to communicate with the test subjects. "Language" doesn't just imply written and spoken text, as these researchers seem to assume.
There’s linguistic evidence that, while language influences thought, it does not determine thought - see the failure of the strong Sapir-Whorf hypothesis. This is one of the most widely studied and robust linguistic results - we actually know for a fact that language does not determine or define thought.
How's the replication rate in that field? Last I heard it was below 50%.
How can you think without tokens of some sort? That's half of the question the linguists have to answer. The other half is: if language isn't necessary for reasoning, what is?
We now know that a conceptually simple machine absolutely can reason with nothing but language as input for pretraining and subsequent reinforcement. We didn't know that before. The linguists (and the fMRI soothsayers) predicted none of this.
Read about linguistic history and make up your own mind, I guess. Or don’t, I don’t care. You’re dismissing a series of highly robust scientific results because they fail to validate your beliefs, which is highly irrational. I'm no longer interested in engaging with you.
I've read plenty of linguistics work on a lay basis. It explains little and predicts even less, so it hasn't exactly encouraged me to delve further into the field. That said, linguistics really has nothing to do with arguments with the Moon-landing deniers in this thread, who are the people you should really be targeting with your advocacy of rationality.
In other words, when I (seem to) dismiss an entire field of study, it's because it doesn't work, not because it does work and I just don't like the results.
> ELIZA, ROFL. How'd ELIZA do at the IMO last year?
What's funny is the failure to grasp any contextual framing of ELIZA. When it came out, people were impressed by its reasoning, its responses. And by your line of defense, it could think because it had mastery of words!
But fast forward 30 years from now. You will have been of the same camp that argued on behalf of ELIZA, back when the rest of the world was asking, in confusion: how did people ever think ChatGPT could think?
No one was impressed with ELIZA's "reasoning" except for a few non-specialist test subjects recruited from the general population. Admittedly it was disturbing to see how strongly some of those people latched onto it.
Meanwhile, you didn't answer my question. How'd ELIZA do on the IMO? If you know a way to achieve gold-medal performance at top-level math and programming competitions without thinking, I for one am all ears.