I think the argument is that you need to ask how you are measuring correctness when you say it seems more correct than anything that came before. You may just be describing the experience of swimming in the dominant paradigm.
Have you managed any kind of conversation with a clock before? Because you can actually have an intelligent conversation with an LLM. I think that's a pretty compelling case that it's not just swimming in the dominant paradigm.
> People thought they were having intelligent conversations with Eliza
Sure, but who has had a good conversation with a clock?
> people even have satisfying emotional conversations with teddy bears.
No, they don't. Kids know very well that they're pretending to have conversations. The people who actually hear teddy bears speak also know that that's not normal, and we all know that this is a cognitive distortion.
> we all know that this is a cognitive distortion.
This is also true of those who form emotional attachments with current AI. The people developing romantic relationships with Replika etc. aren't engaging in healthy behaviour.
One attempt at an answer could be "it allows us to make better predictions about the mind".
This article mentions excitement about neural networks overgeneralizing verb inflections, which human language learners also do. If neural networks lead to the discovery of new examples of human cognitive or perceptual errors or illusions, or to the discovery of new effective methods for learning, teaching, or psychotherapy, that could count as evidence that they're a good model of our actual minds.
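For a concrete sense of what that overgeneralization looks like, here's a toy sketch (not the actual Rumelhart and McClelland model; the verb lists, encoding, and exact behaviour are all made up for illustration): a small classifier trained on mostly regular verbs will often treat unseen irregulars as regular, the analogue of a child saying "goed" or "bringed".

```python
# Toy illustration only: a small net trained on a verb inventory
# dominated by regulars tends to default unseen verbs to "-ed",
# i.e. it overregularizes the way children do.
import numpy as np
from sklearn.neural_network import MLPClassifier

def encode(verb, maxlen=6):
    """Crude fixed-length one-hot character encoding of a verb stem."""
    vec = np.zeros(maxlen * 26)
    for i, ch in enumerate(verb[:maxlen]):
        vec[i * 26 + (ord(ch) - ord('a'))] = 1.0
    return vec

# 1 = takes regular "-ed" past tense, 0 = irregular.
# Regulars deliberately outnumber irregulars, as they do in English.
regular   = ["walk", "jump", "play", "look", "call", "want",
             "talk", "help", "open", "start", "climb", "laugh"]
irregular = ["sing", "ring", "go", "take"]

X = np.array([encode(v) for v in regular + irregular])
y = np.array([1] * len(regular) + [0] * len(irregular))

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
clf.fit(X, y)

# Unseen irregular verbs: the skewed net will often label them
# "regular", the analogue of producing "bringed" or "swimmed".
for v in ["bring", "swim", "drive"]:
    label = "regular (-ed)" if clf.predict([encode(v)])[0] else "irregular"
    print(f"{v}: predicted {label}")
```

Whether this kind of toy behaviour counts as evidence about human minds is exactly the question being argued here, of course; it only shows what the phenomenon is.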
> If neural networks lead to the discovery of new examples of human cognitive or perceptual errors or illusions,
How would they, except as tools for analyzing research rather than as research models themselves? They don't work like human brains. They might sometimes exhibit something that looks like similar behavior when viewed a certain way, but beyond having already observed the behavior in humans, there's no reason to expect something they do to reflect what human brains do. Nor is there reason to expect useful insights from the corresponding behavior, since there is no reason to expect the behavior to respond similarly outside the conditions where it is observed in both systems. That leaves all the insight about the brain to come from the brain itself, or from models that, unlike artificial neural nets, we know have useful structural and behavioral similarities with (some parts of) human brains.
If the article is talking about the neural network in McClelland and Rumelhart's Parallel Distributed Processing, there's actually a paper by Steven Pinker and Alan Prince drilling into it and finding that it doesn't model children's language acquisition nearly as closely or as well as M&R think it does.