
Why do critics of LLM intelligence need to provide a definition when people who believe LLMs are intelligent only take it on faith, not having such a definition of their own?


> Why do critics of LLM intelligence need to provide a definition when people who believe LLMs are intelligent only take it on faith, not having such a definition of their own?

Because advocates of LLMs don't use their alleged intelligence as a defense, but opponents of LLMs do use their alleged non-intelligence as an attack.

Really, whether or not the machine is "intelligent", by whatever definition, shouldn't matter. What matters is whether it is a useful tool.


The entire argument is that thinking it's intelligent, or a person, makes us misuse the tool in dangerous ways. The point isn't to make us feel better; it's to keep us from doing stupid things with these systems.

As a tool it's useful, yes; that is not the issue. The problem is how they're being deployed:

- they're used as psychologists and life coaches

- judges of policy and law documents

- writers of life-affecting computer systems

- judges of job applications

- sources of medical advice

- legal advisors

- and, increasingly, as a thing to blame when any of the above goes awry.

If we think of LLMs as very good text-writing tools, then making "intelligent" decisions, and more crucially taking responsibility for those decisions, remains the job of real people rather than dice.

But if we think of them as intelligent humans, we are making a fatal misjudgement.


This seems reasonable. Much AI research has historically been about building computer systems to do things that otherwise require human intelligence to do. The question of "is the computer actually intelligent" has been more philosophical than practical, and many such practically useful computer systems have been developed, even before LLMs.

On the other hand, one early researcher said something to the effect of, Researchers in physics look at the universe and wonder how it all works. Researchers in biology look at living organisms and wonder how they can be alive. Researchers in artificial intelligence wonder how software can be made to wonder such things.

I feel like we are still way off from having a working solution there.


It's actually very weird to "believe" LLMs are "intelligent".

Pragmatic people see news like "LLMs achieve gold in Math Olympiad" and think "oh wow, it can do maths at that level, cool!" This gets misinterpreted by so-called "critics of LLMs" who scream "NO THEY ARE JUST STOCHASTIC PARROTS" at every opportunity yet refuse to define what intelligence actually is.

The average person might not get into that kind of specific detail, but they know that LLMs do some things well and that there are tasks they're not good at. What matters is what they can do, not so much whether they're "intelligent" or not. (Of course, if you ask a random person they might say LLMs are pretty smart for some tasks, but that's not the same as making a philosophical claim that they're "intelligent".)

Of course there's also the AGI and singularity folks. They're kinda loony too.



