
> I see statements like this a lot, and I find them unpersuasive because any meaningful definition of "intelligence" is not offered. What, exactly, is the property that humans (allegedly) have and LLMs (allegedly) lack, that allows one to be deemed "intelligent" and the other not?

the ability for long-term planning and, more to the point, actually living in the real world where time passes



> the ability for long-term planning and, more to the point, actually living in the real world where time passes

1. LLMs seem to be able to plan just fine.

2. LLMs clearly cannot be "actually living" but I fail to see how that's related to intelligence per se.


> 2. LLMs clearly cannot be "actually living" but I fail to see how that's related to intelligence per se.

I've found the ‘Triarchic theory of intelligence’ [0] to be helpful here. It's a much more rounded approach to human intelligence that, imo, is closer to what people are trying to get at in these types of 'AI vs human intelligence' discussions. The full scope of human intelligence is not something that can simply be chalked up to some set of measurements -- nor is it about a soul or whatever. If you want to say LLMs are good at a particular set of test measurements, then okay, say that. But that's not equivalent to human intelligence; it only represents a tiny measurable subset that happens to be a part of the whole.

[0] https://en.wikipedia.org/wiki/Triarchic_theory_of_intelligen...


jfc this is what I’ve been looking for, thank you. I’m coming from a more systems-theory perspective, which this meshes well with.


No, they're echoing previous examples of people planning: you frame prompts, and recursively designed prompts, to draw on what is, in fairness, a large database that includes the text of people planning.

It still matters that there's nobody in there. You're figuring out better ways to tap into the history of language-users having represented planning in language. As such, this seems a brittle way to represent 'planning'.


> they're echoing previous examples of people planning

Isn't "echoing previous examples of X" the meaning of learning? If I learn, for example, calculus, or C++, or oil painting, aren't I just echoing pieces of math, code, or art that I've experienced elsewhere, but arranged into new patterns?

Can't we say that the LLM "learned" planning, just as we can say that a person learned planning? I fail to see how an "echo of examples of plans" in an LLM is any less authentic than a human being making similar echoes in their own plans.


if it’s not actually living it’s not making intelligent decisions. if I make a grocery list, and go to my store, and the store isn’t there, what do I do? I make an intelligent decision about what to do next (probably investigating wtf happened, then going to the second nearest store)

my genuine question is how does an LLM handle that situation? and as you point out, it’s an absurd comparison


Just for the hell of it, I prompted an LLM with your exact scenario ("I made a grocery list, and went to my store, and the store isn’t there, what do I do?") to see what it would actually do. All of its suggestions seem like reasonably intelligent ideas when faced with a missing store:

  First, stay calm—it's unusual for a store to vanish overnight, but it could be due to closure, renovation, relocation, or even a mix-up with the address. Here's a step-by-step plan to handle it:

  ### Immediate Steps
  - **Double-check the location**: Use your phone's maps app (like Google Maps or Apple Maps) to search for the store's name or address. It might show if it's temporarily closed, has moved nearby, or if there's an error in navigation.
  - **Look for signs or info on-site**: If you're already there, check for any posted notices, construction signs, or talk to nearby people/businesses to ask what happened.

  ### Find Alternatives
  - **Locate nearby stores**: Search for other grocery stores in the area. Apps like Yelp, Google Maps, or store-specific apps (e.g., Walmart, Kroger) can show options with ratings, hours, and directions.
  - **Use delivery or online shopping**: If you're in a pinch, order groceries via services like Instacart, Amazon Fresh, DoorDash, or the apps of chains like Whole Foods or Target. Upload your list and get it delivered.
  - **Adapt your list if needed**: If certain items are unique to that store, prioritize essentials and substitute where possible (e.g., if it's a specialty ethnic store, find similar options elsewhere).

  ### Prevent Future Issues
  - **Diversify your options**: Add a couple of backup stores to your routine.
  - **Stay updated**: Follow local news or the store's social media/website for announcements about changes.

  If you provide more details like the store's name, location, or what might have happened (e.g., was it demolished?), I can help search for specifics or suggest exact alternatives.


So your definition of "living" means going to the grocery store?

If you want to know how an LLM would handle that situation, why don't you ask it?


to answer your first question, no, that is not what I’m saying. for your second question, you’re entirely missing the point of mine

an LLM cannot actually be intelligent if it cannot operate in a temporal context ;)


I agree that I am missing your point. Can you please clarify?

> an LLM cannot actually be intelligent if it cannot operate in a temporal context ;)

When I have a conversation with an LLM, that conversation happens in time. It has a beginning, a middle, and an end. The conversation can refer to earlier parts of the conversation. How is that not a "temporal context"?

Furthermore, can you explain why a temporal context is necessary for intelligence? For example, if a human being could download their entire brain into a computer and exist there, as if they were an LLM, would they cease to be intelligent, in your view?


> It has a beginning, a middle, and an end. The conversation can refer to earlier parts of the conversation. How is that not a "temporal context"?

This is not what I mean for a few reasons:

1. This context literally has limits; we'll get back to the grocery store.

2. This is a point-in-time conversation.

On the latter point: you can have the same conversation tomorrow. The LLM has not "learned" anything; it has not adapted in any way. Yes, you are experiencing time, and the conversation is happening over time, but the LLM is not experiencing or aware of time, and is not intelligently adapting to it. Yes, they get trained and "updated" in that way, but it's not the same thing.

If you don't respond for an hour and then do, the LLM is not aware of that unless its system injects a "datetime.now()" somewhere in the prompt. The point being: an LLM is not an adaptable system. Now you can play the "What if?" game ad infinitum -- make it aware of the current time, current location, etc. etc.
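To make that concrete, here's roughly what that injection looks like. This is a minimal sketch in the style of OpenAI's chat API; the model name and wiring are illustrative, not any vendor's actual implementation:

  # Hypothetical harness code: the model only "sees" the current time
  # because the caller injects it into the prompt on every turn.
  from datetime import datetime, timezone

  from openai import OpenAI

  client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

  def ask(user_message: str) -> str:
      # Without this injected line, the model has no way to know how much
      # time has passed since the last message; the temporal "awareness"
      # lives in the caller, not the model.
      now = datetime.now(timezone.utc).isoformat()
      response = client.chat.completions.create(
          model="gpt-4o-mini",  # illustrative model name
          messages=[
              {"role": "system", "content": f"The current UTC time is {now}."},
              {"role": "user", "content": user_message},
          ],
      )
      return response.choices[0].message.content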

Hence my grocery store example. If I go out into the real world, I experience real things, and I make intelligent decisions based on those experiences. An LLM cannot do that, full stop. And again, you can go "well, what if I put the LLM in a robot body and give it a system, then it can go grocery shopping". And only at this point are we kinda-sorta-close to having a discussion about intelligence. Intelligence would mean this mythical creature could go to the grocery store, notice it's not there, look up what happened, maybe ask some friends who live in the same city if they know, maybe make some connection months later to some news article... an LLM, or a system we build on an LLM, cannot do this. It cannot walk into the store and think "ya know, if I buy all this ice cream and eat it, that could be bad" and connect it to the million other things a real person is doing and considering in their day-to-day life.

The actual world is practically infinitely complex. Saying "an LLM writing a list is planning, and that shows intelligence" is a frightening attenuation of what intelligence means in the real world, and anthropomorphization to a very high degree. Reframing it as "intelligence needs to be able to adapt to the world around it over time" is a much better starting point IMO.


> On the latter point: you can have the same conversation tomorrow. The LLM has not "learned" anything; it has not adapted in any way.

They do learn; OpenAI has a memory feature. I just opened up a chat, asked "What do you know about me?" and got a long list of things specific to me that it certainly did not infer from the chat so far. It's a bit unsettling, really: someone at OpenAI would probably have little difficulty matching my OpenAI account to my HN one; it looks like they have quite a few bits of information to work with. Privacy is a hard thing to maintain.

I really don't see the "LLMs don't learn" position as a defensible one long term, given the appalling limitations of human memory and the strengths computers have there. Given the improvements in RAG and large context windows, it actually seems pretty likely that LLMs will be quite a lot better than humans when it comes to memory; they have SSDs. We just haven't built LLMs with memory right yet, for whatever reason.
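For what it's worth, the retrieve-and-prepend trick underlying this is simple to sketch. This is just the generic RAG pattern, not OpenAI's actual memory feature; the model name and helper functions are illustrative:

  # Toy memory: embed past snippets, retrieve the closest ones by cosine
  # similarity, and prepend them to the next prompt.
  import numpy as np
  from openai import OpenAI

  client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

  def embed(text: str) -> np.ndarray:
      resp = client.embeddings.create(model="text-embedding-3-small", input=text)
      return np.array(resp.data[0].embedding)

  memory: list[tuple[str, np.ndarray]] = []  # (snippet, embedding) pairs

  def remember(snippet: str) -> None:
      memory.append((snippet, embed(snippet)))

  def recall(query: str, k: int = 3) -> list[str]:
      q = embed(query)
      def cosine(v: np.ndarray) -> float:
          return float(q @ v) / (np.linalg.norm(q) * np.linalg.norm(v))
      # Rank stored snippets by similarity to the query, keep the top k.
      ranked = sorted(memory, key=lambda m: cosine(m[1]), reverse=True)
      return [snippet for snippet, _ in ranked[:k]]

  # Usage: remember("User's local grocery store closed in March."), then
  # later prepend "\n".join(recall("where should I buy groceries?")) to
  # the system prompt before calling the chat model.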


that’s not learning…we have a fundamentally different understanding of what cognition, intelligence, and learning are

adding text to storage and searching over it is not memory. “knowing” those things about you is not learning. and guess what, context still fills up. try putting that LLM in the real world again, facing real human challenges, with all the real sensory input around you. it’s nonsensical

and it’s not about “limits” of humans. machines can do math and many things better, that’s been apparent for decades. yes, they can “remember” 8k video streams much better than us. that’s not “memory” in the human sense and machines don’t “learn” from it in the human sense

(your IP address is much easier to link your accounts than your text)


> adding text to storage and searching over it is not memory. “knowing” those things about you is not learning.

Why not? If humans store data in their brains, isn't that learning? Of course data can include skills and connections, for both humans and AIs.

> and guess what, context still fills up.

Human memory also has limits.

Sorry, but I really don't see how the distinctions you're trying to make even exist, much less qualify as markers of intelligence.


> Why not? If humans store data in their brains, isn't that learning?

No. We’re back to my earlier point: you and I have fundamentally different understandings of cognition, intelligence, and learning. And I’m genuinely not trying to be condescending, but I suspect you don’t have a good grounding in the technology we’re discussing


> No. We’re back to my earlier point: you and I have fundamentally different understandings of cognition, intelligence, and learning. And I’m genuinely not trying to be condescending, but I suspect you don’t have a good grounding in the technology we’re discussing

Yeah, that definitely came off as condescending, especially on HN, where pretty much everyone has some grounding in the technology we're discussing. In any case, your arguments have not dealt with technology at all, but have rested on hand-wavy distinctions like "temporality."

Anyway, to the larger point: I agree that "you and I have fundamentally different understanding of cognition, intelligence, and learning" but your inability to explain your own understanding of these terms and why they are relevant is why your arguments are unpersuasive.


yeah that’s fair — I’ll write them up more cogently :) I am confident you are mistaken


If you lock a human in a prison cell without a clock, so they can't tell the time and can't "go to a store", do they stop being intelligent?


Is making a list the act of planning?


> Is making a list the act of planning?

Depends on the content of the list.

A list of the names of the seven dwarfs: no, not an act of planning.

A list of steps necessary to build a web site: yes, an act of planning.


> actually living in the real world where time passes

sure, but it feels like this is just looking at what distinguishes humans from LLMs and calling that “intelligence.” I highlight this difference too when I talk about LLMs, but I don’t feel the need to follow up with “and that’s why they’re not really intelligent.”


well the second part (implied above, I didn’t actually write it) is “and operate intelligently in that world”. talking about “intelligence” in some abstract form where “does this text output constitute intelligence” is hyper silly to me. the discussion should anchor on real-world consequences, not the endless hypotheticals we end up with in these discussions


Humans are conscious beings. What kind of conscious beings are humans? Beings with eye consciousness, ear consciousness, nose consciousness, tongue consciousness, body consciousness, and mind consciousness. That is the definition of intelligence.



