I agree that I am missing your point. Can you please clarify?
> a LLM cannot actually be intelligent if it cannot operate in a temporal context ;)
When I have a conversation with an LLM, that conversation happens in time. It has a beginning, a middle, and an end. The conversation can refer to earlier parts of the conversation. How is that not a "temporal context"?
Furthermore, can you explain why a temporal context is necessary for intelligence? For example, if a human being could download their entire brain into a computer and exist there, as if they were an LLM, would they cease to be intelligent, in your view?
> It has a beginning, a middle, and an end. The conversation can refer to earlier parts of the conversation. How is that not a "temporal context"?
This is not what I mean, for two reasons:
1. This context literally has limits; we'll come back to the grocery store example
2. This is a point-in-time conversation
On the latter point: you can have the same conversation tomorrow. The LLM has not "learned" anything; it has not adapted in any way. Yes, you are experiencing time, and the conversation is happening over time, but the LLM is not experiencing or aware of time, and it is not intelligently adapting to it. Yes, models get retrained and "updated" in that sense, but that's not the same thing.
If you don't respond for an hour, then do, the LLM is not aware of that unless its system injects a "datetime.now()" somewhere in the prompt. The point being: an LLM is not an adaptable system. Now you can play the "what if?" game ad infinitum -- make it aware of the current time, current location, etc. etc.
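To be concrete about what that injection looks like, here's a minimal sketch (the prompt format and the gap-computing wrapper are my own illustration, not any particular vendor's system):

```python
from datetime import datetime, timezone

def build_prompt(user_message: str, last_message_at: datetime | None) -> str:
    # The model has no sense of elapsed time on its own. If anything
    # "knows" that an hour passed, it's this wrapper, which computes
    # the gap and pastes it into the prompt as plain text.
    now = datetime.now(timezone.utc)
    header = f"Current time: {now.isoformat()}"
    if last_message_at is not None:
        header += f"\nTime since last user message: {now - last_message_at}"
    return f"{header}\n\nUser: {user_message}"
```

The "time awareness" lives entirely in that wrapper; the model itself just sees more text.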
Hence my grocery store example. If I go out into the real world, I experience real things, and I make intelligent decisions based on those experiences. An LLM cannot do that, full stop. And again, you can go "well, what if I put the LLM in a robot body and give it a system, then it can go grocery shopping." And only at this point are we kinda-sorta-close to having a discussion about intelligence. If this mythical creature can go to the grocery store, notice it's not there, look up what happened, maybe ask some friends who live in the same city if they know, maybe make some connection months later to some news article... an LLM, or any system we build on an LLM, cannot do this. It cannot go into the store and think "ya know, if I buy all this ice cream and eat it, that could be bad" and connect it to the million other things a real person is doing and considering in their day-to-day life.
The actual world is practically infinitely complex. Saying "an LLM writing a list is planning, and that shows intelligence" is a frightening attenuation of what intelligence means in the real world, and anthropomorphization to a very high degree. Reframing it as "intelligence needs to be able to adapt to the world around it over time" is a much better starting point, IMO.
> On the latter point: you can have the same conversation tomorrow. The LLM has not "learned" anything; it has not adapted in any way.
They do learn; OpenAI has a memory feature. I just opened a chat, asked "What do you know about me?", and got a long list of things specific to me that it certainly did not infer from the chat so far. It's a bit unsettling, really; someone at OpenAI would probably have little difficulty matching my OpenAI account to my HN one, since it looks like they have quite a few bits of information to work with. Privacy is a hard thing to maintain.
I really don't see the "LLMs don't learn" position as defensible long term, given the appalling limitations of human memory and the strengths computers have there. Given the improvements in RAG and large context windows, it actually seems pretty likely that LLMs will end up quite a lot better than humans when it comes to memory; they have SSDs. We just haven't built LLMs with memory right yet, for whatever reason.
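To be concrete about the mechanics, here's a toy sketch of retrieval-style memory. The hashed bag-of-words embedding is a deliberately dumb stand-in for a real embedding model, and the plain list stands in for a vector database:

```python
import hashlib
import math

def embed(text: str, dims: int = 64) -> list[float]:
    # Hashed bag-of-words, purely so the sketch is self-contained;
    # a real system would call an embedding model here.
    vec = [0.0] * dims
    for word in text.lower().split():
        h = int(hashlib.md5(word.encode()).hexdigest(), 16)
        vec[h % dims] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

memory: list[tuple[list[float], str]] = []  # (embedding, text) pairs

def remember(text: str) -> None:
    memory.append((embed(text), text))

def recall(query: str, k: int = 3) -> list[str]:
    # Return the k stored snippets most similar to the query; these
    # get stuffed back into the prompt on the next turn.
    q = embed(query)
    ranked = sorted(memory, key=lambda m: cosine(q, m[0]), reverse=True)
    return [text for _, text in ranked[:k]]

remember("User mentioned they live in Berlin and have two cats.")
print(recall("where does the user live?", k=1))
```

Crude, sure, but scale the store up to an SSD and the retrieval up to a proper vector index, and that's a kind of recall humans simply can't match.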
that’s not learning…we have a fundamentally different understanding of what cognition, intelligence, and learning are
adding text to storage and searching over it is not memory. “knowing” those things about you is not learning. and guess what, context still fills up. try putting that LLM in the real world, facing real human challenges, with all the real sensory input around you. it’s nonsensical
and it’s not about “limits” of humans. machines can do math and many things better, that’s been apparent for decades. yes, they can “remember” 8k video streams much better than us. that’s not “memory” in the human sense and machines don’t “learn” from it in the human sense
(your IP address makes it much easier to link your accounts than your text does)
> Why not? If humans store data in their brains, isn't that learning?
No. We’re back to my earlier point: you and I have fundamentally different understandings of cognition, intelligence, and learning. And I’m genuinely not trying to be condescending, but I suspect you don’t have a good grounding in the technology we’re discussing.
> No. We’re back to my earlier point: you and I have fundamentally different understandings of cognition, intelligence, and learning. And I’m genuinely not trying to be condescending, but I suspect you don’t have a good grounding in the technology we’re discussing.
Yeah, that definitely came off as condescending, especially on HN, where pretty much everyone has a grounding in the technology we're discussing. In any case, your arguments have not dealt with the technology at all; they rest on hand-wavy distinctions like "temporality."
Anyway, to the larger point: I agree that "you and I have fundamentally different understandings of cognition, intelligence, and learning," but your inability to explain your own understanding of these terms, and why they are relevant, is why your arguments are unpersuasive.