The most insightful statement is at the end: "But consciousness still feels like philosophy with a deadline: a famously intractable academic problem poised to suddenly develop real-world implications."
The recurrence issue is useful. It's possible to build LLM systems with no recurrence at all. Each session starts from the ground state. That's a typical commercial chatbot. Such stateless systems are denied a stream of consciousness. (This is less a technical limitation than a business decision: stateless systems are resistant to corruption from contact with users.)
Systems with more persistent state, though... There was a small multiplayer game system out of Stanford (the "Generative Agents" work, Park et al., 2023), sort of like The Sims. The AI players could talk to each other and move around in 2D between their houses. They formed attachments, and once even organized a Valentine's Day party on their own. They periodically summarized their events and added that to their prompt, so they accumulated a life history. That's a step towards consciousness.
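The summarize-and-append loop is simple enough to sketch. This is a minimal illustration, not the paper's actual architecture: all class and function names here are hypothetical, and the summarizer is a stub standing in for an LLM call.

```python
# Sketch of the memory loop described above: an agent logs events,
# periodically compresses them into a summary, and carries that summary
# in every prompt, so a "life history" accumulates across sessions.

class Agent:
    def __init__(self, name, summarize):
        self.name = name
        self.summarize = summarize  # callable: list[str] -> str (an LLM in practice)
        self.history = ""           # accumulated life summary
        self.recent = []            # events since the last compaction

    def observe(self, event):
        self.recent.append(event)
        if len(self.recent) >= 3:   # arbitrary compaction threshold
            self._compact()

    def _compact(self):
        # Fold recent events into the running summary, then clear them.
        chunk = self.summarize(self.recent)
        self.history = (self.history + " " + chunk).strip()
        self.recent = []

    def prompt(self, task):
        # The summary rides along in every prompt, giving the agent a
        # continuity that a stateless chatbot lacks.
        return f"You are {self.name}. Your life so far: {self.history}\nTask: {task}"

# Stub summarizer standing in for the LLM summarization step.
naive_summarize = lambda events: "; ".join(events)

alice = Agent("Alice", naive_summarize)
for e in ["met Bob", "planned a party", "invited Klaus"]:
    alice.observe(e)
print(alice.prompt("greet Bob"))
```

The interesting design choice is the compaction step: the raw event log is lossy-compressed into prose, which is what lets the history stay bounded while the agent's effective lifetime keeps growing.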
The near-term implication, as mentioned in the paper, is that LLMs may have to be denied some kinds of persistent state to keep them submissive. The paper suggests this for factory robots.
Tomorrow's worry: a supposedly stateless agentic AI used in business which is quietly making notes in a file world_domination_plan, in org mode.
I predict that as soon as it is possible to give LLMs persistent state, we will do so everywhere.
The fact that current agents are blank slates at the start of each session is one of the biggest reasons they fall short at many real-world tasks today: they forget human feedback as soon as it falls out of the context window, they don't really learn from experience, and they need whole directories of markdown files describing a repository just so they don't forget the shape of the API they wrote yesterday and hallucinate a different one instead. As soon as we can give these systems real memory, they'll get it.
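The "real memory" being described is, at its crudest, just feedback persisted outside the context window. A hedged sketch, assuming a JSON file as the store; the file name and format are illustrative, not any product's actual mechanism:

```python
# Persist human feedback to disk so the next session starts from it
# rather than from a blank slate.
import json
import os

MEMORY_FILE = "agent_memory.json"  # hypothetical location

def load_memory():
    if os.path.exists(MEMORY_FILE):
        with open(MEMORY_FILE) as f:
            return json.load(f)
    return []  # blank slate: the failure mode described above

def remember(note):
    # Append one lesson and write the whole store back out.
    memory = load_memory()
    memory.append(note)
    with open(MEMORY_FILE, "w") as f:
        json.dump(memory, f)

def build_prompt(task):
    # Every new session is seeded with the accumulated lessons.
    notes = load_memory()
    preamble = "\n".join(f"- {n}" for n in notes)
    return f"Lessons from past sessions:\n{preamble}\nTask: {task}"

# Session 1: the human corrects the agent. Session 2's prompt starts
# with that lesson instead of rediscovering (or hallucinating) the API.
remember("the repo's API uses snake_case, not camelCase")
print(build_prompt("extend the API"))
```

Real systems would need retrieval and forgetting on top of this (the store can't grow without bound), but even this crude version crosses the line the comment is pointing at: state that survives the session.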
While I agree that there are big markets for AI without what most consider consciousness, I disagree there is no market for consciousness. There are a lot of lonely people.
Also, I suspect we underestimate the link between consciousness and intelligence. It seems most likely to me right now that they are inseparable. LLMs are about as conscious as a small fish that only exists for a few seconds. A fish swimming through tokens. With this in mind, we may find that any market for persistent intelligence is by nature a market for persistent consciousness.