This is all nonsense and you are just falling for marketing that you want to be true.
The whole space is largely marketing at this point, intentionally conflating all these philosophical terms because we don't want to face the ugly reality that LLMs are a dead end to "AGI".
Not to mention, it is not on those who don't believe in Santa Claus to prove that Santa Claus doesn't exist. It is on those who believe in Santa Claus to show how AGI can possibly emerge from next-token prediction.
I would question whether you even use the models much, really. I thought this way back in 2023, but I just can't imagine how anyone who uses the models all the time can still think, in 2025, that we are on the path to AGI with LLMs.
It is almost as if a thinking being emerging from text prediction was a dumb idea to start with.
Which is: flesh apes want to feel unique and special! And "intelligence" must be what makes them so unique and special! So they deny "intelligence" in anything that's not a fellow flesh ape!
If an AI can't talk like a human, then it must be the talking that makes the human intelligence special! But if the AI can talk, then talking was never important for intelligence in the first place! Repeat for everything.
I use LLMs a lot, and the improvements in the last few years are vast. OpenAI's entire personality tuning team should be loaded into a rocket and fired off into the sun, but that's a separate issue from raw AI capabilities, which keep improving steadily and with no end in sight.