A human being informed of a mistake will usually be able to resolve it and learn something in the process, whereas an LLM is more likely to spiral into nonsense.
It's true that the big public-facing chatbots love to admit to mistakes.
It's not obvious to me that they're better at admitting their mistakes. Part of being good at admitting mistakes is recognizing when you haven't made one. That humans tend to lean too far in that direction shouldn't suggest that the right amount of that behavior is... less than zero.
I wouldn't trust a human to drive a car if they had perfect vision but were otherwise deaf, had no proprioception and were unable to walk out of their car to observe and interact with the world.
I didn't mean that a human driver needs to leave their vehicle to drive safely; I meant that we understand the world because we live in it. No amount of machine learning can give autonomous vehicles a complete enough world model to deal with novel situations, because you need to actually leave the road and interact with the world directly in order to understand it at that level.
How many images do you need? What are the use-cases that need a bunch of artificial yet photoreal images produced or altered without human supervision?
I think people still expect a lot of trial and error before getting a usable image. At 2 cents per pull of the slot machine lever, it would still take a while, though.
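A quick back-of-envelope sketch of that point, using the 2-cent figure from above (the attempt counts are hypothetical, just to show how slowly the cost accumulates):

```python
# Cost of a trial-and-error image session at a flat per-generation price.
# The $0.02 price comes from the comment above; the attempt counts
# (10, 25, 50) are made-up illustrative numbers, not vendor data.
PRICE_PER_IMAGE = 0.02  # dollars per "pull of the lever"

def session_cost(attempts: int, price: float = PRICE_PER_IMAGE) -> float:
    """Total cost of `attempts` generations at `price` dollars each."""
    return attempts * price

for attempts in (10, 25, 50):
    print(f"{attempts} attempts -> ${session_cost(attempts):.2f}")
```

Even fifty pulls of the lever comes out to about a dollar, which is why the per-image price matters less than the time spent iterating.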
Thinking is subconscious when working on complex problems. Thinking is symbolic or spatial when working in relevant domains. And in my own experience, I often know what is going to come next in my internal monologues, without having to actually put words to the thoughts. That is, the thinking has already happened and the words are just narration.
I too am never surprised by my brain's narration, but maybe the brain tricks you into never being surprised, acting as though your thoughts follow a perfectly sensible sequence.
It would be incredibly tedious to be surprised every 5 seconds.
By what definition of the Turing test? LLMs are by no means capable of passing for human in a direct comparison under scrutiny; they don't even have enough perception to succeed in theory.