Yes, it's pretending. The best way to think about ChatGPT's answers is that it always invents the most plausible-sounding reply. At a nonzero sampling temperature it can produce a slightly different chain of thought each time, but it's making that reasoning up based on its limited "thinking" capabilities and poor generalization, despite its huge store of knowledge. This is just the beginning, though, and new generations of LLMs will keep improving.
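
To make the temperature point concrete, here's a minimal sketch of how temperature reshapes the next-token distribution. The logits and tiny three-token vocabulary are made up for illustration; a real model does the same thing over tens of thousands of tokens at every step:

```python
# Minimal sketch of temperature sampling with made-up logits
# (not from any real model), just to show the mechanism.
import numpy as np

def sample_token(logits, temperature=1.0, rng=None):
    """Sample one token index from a temperature-scaled softmax."""
    rng = rng or np.random.default_rng()
    # Dividing logits by the temperature sharpens (T < 1) or
    # flattens (T > 1) the distribution before the softmax.
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()                      # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return rng.choice(len(probs), p=probs)

logits = [2.0, 1.0, 0.2]                        # hypothetical scores for 3 tokens
for t in (0.2, 1.0, 2.0):
    picks = [sample_token(logits, t) for _ in range(1000)]
    print(t, np.bincount(picks, minlength=3) / 1000)
```

At low temperature the model almost always picks the top-scoring token, so the output is nearly deterministic; at higher temperature the lower-probability tokens get sampled more often, which is why you see a different chain of thought from run to run.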