That is exactly what would have to be proven, if one wanted to propose that.
Especially because the "guess the next" mechanism has, over the past few months, been at risk of becoming a gnoseological, functional paradigm, as if it were a theory of mind. Which is more than an issue, because the automation of thought (uncritical repetition) is directly satanic.
My point was that the fact that human brains are much more capable does not falsify the idea that they work the same way as these transformer-style LLMs.
It might be that we're special or substantively different. Or it might not be. The fact that we have additional capabilities just means we are not exactly the same as, e.g., GPT-4 in all respects.
> Spelling with -ct- recorded from late 14c., established 18c., by influence of the verb. OED considers the version with -x- to be "the etymological spelling", but Fowler (1926: «A clear differentiation being out of the question, and the variation of form being without essential significance...») points out that -ct- is usual in the general senses and even technical ones.
Actually, I would say that 'reflection' is the more proper form. 'Reflexion' reflects the French spelling, so it is «etymological» only in the sense of "post-Hastings 1066" (i.e. tracing the word's history), but the Latin is 'flectere' (hence 're-flectere'). 'Flectere' already denotes an action, so 'reflection' is proper for the faculty.
Sorry, this was meant as a semi-sarcastic reference to the paper out of Northeastern; they use "Reflexion" in the title. It's a paper about how these systems improve their responses simply by having their own outputs 'reflected upon' by the system. For instance, they improve from something like ~60% to ~87% simply by allowing 'reflection' on what was just output.
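Roughly, the loop is something like the following. This is only a sketch of the general idea, not the paper's code: `generate` and `critique` are hypothetical stand-ins for whatever model API you'd actually call.

```python
def generate(prompt: str) -> str:
    """Placeholder for a call to a language model (hypothetical)."""
    raise NotImplementedError

def critique(prompt: str, draft: str) -> str:
    """Placeholder: ask the model to point out flaws in its own draft (hypothetical)."""
    raise NotImplementedError

def reflect_and_retry(prompt: str, rounds: int = 2) -> str:
    """Generate an answer, then repeatedly feed the model its own output
    plus a self-critique and ask for a revision."""
    draft = generate(prompt)
    for _ in range(rounds):
        feedback = critique(prompt, draft)
        draft = generate(
            f"Task: {prompt}\n"
            f"Previous attempt: {draft}\n"
            f"Critique: {feedback}\n"
            f"Revise the attempt, addressing the critique."
        )
    return draft
```

Whether that loop counts as "reflection" in the critical-thinking sense is, of course, exactly what is being disputed here.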
Very certainly not, until you show that it has reflection (critical thinking), which it has many times been shown not to have.
> Simple as that
As already expressed, you cannot just dump personal positions "porque [t]e sale de l'alma" ("because it comes out of your soul", Borges).