>I think you misunderstand, but it's a common misunderstanding.
Humans have the ability to reason. That is not the same as saying humans reason at all times (I also stated this in my previous comment).
>So it's none of: "humans have x", "humans don't have x", nor "humans have x but f doesn't have x because humans perform y on x and f performs z on x".
This is all rather irrelevant here. You can sit a human in front of this test for an arbitrarily long time and they will be unable to solve it, even though the human has theory of mind (the property we're looking for) for the entire duration of the test; ergo, the test is not properly testing for theory of mind.
>So I don't know why you're talking about trickery. The models are explicitly trained to solve problems like these.
Models are trained to predict text. Solving problems is often just a natural consequence of that objective.
It's trickery in the same way it can be considered trickery when professors do it to human test-takers. Humans and machines that memorize things take shortcuts in prediction when they encounter what they've memorized "in the wild". That's the entire point of memorization, really.
The human or model might fail not because it lacks the reasoning abilities to solve your problem, but because its attention is diverted by misleading cues or subtle twists in phrasing.
And if you care about the latter, fine! That's not a bad thing to care about, but then don't pretend you are only testing raw problem-solving ability.