> Are there specific benchmarks that compare models vs themselves with and without scratchpads?
Yep, it's pretty common for a model family to release an instruction-tuned and a thinking-tuned variant and then benchmark them against each other. For instance, if you scroll down to "Pure text performance" there's a comparison of these two Qwen variants' performance: https://huggingface.co/Qwen/Qwen3-VL-30B-A3B-Thinking
So whether a model is or isn't "a reasoning model" comes down to the extent of a fine-tune.
Are there specific benchmarks that compare models against themselves with and without scratchpads? Would a high with:without ratio indicate a more reasoning-dependent model?
Curious also how much a generalist model's one-shot responses degrade with reasoning post-training.
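For concreteness, the ratio being asked about could be computed like this. A minimal sketch; the function name and the scores are made up for illustration, not taken from any real benchmark run:

```python
def scratchpad_ratio(score_with: float, score_without: float) -> float:
    """Ratio of a model's benchmark score with a scratchpad to its score
    without one. A ratio well above 1.0 suggests the model leans heavily
    on explicit reasoning tokens for that benchmark."""
    return score_with / score_without

# Hypothetical accuracy percentages for the same base model:
math_ratio = scratchpad_ratio(92.0, 61.0)       # math-heavy task: big gap
knowledge_ratio = scratchpad_ratio(81.0, 78.0)  # recall task: small gap
print(f"math: {math_ratio:.2f}, knowledge: {knowledge_ratio:.2f}")
```

Comparing such ratios across task types (math vs. pure recall) would separate "benefits from thinking" from "already knew the answer."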