First of all, consider asking "why's that?" if you don't know a fairly basic fact; there's no need to go all Reddit-pretentious with "citation needed," as if we were deep in a knowledgeable discussion of some niche detail and stumbled on a surprising claim.
Anyway, a nice way to understand it is that the LLM needs to "compute" the answer to the question, A or B. Some questions need more compute to answer (think complexity theory). The only way an LLM can do "more compute" is by outputting more tokens, because each token takes a roughly fixed amount of compute to generate - the network is static. So, if you encourage it to output more and more tokens, you're giving it the opportunity to solve harder problems. Apart from humans encouraging this via RLHF, it was also found (in the DeepSeekMath paper) that RL with GRPO on math problems encourages this automatically: the response length grows during training.
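To put a rough number on the "more tokens = more compute" point, here's a back-of-the-envelope sketch (my own illustration, not from the paper). It assumes a dense transformer where one forward pass costs roughly 2 FLOPs per parameter per generated token, and it ignores the attention cost that grows with context length; the function name and the 7B model size are just made up for the example.

    # Back-of-the-envelope only: assumes a dense transformer where generating one
    # token costs ~2 FLOPs per parameter (attention's context-dependent cost ignored).
    def generation_flops(n_params: float, n_output_tokens: int) -> float:
        """Approximate total compute spent generating a response."""
        flops_per_token = 2 * n_params            # roughly constant: the network is static
        return flops_per_token * n_output_tokens  # total compute grows with output length

    # Hypothetical 7B-parameter model: a terse answer vs. a long chain of thought.
    terse = generation_flops(7e9, 50)
    chain_of_thought = generation_flops(7e9, 2000)
    print(f"terse answer:     {terse:.2e} FLOPs")
    print(f"chain of thought: {chain_of_thought:.2e} FLOPs "
          f"(~{chain_of_thought / terse:.0f}x more compute for the same question)")

The only knob the model has for spending more compute on a question is emitting more tokens before committing to an answer.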
From a marketing perspective, this is anthropomorphized as reasoning.
From a UX perspective, they can hide this behind a "Thinking…" ellipsis. I think GPT-5 on ChatGPT does this.
Expecting every little fact to have an "authoritative source" is just annoying faux intellectualism. You can ask someone why they believe something, listen to their reasoning, and decide for yourself whether you find it convincing, without invoking such a pretentious phrase. There are conclusions you can reason your way to without an "official citation".
Yeah. And in general, not taking a potshot at who you replied to, the only people who place citations and peer review on that weird faux-intellectual pedestal are people who don't work in academia. As if publishing something in a citeable format automatically made it a fact that doesn't need to be checked for soundness. Give me any authoritative source and I can find you contradictory, or obviously falsifiable, publications from their lab. Again, not a potshot; that's just how it is, lots of mistakes do get published.
I was actually just referencing the standard Wikipedia annotation that means something approximately like “you should support this somewhat substantial claim with something more than 'trust me bro'”
In other words, 10 pages of LLM blather isn’t doing much to convince me a given answer is actually better.
I approve this message. For the record I'm a working scientist with (unfortunately) intimate knowledge of the peer review system and its limitations. I'm quite ready to take an argument that stands on its own at face value, and have no time for an ipse dixit or isolated demand for rigor.
I just wanted to clarify what I thought was intended by the parent to my comment, especially since I thought the original argument lacked support (external or otherwise).
I don’t want an essay of 10 pages about how this is exactly the right question to ask