That ("_every_ money-making industry...") seems like a too strong statement and can be proven false by finding even a single counter-example.
gwd's claim (AFAICT) is that _specifically_ OpenAI, _for this specific decision_, is not driven by profit, which is a much weaker claim. One piece of evidence against it would be sama coming out and saying "we are disabling codex due to profit concerns". Another would be credible inside information from a top-level exec/researcher responsible for this subproduct saying the same.
> There are people in that community -- people not working for a for-profit company -- who would, if they could, stop all AI research of any kind until we have rock-solid techniques to prevent an AI apocalypse. Most of those individuals have absolutely nothing commercial to gain from stopping AI research.
Dalewyn's response implicitly said that even these people have a financial incentive behind their arguments. At which point, I'm at a loss as to what to say: If you think such people are still only motivated by financial gain -- and that it's so obvious that you don't even need to bother providing any evidence -- what can I possibly say to convince you otherwise?
Maybe he missed the bit about "people not working for a for-profit company".
But to answer your question:
The question here is, given OpenAI's decisions wrt GPT-4 (namely not even sharing details about the architecture and size), what is the probability that it's primarily for the purpose of impairing competitors to extract rent?
With no additional information whatsoever, if OpenAI were a for-profit company, and if there were no alternate explanation, I'd say the rent explanation is pretty likely.
But then, it's a non-profit, one which has shared a lot of detail about its work in the past. That lowers the probability somewhat. Still, with no alternative explanation, the probability remains fairly high.
But, of course we have an alternate explanation: within the AI community, there is a significant set of voices telling them they're going to destroy the human race. So now we have two significant possibilities:
1. OpenAI are driven primarily by a desire to decrease competition to extract more rent
2. OpenAI's researchers, affected by people in their community who are warning of an AI apocalypse, are driven primarily by a desire to avoid that apocalypse.
I'd say that without other information, both are about equally likely. We have to look for things in their behavior which are more compatible with one than the other.
And behold, we have one: They withheld even mentioning GPT-4 for eight months. This lowered their profitability, which they wouldn't have done if they were primarily trying to extract rent.
So, I'd put the probabilities at 70% "mostly trying to avoid an AI apocalypse", 25% "mostly trying to make more money", 5% something I haven't thought of.
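For what it's worth, the update I'm describing is just an informal Bayes-rule calculation. Here's a toy sketch in Python; every prior and likelihood number in it is an illustrative assumption of mine (not anything OpenAI or anyone in this thread has stated), chosen only to show how the "withheld for eight months" observation moves roughly even priors toward the safety explanation.

```python
# Toy Bayesian sketch of the update described above.
# All numbers are illustrative assumptions, not claims from the thread.

# Roughly even priors over the two main motives, plus a small "other" bucket.
priors = {
    "avoid AI apocalypse": 0.475,
    "extract rent":        0.475,
    "something else":      0.05,
}

# Assumed likelihoods of the observation "they sat on GPT-4 for eight months"
# under each motive: delaying a product fits a safety-first motive well and a
# rent-extraction motive poorly.
likelihoods = {
    "avoid AI apocalypse": 0.8,
    "extract rent":        0.3,
    "something else":      0.5,
}

# Bayes' rule: posterior ∝ prior × likelihood, normalized over all hypotheses.
evidence = sum(priors[h] * likelihoods[h] for h in priors)
posteriors = {h: priors[h] * likelihoods[h] / evidence for h in priors}

for hypothesis, p in posteriors.items():
    print(f"{hypothesis}: {p:.0%}")
```

With these made-up numbers the split comes out around 69% / 26% / 5% -- in the same ballpark as the estimate above; different (reasonable) likelihoods shift it, but the direction of the update is the point.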
What would make #1 more probable in my mind? Well, the opposite: doing things which clearly extract more rent and also increase the risk of an AI apocalypse (by the standards of that community).
As you can see, I'm already convinced that profit is the default motive. What would convince me that in every industry, profit was the only possible motive? I mean, you'd have to somehow provide evidence that every single instance I've seen of people putting something else ahead of profit was illusory. Not impossible, but a pretty big task.
They withheld GPT-4 for eight months, but continued development based on it, provided access to third parties, and entered into agreements with the likes of Microsoft/Bing, etc. All they did was impair competitors that were still struggling to catch up with their previous offering, while continuing to plow ahead in the dark.