yeah, but your argument is true for every LLM provider, so I don't see how it's a moat: anyone who can raise money to offer an LLM can do the same thing. And Google and Microsoft don't need to find LLM revenue; they can always offer it at a loss if they choose, unless their other revenue streams suddenly evaporate. And tbh I kind of doubt personalization is as deep a moat as you think it is.
Google can offer its services for free for a lot longer than OpenAI can, and it already does for students. DeepSeek already offers its ChatGPT competitor for free to everyone.
On what basis do you say they're within range of profitability on inference today? Every source I see tells a different story, shaped by its own biases.
You seem to have misread the article (which isn't mine, by the way); its point is that inference costs and revenue seem to scale with each other.