Yeah. If you ignore the negligible fact that some investor may want a return on all that money going into capex, I am pretty sure you can, Enron-style, reach the conclusion that any of those companies has “healthy” margins.
Amazon was founded in 1994, went public in 1997, and became profitable in 2001. So Anthropic is two years behind on the IPO, but who knows, maybe they'll be profitable by 2028? OpenAI is even further behind schedule.
How much loss did Amazon accumulate by 2001? Pretty sure it wasn't the $44 billion OpenAI has. And Amazon didn't have many direct competitors offering the same services.
Did Amazon really not turn a profit, or did they apply a bunch of tricks to make it appear like they didn't in order to avoid taxes? Given their history, I'd assume the latter: https://en.wikipedia.org/wiki/Amazon_tax_avoidance
Anyway, this has nothing to do with whether inference is profitable.
Their price is not a signal of their costs, it is the result of competitive pressure. This shouldn't be so hard to understand. Companies have burned investor money for market share for quite some time in our world.
This is expected, this is normal. Why are you so defensive?
Because you made stuff up, did not show any proof, and ignored my proof to the contrary.
You made the claim:
> Deepseek lies about costs systematically.
DeepSeek broke down their costs in great detail, yet you simply called it "lies" without even mentioning which specific number of theirs you claim is a lie, so your statement is difficult to falsify. You also ignored my request for clarification.
You're citing DeepSeek's unaudited numbers. That is not even close to proof.
Unless proven otherwise, it is propaganda.
Meanwhile we have several industry experts pointing not only to DeepSeek's ridiculous claims of efficiency, but also to the lies from other labs.
That's not how valuations work. A company's valuation is typically based on an NPV (net present value) calculation, which is a power series of its time-discounted future cash flows. Depending on the company's strategy, it's often rational for it to not be profitable for quite a long while, as long as it can give investors the expectation of significant profitability down the line.
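To make that concrete, here's a toy discounted-cash-flow calculation in Python (every number is invented purely for illustration): a company that burns cash for its first four years can still have a clearly positive NPV.

```python
# Toy NPV: years of losses followed by large profits, discounted at 10%.
# All figures are made up for illustration only.
cash_flows = [-5, -4, -3, -1, 2, 6, 10, 14]  # $B per year, years 0..7
discount_rate = 0.10

npv = sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows))
print(f"NPV: ${npv:.1f}B")  # ~ $6.1B, positive despite four loss-making years
```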
Having said that, I do think that there is an investment bubble in AI, but am just arguing that you're not looking at the right signal.
The issue is that they have already paid off their datacenter 5x over compared to cloud. For offline, batch training, I don't see how any amount of risk could offset the savings.
That should be better than a sphere, though I imagine there could be some fancier 3D geometry designs.
Even for a simple sphere, if we give it different surface roughnesses on the sun-facing side and the "night" side, it can have dramatically different emissivity.
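To put rough numbers on that, here's a quick Stefan-Boltzmann sketch in Python (assuming an isothermal sphere and made-up emissivity values):

```python
SIGMA = 5.670e-8  # W/m^2/K^4, Stefan-Boltzmann constant
S = 1361.0        # W/m^2, solar constant near Earth

def equilibrium_temp(absorptivity, emissivity):
    """Equilibrium temperature (K) of an isothermal sphere in sunlight:
    absorbed = alpha * S * pi*r^2, emitted = eps * sigma * T^4 * 4*pi*r^2."""
    return (absorptivity * S / (4 * emissivity * SIGMA)) ** 0.25

# Same absorptivity (0.9), three assumed surface finishes:
for eps in (0.1, 0.5, 0.9):
    print(f"emissivity {eps:.1f}: {equilibrium_temp(0.9, eps):.0f} K")
# -> ~482 K, ~322 K, ~271 K: surface finish alone swings it by over 200 K
```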
This is a common way of thinking. In practice this kind of thing is more like optimizing FLOP allocation: surely, with an infinite compute and parameter budget, you could build a better model with more intensive operations.
Another thing to consider is that transformers are very general computers. You can encode many more complex architectures in simpler, multi-layer transformers.
What you're describing is one of the two mechanisms of shedding heat: convection, i.e. heating up the surrounding medium. What the long comment above is describing is a _completely_ different mechanism, radiation, which is __more__ effective in a vacuum. They are different things that you are mixing up.
Solar in space is a very different energy source in terms of required infrastructure: you don't need batteries, the efficiency is much higher, cooling scales with surface area (radiative cooling works better in vacuum than through an atmosphere), and there are no weather or day/night cycles. It's a very elegant idea if someone can get it working.
The panels suffer radiation damage they don't suffer on Earth. If this is e.g. the same orbital altitude as Starlink, then the satellites they're attached to burn up after around a tenth of their ground-rated lifetimes. If they're a little higher, they're in the Van Allen belts and take a much higher radiation dose. If they're a lot higher, the energy cost to launch is far greater.
If you could build any of this on the moon, that would be great; right now, I've heard of no detailed plans to do more with moon rock than use it as aggregate for something else, which means everyone is about as far from making either a PV or compute factory out of moon rock as the residents of North Sentinel Island are.
OK, perhaps that's a little unfair: we do actually know what the moon is made of and they don't. But it's a really big research project just to figure out how to make anything there at all, let alone a factory that could make them cost-competitive with launching from Earth, despite the huge cost of launching from Earth.
> The panels suffer radiation damage they don't suffer on Earth.
I don't think this is true. Starlink satellites have an orbital lifetime of 5-7 years, and GPUs themselves are much more sensitive to radiation damage than solar panels. I'd guess the limiting factor is GPU lifetime, so as long as your energy savings outpace the slightly faster GPU depreciation (maybe from 5 years down to 3) plus the cost of launch, it would be economical.
I've said this elsewhere, but based on my envelope math, the cost of launch is the main bottleneck, and I think it's considerably more difficult to solve than any of the other negatives. Even shielding from radiation is a weight issue. Unfortunately all the comments here on HN are focused on the wrong, irrelevant issues, like talking about convection in space.
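Here's the shape of that envelope math in Python; every input is an assumption, so plug in your own numbers, but even with optimistic launch prices the launch cost dwarfs the electricity saved:

```python
# Rough envelope math, all inputs assumed (not sourced figures).
launch_cost_per_kg = 1_500   # $/kg, optimistic reusable-launch price
sat_mass_per_kw    = 20      # kg of satellite per kW delivered to the GPUs
gpu_life_years     = 3       # in-orbit depreciation (vs ~5 years on the ground)
ground_power_cost  = 0.08    # $/kWh of industrial electricity avoided

hours = gpu_life_years * 365 * 24
energy_saved_per_kw = hours * ground_power_cost              # $ per kW of load
launch_cost_per_kw  = launch_cost_per_kg * sat_mass_per_kw   # $ per kW to orbit

print(f"energy saved over GPU life: ${energy_saved_per_kw:,.0f}/kW")
print(f"launch cost:                ${launch_cost_per_kw:,.0f}/kW")
# -> ~$2,100/kW saved vs ~$30,000/kW to launch: launch dominates by >10x
```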
> I don't think this is true. Starlink satellites have an orbital lifetime of 5-7 years,
That's better than I thought, but it still means their PV only lasts on the order of 20% of its ground lifespan, so the integrated lifetime energy output per unit mass of PV isn't meaningfully improved by locating it in space, even if it were launched by an efficient electromagnetic system rather than by a rocket.
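Back-of-envelope version of that claim (Python, assumed inputs): ground panels run at a ~20% capacity factor but last ~30 years; space panels see near-continuous sun in a well-chosen orbit but die in ~6 years.

```python
# Integrated lifetime output per unit of PV, ground vs space (assumed inputs).
ground_capacity_factor = 0.20  # day/night + weather, typical utility solar
space_capacity_factor  = 0.99  # near-continuous sun; worse in low LEO (eclipses)
ground_life_years = 30
space_life_years  = 6          # Starlink-like orbital lifetime

ratio = ((space_capacity_factor * space_life_years)
         / (ground_capacity_factor * ground_life_years))
print(f"space/ground lifetime output ratio: {ratio:.2f}")  # -> ~0.99, a wash
```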
1. Solar is very efficient at generating energy: no moving parts, simple physics, etc.
2. In space you don't deal with weather or the daylight cycle; you can just point your panels at the sun and generate very stable energy, no batteries required.
3. Environmental factors are simpler: no earthquakes, no security issues, no weather. The main problem here is radiation.
In theory it's a very elegant way to convert energy to compute.
2 is wrong. At a Lagrange point you can do this. Not in low Earth orbit - a LEO orbit takes roughly 90 minutes, and in a typical orbit you spend about a third of that in Earth's shadow.
Satellites are heavily reliant on either batteries or being robust to reboots, because they actually do not get stable power - it's much more dynamic (just more predictable, since there's no weather).
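For anyone who wants to sanity-check those numbers, here's a minimal Python sketch (assuming a circular orbit, a cylindrical shadow, worst-case geometry, and a Starlink-like 550 km altitude):

```python
import math

MU_EARTH = 398_600.4418  # km^3/s^2, Earth's gravitational parameter
R_EARTH  = 6_371.0       # km, mean Earth radius

def orbit_period_min(alt_km):
    """Period of a circular orbit, in minutes."""
    a = R_EARTH + alt_km
    return 2 * math.pi * math.sqrt(a**3 / MU_EARTH) / 60

def eclipse_fraction(alt_km):
    """Fraction of a circular orbit spent in Earth's shadow, assuming a
    cylindrical shadow and worst-case geometry (beta angle = 0)."""
    half_angle = math.asin(R_EARTH / (R_EARTH + alt_km))
    return half_angle / math.pi

alt = 550.0  # km, assumed Starlink-like altitude
T, f = orbit_period_min(alt), eclipse_fraction(alt)
print(f"period ~{T:.0f} min, eclipse ~{f * T:.0f} min per orbit ({f:.0%})")
# -> period ~96 min, eclipse ~36 min per orbit (37%)
```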
You can line up the solar panels and radiators facing away from each other, and the radiators would take up less surface area. I think the tricky part would be the weight of the water + pipes needed to move heat from the compute to the radiators.
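A rough check on the relative areas (Python, with assumed cell efficiency and radiator emissivity): a radiator near room temperature needs about as much area as the panels; it only gets meaningfully smaller if you run it hot.

```python
SIGMA = 5.670e-8  # W/m^2/K^4, Stefan-Boltzmann constant
SOLAR = 1361.0    # W/m^2, solar constant near Earth
panel_eff = 0.30  # assumed cell efficiency
eps = 0.90        # assumed radiator emissivity

P = 1e6  # 1 MW of electrical load, all of which must be radiated away
panel_area = P / (SOLAR * panel_eff)
for T in (300, 350, 400):  # radiator temperature, kelvin
    rad_area = P / (eps * SIGMA * T**4)  # one-sided, ignoring absorbed sunlight
    print(f"{T} K: radiator {rad_area:,.0f} m^2 vs panels {panel_area:,.0f} m^2")
# -> ~2,400 m^2 at 300 K (same as the panels), ~770 m^2 at 400 K
```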
That is not a realistic test, as any space engineer could've told them. First of all, that's on the very low end for a cosmic ray, an order of magnitude below the average energy. And the average energy doesn't matter much anyway, because the distribution is very wide and the much more energetic cosmic rays do far more damage. It was also not a fully integrated test with a spacecraft, which matters because a high-energy cosmic ray striking other parts of the spacecraft generates a shower of secondary particles, and those secondaries do most of the damage of a cosmic ray strike.
If someone has a design out there where this works and you can launch it economically on a rocket today, I wanna see it. And then I wanna compare it to the cost of setting up some data centers on earth (which, BTW, you can service in real time; it sounds like these will be one-and-done launches).