
The 2001 telecoms crash benefited the companies that came later by leaving behind plentiful, inexpensive dark fiber after the bubble popped. WorldCom, ICG, and Williams sold off to Verizon, Level 3, Teleglobe, and others. That in turn helped future Internet companies gain access to plentiful and inexpensive bandwidth. Cable telephony companies such as Cablevision Systems, Comcast, Cox Communications, and Time Warner used the existing coaxial connections into the home to launch voice services.


Rail and fiber depreciate on multi-decade timescales; AI data centers are closer to tulips. Even assuming we manage to stretch data center hardware to 10 years, these assets won't be around long enough to support an ecosystem of new companies if the economics stop making sense. Ultimately the only durable thing is whatever power infrastructure gets built, vs rail and fiber, where the inheritance wasn't just rail networks or fiber strands but thousands of kilometers of earthworks built out into massive physical networks.


Data centers last decades. Many of the current AI hosting vendors, such as CoreWeave, have crypto origins. Their data centers were built out in the 2010s and early 2020s.

Many legacy systems still running today are IBM or Solaris servers that are 20 or 30 years old. There's no reason to believe GPUs won't still be in use in some capacity (e.g. inference) a decade from now.


The skeletons of data centers and some components (e.g. cooling) have a long shelf life, but they're also only ~10% of the investment. The plurality of fiber and rail spending went towards building out linear infrastructure, where improvements can be milked at the nodes to improve network efficiency (better switches etc).

Versus the plurality of AI investment, i.e. trillions going towards fast-depreciating components which, we can say with relative confidence, will likely end up as net-negative stranded assets in terms of amortization costs if current semiconductor manufacturing trends continue.

Keeping some mission-critical legacy systems around is different from having trillions on the books that make no financial sense to keep. Post-bubble, new-gen hardware will likely not carry scarcity pricing and will have better compute efficiency (better capex and opex), so there is no reason to believe companies will keep legacy GPUs around at scale if every rack loses them money relative to new hardware. And depending on actual commercialized compute demand, it can simply make more economic sense to retire them than to keep them going.


They used to last decades; the world didn't move at this speed before.


Is this a question of the GPU chips dying due to being warm semiconductors, or of them becoming outdated relative to new chips?


Both. Semis vs concrete, depreciating vs durable assets. With durable linear assets you upgrade the switches to improve the fiber/rail, and the linear network is where most of the investment went. With GPUs you replace the racks, and the racks are where most of the investment is. Either way it cannot be stretched the same way materially and, most importantly, economically: new chips with better power efficiency mean running old chips is literally losing money squatting on a data center slot. There is very little reason to believe new chips will cost more than legacy (current) chips, for the simple reason that much of the current fleet was acquired at scarcity pricing, i.e. Nvidia margins went from 50% to 70%. Those 20 points are a massive capex premium that is not going to be competitive if the bubble pops and Nvidia has to sell new hardware at commodity pricing, hardware that in all likelihood will also be more compute-efficient in terms of power (opex). Even if you stretch existing compute past 3-5 years to 10, it is still closer to tulips than to rail or fiber in terms of economically productive timescale.

TLDR: old durable infra tends to retain positive residual value because it isn't easy to replace economically or frequently; old compute has negative residual value because it is.
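To make that residual-value claim concrete, here's a rough back-of-envelope sketch, assuming a fixed revenue-earning capacity per data center slot and purely hypothetical prices, power draws, and throughputs (none of these numbers come from the thread):

    # Back-of-envelope: keep an old rack in a slot, or replace it with new silicon?
    # All figures are hypothetical assumptions for illustration only.
    HOURS_PER_YEAR = 8760
    POWER_PRICE = 0.08        # $/kWh, assumed industrial rate
    SLOT_REVENUE = 400_000    # $/year the slot earns at current-gen throughput

    def yearly_profit(capex, lifetime_years, kw_draw, relative_throughput):
        # relative_throughput scales revenue: an old rack delivering 30% of
        # new-gen compute only captures 30% of the slot's revenue.
        revenue = SLOT_REVENUE * relative_throughput
        energy = kw_draw * HOURS_PER_YEAR * POWER_PRICE
        depreciation = capex / lifetime_years
        return revenue - energy - depreciation

    # Old rack: capex is already sunk, but it's power-hungry and several
    # generations behind on throughput.
    old = yearly_profit(capex=0, lifetime_years=1, kw_draw=120, relative_throughput=0.3)

    # New rack: fresh capex at assumed post-scarcity pricing, better perf/watt.
    new = yearly_profit(capex=800_000, lifetime_years=5, kw_draw=100, relative_throughput=1.0)

    print(f"old rack: ${old:,.0f}/yr, new rack: ${new:,.0f}/yr")
    # Even with zero remaining capex, the old rack loses once the slot's
    # opportunity cost (foregone revenue) plus its power bill exceeds the new
    # rack's depreciation plus power -- the "negative residual value" above.

With durable linear infra the analogous calculation flips: the sunk earthworks keep earning because nothing cheaper comes along to displace them from the slot.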


This is indeed true, but doesn't fiber have a far longer lifetime than GPU heavy data centers? The major cost center is the hardware, which has a fairly short shelf life.


Well you still get the establishment of 1) large industrial buildings 2) water/electricity distribution 3) trained employees who know how to manage a data center

Even if all of the GPUs inside burn out and you want to put something else entirely inside of the building, that's all still ready to go.

Although there is the possibility they all become dilapidated buildings, like abandoned factories


The building and electrical infrastructure are far cheaper than the hardware. So much so that the electricity is a small cost of the data center build out, but a major cost for the grid.

If the most valuable part is quickly depreciating and goes unused within the first few years, it won't have a chance to build long-term value like fiber did. If data centers become, I don't know, battery grid storage, it will be very, very expensive grid storage.

Which is to say that while the early rush into fiber eventually proved useful, overallocation of capital to GPUs goes to pure waste.


I'm sure there are other "emerging" markets that could make use of the GPUs, I heard game streaming is relatively popular so you can play PC games on your phone for example. I'd guess things similar to that would benefit from a ton of spare GPUs and become significantly more viable.


>The building and electrical infrastructure are far cheaper than the hardware.

Maybe it's cheaper if we measure by dollars or something, but at the same time we lack the political will to actually build that electrical infrastructure without something like AI on the horizon.

For example, many data center operators are pushing for nuclear power: https://www.ehn.org/why-microsoft-s-move-to-reopen-three-mil...

That's one example among many.

So I'm hesitant to believe that "electricity is a small cost" of the whole thing, when they are pushing for something as controversial as nuclear.

Also the 2 are not mutually exclusive. Chip fabs are energy intensive. https://www.tomshardware.com/tech-industry/semiconductors/ts...


Nuclear is not very controversial, there are tons of places that would be very happy to have additional reactors, namely those with successful reactors right now. It's just super expensive to build and usually a financial boondoggle.

AI companies are saying they are trying to build nuclear because it makes them sound serious. But they are not going to build nuclear; solar and storage are cheaper, more flexible, and faster to build. The only real nuclear commitment is Microsoft reopening an old nuclear reactor that had become uneconomic to operate. Building anything new would be a five-plus year endeavor, and that's if we were somewhere with high construction productivity like China. In the US, new nuclear is 10 years away.

But as soon as Microsoft restarted an old reactor, all their competitors felt like they had to sound as serious, so they did showy things that won't result in solving their immediate needs. Everybody's renewable commitments dwarf their nuclear commitments.

AI companies can flaunt expensive electricity deals for high investor impact precisely because electricity is a small cost component of their inputs. It's a hugely necessary input, and the limiting factor for most of their plans, but the dollar amount for the electricity is small. The current valuations of AI assume that a kWh put towards AI will generate far, far more value than the average kWh on the grid.
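A quick sanity check on "electricity is a small cost component", using purely hypothetical figures for server price, power draw, PUE, and electricity rates (not numbers from this thread):

    # Rough share of electricity vs amortized hardware, hypothetical figures only.
    SERVER_PRICE = 300_000     # $ for an 8-GPU server, assumed
    AMORTIZATION_YEARS = 5
    POWER_DRAW_KW = 10         # whole-server draw, assumed
    PUE = 1.3                  # data center power usage effectiveness, assumed
    POWER_PRICE = 0.08         # $/kWh, assumed industrial rate
    HOURS_PER_YEAR = 8760

    capex_per_year = SERVER_PRICE / AMORTIZATION_YEARS                      # $60,000
    energy_per_year = POWER_DRAW_KW * PUE * HOURS_PER_YEAR * POWER_PRICE    # ~$9,100

    share = energy_per_year / (capex_per_year + energy_per_year)
    print(f"hardware ${capex_per_year:,.0f}/yr, power ${energy_per_year:,.0f}/yr ({share:.0%})")

Under those assumptions power is on the order of 10-15% of yearly spend per server: small relative to the hardware, but still enormous in absolute terms once multiplied across a fleet, which is why it strains the grid anyway.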



