It seems like they plan to do hardware-software co-design.
Devices (servers, switches?) and firmware designed specifically for their software stack.
A software stack (drivers, OS) designed specifically for their hardware.
Really interested in seeing where this goes.
They're apparently doing hardware-software co-design for rack-scale servers, integrating some kind of state-of-the-art systems-management features of the sort that "cloud" suppliers like AWS are assumed to be relying on for their offerings.
Kinda interesting ofc, but how many enterprises actually need something like this? If there were an actual perceived need for datacenter equipment to be designed (and hardware-software co-designed) at "rack scale", we would probably be doing it right and running mainframe hardware for everything.
It starts looking appealing when your monthly AWS bill is deep into the 5 figures and you’re kinda trapped: do you pour your engineering resources into cost-optimizing your infrastructure (doable, but time consuming) or kick the can down the road?
Believe it or not, going private but not having to give up the niceties of AWS or Azure would be quite appealing.
Remember Eucalyptus? It was led by the former CEO of MySQL. They built an open-source project that attempted to be compatible with AWS. The project is still alive.
Looks like the last release was in 2017. Eight years of releases is a relatively good run. May be of interest to those like Oxide, who are considering similar paths, with the added complexity of firmware, BMCs, roots of trust and OCP-style hardware.
Also, some companies just want their data in-house and not hosted on a remote cloud. Sometimes contracts preclude the use of 3rd party external companies to house critical data.
This is certainly a need in HPC environments. Think of natgas / big oil, finance, science (protein folding, genetic synthesis, genome research, etc). The cloud simply makes no sense for a lot of these industries. We're talking the kind of scale where it's 20k physical machines running MPI research jobs. The cloud is not a good fit for environments like that, where they're using 100% of their compute 100% of the time if at all possible.
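That utilization point is the whole economics in a nutshell: cloud wins when machines sit idle, owned hardware wins when they don't. A back-of-the-envelope sketch of the break-even utilization (all prices here are made-up placeholders, not real AWS or vendor quotes):

```python
# Back-of-the-envelope break-even: at what utilization does owning
# hardware beat renting cloud instances? All figures are hypothetical
# placeholders, not real AWS or vendor pricing.

def monthly_cloud_cost(hourly_rate, utilization, hours=730):
    """Cloud: you pay only for the hours you actually run."""
    return hourly_rate * hours * utilization

def monthly_onprem_cost(capex, amortization_months, opex_per_month):
    """On-prem: fixed cost regardless of utilization."""
    return capex / amortization_months + opex_per_month

# Hypothetical figures: a $2.50/hr cloud instance vs. a $30k server
# amortized over 36 months plus $300/month for power and space.
cloud_rate = 2.50
onprem = monthly_onprem_cost(capex=30_000,
                             amortization_months=36,
                             opex_per_month=300)

# Utilization at which the two monthly costs cross.
breakeven = onprem / (cloud_rate * 730)
print(f"on-prem monthly cost: ${onprem:.0f}")
print(f"break-even utilization: {breakeven:.0%}")
```

With these made-up numbers the crossover lands around 60% utilization; an HPC shop running near 100% is comfortably past it, which is exactly why the cloud pitch falls flat there.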
I would guess the problem with business as usual is that the sheer cost of doing this means only companies throwing off serious cash can attempt it.
Which has the side effect of their solutions being bespoke, because acceptable cost looks like "Tell me when I need to stop adding zeros to make this happen."
To go another way you either (1) need to be Amazon-scale already (i.e. "there aren't enough zeros to make inefficiency worth our time") or (2) be willing to say no to huge profits out of ideological purity.
"Hey look we have this cool solution" -> "Hey look we have a few small to middle tier customers" -> "Hey look we have a big customer" -> "Hey look our big customer bought our company". It's the silicon valley startup shuffle...