> This NAS is very quiet for a NAS (video with audio).
Big (large-radius) fans can move a lot of air even at low RPM, and be much more energy efficient.
Oxide Computer, in one of their presentations, talks about using 80mm fans, as they are quiet and (more importantly) don't use much power. They observed, in other servers, as much as 25% of the power went just to powering the fans, versus the ~1% of theirs:

* https://www.youtube.com/shorts/hTJYY_Y1H9Q

* https://www.youtube.com/watch?v=4vVXClXVuzE
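For intuition on why bigger-but-slower wins: the standard fan affinity laws say that, for geometrically similar fans, airflow scales with RPM times diameter cubed, while power scales with RPM cubed times diameter to the fifth. A rough back-of-the-envelope sketch (the numbers are illustrative, not from Oxide):

```python
# Back-of-the-envelope fan affinity laws for geometrically similar fans:
#   airflow Q ~ N * d^3     (N = RPM, d = diameter)
#   power   P ~ N^3 * d^5
# Outputs are relative ratios, not absolute CFM or watts.

def matched_rpm(d_small: float, d_big: float, n_small: float) -> float:
    """RPM the bigger fan needs to move the same air as the smaller one."""
    return n_small * (d_small / d_big) ** 3

def relative_power(n1: float, d1: float, n2: float, d2: float) -> float:
    """Shaft power of fan 2 as a fraction of fan 1."""
    return (n2 / n1) ** 3 * (d2 / d1) ** 5

n80 = 5000  # hypothetical 80mm fan speed, RPM
n140 = matched_rpm(80, 140, n80)
print(f"140mm fan needs ~{n140:.0f} RPM")                            # ~933
print(f"at ~{relative_power(n80, 80, n140, 140):.0%} of the power")  # ~11%
```

Same airflow for roughly a tenth of the fan power, which lines up with the 25%-vs-1% observation.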
Interesting - I'm used to desktop/workstation hardware, where 80mm is the smallest standard fan (aside from the 40mm fans in near-extinct Flex ATX PSUs), and even that is kind of rare. Mostly you see 120mm or 140mm.
Yeah. In a home environment you should absolutely use desktop gear. I have five 80mm and one 120mm PWM fans in my NAS, and they are essentially silent: they can't be heard over the sound of the drives (which is the noise floor for a NAS).
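Sound levels add logarithmically, which is why fans well below the drive noise floor effectively vanish; a quick sketch with made-up dB(A) figures:

```python
import math

def combined_spl(levels_db):
    """Combine sound pressure levels (dB) from independent sources."""
    return 10 * math.log10(sum(10 ** (l / 10) for l in levels_db))

drives = [28] * 4   # hypothetical: four drives at 28 dB(A) each
fans = [15] * 6     # hypothetical: six fans at 15 dB(A) each

print(f"drives alone:  {combined_spl(drives):.1f} dB(A)")         # ~34.0
print(f"drives + fans: {combined_spl(drives + fans):.1f} dB(A)")  # ~34.3
```

Six fans add a fraction of a dB over the drives alone: inaudible in practice.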
You do need good PWM fans if you're concerned about noise, though, as cheaper ones can "tick" annoyingly. Two brands I know to be good in this respect are be quiet! and Noctua. DC (voltage-controlled) fans would in theory be better, but most motherboards don't support them (that would require an external controller and thermal sensors, I think).
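On Linux, the motherboard's PWM headers are usually exposed through hwmon in sysfs, so you can roll your own fan curve. A minimal sketch, assuming a hypothetical hwmon node and sensor name (yours will differ, and tools like fancontrol do this properly):

```python
import time
from pathlib import Path

# Hypothetical hwmon node -- find yours with: ls /sys/class/hwmon/*/name
# Writing these files needs root.
HWMON = Path("/sys/class/hwmon/hwmon2")

def set_manual_pwm(duty: int) -> None:
    """Drive pwm1 directly (0-255); pwm1_enable=1 selects manual control."""
    (HWMON / "pwm1_enable").write_text("1")
    (HWMON / "pwm1").write_text(str(max(0, min(255, duty))))

def temp_c() -> float:
    # temp*_input is reported in millidegrees Celsius.
    return int((HWMON / "temp1_input").read_text()) / 1000

while True:
    t = temp_c()
    # Simple linear curve: 30% duty at/below 35C, rising to 100% at 70C.
    duty = int(255 * min(1.0, max(0.3, (t - 35) / 35)))
    set_manual_pwm(duty)
    time.sleep(5)
```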
> Those 40mm PSU fans, and the PSU, are what they are replacing with a DC bus bar.
DC (power) in the DC (building) isn't anything new: the telco space has used -48V (nominal) power for decades. Do a search for (say) "NEBS DC power" and you'll get a bunch of stuff on the topic.
Lots of chassis-based systems centralized the AC-DC power supplies.
We also worked with the fan vendor to get parts with a lower minimum RPM. The stock fans idle at about 5K RPM, and ours idle at 2K, which is already enough to keep the system cool under light loads.
> just curious, are you associated with them, as these are very obscure youtube videos :D
Unassociated, but tech-y videos are often recommended to me, and these got pushed my way. (I have viewed other, unrelated Tech Day videos, which is probably why I got that short. I'm also an old Solaris admin, so I'm aware of Cantrill, especially his rants.)
> Love it though, even the reduction in fan noise is amazing. I wonder why nobody had thought of it before, it seems so simple.
Depends on the size of the server: you can't really fit bigger fans in 1U or even 2U pizza boxes. And for general-purpose servers, I'm not sure how many 4U+ systems are purchased; perhaps some more now that GPU cards may be a popular add-on.
For a while chassis systems (e.g., the HP c7000) were popular, but I'm not sure how common they are nowadays.
> I'm not sure how many 4U+ systems are purchased; perhaps some more now that GPU cards may be a popular add-on.
Going from what I see at eCycle places, 4U dried up years ago. Everything is either 1U or 2U, or massive blade receptacles (10+U).
We (the home-lab-on-a-budget people) may see a return to 4U now that GPUs are in vogue, but I'd bet that the hyperscalers will drive that back down to something like 3U with water cooling over the longer term.
We may also see something similar with storage systems; it's only a matter of time before SSDs get "close enough" to spinning rust on the $/GB and per-unit-volume metrics.
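If you want to play with that crossover, it reduces to a one-line calculation once you assume price-decline rates. Everything below is a hypothetical illustration, not market data, and it ignores the density (per-unit-volume) side:

```python
import math

# All numbers are hypothetical placeholders, not market data.
ssd_per_tb, ssd_decline = 50.0, 0.18   # $/TB and assumed yearly price decline
hdd_per_tb, hdd_decline = 15.0, 0.07

# Solve ssd * (1 - ds)^t == hdd * (1 - dh)^t for t (years):
years = math.log(hdd_per_tb / ssd_per_tb) / math.log(
    (1 - ssd_decline) / (1 - hdd_decline)
)
print(f"$/TB crossover in ~{years:.1f} years")  # ~9.6 under these assumptions
```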