
> The memory goes bad because they write useless logs to a chip, and it eventually fails.

I worked for a software storage vendor with revenue in the billions that had the exact same issue (excessive logging wearing out under-spec'd flash drives).
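For a sense of scale, here's a rough back-of-the-envelope wear-out estimate. Every number below is a made-up assumption for illustration, not our actual specs:

    # Toy flash wear-out estimate; all figures are
    # illustrative assumptions, not datasheet values.
    chip_size_gb = 4             # small eMMC boot/log partition
    pe_cycles = 3000             # rated program/erase cycles
    write_amplification = 10     # tiny synced appends amplify badly
    log_gb_per_day = 2           # chatty debug logging

    host_writes_gb = chip_size_gb * pe_cycles / write_amplification
    days = host_writes_gb / log_gb_per_day
    print(f"estimated wear-out: ~{days / 365:.1f} years")  # ~1.6 years

With numbers like these, a part rated for years of normal use dies well inside the warranty period.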



The bane of every cargo-cult cloud op. I worked with a company that had maybe 20 devs total and more than 30 "microservices" in Kubernetes, and one of the most complex parts of the deployment was handling Graylog and Elasticsearch. Still, they couldn't manage high availability, despite logging all the things. Go figure.


I once worked for a unicorn that got near-zero traffic during the pandemic, but nobody could understand why some services were struggling to stay up.

Datadog was costing several thousand euros per month despite near-absent customer traffic. But the name finally made sense, because all the data in there was absolute dog shit from reboots.

So yeah, too much logging can be bad.


Oh most definitely. Maybe my sarcasm was a bit too subtle.

I definitely think that teams should think about what to log. Otherwise, go with a live-image kind of system like Smalltalk or LISP. The whole event-sourcing paradigm of logging everything and looking at it later strikes me as a poor reconstruction of that concept.
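To be concrete about what I mean, here's event sourcing in miniature (a toy sketch with made-up names, not any real framework): state is never stored directly, only rebuilt by replaying an append-only event log.

    # Toy event-sourcing sketch: current state is derived
    # by replaying an append-only log of events.
    events = []  # the append-only log

    def record(event):
        events.append(event)

    def balance():
        # Rebuild state from scratch on every read.
        total = 0
        for kind, amount in events:
            total += amount if kind == "deposit" else -amount
        return total

    record(("deposit", 100))
    record(("withdraw", 30))
    print(balance())  # 70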

There is a tragic aspect to the "Worse is Better" essay that I see play out everywhere: there is a way to do something correctly, but just throwing something together wins the race to market. Winner takes all, and we're stuck with ossified bad decisions from the past. The idea that we can fix it later is just a lie. You can't do the foundation later; you'll be stuck with a structurally unsound edifice, forever holding it together under a completely unnecessary cognitive load.


Oh I got the sarcasm, I was just agreeing.

And I also agree about worse is better. To me the most tragic part is that "worse" has become almost as costly as doing "The Right Thing", mostly due to the extreme flexibility and rush to market of vendors and libraries. Our foundations weren't as sketchy when the concept was invented.


It has definitely gotten much worse. The only thing keeping me sane is hacking on solo projects in languages with great tooling. I don't think I can even stomach interviews anymore, let alone the whole application-process farce.


I remember doing a systems-design interview using microservices and mentioning at the end, "Well, I guess that's it, but if this were my personal project I would just have a single server and none of the cloud-native BS."

The guy basically answered, "Oh, same. I just ask people to do microservices because that's what the CTO wants."


We had the exact same issue as well haha

These kinds of problems only happen years after the software rollout, so no one cares when you're under time pressure.


We sold physical hardware with bundled software, so we could actually create the problem via an in-market software update that didn't exist at the time of sale! Fun times.


HPE also had this issue with their iLO 4. New firmware fixed it, but if your flash chip was already worn out you were out of luck, and the only solution was to replace the entire motherboard.


Issue, or revenue driver?


Issue. We warrantied the longevity of those flash drives, and they were cheap anyway. The problem was mostly the customer pain.



