I think that the author may not have experienced these sorts of errors before.
Yes, the average person may not care about a couple of bit flips per year and losing the odd pixel or block of a JPEG, but they will care if a flaky cable, a botched transfer, a bad RAM chip, or whatever else manages to destroy a significant amount of data before they notice it.
I was young and only had a desktop, so all my data was there.
So I purchased a 300GB external USB drive to use for periodic backups. It was all manual copying of files across with no real schedule, but it was fine for the time and life was good.
Over time my data grew and the 300GB drive wasn't large enough to store it all. For a while some of it wasn't backed up (I was young with much less disposable income).
Eventually I purchased a 500GB drive.
But what I didn't know was that my desktop drive was dying. Bits were flipping, a lot of them.
So when I did my first backup with the new drive, I copied all my data off my desktop along with the corruption.
It was months before I realised a huge amount of my files were corrupted. By that point I'd wiped the old backup drive to give to my Mum to do her own backups. My data was long gone.
Once I discovered ZFS I jumped on it. It was the exact thing that would have prevented this: I could have detected the corruption when I bought the new backup drive and did the initial backup to it.
(I made up the drive sizes because I can't remember, but the ratios will be about right).
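For context on why ZFS catches this: it checksums every block it stores, and a periodic scrub re-reads the whole pool and reports anything that fails verification. A rough sketch, with "tank" as a placeholder pool name:

    # start a scrub of the pool (pool name "tank" is just an example)
    zpool scrub tank

    # check progress and any checksum errors it has turned up
    zpool status -v tank

Run that on a schedule and corruption gets reported to you instead of being discovered months later.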
There’s something disturbing about the idea of silent data loss; it totally undermines the peace of mind of having backups. ZFS is good, but you can also just run rsync periodically with the --checksum and --dry-run flags and check the output for diffs.
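A minimal sketch of that kind of check, with placeholder paths: --checksum makes rsync compare file contents rather than just size and mtime, --dry-run keeps it from writing anything, and --itemize-changes prints a line for each file that differs.

    # compare the live copy against the backup without changing either side
    rsync --archive --checksum --dry-run --itemize-changes \
        /data/ /mnt/backup/data/ > rsync-diff.txt

    # any lines in rsync-diff.txt mean something differs between the two copies

The caveat is that it only tells you the copies differ, not which side is the corrupt one, so you still want old checksum logs or a third copy to arbitrate.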
It happens all the time. Have a plan, perform fire drills. It's a lot of time and money, but there's no feeling quite like unfucking yourself by getting your lost, fragile data back.
The challenge with silent data loss is that your backups will eventually not have the data either - it will just be gone, silently.
After having that happen a few times (pre-ZFS), I started running periodic find | md5sum > log.txt type jobs and keeping archives of the logs (rough sketch below).
It’s caught more than a few problems over the years, and it allows manual double checking even when using things like ZFS. In particular, some tools/settings just aren’t sane to use for copying large data sets, and I only discovered that when… some of it didn’t make it to its destination.
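A fleshed-out version of that kind of job, assuming the data lives under /data (the path and log names are placeholders):

    # record a checksum for every file, in a dated log you can archive
    find /data -type f -exec md5sum {} + > md5-$(date +%F).txt

    # later, re-check the current files against an old log;
    # --quiet suppresses the OK lines so only mismatches and unreadable files show up
    md5sum --quiet -c md5-2024-01-01.txt

Diffing two dated logs against each other also works, and catches files that have quietly disappeared.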