For example, if your use case is a file archive (think raw photos or video), then the filesystem interface does not matter - if the computer crashes soon after the copy, you re-copy from the original media. But bit flips are very real and can ruin your day.
I'm really curious what your storage stack is that you're getting undetected bit flips. This stuff was my day job for ~10 years. I've seen every kind of error you can imagine, and I can't actually remember "bit flips" showing up in end-user data that weren't eventually attributable to something stupid like a lack of ECC RAM, or software bugs. Random bit flips tend to show up in two ways: the interface/storage mechanism gets "slow" (due to retries/etc.), or you get flat-out read errors. This isn't the 1980s, where you could actually read data back from your storage mechanism and get flipped bits; there are too many layers of ECC on the storage media for that to go undetected. Combined with media scrubbing, failures tend to be all or nothing: the drive goes from working to 100% dead, or the RAID kicks it when the relocated sector counts start trending up, or the device goes into a read-only mode.

What most people don't understand is that IO interfaces these days aren't designed to be 100% electrically perfect. The interface performance/capacity is pushed until the bit error rate (BER) is not insignificant, and then error correction is applied to ensure that the end result is basically perfect.
But as I mentioned, these days I'm pretty sure nearly all the storage loss that isn't physical damage is actually software bugs. Just a couple of months ago I uploaded a multi-GB file (from my ECC-protected workstation) to a major hyperscaler's cloud storage/sharing option. I sent the link to a colleague halfway around the globe and they reported an unusual crash. So I asked them to md5sum the file, and they got a different result from what I got; I then downloaded the file myself, diffed it against the original, and right in the middle there was a ~50k block of garbage. I uploaded it again, and it was fine. Blame it on my browser or whatever if you will, but the end result was quite disturbing, because I'm fairly certain my local storage stack was fine.

These days I'm really quite skeptical of "advanced" filesystems/etc. What I want is a dumb one where the number 1 priority is data consistency. I'm not sure that is an accurate reflection of many of them, where winning the storage perf benchmarks, or the feature wars, seems to be a higher priority.
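For what it's worth, when a checksum mismatch like that shows up, a few lines of Python are enough to see whether the damage is one contiguous block or scattered flips. This is just a sketch (the file paths on the command line are placeholders, not the actual files from the incident above):

    #!/usr/bin/env python3
    """Report the byte ranges where two copies of a file differ."""
    import sys

    CHUNK = 1 << 20  # read 1 MiB at a time


    def diff_ranges(path_a, path_b):
        """Yield (start_offset, length) for each maximal run of differing bytes."""
        offset = 0
        run_start = None
        with open(path_a, "rb") as a, open(path_b, "rb") as b:
            while True:
                ca, cb = a.read(CHUNK), b.read(CHUNK)
                if not ca and not cb:
                    break
                # Pad the shorter read so a length mismatch also shows up as a difference.
                n = max(len(ca), len(cb))
                ca, cb = ca.ljust(n, b"\x00"), cb.ljust(n, b"\x00")
                for i in range(n):
                    if ca[i] != cb[i]:
                        if run_start is None:
                            run_start = offset + i
                    elif run_start is not None:
                        yield run_start, offset + i - run_start
                        run_start = None
                offset += n
        if run_start is not None:
            yield run_start, offset - run_start


    if __name__ == "__main__":
        for start, length in diff_ranges(sys.argv[1], sys.argv[2]):
            print(f"differs at offset {start:#x}, length {length} bytes")

It's byte-by-byte and slow on multi-GB files, but it answers the one question you care about: a single garbage block points at a transfer/software bug, scattered single-bit differences point at memory or media.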
The last time I saw data damage was around 2005: multiple SATA drives connected to a regular consumer motherboard running Linux (sorry, I don't remember the brands). If I remember right, there was an 8-byte block damaged every few gigabytes transferred or so? So a very high number of damaged files, given I had a few terabytes of data.
I never found the cause, because I just switched to a completely different system to copy the data. I know it was not disk-specific, because it was happening on multiple hard drives; nor was it physical damage, as SMART/syslog were silent and reading the disk again gave back correct data. Memory was fine -- not ECC, but I did run a lot of memtests on it.
Later on, I found some blog posts that mentioned a similar problem and blamed it on a bad SATA card, a bad cable, or even a bad power supply. I remember the original one was by Jeff Bonwick on his ZFS blog, but I cannot find it anymore. Here is a more modern link instead: https://changelog.complete.org/archives/9769-silent-data-cor...
I now have a homegrown checksumming solution which I run after each major file transfer, and I have not seen any data corruption since (knock on wood).
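A minimal version of that kind of tool can be just a SHA-256 manifest plus a verify pass. Something like the sketch below (the script layout and manifest filename are illustrative, not the exact tool described above):

    #!/usr/bin/env python3
    """Build and verify a SHA-256 manifest for a directory tree.

    A sketch of a homegrown checksumming workflow; names and layout
    here are illustrative, not any particular commenter's tool.
    """
    import hashlib
    import sys
    from pathlib import Path

    MANIFEST = "checksums.sha256"  # assumed name; one "<hash>  <relative path>" line per file


    def sha256_of(path):
        """Stream the file in chunks so large media files do not exhaust RAM."""
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()


    def build(root):
        """Write a manifest covering every regular file under root."""
        with (root / MANIFEST).open("w") as out:
            for p in sorted(root.rglob("*")):
                if p.is_file() and p.name != MANIFEST:
                    out.write(f"{sha256_of(p)}  {p.relative_to(root)}\n")


    def verify(root):
        """Re-hash every listed file and report mismatches; returns count of bad files."""
        bad = 0
        for line in (root / MANIFEST).read_text().splitlines():
            digest, rel = line.split("  ", 1)
            target = root / rel
            if not target.is_file() or sha256_of(target) != digest:
                print(f"MISMATCH: {rel}")
                bad += 1
        return bad


    if __name__ == "__main__":
        cmd, root = sys.argv[1], Path(sys.argv[2])
        if cmd == "build":
            build(root)
        elif cmd == "verify":
            sys.exit(1 if verify(root) else 0)

Run the build pass against the source tree before copying, transfer everything (manifest included), then run the verify pass on the destination; a non-zero exit tells you something got mangled in transit.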