I have 2 lightly-used home ZFS servers (OmniOS/Napp-IT), each with 8 ST3000DM001 drives, 16 in total. The drives were all shucked from external units purchased in mid-to-late 2012; it was by far the cheapest way to get storage at the time, given the price spike after the Thailand flooding.
I've had to buy replacements for complete or partial failures: 3 in 2013, 10 in 2014, and 1 so far in 2015. 4 of the replacements have failed too, including 2 RMA returns after I switched to buying internal drives with warranties. Only 4 of the original 16 remain.
These stats match my experience. The ST3000DM001 is by far the least reliable drive I've encountered in 20 years of running drive arrays. Fortunately, with two ZFS raidz arrays mirroring each other, I haven't had any data loss.
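(To be clear, by "mirroring" I mean scheduled snapshot replication between the two pools, presumably via zfs send/receive, not a single mirrored vdev. A rough sketch of the idea, with made-up dataset and host names; a real job would track the last common snapshot and send incrementally with zfs send -i:)

    import subprocess
    from datetime import datetime, timezone

    # Hypothetical names: dataset "tank/data", receiving host "server2".
    snap = f"tank/data@repl-{datetime.now(timezone.utc):%Y%m%d-%H%M%S}"
    subprocess.run(["zfs", "snapshot", snap], check=True)

    # Stream the snapshot to the second box over ssh; -F rolls the
    # target dataset back so it can accept the incoming stream.
    send = subprocess.Popen(["zfs", "send", snap], stdout=subprocess.PIPE)
    subprocess.run(["ssh", "server2", "zfs", "receive", "-F", "tank/data"],
                   stdin=send.stdout, check=True)
    send.stdout.close()
    send.wait()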
Just coming up on 100,000 combined flight hours with my 6. No real complaints, but I'm quietly planning their retirement to secondary mass storage. Picked up some surprisingly cheap Toshiba 5TBs for evaluation[1].
Did you notice anything in particular leading up to failure? IO errors, reallocated/pending sectors, checksum errors?
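For reference, this is roughly what I watch on my own drives; a minimal sketch using smartmontools (the device path is a placeholder, and on OmniOS it would be a /dev/rdsk/... path; attribute IDs 5/197/198 are the usual ones, but worth verifying per drive model):

    import re
    import subprocess

    DEVICE = "/dev/sda"  # placeholder; substitute the real device path

    # SMART attribute IDs that commonly precede failure:
    #   5 Reallocated_Sector_Ct, 197 Current_Pending_Sector,
    #   198 Offline_Uncorrectable
    WATCH = {5, 197, 198}

    out = subprocess.run(["smartctl", "-A", DEVICE],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        m = re.match(r"\s*(\d+)\s", line)
        if m and int(m.group(1)) in WATCH:
            # Last column of an attribute row is the raw count.
            print(line.split()[1], "raw =", line.split()[-1])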