[zfs-discuss] Clarifying questions

Gordan Bobic gordan.bobic at gmail.com
Wed Jan 15 04:46:14 EST 2014

On Wed, Jan 15, 2014 at 7:53 AM, Niels de Carpentier
<zfs at decarpentier.com>wrote:

> >
> > Note that the ST4000DM000 (same 4 TB capacity, same 3.5" package, same
> > SATA/600 connection all as the ST4000NM0033 which I use) packs data
> > into 4,096 byte sectors. (It's also only rated to 2,400 power-on hours
> > - which is downright ridiculous IMO - whereas the STx000NM00xx are
> > rated to 8,760 power-on hours *per year*.) [1] Depending on the exact
> > ECC/FEC scheme, this may have an impact on how much ECC data is needed
> > to correct any given read error.
> >
> > I can see the look on the face of someone who RMAs a disk that is well
> > within even its short warranty period for data corruption errors and
> > they get it back with a "sorry, it's been powered on for more than 100
> > days which is all we guarantee, check the data sheet" - and it's not
> > pretty. But really, do check the data sheet under "Reliability/Data
> > Integrity"; "Power-on Hours" is listed as 2,400, whereas for the
> > ST4000NM0033, "Power-On Hours per Year" is listed as "8760 (24×7)".
> > That's not a marginal difference!
> I think that's just marketing and trying to get people to buy more
> expensive drives. The worst case for drives is actually when data is
> written and not read for a long time. Disks that are on 24/7 and scrubbed
> weekly are far less likely to get errors than disks that are written and
> read back 2 years later.

Are you talking about stored disks rather than active disks, or are
you talking about data on sectors that haven't been read recently?
I cannot comment on your theory (do you have empirical evidence?)
since my disks undergo a weekly scrub and a weekly long SMART
self-test, so all of their sectors get read at least once/week.

