[zfs-discuss] zfs-discuss Digest, Vol 43, Issue 3

Gordan Bobic gordan.bobic at gmail.com
Wed Nov 7 13:52:01 EST 2018


This is a good paper from CERN on the matter:

http://indico.cern.ch/getFile.py/access?contribId=3&sessionId=0&resId=1&materialId=paper&confId=13797

Their findings were that in reality the error rates are nearer 10^-7, IIRC.



On Wed, Nov 7, 2018 at 6:42 PM Gionatan Danti via zfs-discuss <
zfs-discuss at list.zfsonlinux.org> wrote:

> On 07-11-2018 15:48, George Melikov via zfs-discuss wrote:
> > Cherry on top:
> > - consumer HDDs: Non-recoverable read errors per bits read = <1 in
> > 10^14, so manufacturers nearly guarantee a one-bit failure per
> > ~12.5 TB (11.37 TiB) of data read
>
> This is not a given at all. First, a 1 in 10^14 URE rate means that each
> bit read has a 1/10^14 chance of failing which, for 10^14 bits read
> (100 Tb, or 12.5 TB), roughly translates to a
> (1 - 1/10^14)^(10^14) ~= 36.8% probability of *successfully* completing
> all reads.
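>
> To make the arithmetic explicit, here is a minimal sketch (my own
> illustration in Python; it assumes the spec is a literal, independent
> per-bit failure probability, which is exactly the interpretation under
> discussion):
>
>     import math
>
>     p_ure = 1e-14       # claimed non-recoverable read error rate per bit
>     bits_read = 1e14    # 100 Tb, i.e. 12.5 TB, read back
>
>     # probability that every single bit is read correctly:
>     # (1 - p)^n ~= exp(-p * n) for small p
>     p_all_ok = math.exp(-p_ure * bits_read)
>     print(p_all_ok)     # ~0.368, i.e. ~36.8% chance of zero UREs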
>
> But the key point is that manufacturers do *not* explicitly state how
> the URE rate is estimated, or under what conditions it is measured.
> From here [1]: "It is important to stress that there is no generally
> agreed upon interpretation of bit error rates."
>
> For example, are UREs due to bad writes, so that a subsequent read
> fails?
> Are they due to degradation of the underlying magnetic media?
> Can a URE "disappear" if the sector is read again at a later time
> and/or under different environmental conditions (e.g. temperature)?
> As HDDs are block devices, does the URE rate account for the underlying
> block size?
> Etc...
>
> The details *really* make a world of difference here. I read over 50 TB
> from a 500 GB disk with a 1 in 10^14 URE rate with absolutely no
> problems. But by itself this really means nothing...
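>
> Taken at face value, though, the nominal rate would make that outcome
> fairly unlikely. A rough estimate (same literal per-bit assumption as
> above, purely for illustration):
>
>     import math
>
>     p_ure = 1e-14                # nominal per-bit URE rate
>     bits_read = 50e12 * 8        # 50 TB read, expressed in bits
>
>     expected_ures = p_ure * bits_read   # ~4 expected unreadable bits
>     p_zero = math.exp(-expected_ures)   # Poisson approximation
>     print(expected_ures, p_zero)        # ~4.0 and ~0.018 (about 1.8%)
>
> Which is the point: observing zero errors over 50 TB is not what a
> literal reading of the datasheet figure would predict.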
>
> An interesting (but absolutely not conclusive) discussion on the matter
> can be found here [2]
>
> [1]
>
> http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=BAADCF565B20D3947B209A374889352D?doi=10.1.1.41.3889&rep=rep1&type=pdf
>
> [2] https://www.spinics.net/lists/raid/msg46845.html
>
> > - enterprise HDDs: <1 in 10^15, so a one-bit failure per ~125 TB
> > (113.7 TiB) of data read
> >
> > https://en.wikipedia.org/wiki/RAID#URE
> >
> > 07.11.2018, 17:36, "Maurice Volaski via zfs-discuss"
> > <zfs-discuss at list.zfsonlinux.org>:
> >>>  IMO, disks much over 4TB are bad news. I suggest you stick with
> >>> 4TB disks.
> >>>  Avoid shingled or helium-filled disks, or those under 7200 rpm.
> >>> Consult the last few years' worth of Backblaze statistics when
> >>> choosing the disk brand and model to get.
> >>
> >> I believe no answer has been given as to why one should avoid >4TB
> >> drives (or helium-filled disks, for that matter). Interestingly, the
> >> latest Backblaze stats recommend helium-based models.
> >>
> >
> > ____________________________________
> > Sincerely,
> > George Melikov
> >
>
> --
> Danti Gionatan
> Supporto Tecnico
> Assyoma S.r.l. - www.assyoma.it
> email: g.danti at assyoma.it - info at assyoma.it
> GPG public key ID: FF5F32A8
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss at list.zfsonlinux.org
> http://list.zfsonlinux.org/cgi-bin/mailman/listinfo/zfs-discuss
>