[zfs-discuss] Re: raid10 vs raid-z2?

matthew.garman at gmail.com matthew.garman at gmail.com
Thu May 30 10:53:34 EDT 2013


On Thursday, May 30, 2013 9:30:43 AM UTC-5, Druber wrote:

> > On Wednesday, May 29, 2013 8:57:49 PM UTC-5, theki... at gmail.com wrote: 
> >> Maybe worth checking the reliability calculator: 
> >> http://www.servethehome.com/raid-calculator/raid-reliability-calculator-simple-mttdl-model/ 
> >> 
> >> I am probably guessing wrong, but you will have a backup, right?  If 
> >> you do have a slow backup, then you should choose the fast 
> >> configuration for your live environment.  Even if it's less safe and 
> >> completely crashes, you just have to restore from your backup. 
> >> 
> >> 
> > I've played with that calculator some.  I'd like to know more details 
> > about how it arrives at the numbers it does.  But for my particular 
> > configuration (six consumer-grade 3TB disks), it actually gives raid10 
> > a bigger MTTDL than raid6.  In other words, it claims lower statistical 
> > probability of data loss with raid10 vs raid6.  Although if no data 
> > loss is the primary goal, raid-z3 is the way to go (which makes 
> > intuitive sense). 
>
> I must be reading things wrong, then.  I ran the calculator, and every 
> example I feed it shows raid6 as being more reliable than raid10. 
>

Try these parameters:
MTBF: Pessimistic 36.5K
Nonrecoverable Error Rate: 10^14
Drive Capacity: 3 TB
Sector Size: 4 KB
Quantity of Disks: 6
Volumes: 1
Volume Rebuild Speed: 10 MB/s

That gives me an MTTDL of 315 years for raid10, and 235 years for raid6.

The kicker in this scenario seems to be the "Volume Rebuild Speed". 
Changing that to the default of 15 MB/s puts raid6 back in the lead with an 
MTTDL of 520 years, versus raid10's 472.
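
For what it's worth, here's a back-of-the-envelope sketch of the classic 
textbook MTTDL approximations.  To be clear, this is my own guess at the 
math, not necessarily what the servethehome calculator actually does (its 
model isn't published), and it ignores the nonrecoverable-error term 
entirely.  But it does show why rebuild speed hits raid6 harder: raid10's 
MTTDL scales linearly with rebuild speed, raid6's quadratically.

# Simplified MTTDL sketch (classic approximations, URE term omitted).
# NOT the servethehome calculator's model -- just the usual textbook
# formulas, plugged with the parameters above.

MTBF_HOURS = 36500          # "Pessimistic 36.5K", assumed to be hours
DISKS = 6                   # total drives in the array
CAPACITY_BYTES = 3e12       # 3 TB per drive

def mttr_hours(rebuild_mb_per_s):
    """Hours to rebuild one drive at the given speed."""
    return CAPACITY_BYTES / (rebuild_mb_per_s * 1e6) / 3600

def mttdl_raid10(rebuild_mb_per_s):
    # Data loss if a mirror's partner dies during the rebuild window:
    # MTTDL ~ MTBF^2 / (N * MTTR)  -->  linear in rebuild speed.
    return MTBF_HOURS ** 2 / (DISKS * mttr_hours(rebuild_mb_per_s))

def mttdl_raid6(rebuild_mb_per_s):
    # Data loss on a triple failure within overlapping rebuild windows:
    # MTTDL ~ MTBF^3 / (N*(N-1)*(N-2) * MTTR^2)  -->  quadratic in speed.
    n = DISKS
    return MTBF_HOURS ** 3 / (n * (n - 1) * (n - 2)
                              * mttr_hours(rebuild_mb_per_s) ** 2)

HOURS_PER_YEAR = 24 * 365
for speed in (10, 15):
    print("%d MB/s: raid10 %d yr, raid6 %d yr" % (
        speed,
        mttdl_raid10(speed) / HOURS_PER_YEAR,
        mttdl_raid6(speed) / HOURS_PER_YEAR))

That prints raid10 MTTDLs of about 304 and 456 years at 10 and 15 MB/s, 
which land close to the calculator's 315 and 472.  Its raid6 numbers come 
out absurdly high (thousands of years), though, so I'd guess the 10^14 
nonrecoverable error rate is what drags the calculator's raid6 figures 
down to 235 and 520.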

I always use conservative/worst-case estimates when doing things like this. 
But when I rebuilt my current mdadm-based raid6 set, the initial syncing 
often hovered around 10 MB/s (though that was before any tuning, and I was 
copying data onto it at the same time, so it's not truly representative).
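
At 10 MB/s, a single 3 TB drive takes 3x10^12 B / 10^7 B/s = 300,000 s to 
rebuild, i.e. roughly 83 hours, or about three and a half days.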

But the inclusion of that "Volume Rebuild Speed" parameter makes me suspect 
that the model assumes a hot spare.  If you don't have a hot spare, and 
have to wait to order/RMA a new drive when there's a failure, I think your 
effective "rebuild rate" should include the time to procure the 
replacement drive (i.e. you might have to go a few days before even 
starting the rebuild), which stretches the window in which a further 
failure loses data.
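
For example, assuming a three-day procurement delay on top of the ~83-hour 
rebuild above: 3x10^12 B / ((72 + 83) * 3600 s) comes to about 5.4 MB/s 
effective, roughly half the nominal rate, with the array degraded for the 
whole window.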
