[zfs-discuss] Trying to understand ZFS disk I/O vs raw disk I/O

Uncle Stoat stoatwblr at gmail.com
Thu Aug 29 11:53:48 EDT 2013


On 29/08/13 16:35, Ron Kelley wrote:
>   I have run many NetApp filers
> and other black-box servers over the years with various hardware RAID cards
> (LSI, Areca, etc) using 6+2 RAID-6 configurations with multi TB drives and
> have never needed triple redundancy.  I understand larger TB drives take
> longer to rebuild (thus increasing the time for other bad things to
> happen), however, I have not yet lost an entire array because three drives
> failed at the same time <knock on wood>.

I have (HP MSA-1000s and Nexenta F5404s)

Your mileage may vary, but you're probably not hitting the arrays as hard 
as my users do (the arrays sit in "overload" most of the time, and it's 
hard to change their I/O habits).

Bear in mind that resilvering a 32TB vdev takes between 3 and 10 days 
depending on external load. If all your drives are from the same batch 
and arrived with the same courier, then they're all more or less equally 
at risk of dying on you (and even if they're from different batches, 
that's still a good rule of thumb to work from).
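As a rough back-of-envelope sketch (my own illustrative numbers, not 
measurements from anyone's array): assume a 5% annualised failure rate 
per drive and treat failures as independent, which same-batch drives 
notoriously aren't. The chance of losing another drive out of the 
remaining seven in a 6+2 vdev during the resilver window then looks 
roughly like this:

    # Rough sketch: assumed 5% AFR per drive, independent failures
    # (same-batch drives are usually *more* correlated than this).
    AFR = 0.05            # assumed annualised failure rate per drive
    drives_remaining = 7  # surviving disks in a 6+2 vdev while one resilvers

    for resilver_days in (3, 10):
        # per-drive survival probability over the resilver window,
        # assuming a constant hazard rate
        p_survive = (1 - AFR) ** (resilver_days / 365.0)
        p_any_fail = 1 - p_survive ** drives_remaining
        print(f"{resilver_days:2d}-day resilver: "
              f"P(at least one further failure) ~= {p_any_fail:.2%}")

Even on paper that's not negligible over a 10-day window, and the 
independence assumption is exactly what a same-batch, same-courier 
delivery breaks.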

