[zfs-discuss] Are ST4000DM000 drives good for RAID-Z?

Hajo Möller dasjoe at gmail.com
Sun Aug 25 20:19:20 EDT 2013


On 25.08.2013 23:10, Chris Siebenmann wrote:
>  Things get worse with raidz because raidz stripes your data across
> all of the drives (minus the parity drive). If you lose two drives
> you've lost both parity and some data for a *lot* of things, both data
> and metadata. I would expect the pool to be effectively unrecoverable
> (in that the list of things to restore would be most everything) and
> possibly literally unrecoverable (in that ZFS refuses to bring up the
> pool).

I recently lost 4 disks out of a 9x3TB-disk raidz3 (the SiI3124
controller and SiI3726 port multiplier crapped out on me), after which
it appeared that all data on the affected vdev was lost: ZFS showed
errors in roughly 18 TB of files, plus irreparable metadata errors in
the filesystem.

I managed to fix the pool by replacing the failed disks, deleting all
affected files, then creating a new ZFS filesystem on the same pool and
moving the surviving data over to it. I'm still restoring the rest of
that pool's data from tape.
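The recovery steps above can be sketched as shell commands; this is a
minimal sketch, not my exact session — the pool name and device names
are placeholders and will differ on any given system:

```shell
# Sketch of the recovery sequence described above. 'tank' and the
# device names are placeholders; requires root and ZFS installed.

# 1. List the files ZFS considers damaged.
zpool status -v tank

# 2. Replace each failed disk and let the resilver finish.
zpool replace tank sdb sdf
zpool replace tank sdc sdg

# 3. Delete every file listed by 'zpool status -v', then clear
#    the pool's error counters.
zpool clear tank
```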

So, putting data in a filesystem on the pool (zpool create tank; zfs
create tank/production) rather than directly in the pool's root dataset
is what saved the pool.
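That layout can be sketched like this (device and dataset names are
placeholders; a damaged child filesystem can then be replaced without
touching the pool itself):

```shell
# Keep data out of the pool's root dataset. Names are placeholders.
zpool create tank raidz2 sda sdb sdc sdd sde sdf
zfs create tank/production        # data lives here, not in 'tank' itself

# After damage, create a fresh sibling filesystem on the same pool,
# move the surviving data into it, then drop the damaged one:
zfs create tank/production2
# (copy/move data from tank/production to tank/production2, then)
zfs destroy -r tank/production
```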

Also, a 6-disk raidz2 seems to be the sweet spot between IOPS, usable
capacity and safety for me. Single-parity RAID like raidz1 really
shouldn't be used nowadays.
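As a rough illustration of the capacity side of that trade-off (raw
sizes only, ignoring ZFS metadata overhead and the TB/TiB distinction),
usable raidz space is (disks - parity) * disk size:

```shell
#!/bin/sh
# Raw usable capacity of a raidz vdev: (disks - parity) * disk_size.
# Illustrative only; ignores ZFS overhead and TB vs TiB.
raidz_usable() {
    disks=$1; parity=$2; size_tb=$3
    echo $(( (disks - parity) * size_tb ))
}

echo "6-disk raidz2 of 4 TB drives: $(raidz_usable 6 2 4) TB usable"
echo "9-disk raidz3 of 3 TB drives: $(raidz_usable 9 3 3) TB usable"
```

The second line matches the pool above: 9x3TB raidz3 leaves about 18 TB
of data to lose.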

-- 
Cheers,
Hajo Möller

