[zfs-discuss] Idiot recovery advice please [RAIDZ array headers damaged]

Niels de Carpentier niels at decarpentier.com
Sun Nov 18 07:41:03 EST 2012


ben.bayliss at gmail.com wrote:

> Interesting - a search through this group for 'label' suggests that the
> behaviour might have changed since I created the pool using ZFS-FUSE and
> that ZFS now *intentionally* creates GPT information when picking up a new
> whole-disk. If that's the case then there's nothing wrong at all. Wonder if
> I can get the other two to match for consistency.....
>
Yes, the partitions are there on purpose; ZFS creates them automatically when you give it a whole disk.
Seen in that light, what you did to them was a pretty drastic step.
The reason for the partitions is that different manufacturers/models of drives of the same nominal size
can have slightly different actual numbers of sectors. This can be a pain when you try to replace a broken
drive with a new one: if the new drive is, say, 64 sectors shorter, the replacement would fail if you had
used the entire disk. So ZFS leaves a small empty partition at the end (partition 9) to prevent this from
happening. RAID controllers often do something similar for the same reason.
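If you want to check, something like this should show what ZFS put on a whole disk (the device name is
just an example):

    sgdisk -p /dev/sdb    # print the GPT layout: partition 1 (data) plus the small partition 9
    zdb -l /dev/sdb1      # dump the ZFS labels from the data partition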
I guess you can just wipe and resilver the old drives (one at a time :) ), which I think will recreate
the partitions; roughly along the lines of the sketch below. You do lose redundancy during each resilver,
so there is some risk involved.
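(Pool and device names here are made up; double-check against the zpool man page before running anything
on real data.)

    zpool offline tank sdb      # take one member offline
    sgdisk --zap-all /dev/sdb   # wipe its old partition table
    zpool replace tank sdb      # hand the blank disk back; ZFS repartitions and resilvers it
    zpool status tank           # wait for the resilver to complete before doing the next disk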

Niels


