[zfs-discuss] Idiot recovery advice please [RAIDZ array headers damaged]

ben.bayliss at gmail.com
Sun Nov 18 07:03:18 EST 2012


On Sunday, 18 November 2012 11:23:00 UTC, ben.b... at gmail.com wrote:

>
>
> On Saturday, 17 November 2012 20:15:52 UTC, Niels de Carpentier wrote:
>>
>>
>> You just need to recreate the GPT tables, I guess. How did you zap the 
>> GPT tables? 
>> If you just overwrote the first part of the disk, there is a backup 
>> copy at the end of the disk. 
>> gdisk should be able to fix it in that case. 
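>>
>> (A sketch of that route for the archives, assuming /dev/sdb stands in 
>> for the affected disk: gdisk's recovery menu can rebuild the main 
>> table from the backup copy.) 
>>
>>   # gdisk /dev/sdb
>>   r    <- enter the recovery/transformation menu
>>   b    <- use the backup GPT header to rebuild the main one
>>   c    <- load the backup partition table from disk
>>   w    <- write the table to disk and exit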
>>
>> Niels 
>>
>>
> Exactly that! Luckily, whilst zapping the GPT (gdisk's expert menu: x, 
> then z) I had also done a 'p', showing me the exact sector alignments of 
> the partition table it had been using. All I needed to do was recreate 
> that, and immediately the old partitions were recognised by the system, 
> original GUIDs and all. Booted first time.
>
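> For anyone finding this later, the recreation was roughly the following 
> (a sketch from memory; /dev/sdb and the sector values are placeholders, 
> so use whatever your own 'p' printout recorded): 
>
>   # gdisk /dev/sdb
>   n                          <- create a new partition
>   Partition number: 1
>   First sector: 2048         <- must match the old table exactly
>   Last sector: (from 'p')    <- likewise, taken from the saved output
>   Hex code: bf01             <- Solaris /usr & Mac ZFS
>   w                          <- write the table and exit
>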
> Thanks for giving me the confidence to go playing around with gdisk some 
> more :)
>
> As a bit of background (because I'm aware of how stupid what I did 
> probably sounds): I originally created this array with 4 raw disks, with 
> absolutely no partitions saved in the MBR and no GPT. When one drive died 
> I replaced it, but didn't notice that the replacement from Seagate came 
> with GPT tables already on the drive. ZFS was happy to use this, and is 
> now set to use the -part1 area of the drive (all but 8MB), which I don't 
> want for OCD reasons of neatness. Then, in a way I don't quite 
> understand, this layout was resilvered over to a second drive in the 
> array, and now I've got two drives with GPT headers and annoying 
> partitions that I was trying to get rid of. I figured I could accept 
> losing 512KB here or there, and didn't realise that wiping the labels at 
> the start of the drive would throw ZFS out that much.
>
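> (Side note: the partition layout ZFS on Linux puts on a whole disk can 
> be checked with sgdisk; typically it's one large data partition (-part1) 
> plus a small 8MB reserved one (-part9). Device name illustrative.) 
>
>   # sgdisk -p /dev/sdb
>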
> Oddly, even though it's now working, the labels (zdb -l) for those two 
> drives are still blank. How can I get them repaired?
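>
> (Which may also explain the blank output: with the pool devices now 
> being the -part1 partitions, the ZFS labels live inside the partition 
> rather than at the start of the raw disk, so zdb needs pointing at the 
> partition. A sketch, device names illustrative:) 
>
>   # zdb -l /dev/sdb     <- reads the start of the raw disk; blank
>   # zdb -l /dev/sdb1    <- reads the partition; labels show up here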
>
> Ben.
>
Interesting - a search through this group for 'label' suggests that the 
behaviour might have changed since I created the pool using ZFS-FUSE, and 
that ZFS now *intentionally* creates GPT information when picking up a new 
whole disk. If that's the case then there's nothing wrong at all. Wonder if 
I can get the other two to match for consistency... 
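
For what it's worth, if I do try to make the other two match, the usual 
approach would seem to be pulling each raw disk out of the pool and letting 
ZFS re-add it as a whole disk (a sketch only, untested here; only do this 
one disk at a time on a pool with redundancy, and 'tank', 'sdc' and the 
by-id name are placeholders for the real pool and device names): 

  # zpool offline tank sdc
  # zpool labelclear -f /dev/sdc
  # zpool replace tank sdc /dev/disk/by-id/ata-EXAMPLE-SERIAL
  # zpool status tank    <- wait for the resilver before the next disk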