Your RAID is failing you (Re: [zfs-discuss] Re: Question regarding resizeing/expanding a zpool)

Gregor Kopka gregor at kopka.net
Thu Apr 19 14:50:52 EDT 2012


On 19.04.2012 17:37, peram wrote:
> You have somewhat confirmed my understanding that using RAID on the HW-
> level was kind of a mistake.  The plan now is to get some more
> disk space and copy the data to the new disks.
Good plan.
> After I've gotten the data "off the RAID", then what?  Would the best
> thing be to break up the RAID, and expose the disks directly to the OS
> and create zpools from the individual disks?
If you can get enough disks to completely vacate the pool you have now 
_and_ plan to go for ZFS-style RAID 10 (striped mirrors), you could 
(a rough command sketch follows below):
1) set up a new pool (without redundancy) using the new disks
2) zfs send/recv the old pool into the new one
3) destroy the old pool
4) break up the raid into individual disks
5) zpool ATTACH the old disks to the vdevs of the new pool
6) wait for resilver and be done.

The downside is that your data won't be redundant while steps 3-6 are 
running.
The upside is that you'll be able to keep all snapshots you currently have.
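
Purely as an illustration - the pool and disk names below (oldpool, 
ata-DISKA through ata-DISKD) are placeholders you'd replace with your 
own - the whole dance could look roughly like this:

  # all pool/disk names below are placeholders - adjust to your setup
  # 1) new pool, plain stripe, no redundancy yet
  zpool create new /dev/disk/by-id/ata-DISKA /dev/disk/by-id/ata-DISKB
  # 2) replicate all datasets including their snapshots
  zfs snapshot -r oldpool@migrate
  zfs send -R oldpool@migrate | zfs recv -F new
  # 3) destroy the old pool, then 4) break the HW RAID into single disks
  zpool destroy oldpool
  # 5) attach one freed RAID disk to each single-disk vdev -> striped mirrors
  zpool attach new /dev/disk/by-id/ata-DISKA /dev/disk/by-id/ata-DISKC
  zpool attach new /dev/disk/by-id/ata-DISKB /dev/disk/by-id/ata-DISKD
  # 6) watch the resilver finish
  zpool status new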

If you don't want to go this way but can find enough disk space to 
create a pool as in step 1, I'd suggest doing steps 1-4 as above and then 
(sketched below):
5) zpool create a NEWnew pool with the disks from the broken-up RAID
6) zfs send/recv the new pool into the NEWnew pool
7) get rid of the new pool
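
With the same placeholder names (and assuming you want striped mirrors 
on the freed RAID disks - adjust the layout to taste):

  # placeholder names - NEWnew is built from the broken-up RAID disks
  zpool create NEWnew mirror /dev/disk/by-id/ata-DISKC /dev/disk/by-id/ata-DISKD \
                      mirror /dev/disk/by-id/ata-DISKE /dev/disk/by-id/ata-DISKF
  zfs snapshot -r new@migrate2
  zfs send -R new@migrate2 | zfs recv -F NEWnew
  zpool destroy new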

In the end you can rename the surviving pool back to the old name:
zpool export new (or NEWnew)
zpool import new (or NEWnew) old_pool_name

Just remember to hand ZFS /dev/disk/by-id devices (or import the pool 
with -d /dev/disk/by-id) so ZFS won't get confused in case your /dev/sd* 
layout changes because of added/removed devices at some point in the future.
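
For example, re-importing an already created pool by id (pool name is a 
placeholder again):

  zpool export new
  zpool import -d /dev/disk/by-id new
  # vdevs should now show up under their /dev/disk/by-id/... names
  zpool status new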

> Will I then get all the "good stuff" from zfs?
Once ZFS handles the redundancy itself, it'll be able to repair faults 
like the one you experienced.
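
A regular scrub makes it read and verify every block against its checksum 
and repair anything broken from the redundant copy, for example:

  zpool scrub new          # pool name is a placeholder
  zpool status -v new      # shows any checksum errors found and repaired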
> As far as I can tell from the reading I've
> done, raidz2 is RAID6-ish and raidz is RAID5-ish, correct?
Yes.

In case you go raidz: for best results create vdevs with n+p members - 
n data disks (a power of two, i.e. n=2, 4 or 8 drives) *plus* p parity 
disks (p = the number in the raidz level). So for raidz1: 3/5/9 disks 
per vdev, for raidz2: 4/6/10.
Some ZFS gurus from Sun advised NOT to create vdevs with more than 10 
members.
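
As a sketch (pool and disk names are placeholders), a raidz2 vdev with 
4 data + 2 parity disks would be created like:

  # 6 disks total: 4 data + 2 parity (raidz2)
  zpool create tank raidz2 /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
                           /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4 \
                           /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6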


Fun stuff:
The _daring_ could also get ideas like splitting off the redundancy 
disks from the RAID to gain disk space for the new pool, or creating the 
new pool with degraded vdevs (= faulted disks standing in for parity, to 
be onlined later with drives from the then-deceased RAID) to keep the 
number of additional disks needed low. *But given that your RAID already 
handed you corrupt data once, I strongly advise against it*, unless 
you have a full backup somewhere else, feel good about the idea of 
having to restore it, and are in the mood to experiment with the 
resiliency of ZFS in case things go south (since you may lose one or 
both pools in the process).


Always remember to treat data which has no backup as if it were already 
deleted and gone. ZFS is resilient, but no replacement for backups.

Hope this helps.

Gregor
