[zfs-discuss] Migrating raidz2 to more drives

David Gossage dgossage at carouselchecks.com
Wed Dec 7 09:53:35 EST 2016


On Wed, Dec 7, 2016 at 8:12 AM, Jan Schermer via zfs-discuss <
zfs-discuss at list.zfsonlinux.org> wrote:

> The chassis has 24(+2) drive slots, the plan is to have 3x raidz2 of 8
> drives.
> All the drives are SSDs (Intel S3610), so resilvering is not really an
> issue; I think we could easily have gone with 24 drives in one raidz2 :-)
> The storage itself is in fact redundant as well, but because of the
> architecture on top of it we can lose thin provisioning after maintenance,
> while data is resyncing from the other storage nodes. That's why I don't
> want to bring the pool down.
>

Probably create a new pool then with an 8-drive vdev and migrate the data.
Then you can destroy the 6-drive vdev, rebuild it as an 8-drive one, and add
it to the pool.  Then add your third vdev whenever you're ready.

>
> Thanks
> Jan
>
> On 7 Dec 2016, at 14:56, Gordan Bobic <gordan.bobic at gmail.com> wrote:
>
> You can't. You can only add another vdev.
> That is why it is critical to work out your geometry up front, based on
> your requirements, disk sizes, and reliability targets: afterwards you can
> only expand by replacing each disk in turn with a bigger one, or by adding
> an additional vdev.
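>
> For the disk-by-disk route, a rough sketch (device names hypothetical, and
> assuming autoexpand is set so the pool grows once the last disk is swapped):
>
> # zpool set autoexpand=on zfstest
> # zpool replace zfstest zfstest1 <bigger-disk>
> (wait for the resilver to complete, then repeat for zfstest2..zfstest6)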
>
> Regarding your configuration, a 6-disk RAIDZ2, especially with modern-sized
> disks, is pretty reasonable in terms of the risk of losing data during
> resilvering.
>
>
>
> On Wed, Dec 7, 2016 at 1:26 PM, Jan Schermer via zfs-discuss <
> zfs-discuss at list.zfsonlinux.org> wrote:
>
>> Hi,
>> let's say I have a zpool that looks like this:
>>
>>         NAME          STATE     READ WRITE CKSUM
>>         zfstest       ONLINE       0     0     0
>>           raidz2-0    ONLINE       0     0     0
>>             zfstest1  ONLINE       0     0     0
>>             zfstest2  ONLINE       0     0     0
>>             zfstest3  ONLINE       0     0     0
>>             zfstest4  ONLINE       0     0     0
>>             zfstest5  ONLINE       0     0     0
>>             zfstest6  ONLINE       0     0     0
>>
>> Now I would like to replace this raidz2 configuration with one that has
>> more drives.
>> I thought I could create a mirror on top of that raidz2-0 like this (now
>> I know I was wrong):
>>
>> # zpool attach zfstest raidz2-0 raidz2 /dev/zvol/somepool/zfstest7
>> /dev/zvol/somepool/zfstest8 /dev/zvol/somepool/zfstest9
>> /dev/zvol/somepool/zfstest10 /dev/zvol/somepool/zfstest11
>> /dev/zvol/somepool/zfstest12 /dev/zvol/somepool/zfstest13
>> /dev/zvol/somepool/zfstest14 /dev/zvol/somepool/zfstest15
>> /dev/zvol/somepool/zfstest16
>> too many arguments
>>
>> this clearly isn't meant to work ^
>>
>> and even if I could attach a vdev, it wouldn't work either:
>> # zpool attach zfstest raidz2-0 /dev/zvol/somepool/zfstest7
>> cannot attach /dev/zvol/somepool/zfstest7 to raidz2-0: can only attach to
>> mirrors and top-level disks
>>
>> Looking into the docs, I realized nested vdevs aren't supported at all,
>> with the exception of the top-level stripe (which is why I had always
>> assumed deeper nesting was possible), so this is all expected behaviour.
>>
>> But is there maybe some other way to do this? I know I can send/receive
>> the pool to a new pool and then rename it, but that's disruptive and I'd
>> like to avoid that if at all possible.
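>>
>> (For reference, the rename at the end of the send/receive route would just
>> be an export and import under the old name, with "newpool" invented for
>> the example:
>> # zpool export newpool
>> # zpool import newpool zfstest
>> but that export/import, plus the final cutover, is exactly the disruption
>> I'd like to avoid.)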
>>
>> Thanks
>> Jan

