[zfs-discuss] Anyone having success with zpool split?

Aidan Williams aidanonym at gmail.com
Tue Oct 1 20:35:45 EDT 2013


Hey Gregor,

Good suggestion - it allowed the pool to be split.

The zpool status output still shows whole disks, but in the sdb/sdc
whole-disk form (see output below).
It seems strange that an sdb/sdc device name works while a symlink to the
same device does not.

The behaviour of partitions under ZFS on Linux is confusing to me.
On Solaris, I exclusively use whole disks, and the devices have labels
managed by ZFS.
The same approach on Linux seems to work fine, mostly.

Is there a description somewhere of how labels and ZFS work on Linux?
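In the meantime, the labels can at least be inspected directly. A rough sketch, assuming the whole-disk layout shown in the parted output below, where ZFS keeps the pool data in partition 1 (device names are from my pool; adjust for yours):

```shell
# Dump the ZFS labels from the data partition.
# zdb -l prints the label copies stored on the vdev.
zdb -l /dev/sdb1

# Check which kernel device a by-id symlink actually resolves to:
readlink -f /dev/disk/by-id/ata-ST3000DM001-1CH166_Z1F23AZ2
```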

- aidan


root at plutus:~# zpool export offsite
root at plutus:~# zpool import offsite -d /dev
root at plutus:~# zpool status
  pool: offsite
 state: ONLINE
  scan: resilvered 2.65T in 14h43m with 0 errors on Wed Oct  2 09:09:20 2013
config:

NAME        STATE     READ WRITE CKSUM
offsite     ONLINE       0     0     0
  mirror-0  ONLINE       0     0     0
    sdb     ONLINE       0     0     0
    sdc     ONLINE       0     0     0

errors: No known data errors

 

root at plutus:~# zpool split offsite offsite-2013-10-02
root at plutus:~# zpool status
  pool: offsite
 state: ONLINE
  scan: resilvered 2.65T in 14h43m with 0 errors on Wed Oct  2 09:09:20 2013
config:

NAME        STATE     READ WRITE CKSUM
offsite     ONLINE       0     0     0
  sdb       ONLINE       0     0     0

errors: No known data errors




On Tuesday, 1 October 2013 18:30:14 UTC+10, Gregor Kopka wrote:
>
>  Can you export/import the pool (maybe with -d /dev, or by removing the 
> symlinks in /dev/disk/by-id that point to the whole disks) so the devices 
> show up with -part1 (or as sdb1/sdc1)? Then the error should vanish, since 
> zfs won't deal with complete disks anymore...
>
> Gregor
>
> Am 01.10.2013 10:08, schrieb Aidan Williams:
>  
> Hi, 
>
>  I'm using Ubuntu 12.04.3 LTS with the zfs-native/stable/ubuntu PPA.
>
>  My setup is used for backups.  A pair of drives is mirrored, and I 
> regularly split the mirror and take a disk offsite.
>
>  Unfortunately, the "zpool split" command isn't working for me anymore, 
> possibly as a result of a software update.  It used to be a bit clunky, 
> but it was possible to split the mirror.
>
>  My pool looks like this:
>
> root at plutus:~# zpool status
>   pool: offsite
>  state: ONLINE
>   scan: resilvered 120K in 0h0m with 0 errors on Tue Oct  1 17:47:03 2013
> config:
>
> NAME                                 STATE     READ WRITE CKSUM
> offsite                              ONLINE       0     0     0
>   mirror-0                           ONLINE       0     0     0
>     ata-ST3000DM001-1CH166_Z1F23AZ2  ONLINE       0     0     0
>     ata-ST3000DM001-1CH166_W1F290Q2  ONLINE       0     0     0
>
> errors: No known data errors
>
>   
>  The "zpool split" command gives the following error:
>
> root at plutus:~# zpool split offsite offsite-2013-10-01 /dev/disk/by-id/ata-ST3000DM001-1CH166_W1F290Q2
> the kernel failed to rescan the partition table: 16
> cannot label 'sdc': try using parted(8) and then provide a specific slice: -1
>
>  Using different device names (e.g. sdc) doesn't help.
>
>  The drives were added to the pool as whole disks and labels were created 
> automatically.
> The partitions look like this:
>
> Model: ATA ST3000DM001-1CH1 (scsi)
> Disk /dev/sdb: 3001GB
> Sector size (logical/physical): 512B/4096B
> Partition Table: gpt
>
> Number  Start   End     Size    File system  Name  Flags
>  1      1049kB  3001GB  3001GB  zfs          zfs
>  9      3001GB  3001GB  8389kB
>
> Model: ATA ST3000DM001-1CH1 (scsi)
> Disk /dev/sdc: 3001GB
> Sector size (logical/physical): 512B/4096B
> Partition Table: gpt
>
> Number  Start   End     Size    File system  Name  Flags
>  1      1049kB  3001GB  3001GB  zfs          zfs
>  9      3001GB  3001GB  8389kB
>
>
>  Using split *without* a specific drive fails in a different way:
>
> root at plutus:~# zpool split offsite offsite-2013-10-01
> Unable to split offsite: no such pool or dataset
>
> root at plutus:~# zpool status
>   pool: offsite
>  state: DEGRADED
> status: One or more devices could not be used because the label is missing or
>         invalid.  Sufficient replicas exist for the pool to continue
>         functioning in a degraded state.
> action: Replace the device using 'zpool replace'.
>    see: http://zfsonlinux.org/msg/ZFS-8000-4J
>   scan: resilvered 120K in 0h0m with 0 errors on Tue Oct  1 17:47:03 2013
> config:
>
> NAME                                 STATE     READ WRITE CKSUM
> offsite                              DEGRADED     0     0     0
>   mirror-0                           DEGRADED     0     0     0
>     ata-ST3000DM001-1CH166_Z1F23AZ2  ONLINE       0     0     0
>     ata-ST3000DM001-1CH166_W1F290Q2  UNAVAIL      0     0     0
>
> errors: No known data errors
>
>  
>  
>  A reboot restores the pool to normalcy.
>
>  Suggestions most welcome.
>
>  - aidan
>

To unsubscribe from this group and stop receiving emails from it, send an email to zfs-discuss+unsubscribe at zfsonlinux.org.

