[zfs-discuss] can't get second ZFS pool to automount

LosingMyZFS zfs at thekorn.net
Sat May 18 13:23:32 EDT 2013


Since apparently nobody has any ideas whatsoever, is there a way to turn
on logging?  Right now I have no visibility into what zfs is even
*attempting* to do (much less why it's failing!), and every Google search
for "zfs log" turns up advice about adding a ZIL drive.  Not really what
I'm looking for!
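
(Closest thing I've found so far, with the caveat that the knob names
below are my assumptions about the 0.6.x packages, not something I've
verified on this box: the kernel module apparently keeps an event log
and an internal debug buffer that might show what's attempted at
import/mount time:

root at kubuntu:/home/vince# zpool events -v                 # ZFS kernel event log
root at kubuntu:/home/vince# echo 1 > /sys/module/zfs/parameters/zfs_flags   # enable debug messages (bit value is my guess)
root at kubuntu:/home/vince# cat /proc/spl/kstat/zfs/dbgmsg  # read the internal debug log

If someone can confirm the right zfs_flags value, that'd be a start.)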

--------------------------------------------------
From: "LosingMyZFS" <zfs at thekorn.net>
Sent: Friday, May 17, 2013 1:09 PM
To: <zfs-discuss at zfsonlinux.org>
Subject: [zfs-discuss] can't get second ZFS pool to automount

> Ugh, banging my head here.
>
> Running (k)ubuntu 12.04.2 LTS.  Have two pools set up, root_pool and 
> data_pool.  data_pool is a raidz across 5x4TB drives.  root_pool mounts at 
> startup just fine, but data_pool will never automount, and I can't for the 
> life of me figure out why.  Right after boot, here's what I have:
>
>
> root at kubuntu:/home/vince# zpool status -v
>  pool: root_pool
> state: ONLINE
>  scan: none requested
> config:
>
>        NAME                                           STATE     READ WRITE CKSUM
>        root_pool                                      ONLINE       0     0     0
>          scsi-SATA_SanDisk_SDSSDP0130804401449-part3  ONLINE       0     0     0
>
> errors: No known data errors
> root at kubuntu:/home/vince# zpool status -v data_pool
> cannot open 'data_pool': no such pool
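>
> (One thing I plan to try, assuming the boot-time import/mount is driven
> by an init script from the ubuntu-zfs PPA -- the script name below is a
> guess, not verified: run it by hand with tracing and watch where it
> bails out:
>
> root at kubuntu:/home/vince# ls /etc/init.d/ | grep -i zfs                  # find the actual script name
> root at kubuntu:/home/vince# bash -x /etc/init.d/zfs start 2>&1 | tail -40  # trace the guessed name
> )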
>
>
> But I can *always* import the pool OK (have to use -f, don't know why):
>
>
> root at kubuntu:/home/vince# zpool import -d /dev/disk/by-id data_pool
> cannot import 'data_pool': pool may be in use from other system
> use '-f' to import anyway
> root at kubuntu:/home/vince# zpool import -f -d /dev/disk/by-id data_pool
> root at kubuntu:/home/vince# zpool list data_pool
> NAME        SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
> data_pool  18.1T  1.38M  18.1T     0%  1.00x  ONLINE  -
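>
> (That "pool may be in use from other system" message makes me wonder
> about a hostid mismatch -- as I understand it, import refuses without
> -f when the hostid recorded in the pool doesn't match the running
> system's.  A sketch of the comparison, assuming zdb's cachefile dump
> includes the hostid field:
>
> root at kubuntu:/home/vince# hostid             # hostid the running system reports
> root at kubuntu:/home/vince# zdb | grep hostid  # hostid recorded in the cached pool configs
>
> If those disagree, maybe that's also why the boot-time import never
> happens?)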
>
>
>
> Doing a "zfs get all data_pool" shows canmount enabled, a mountpoint set, 
> and basically everything looking hunky-dory.
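>
> (Specifically, the properties I understand to govern automounting --
> an illustrative invocation, values elided since "get all" already
> showed them looking sane:
>
> root at kubuntu:/home/vince# zfs get canmount,mountpoint,mounted data_pool
> )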
>
>
>
> I know what you're thinking: stale cache file, right?  Nope.  Rebuilt it 
> (sketch of how, after the strings dump below), checked its timestamp, and 
> it's current.  Also, doing a strings on /etc/zfs/zpool.cache tells me 
> there's SOMETHING about data_pool (and root_pool) in it:
>
> root at kubuntu:/home/vince# strings /etc/zfs/zpool.cache
>        data_pool
> version
> name
>        data_pool
> state
>        pool_guid
> hostname
> kubuntu
> vdev_children
>        vdev_tree
> type
> root
> guid
> children
> type
> raidz
> guid
> nparity
> metaslab_array
> metaslab_shift
> ashift
> asize
> is_log
> create_txg
> children
> type
> disk
> guid
> path
>>/dev/disk/by-id/scsi-SATA_Hitachi_HDS7240_PK2331PAH2XZNT-part1
> whole_disk
> create_txg
> type
> disk
> guid
> path
>>/dev/disk/by-id/scsi-SATA_Hitachi_HDS7240_PK2331PAH3893T-part1
> whole_disk
> create_txg
> type
> disk
> guid
> path
>>/dev/disk/by-id/scsi-SATA_Hitachi_HDS7240_PK2331PAH3KZ0T-part1
> whole_disk
> create_txg
> type
> disk
> guid
> path
>>/dev/disk/by-id/scsi-SATA_Hitachi_HDS7240_PK2381PAH4MUTT-part1
> whole_disk
> create_txg
> type
> disk
> guid
> path
>>/dev/disk/by-id/scsi-SATA_Hitachi_HDS7240_PK2381PAH4YKMT-part1
> whole_disk
> create_txg
> features_for_read
>        root_pool
> version
> name
>        root_pool
> state
>        pool_guid
> Je@(
> hostname
> kubuntu
> vdev_children
>        vdev_tree
> type
> root
> guid
> Je@(
> children
> type
> disk
> guid
> `)bt
> path
> ;/dev/disk/by-id/scsi-SATA_SanDisk_SDSSDP0130804401449-part3
> whole_disk
> metaslab_array
> metaslab_shift
> ashift
> asize
> is_log
> create_txg
> features_for_read
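>
> (Re "rebuilt it" above: by that I mean regenerating the file by
> re-setting the cachefile property while the pool was imported --
> roughly the following, in case I'm rebuilding it the wrong way:
>
> root at kubuntu:/home/vince# zpool set cachefile=/etc/zfs/zpool.cache data_pool
> )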
>
>
> I'm at a loss here.  No error messages in dmesg, nothing in syslog.  I 
> feel like I'm missing something huge.  zdb doesn't show much either (the 
> history of creation and imports, that's it).
>
> All suggestions (no matter how simple) appreciated!  Thanks!
 



