[zfs-discuss] Failed to Import Pool via Cache File

Gregor Kopka (@zfs-discuss) zfs-discuss at kopka.net
Mon Dec 18 15:08:50 EST 2017


You will most likely need to update your initramfs once after that.
And it's best to make sure that you don't have a polluted
/etc/zfs/zpool.cache (from a USB backup pool that happens to be attached
at that moment, etc.) in situations where you need to recreate it.
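The check described above might be sketched as follows. This is only an illustration, assuming ZFS on Linux default paths and a CentOS-style dracut initramfs; it is not a command sequence given in the thread itself.

```shell
# Inspect which pools the cache file currently knows about, so that a
# transient USB backup pool does not end up baked into the initramfs.
# Path assumes the ZFS on Linux default location.
zdb -C -U /etc/zfs/zpool.cache | grep -w name

# After rebuilding a clean cache, regenerate the initramfs so it picks
# up the new file (CentOS uses dracut):
dracut -f --kver "$(uname -r)"
```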

Gregor

Am 18.12.2017 um 20:12 schrieb Dominic Robinson:
> So can I just set cachefile=none on the storage pool, but leave it on
> the root pool? Will it just work with no further modifications?
>
> On 18 December 2017 at 19:05, Gregor Kopka (@zfs-discuss) via
> zfs-discuss <zfs-discuss at list.zfsonlinux.org> wrote:
>
>     Depends on how you approach it.
>
>     As I personally hate zpool.cache, I neuter it for any pool (apart
>     from the root ones) by setting /cachefile=none/ and import these
>     using a custom OpenRC init script that also keeps the pools in
>     the order I want. Some of my datasets (on an SSD-based pool) mount
>     into mountpoints supplied by other (spinning-rust-based) pools.
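The cachefile=none scheme described above could be sketched roughly as below. The pool names "fast" and "rust" are hypothetical examples, not names from the thread, and the script body is an assumption about what such an init script might run.

```shell
# Stop the non-root pools from being recorded in the cache file, so
# only the root pool remains in /etc/zfs/zpool.cache:
zpool set cachefile=none rust
zpool set cachefile=none fast

# In an init script, import in a fixed order so that datasets on the
# SSD pool can land in mountpoints provided by the spinning-rust pool.
# -N imports without mounting; mounting happens as a separate step.
zpool import -N rust && zfs mount -a
zpool import -N fast && zfs mount -a
```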
>
>     I needed to do that because zpool.cache wasn't stable enough with
>     regard to import order, apart from its other problems (like the
>     initramfs issues).
>
>     Furthermore, I maintain that /etc/zfs/zpool.cache should be abolished.
>     Turn it into a /proc entry so external tools can inform themselves
>     about currently imported pools, but please get rid of it for pool
>     imports and replace it with something sane.
>
>     Gregor
>
>
>     Am 18.12.2017 um 18:45 schrieb Dominic Robinson via zfs-discuss:
>>     Oh ok that is starting to make some sense to me...
>>
>>     So I have this in my kernel parameters: rd.luks.uuid= - I'm
>>     assuming that means that disk is getting decrypted in the
>>     initramfs, whereas the disks that make up my storage pool are
>>     not. This is confusing because I only enter my decryption
>>     passphrase once for all disks, since it's the same for each and
>>     all disks are referenced in /etc/crypttab.
>>
>>     So with that in mind, does that mean I'll have to follow this
>>     procedure with every kernel and/or zfs upgrade?
>>
>>     On 18 December 2017 at 17:26, Gregor Kopka (@zfs-discuss) via
>>     zfs-discuss <zfs-discuss at list.zfsonlinux.org> wrote:
>>
>>         Most likely the zpool.cache in your initramfs contains pools
>>         that are only available after the system has come up fully
>>         (for example, pools on additional encrypted containers).
>>
>>         The solution is to bring the system into a state where only
>>         the pool(s) available while booting are imported (so only
>>         these are recorded in /etc/zfs/zpool.cache), then update the
>>         initramfs with the /etc/zfs/zpool.cache from the running system.
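That procedure could look roughly like the sketch below. The pool names "rpool" and "storage" are hypothetical stand-ins, paths are ZFS on Linux defaults, and the dracut step assumes a CentOS-style initramfs.

```shell
# Export the pool that is only available after full boot, so it drops
# out of the cache file:
zpool export storage

# Rewrite the cache so it contains only the boot-time pool:
zpool set cachefile=/etc/zfs/zpool.cache rpool

# Rebuild the initramfs so it embeds the now-clean cache file:
dracut -f --kver "$(uname -r)"

# Bring the storage pool back for normal operation:
zpool import storage
```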
>>
>>         Gregor
>>
>>         Am 18.12.2017 um 15:17 schrieb Dominic Robinson via zfs-discuss:
>>>         So I’ve started seeing a message appear in the journal
>>>         recently, saying something along the lines of failed to
>>>         import pool via cache file.
>>>
>>>         This error doesn’t actually appear to prevent the pool(s)
>>>         from importing on boot, but it has coincided with a
>>>         massively increased boot time.
>>>
>>>         I’ve tried the usual candidates:
>>>         - Deleting the cache file and rebuilding it with an
>>>         export/import.
>>>         - Regenerating the initramfs
>>>
>>>         Haven’t managed to solve the issue.
>>>
>>>         Could anyone advise?
>>>
>>>         For reference - this is a CentOS 7.4 install on a zfs root,
>>>         on top of a dm-crypt container. I also have an additional
>>>         storage pool, which is again made up of disks using dm-crypt
>>>         containers.
>>>
>>>         Thanks,
>>>         Dominic
>>>         -- 
>>>         Sent from my iPhone
>>>
>>>
>>>         _______________________________________________
>>>         zfs-discuss mailing list
>>>         zfs-discuss at list.zfsonlinux.org
>>>         http://list.zfsonlinux.org/cgi-bin/mailman/listinfo/zfs-discuss
>>
>>
>>
>>
>>
>>
>
>
>
>


