[zfs-discuss] cannot import 'home': I/O error Destroy and re-create the pool from a backup source

Jeff Johnson jeff.johnson at aeoncomputing.com
Wed Apr 25 11:06:49 EDT 2018


Anton,

You can add the `-m` option to `zpool import` in order to bypass the log
device. You'll lose any write transactions in the ZIL that were not yet
committed to the backend pool.

Maybe try `zpool import -m -o readonly=on <poolname>` and see what you get.
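
If that works, the pool comes up read-only, so the idea is to copy your
data off (or zfs send the datasets) before worrying about the log device.
Spelled out for your pool, assuming it is named `home` as in the subject
and the disks live under /dev/disk/by-id:

  zpool import -d /dev/disk/by-id -m -o readonly=on home
  zpool status -v home     # see which vdev is reported missing/faulted
  zfs list -r home         # confirm the datasets are visible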

--Jeff

On Wed, Apr 25, 2018 at 10:01 AM, Антон Губарьков <anton.gubarkov at iits.ru>
wrote:

> Jeff,
>
> The *-part2 device is a partition on a SATA SSD. It serves as the ZIL. It
> is a single partition, i.e. no redundancy.
>
> The main storage vdev is raidz2, backed by 6 SATA Seagate IronWolf 3 TB
> disks. All devices are connected via an Adaptec HBA 1000-8i.
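>
> For reference, the pool was built roughly along these lines (the device
> names here are illustrative, not the actual ones):
>
>   zpool create home raidz2 disk1 disk2 disk3 disk4 disk5 disk6 \
>     log ssd-part2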
>
> If my ZIL is gone, can I recover the main storage?
>
> Anton.
>
> Wed, 25 Apr 2018, 17:43 Jeff Johnson <jeff.johnson at aeoncomputing.com>:
>
>> Anton,
>>
>> "disk vdev '/dev/disk/by-id/wwn-0x30000d1700d9d40f-part2': failed to
>> read label config. Trying again without txg restrictions"
>>
>>    - The import process is stumbling on the same block device, disk
>>    wwn-0x30000d1700d9d40f-part2
>>       - The SAS wwn 0x30000d1700d9d40f, specifically the 0x30000d17 MSB.
>>       What kind of device is this? A hardware RAID? Virtual disk? Partition?
>>    - XXXX-part2
>>       - With whole-disk vdevs, ZFS typically uses partitions 1 and
>>       9. How was this pool built?
>>
>> It does look like the import process keeps stumbling over that one
>> specific vdev block device. If it is something other than a standard
>> direct-attached SATA or SAS disk drive, you might be able to solve the
>> issue by addressing any underlying problems with that block device.
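>>
>> A quick way to check what that device actually is, and whether ZFS can
>> read a label from it at all, would be something along these lines (the
>> path is the one from your dbgmsg output):
>>
>>   ls -l /dev/disk/by-id/wwn-0x30000d1700d9d40f-part2
>>   lsblk -o NAME,SIZE,TYPE,MODEL,WWN
>>   zdb -l /dev/disk/by-id/wwn-0x30000d1700d9d40f-part2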
>>
>> --Jeff
>>
>> On Wed, Apr 25, 2018 at 8:27 AM, Raghuram Devarakonda <
>> draghuram at gmail.com> wrote:
>>
>>> Hi Anton,
>>>
>>> I recognize the message:
>>>
>>> "spa_load_failed(): spa_load(home, config untrusted): FAILED: no valid
>>> uberblock found"
>>>
>>> I got the exact same one in my case as well, but it turned out that the
>>> data was indeed corrupt on the device (backed by my own driver). Also, I
>>> wasn't using RAID-Z.
>>>
>>> Did you try Jeff's suggestion of importing with older TXGs?
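>>>
>>> For reference, that would be the rewind-style import, roughly along
>>> these lines (-n first as a dry run; -X is the extreme, last-resort
>>> variant):
>>>
>>>   zpool import -F -n home
>>>   zpool import -F -X home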
>>>
>>> Good luck,
>>> Raghu
>>>
>>>
>>> On Wed, Apr 25, 2018 at 6:59 AM, Anton Gubar'kov via zfs-discuss
>>> <zfs-discuss at list.zfsonlinux.org> wrote:
>>> > I'm also following
>>> > http://list.zfsonlinux.org/pipermail/zfs-discuss/2018-April/030934.html
>>> > (a thread from someone with similar problems). So I've built the driver
>>> > and userland tools from the branch
>>> > https://github.com/zfsonlinux/zfs/pull/7459, set zfs_dbgmsg=1 and tried
>>> > to import my pool with the txg from several snapshots I have.
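>>> >
>>> > Concretely, the debug log was enabled and collected roughly like this
>>> > (assuming the parameter name zfs_dbgmsg_enable used by current ZoL
>>> > builds; the import shown is just one illustrative attempt):
>>> >
>>> >   echo 1 > /sys/module/zfs/parameters/zfs_dbgmsg_enable
>>> >   zpool import -o readonly=on -N home   # one import attempt
>>> >   cat /proc/spl/kstat/zfs/dbgmsg        # the output attached below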
>>> >
>>> > I attach the output from /proc/spl/kstat/zfs/dbgmsg. I don't know
>>> enough to
>>> > decipher it and I ask for your help.
>>> >
>>> > KR
>>> > Anton.
>>> >
>>> >
>>> >
>>> > _______________________________________________
>>> > zfs-discuss mailing list
>>> > zfs-discuss at list.zfsonlinux.org
>>> > http://list.zfsonlinux.org/cgi-bin/mailman/listinfo/zfs-discuss
>>> >
>>>
>>
>>
>


-- 
------------------------------
Jeff Johnson
Co-Founder
Aeon Computing

jeff.johnson at aeoncomputing.com
www.aeoncomputing.com
t: 858-412-3810 x1001   f: 858-412-3845
m: 619-204-9061

4170 Morena Boulevard, Suite D - San Diego, CA 92117

High-Performance Computing / Lustre Filesystems / Scale-out Storage