[zfs-discuss] cannot import 'home': I/O error Destroy and re-create the pool from a backup source
richard.elling at richardelling.com
Fri Apr 27 15:54:07 EDT 2018
> On Apr 27, 2018, at 12:05 PM, Gionatan Danti via zfs-discuss <zfs-discuss at list.zfsonlinux.org> wrote:
> Il 27-04-2018 20:20 Anton Gubar'kov via zfs-discuss ha scritto:
>> Dear friends,
>> I still fail to understand the meaning of CKSUM errors at the raidz
>> vdev and pool levels. Can someone explain this to me? Until I understand
>> these numbers, I am discarding them as non-credible (noise).
To see the details of repaired data, including expected and actual checksums:
	zpool events -v
Pro tip: pay attention to the vdev and offset to see whether the damage is clustered. Be aware that some HDD failures resist repair.
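To act on that tip, the verbose event stream can be summarized per vdev and offset. A minimal sketch (the field names `class`, `vdev_path`, and `zio_offset` follow the usual `zpool events -v` output; adjust the patterns to what your system actually prints):

```shell
# Count checksum-error events per (vdev, offset) to spot clustering.
# A few offsets repeating on one vdev suggests localized media damage;
# errors scattered across many offsets point elsewhere (cable, PSU).
zpool events -v \
  | awk '/class =.*checksum/ {ev=1}
         ev && /vdev_path/  {path=$NF}
         ev && /zio_offset/ {print path, $NF; ev=0}' \
  | sort | uniq -c | sort -rn | head
```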
>> When it comes to CKSUM errors on drives, I have 2 suspects: cables and
>> drives. I can't imagine that controller/RAM issues would affect only 2
>> drives and leave the other 4 intact (0 errors).
>> I'll start examining these once my read-verify of the backups is over.
>> Can anyone suggest a torture (read/write/verify-by-compare or
>> checksum) test for the drives? I was thinking of creating a new pool
>> on the defective drive, placing a zvol on it, and then running
>> badblocks on the zvol. But this method won't produce a list of bad
>> blocks, and I would like to have one.
>> Seagate support suggested their non-destructive Seatools. I suspect
>> that is their warranty procedure, relying on SMART or other
>> drive-embedded error reporting, which doesn't serve my purpose.
> I strongly suspect this is power-supply related. Are the two affected drives powered by the same rail/cable? Can you share the output of "smartctl --all /dev/sdX" for each disk, explicitly marking the problematic ones?
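Collecting that output for all six drives at once is a one-liner. A sketch, assuming the disks enumerate as /dev/sda through /dev/sdf (adjust the glob to your system):

```shell
# Dump full SMART data for each drive into a per-disk file so the
# affected and unaffected drives can be diffed side by side.
for d in /dev/sd[a-f]; do
  smartctl --all "$d" > "/root/smart-$(basename "$d").txt"
done
```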
> Danti Gionatan
> Supporto Tecnico
> Assyoma S.r.l. - www.assyoma.it
> email: g.danti at assyoma.it - info at assyoma.it
> GPG public key ID: FF5F32A8