[zfs-discuss] cannot import 'home': I/O error Destroy and re-create the pool from a backup source
anton.gubarkov at gmail.com
Fri Apr 27 14:20:39 EDT 2018
I still fail to understand the meaning of the CKSUM errors at the raidz
vdev and pool levels. Can someone explain this to me? Until I understand
these numbers I'm discarding them as non-credible (noise).
When it comes to CKSUM errors on drives, I have 2 suspects: cables and
the drives themselves. I can't imagine a controller/RAM issue affecting
only 2 drives while leaving the other 4 intact (0 errors). I'll start
examining these once my read-verify of the backups is over. Can anyone
suggest a torture test (read/write/verify-by-compare or by checksum)
for the drives? I was thinking of creating a new pool on the defective
drive, placing a zvol on it, and then running badblocks on the zvol.
But that method won't produce a list of bad blocks, and I would like to
have one.
Seagate support suggested their non-destructive SeaTools. I suspect
that's just their warranty procedure, relying on SMART or other
drive-embedded error reporting, which doesn't serve my purpose.
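For what it's worth, a write/read/verify pass that records mismatching offsets can be scripted directly against the raw device, skipping the zvol detour. Below is a minimal, hedged sketch (the function name `torture_test` and the SHA-256-per-block pattern are my own choices, not an established tool); note it is DESTRUCTIVE to whatever path you point it at:

```python
import hashlib
import os

def torture_test(path, block_size=1 << 20, seed=b"torture", limit=None):
    """Destructively write a deterministic pattern over a file or block
    device, read it back, and return the byte offsets that mismatched."""
    size = os.path.getsize(path)
    if size == 0:
        # Block devices report size 0 via getsize(); seek to the end instead.
        with open(path, "rb") as f:
            size = f.seek(0, os.SEEK_END)
    if limit is not None:
        size = min(size, limit)

    def pattern(offset):
        # Per-offset pattern, so a misdirected write (data landing at the
        # wrong LBA) reads back as a mismatch, not a false pass.
        out = bytearray()
        counter = 0
        while len(out) < block_size:
            out += hashlib.sha256(
                seed + offset.to_bytes(8, "big") + counter.to_bytes(4, "big")
            ).digest()
            counter += 1
        return bytes(out[:block_size])

    bad_offsets = []
    with open(path, "r+b", buffering=0) as f:
        for off in range(0, size - block_size + 1, block_size):
            f.seek(off)
            f.write(pattern(off))
        os.fsync(f.fileno())
        for off in range(0, size - block_size + 1, block_size):
            f.seek(off)
            if f.read(block_size) != pattern(off):
                bad_offsets.append(off)
    return bad_offsets
```

Run against e.g. /dev/sdX only after the drive is out of the pool and its contents are expendable. Divide the returned offsets by the drive's sector size to get candidate LBAs for comparison with SMART's pending/reallocated sector counts.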
On Fri, 27/04/2018 at 13:13 -0300, Durval Menezes via zfs-discuss wrote:
> Hello Anton, Bryn.
> On Fri, Apr 27, 2018 at 12:54 PM, Bryn Hughes via zfs-discuss <zfs-discuss at list.zfsonlinux.org> wrote:
> > Glad you got it working!!
> +1, and also thanks to Anton for posting a complete write-up of his
> recovery attempts; I can see this being really useful to the next
> person with this issue who comes here a-googling.
> > Do we have any hints as to the cause of the initial corruption?
> I'm also interested in this, but given the *large* number of CKSUMs
> reported, I'd bet Anton's machine had some data-corrupting hardware
> issue(s)... like failing non-ECC RAM and/or faulty PSU and/or faulty
> disk controller...