[zfs-discuss] Permanent errors in older snapshots

devsk internet_everyone at yahoo.com
Sun Dec 11 02:37:57 EST 2016


I think this might have been discussed here before because I am very 
sure I myself ran into this issue several years ago.

One fine day after the update to v0.6.5.8-r0-gentoo, a scrub found a file 
with a permanent error (I guess that means it can't correct the blocks 
in error) in an old snapshot. The file is unreadable (Input/output 
error) in every snapshot taken after that one, across the years since.

# zpool status -v
   pool: backup
  state: DEGRADED
status: One or more devices has experienced an error resulting in data
         corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
         entire pool from backup.
    see: http://zfsonlinux.org/msg/ZFS-8000-8A
   scan: scrub repaired 0 in 9h16m with 1 errors on Sat Dec 10 23:19:10 2016

         NAME                                            STATE     READ WRITE CKSUM
         backup                                          DEGRADED     0     0     1
           raidz2-0                                      DEGRADED     0     0     2
             ata-WDC_WD4001FAEX-00MJRA0_WD-WCCxxxxxxxxx  ONLINE       0     0     0
             ata-ST4000VN000-1H4168_Z302C80M             ONLINE       0     0     0
             ata-WDC_WD4001FAEX-00MJRA0_WD-WCCxxxxxxxxx  ONLINE       0     0     0
             /mnt/serviio/4tbFile                        OFFLINE      0     0     0

errors: Permanent errors have been detected in the following files:

backup/zfs-backup@move_backup_to_4tb_external_sep25_2014:/Installs/clonezilla/live/filesystem.squashfs

I tried to read the file in every subsequent snapshot (via the 
.zfs/snapshot directory) since Sept 2014, and it's unreadable in all of 
them. I can copy the correct file over and take new snapshots, and 
those are all fine. If I keep deleting snapshots, the scrub just keeps 
pointing to the next one.
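For anyone wanting to reproduce that check, here is a rough sketch of the loop I mean. The mountpoint and file path below are illustrative; substitute your dataset's mountpoint and the file reported by `zpool status -v`:

```shell
#!/bin/sh
# Illustrative paths, not taken verbatim from this pool's layout.
SNAPDIR=/backup/.zfs/snapshot
FILE=Installs/clonezilla/live/filesystem.squashfs

for snap in "$SNAPDIR"/*/; do
    # Read the whole file and discard it; a damaged block surfaces
    # as an I/O error from dd, which makes it exit non-zero.
    if dd if="$snap$FILE" of=/dev/null bs=1M 2>/dev/null; then
        echo "OK:  $(basename "$snap")"
    else
        echo "BAD: $(basename "$snap")"
    fi
done
```

Any snapshot printed as BAD still references the corrupt block.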

None of the three disks in there show any pending, uncorrectable, or 
reallocated sectors. Overall disk health is fine, and a scrub has never 
failed on this pool for as many months back as I can remember (and as 
far back as zpool history shows).

Any ideas? Last time I ran into this I had to restore from backup (a 
backup of the backup, in this case). Is there any other way? It's a 
pain to start over.

Also, I want to add a 4TB disk to replace the 4tbFile vdev, but I am 
wondering whether the resilver will even succeed in this state. I am 
afraid it will fail at this snapshot and the whole run will be a waste 
of time.
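For reference, the swap itself would be a plain `zpool replace` of the offline vdev. A sketch under assumed names (the pool and old vdev match the status output above; the new-disk id is a placeholder), printed for review rather than executed blindly:

```shell
#!/bin/sh
# Placeholder new-disk id; the pool and old vdev names are from the
# zpool status output in this thread.
POOL=backup
OLD_VDEV=/mnt/serviio/4tbFile                  # currently OFFLINE file-backed vdev
NEW_DISK=/dev/disk/by-id/ata-NEW_4TB_SERIAL    # hypothetical replacement disk

# Echo the command so it can be inspected before running it by hand;
# afterwards, resilver progress shows up in `zpool status -v $POOL`.
echo zpool replace "$POOL" "$OLD_VDEV" "$NEW_DISK"
```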

