[zfs-discuss] How resilient is a zfs send stream? How can it be improved?
aarcane at aarcane.org
Wed Oct 22 14:22:45 EDT 2014
I've been following the list recently, and a few questions have come to mind.
First and foremost, how resilient is a zfs send stream? Does it include all
the checksums that a zfs filesystem has? How much data corruption can be
detected? Perhaps more important, how much data corruption can be repaired
or ignored? If there are packet errors early in the stream, is a single
dataset damaged, or the entire send stream?
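One way to get localized corruption detection today, outside the stream format itself, is to checksum the stream in fixed-size chunks as it is captured. This is a minimal sketch in Python; the function name `chunk_digests` and the chunk granularity are my own choices, not anything zfs provides, and in practice the input would be the real stream on stdin rather than the stand-in bytes used here:

```python
import hashlib
import io

def chunk_digests(stream, chunk_size=1 << 20):
    """Return a per-chunk SHA-256 manifest for a send stream.

    Per-chunk digests let a later verify pass report *which* region of
    the stream is damaged instead of giving a single pass/fail answer.
    The 1 MiB default granularity is arbitrary, not anything zfs does.
    """
    manifest = []
    while True:
        block = stream.read(chunk_size)
        if not block:
            break
        manifest.append(hashlib.sha256(block).hexdigest())
    return manifest

# Stand-in payload; in practice read the actual stream, e.g.
# sys.stdin.buffer fed by `zfs send pool/fs@snap`.
payload = b"example send stream payload" * 1000
manifest = chunk_digests(io.BytesIO(payload), chunk_size=4096)
print(len(manifest))  # 27000 bytes / 4096-byte chunks -> 7 chunks
```

A verify pass before `zfs receive` would recompute the digests and compare against the saved manifest, narrowing any mismatch to a chunk-sized region.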
For the second question, I'm going to assume what I consider to be a
minimal condition that the original zfs devs would have accepted: full
checksums but no repair data.
How can this be improved upon? Clearly zfs is good, but send seems to be a
weak point. Zfs guarantees that data will be read correctly or not at all,
but a lack of error recovery in zfs send means that errors may render
backups unrecoverable. Sometimes simply retrying a failed send is
infeasible, such as when shipping large archives via FedEx. Can zfs send
be extended to add RAID-5-level parity to the stream? Can zfs itself be
extended to receive incremental send streams without their basis snapshot,
simply marking them incomplete or unmountable? Administrators could then
receive into zfs images and just ship n data disks plus p parity disks.
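The RAID-5-style parity idea is just byte-wise XOR across equal-length pieces of the stream: keep n data pieces plus one parity piece, and any single lost piece can be rebuilt from the rest. A toy sketch (assuming equal-length chunks; a real tool would pad the final chunk and, like RAID 6, could add a second parity for double failures):

```python
def xor_parity(chunks):
    """RAID-5-style parity: byte-wise XOR across equal-length chunks."""
    parity = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, byte in enumerate(chunk):
            parity[i] ^= byte
    return bytes(parity)

def recover_missing(surviving, parity):
    """Rebuild the single missing chunk: XOR the survivors with the parity."""
    return xor_parity(surviving + [parity])

data_chunks = [b"aaaa", b"bbbb", b"cccc"]  # toy stand-ins for stream pieces
parity = xor_parity(data_chunks)
# Suppose the middle piece is lost or corrupted in transit:
rebuilt = recover_missing([data_chunks[0], data_chunks[2]], parity)
print(rebuilt == b"bbbb")  # True
```

This is exactly the property that would make a shipped n+p set of disks survive the loss of any one disk.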
What about size-limiting zfs send streams? If I need to send 12 GB of data
on DVD-5 discs, I clearly need three discs, and more if I want to add parity
data, but there's currently no convenient way to do that. The only option is
to store the send stream in some other container format that supports
splitting, and then trust that container. A native zfs option would be nice.
What is the safest, most trustworthy way to do large zfs sends with
redundancy and capacity limits?