[zfs-discuss] How to get the actual ZFS dataset size & how to recover a ZFS dataset?
nickeforos at gmail.com
Thu Dec 22 12:39:22 EST 2016
Thanks for your response!
The server is down. Making full block-level copies of the drives to work on
is almost impossible: I don't have that much free capacity in one place,
and I have already spent a lot of money to build the new server.
Thanks for the link. I already have it, and I've tested the procedure on a
VM I set up for this purpose. Unfortunately it doesn't work, and the reason
is an open bug from 2014 that prevents the rollback:
The good news is that there is a patch, which I'm about to try soon. But is
it really like this? Does ZFS have no recovery tools or recovery mechanisms?
Is there no other way to recover my data? I find that hard to believe.
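For reference, the rewind attempt described in the linked article boils down to a few commands. This is only a sketch, assuming a pool named "tank" on /dev/sdb (substitute your own pool and device names), and it should only be run against copies of the drives where possible:

```shell
# List the uberblocks still present in the device labels; each entry
# shows a txg number and a timestamp (run with the pool exported):
zdb -ul /dev/sdb1

# Attempt an import rewound to an older transaction group, read-only and
# without mounting datasets, so nothing new is written to the pool.
# Replace <txg> with a txg number from the zdb output above:
zpool import -o readonly=on -N -T <txg> tank

# If the import succeeds, check whether the destroyed dataset is visible:
zfs list -r tank
```

As Gregor notes below, this only works if an uberblock that still references the destroyed dataset has not yet been overwritten.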
On Thu, Dec 22, 2016 at 9:10 AM, Gregor Kopka (@zfs-discuss) <
zfs-discuss at kopka.net> wrote:
> In case you're even mildly interested in recovering any data: shutdown the
> pool immediately (export) and make full block-level copies of the drives to
> work on (and only on the copies)!
> You can then try what is outlined in
> http://www.c0t0d0s0.org/archives/7621-Back-in-time-or-zpool-import-T.html
> - but the number of TXGs you can go back is limited, so unless the pool
> was shut down in time (before the last uberblock referencing the
> destroyed dataset was overwritten) you have a problem.
> As far as I know there isn't a recovery tool to exhume ZFS data structures
> that are no longer linked by an uberblock. It is possible in theory though:
> as ZFS data structures are fully verifiable (parent has checksum of the
> child, some types also are self-checksummed) it is possible to identify
> good data and through this (following down potential pointers to see if
> they would reach valid data) it would be possible to verify a guess about
> what a block could be and to (unless overwritten) locate the data trees
> on-disk. But as the ZFS data trees are only linked (and checksummed)
> downwards, you would have to walk the whole pool in the quest to locate
> their roots; and as every write to the pool creates a whole new tree (all
> changed metadata up to the uberblock is newly written to free space),
> it'll be an interesting (and time-consuming) problem to track and sort
> through all of these to locate the one you actually want to find.
> Am 21.12.2016 um 20:36 schrieb Nick Gilmour via zfs-discuss:
> Hi all,
> Short story: I've accidentally destroyed a dataset with a backup on my new
> server.
> It started like this:
> I was moving some files from an external drive to my pool (raidz2 with
> compression on). After some time I got a message that my pool was full. In
> a dataset I had 2 folders and an archive. All 3 were actually backups of my
> old server. Since I had no other disks, I deleted one folder with my
> file manager, but the system still reported that the pool was full.
> After that I also deleted the archive, but again no change in pool
> capacity. Finally, I accidentally destroyed the whole dataset with the
> 3rd backup on it. (Don't ask me why...) I still have an older backup, but
> if I do not recover the dataset I will lose some valuable data.
> I've already googled but I haven't found satisfying answers to my
> questions:
> 1. How can I get the actual size of the pool and the datasets? Why was
> ZFS reporting the wrong capacity even though I had deleted files?
> 2. How can I recover a dataset? According to this:
> it has been done, but that was on FreeNAS. How can I make it work on Linux
> (Ubuntu 16.04)? In particular with this step:
> 2) Select recovery mode by loading the ZFS KLD with "vfs.zfs.recover=1"
> set in /boot/loader.conf
> Plz help!
> zfs-discuss mailing list
> zfs-discuss at list.zfsonlinux.org
> http://list.zfsonlinux.org/cgi-bin/mailman/listinfo/zfs-discuss