[zfs-discuss] How to get the actual ZFS dataset size & how to recover a ZFS dataset?

Cédric Lemarchand cedric.lemarchand at ixblue.com
Wed Dec 28 11:28:34 EST 2016


> On 28 Dec 2016, at 16:59, Nick Gilmour via zfs-discuss <zfs-discuss at list.zfsonlinux.org> wrote:
> 
> OK my hope was that the txg's were stored in a file from which I could restore them. So zdb should actually show me the txg's.

txg's are part of ZFS's metadata and can only be accessed/understood with specific tools such as zdb. I cannot explain why zdb didn't give you anything, maybe some incompatibility somewhere; a safer approach would be to use the ZFS stack provided by a trusted distro, for example Ubuntu 16.04, just MHO.
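
For what it's worth, on an imported pool zdb should print the currently active uberblock, including its txg (the pool name here is a placeholder):
> zdb -u tank
If even that prints nothing, the zdb you built probably doesn't match the pool or the kernel module.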

> 
> How can I make a block-based copy?

Use dd to dump each device used in the pool.
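
Something along these lines, once per device, should do (paths are placeholders; status=progress needs a reasonably recent GNU dd):
> dd if=/dev/sdX of=/backup/sdX.img bs=1M conv=noerror,sync status=progress
conv=noerror,sync keeps the copy going past unreadable sectors instead of aborting, which matters in a recovery scenario.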

> The pool is over 20 TB. At the moment I'm not willing to invest money for such a high capacity. Is it somehow possible to reduce the size of the copy to only the data (i.e. the deleted dataset) I'm interested in?

AFAIK no, except by skipping redundancy, but in the middle of a recovery that is a risky choice ...

Cheers
> 
>> On Wed, Dec 28, 2016 at 4:27 PM, Gregor Kopka (@zfs-discuss) <zfs-discuss at kopka.net> wrote:
>> ZFS stores the TXG uberblocks (how many depends on ashift) in the vdev labels, where they are kept round-robin (always overwriting the oldest one). So, as I wrote earlier, unless you stopped the pool from updating soon after destroying the dataset, you won't have any luck finding an uberblock for a TXG where the dataset still existed.
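>> 
>> For reference, something like this should dump the vdev labels of a device, including the ring of uberblocks and their TXGs (device path is a placeholder):
>>> zdb -lu /dev/sdX1
>> That lets you check directly whether an uberblock old enough for your purpose is still around.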
>> 
>> Hence my advice to stop using the pool immediately and make block-based copies, on which any further recovery attempts should be performed.
>> 
>> Gregor
>> 
>> 
>>> Am 28.12.2016 um 13:40 schrieb Nick Gilmour:
>>> Gregor thanks for the quick response.
>>> 
>>> I can try that, but why can't I see old txg's now? Is something wrong with my current configuration?
>>> 
>>> I have already tested without the patch, but it didn't work.
>>> 
>>> Nick
>>> 
>>>> On Wed, Dec 28, 2016 at 1:32 PM, Gregor Kopka (@zfs-discuss) <zfs-discuss at kopka.net> wrote:
>>>> You shouldn't need a patch to import with -T.
>>>> Get the latest Ubuntu 16 (64bit) live DVD, boot it, log in as root and issue
>>>>> apt-add-repository universe
>>>>> apt update
>>>>> apt install --yes debootstrap gdisk zfs-initramfs
>>>> After that you should have a ZFS stack ready to import the pool.
>>>> 
>>>> Remember to work on copies only (or at least import with -o readonly=on), so you can retry in case things go south.
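>>>> 
>>>> A rewind import could then look something like this (txg number and pool name are placeholders):
>>>>> zpool import -o readonly=on -f -T 1234567 tank
>>>> The -T option is largely undocumented, so double-check that the ZFS build you booted actually supports it before pointing it at the real disks.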
>>>> 
>>>> Gregor
>>>> 
>>>> 
>>>>> Am 28.12.2016 um 13:06 schrieb Nick Gilmour via zfs-discuss:
>>>>> Hi all,
>>>>> 
>>>>> following the instructions from here:
>>>>> http://www.c0t0d0s0.org/archives/7621-Back-in-time-or-zpool-import-T.html
>>>>> 
>>>>> I was able to restore a pool to an older txg on a testing VM after I had built and installed the patch mentioned before, and I could get data from a previously deleted dataset. This worked great.
>>>>> 
>>>>> Now I'm trying to do the same with my real pool. I don't want to mess up my current ZFS installation, so I'm working on my server from an Ubuntu 14.04 live system. After building and installing ZFS I encountered the error:
>>>>>> zpool: error while loading shared libraries: ...
>>>>> which went away after:
>>>>>> $ sudo ldconfig
>>>>> Everything seems to be OK and I'm able to import my pool, except that I cannot roll back to a previous state.
>>>>> When I try to import with an older txg I get the error:
>>>>>> cannot import 'pool': No such file or directory
>>>>> Also
>>>>>> # zdb pool
>>>>> 
>>>>> shows nothing.
>>>>> 
>>>>> So my assumption is that zdb has no knowledge of the old txg's and that they are not stored in the pool itself, which brings me to my question:
>>>>> Where are the txg's stored? How can I load old txg's?
>>>>> 
>>>>> Thanks in advance.
>>>>> Nick
>>>>> 
>>>>>> On Fri, Dec 23, 2016 at 12:11 AM, Ray Pating <ray.pating at gmail.com> wrote:
>>>>>> One caveat of ZFS is that, for all its capabilities for ensuring the integrity of its data, when a user issues a command they had better be 110% sure of it.
>>>>>> 
>>>>>> Pools have been affected by:
>>>>>> 
>>>>>> * Accidentally adding a striped vdev to an existing pool (due to user error or insufficient understanding) 
>>>>>> * Not understanding ashift and causing pool IOPS to grind to a halt. 
>>>>>> * Destruction of critical datasets due to user error.
>>>>>> 
>>>>>> This is why you really should be doubly sure when issuing commands like zfs destroy, since it usually performs actions on a pool that are hard to recover from. The assumption is that you know what you are doing and really want to destroy the dataset, hence the lack of recovery options.
>>>>>> 
>>>>>> On Dec 23, 2016 4:04 AM, "Gordan Bobic via zfs-discuss" <zfs-discuss at list.zfsonlinux.org> wrote:
>>>>>>>> On 22/12/16 17:39, Nick Gilmour via zfs-discuss wrote:
>>>>>>>> Hi Gregor,
>>>>>>>> 
>>>>>>>> thanks for your response!
>>>>>>>> The server is down. Making full block-level copies of the drives to work
>>>>>>>> on is almost impossible. I don't have that much free capacity in one
>>>>>>>> place, and I have already spent a lot of money building the new server.
>>>>>>>> 
>>>>>>>> Thanks for the link. I already have this, and I've already tested it on a VM
>>>>>>>> I set up for this reason. But unfortunately it doesn't work, and the
>>>>>>>> reason is an open bug from 2014 that prevents the rollback:
>>>>>>>> https://github.com/zfsonlinux/zfs/issues/2452
>>>>>>>> 
>>>>>>>> The good news is there is a patch, which I'm about to try soon. But is
>>>>>>>> it really like this? ZFS has no recovery tools or recovery mechanisms?
>>>>>>>> There is no other way to recover my data? I find that hard to believe.
>>>>>>> 
>>>>>>> You have to consider that ZFS has built-in recovery mechanisms for just about every reasonable failure scenario where data might still be meaningfully recoverable, and it deals with them as well as can reasonably be expected - except for user error. Unfortunately, there is no reasonable defence against the latter. :-(
>>>>>>> 
>>>>>>> Gordan
>>>>> 
>>>>> 
>>>>> 
> 
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss at list.zfsonlinux.org
> http://list.zfsonlinux.org/cgi-bin/mailman/listinfo/zfs-discuss