[zfs-discuss] Re: can rsync traverse zfs filesystems

Mark ipodmb at googlemail.com
Sun Jul 10 05:59:43 EDT 2011


Hi,

I keep getting dataset is busy when trying to send an incremental, any
ideas?

I start by creating a snapshot of a test volume and then send it to
the target host as a full data set...

root@saturn:~# zfs snapshot -r tank/testrep@1036
root@saturn:~# zfs send tank/testrep@1036 | ssh root@pluto zfs receive -v tank/testrep
receiving full stream of tank/testrep@1036 into tank/testrep@1036
received 79.2MB stream in 3 seconds (26.4MB/sec)

On the target system all seems well...

root at pluto:~# zfs list -t all
NAME                USED  AVAIL  REFER  MOUNTPOINT
tank               1.53T   705G  51.5K  /tank
tank/backups       20.0G   530G  46.5K  /tank/backups
tank/backups/sysb  44.8K  20.0G  44.8K  /tank/backups/sysb
tank/data          44.8K   900G  44.8K  /tank/data
tank/home          44.8K   120G  44.8K  /tank/home
tank/testrep       79.0M   705G  79.0M  /tank/testrep
tank/testrep@1036      0      -  79.0M  -

So I make a second snapshot on the source and try to send it
incrementally to the target...

root@saturn:~# zfs snapshot tank/testrep@1042
root@saturn:~# zfs send -i tank/testrep@1036 tank/testrep@1042 | ssh root@pluto zfs receive -v tank/testrep
receiving incremental stream of tank/testrep@1042 into tank/testrep@1042
cannot receive incremental stream: dataset is busy


What causes the dataset to be busy? I just can't get it to accept the
incremental.
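For what it's worth, one common cause (a guess on my part, since "dataset is busy" can also come from holds or an in-progress receive) is that the received filesystem is mounted read-write on the target, so even an atime update makes it diverge from @1036. Receiving with -F rolls the target back to its latest snapshot before applying the incremental. A sketch of the forced variant -- build_incr_send is an illustrative helper, not part of zfs:

```shell
# Illustrative only: build the forced incremental send/receive pipeline.
# -F on the receiving side rolls the target dataset back to its most
# recent snapshot before the incremental is applied, discarding any
# local modifications (such as atime updates from the mount).
build_incr_send() {
  ds=$1 from=$2 to=$3 host=$4
  echo "zfs send -i ${ds}@${from} ${ds}@${to} | ssh ${host} zfs receive -vF ${ds}"
}

build_incr_send tank/testrep 1036 1042 root@pluto
```

Setting the target dataset readonly (zfs set readonly=on tank/testrep on pluto) is another commonly used way to avoid the divergence in the first place.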

Thanks,
Mark



On Jul 8, 6:00 pm, Gunnar Beutner <gun... at beutner.name> wrote:
> I'm using a couple of scripts to automatically create, send and destroy
> snapshots. They can be found at https://gunnar-beutner.de/files/zfs-snapshot/
>
> In my setup they're executed by cron using the following crontab
> entries:
>
> 0  0    * * *   root    /usr/local/sbin/zfs-auto-snapshot lxc-private/vms daily 31 > /dev/null
> 0  *    * * *   root    /usr/local/sbin/zfs-auto-snapshot lxc-private/vms hourly 24 > /dev/null
>
> 5 */3   * * *   root    flock -x /tmp/zfs-replication.lock /usr/local/sbin/zfs-replication lxc-private backup/vz4-lxc-private vz3.shroudbox.net > /dev/null
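Since Gunnar's scripts ship without documentation, here is a guess at the core naming and pruning logic a zfs-auto-snapshot-style script might use. The function names and the auto: prefix are inferred from the listing further down; this is not his actual code:

```shell
# Hypothetical sketch of auto-snapshot bookkeeping, not Gunnar's script.

# Build a snapshot name like auto:hourly-2011-07-08-18:00 for a label.
snap_name() {
  echo "auto:${1}-$(date +%Y-%m-%d-%H:%M)"
}

# Given snapshot names on stdin (lexically sortable thanks to the
# timestamp format), print the ones beyond the retention count, i.e.
# the candidates for "zfs destroy". Uses GNU head's negative count.
prune_list() {
  sort | head -n -"$1"
}
```

So a run like "zfs-auto-snapshot lxc-private/vms hourly 24" would create one snap_name hourly snapshot and destroy whatever prune_list 24 prints.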
>
> That way you'll get hourly snapshots for the last 24 hours + daily
> snapshots for the last 31 days:
>
> lxc-private/vms/vps123.shroudbox.net
> 2.20G  47.8G   940M  /var/lib/lxc/vps123.shroudbox.net
> lxc-private/vms/vps123.shroudbox.net@auto:daily-2011-06-08-00:00
> 3.08M      -   633M  -
> lxc-private/vms/vps123.shroudbox.net@auto:daily-2011-06-09-00:00
> 2.93M      -   634M  -
> lxc-private/vms/vps123.shroudbox.net@auto:daily-2011-06-10-00:00
> 2.94M      -   635M  -
> lxc-private/vms/vps123.shroudbox.net@auto:daily-2011-06-11-00:00
> 30.3M      -   796M  -
> [...]
> lxc-private/vms/vps123.shroudbox.net@auto:daily-2011-07-05-00:00
> 4.11M      -   934M  -
> lxc-private/vms/vps123.shroudbox.net@auto:daily-2011-07-06-00:00
> 4.09M      -   935M  -
> lxc-private/vms/vps123.shroudbox.net@auto:daily-2011-07-07-00:00
> 35.1M      -   936M  -
> lxc-private/vms/vps123.shroudbox.net@auto:hourly-2011-07-07-19:00
> 1.79M      -   938M  -
> lxc-private/vms/vps123.shroudbox.net@auto:hourly-2011-07-07-20:00
> 1.62M      -   938M  -
> lxc-private/vms/vps123.shroudbox.net@auto:hourly-2011-07-07-21:00
> 1.61M      -   938M  -
> lxc-private/vms/vps123.shroudbox.net@auto:hourly-2011-07-07-22:00
> 2.55M      -   938M  -
> lxc-private/vms/vps123.shroudbox.net@auto:hourly-2011-07-07-23:00
> 1.61M      -   938M  -
> lxc-private/vms/vps123.shroudbox.net@auto:daily-2011-07-08-00:00
>     0      -   938M  -
> lxc-private/vms/vps123.shroudbox.net@auto:hourly-2011-07-08-00:00
>     0      -   938M  -
> lxc-private/vms/vps123.shroudbox.net@auto:hourly-2011-07-08-01:00
> 1.59M      -   938M  -
> lxc-private/vms/vps123.shroudbox.net@auto:hourly-2011-07-08-02:00
> 1.59M      -   938M  -
> lxc-private/vms/vps123.shroudbox.net@auto:hourly-2011-07-08-03:00
> 1.64M      -   938M  -
> lxc-private/vms/vps123.shroudbox.net@auto:hourly-2011-07-08-04:00
> 1.69M      -   938M  -
> [...]
> lxc-private/vms/vps123.shroudbox.net@auto:hourly-2011-07-08-16:00
> 1.57M      -   940M  -
> lxc-private/vms/vps123.shroudbox.net@auto:hourly-2011-07-08-17:00
> 1.63M      -   940M  -
> lxc-private/vms/vps123.shroudbox.net@auto:hourly-2011-07-08-18:00
> 1.40M      -   940M  -
> lxc-private/vms/vps123.shroudbox.net@auto:replication-2011-07-08-18:05
> 1.40M      -   940M  -
>
> The last crontab entry takes care of sending those snapshots to another
> box every three hours (assuming you have properly set up SSH pubkey auth).
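The flock in that crontab entry is what keeps a slow transfer from overlapping with the next scheduled run. The pattern on its own, with a placeholder standing in for the real replication command:

```shell
# flock -x takes an exclusive lock on the lock file for the duration of
# the wrapped command; a second invocation blocks until the first one
# finishes, so cron can't start two replications at once.
flock -x /tmp/zfs-replication.lock sh -c 'echo "replication runs here"'
```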
>
> The scripts don't have any documentation, but I guess they might give
> you an idea of what could be done with zfs send/recv. :)
>
> Regards,
> Gunnar
>
> On Freitag, 8. Juli 2011 15:24:46, Thomas Harvey wrote:
>
>
>
> > Thanks, I thought so... I was just thrown by the earlier comment from Michael:
>
> >> keep in mind that this does a full transfer, not the incremental ones that rsync can do.  zfs send/receive is *excellent* for small files.
>
> > Glad to have it cleared up.
>
> > Tom
>
> > On 8 Jul 2011, at 14:19, Fajar A. Nugraha wrote:
>
> >> On Fri, Jul 8, 2011 at 8:08 PM, Thomas Harvey
> >> <tom.har... at onefinestay.com> wrote:
> >>> I am about to hit the button on using this as well. Can you clarify one thing: zfs send/receive will do a whole transfer, but won't the snapshotting send only the incremental changes since the last time?
>
> >> Look up the zfs administration guide from Solaris to understand the concept
> >> of snapshots and send/receive:
> >> http://download.oracle.com/docs/cd/E19253-01/819-5461/gavvx/index.html
>
> >> Here's an example of using zfs send/receive for replication (Google's
> >> first hit): http://www.markround.com/archives/38-ZFS-Replication.html
>
> >> Basically you can choose whether to send complete data from a
> >> snapshot, or an incremental between two snapshots, or everything on
> >> the dataset (including properties, snapshots, descendant file systems,
> >> and clones).
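Spelled out as commands, the three modes Fajar lists look like this (pool/fs and the snapshot names are placeholders):

```shell
zfs send pool/fs@snap2                    # full: everything snap2 references
zfs send -i pool/fs@snap1 pool/fs@snap2   # incremental between two snapshots
zfs send -R pool/fs@snap2                 # replication stream: properties,
                                          # snapshots, descendants, clones
```

Each one is piped into "zfs receive" on the destination, as in the markround.com example above.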
>
> >> --
> >> Fajar



More information about the zfs-discuss mailing list