[zfs-discuss] How to give high priority to zfs receive relative to other intensive backup operations?

Ted Cabeen ted.cabeen at lscg.ucsb.edu
Mon Apr 16 14:30:41 EDT 2018


Have you tried setting the I/O scheduling class and priority with ionice?
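A rough sketch of what I mean (the dataset names and PIDs below are only
placeholders; adjust them to however your replication is actually launched):

    # start the receive in the best-effort class at the highest I/O priority
    ssh master zfs send -i tank/fs@prev tank/fs@now | ionice -c2 -n0 zfs receive tank/fs

    # or re-class an already-running zfs receive by PID (12345 is hypothetical)
    ionice -c2 -n0 -p 12345

    # conversely, drop the backup agent into the idle class
    ionice -c3 -p <data-protector-pid>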

--Ted

On 04/16/2018 03:57 AM, Sam Van den Eynde via zfs-discuss wrote:
> Hi,
> 
> 
> I experienced a similar issue on a small home setup with a limited 
> storage backend (a single mirrored vdev). The snapshot maintenance on the 
> receiving side (query/create/destroy) seemed to be the culprit.
> 
> Alternative idea: since you have two SSDs for the ZIL (I assume mirrored), you 
> could cache the incoming streams as files on the SSDs while the backup is 
> running (the ZIL typically does not use the whole SSD, so you could create a 
> largely idle pool on a second partition). When the backup is done, the first 
> incremental send/receive makes the cached files obsolete, so you can delete 
> them. Or you could load them onto the pool sequentially (but not during the 
> backup, as that will probably lead to the same issue you describe). It is an 
> easy, cheap solution, trivial to script, and it maintains redundancy through 
> all stages of the send/receive.
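> A minimal sketch of what I mean (the pool and dataset names are made up,
> adjust them to your layout):
> 
>     # during the backup window: land the incremental stream as a file on the SSD pool
>     ssh master zfs send -i tank/data@prev tank/data@now > /ssdpool/spool/data_prev-now.zfs
> 
>     # once the backup has finished: replay the spooled stream and clean up
>     zfs receive tank/data < /ssdpool/spool/data_prev-now.zfs && rm /ssdpool/spool/data_prev-now.zfs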
> 
> I'll be following this thread for better solutions :) I'd love a generic 
> solution where I could just create a hybrid SSD/HDD pool (to avoid the 
> term ZIL) and force all incoming writes onto the SSDs. That would be a 
> killer feature for me.
> 
> 
> Krgds
> Sam
> 
> On Mon, Apr 16, 2018 at 11:07 AM, Alberto Maria Fiaschi via zfs-discuss 
> <zfs-discuss at list.zfsonlinux.org> wrote:
> 
>     I have two file servers configured, one master and one slave. Both
>     servers have a ZFS pool made of two 6-disk raidz2 vdevs, with two SSDs
>     for the ZIL and two SSDs for cache. Dedup is on and compression is set
>     to lz4. The ZFS version is 0.6.5.6-0ubuntu16 (official Ubuntu 16.04.4 LTS).
>     I have the following problem on the slave: while the HP Data Protector
>     agent is running (backup operation), the zfs receive operation gets
>     stuck for hours.
>     HP Data Protector makes a lot of async reads, but runs with a process
>     priority of 20. zfs receive, on the other hand, is started with a
>     priority of -20.
>     I tried to change the scheduler of the physical disks from noop to
>     deadline, without any result.
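>     (Per disk, something along these lines; sdg is just one pool member
>     used as an example:)
> 
>         cat /sys/block/sdg/queue/scheduler                # check the current elevator
>         echo deadline > /sys/block/sdg/queue/scheduler    # switch it to deadline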
> 
>     Can you give me some advice?
> 
>     Below is the iostat output while HP Data Protector is running:
> 
>     Device:    rrqm/s   wrqm/s      r/s      w/s     rkB/s    wkB/s avgrq-sz avgqu-sz    await  r_await  w_await    svctm    %util
>     sdc          0,00     0,00     0,00     0,00      0,00     0,00     0,00     0,00     0,00     0,00     0,00     0,00     0,00
>     sdd          0,00     0,00     0,00     0,00      0,00     0,00     0,00     0,00     0,00     0,00     0,00     0,00     0,00
>     sde          0,00     0,00    66,25    12,50   2716,93   854,12    90,69     0,04     0,49     0,42     0,88     0,35     2,72
>     sdf          0,00     0,00    62,90     7,15   2763,00   425,95    91,05     0,03     0,41     0,39     0,56     0,34     2,36
>     md2          0,00     0,00     0,00     0,45      0,00     1,80     8,00     0,00     0,00     0,00     0,00     0,00     0,00
>     md1          0,00     0,00     0,00     0,00      0,00     0,00     0,00     0,00     0,00     0,00     0,00     0,00     0,00
>     md0          0,00     0,00     0,00     0,00      0,00     0,00     0,00     0,00     0,00     0,00     0,00     0,00     0,00
>     sdg          0,00     0,00   354,90     0,00  10155,60     0,00    57,23     0,59     1,66     1,66     0,00     1,09    38,72
>     sdh          0,00     0,00   319,65     0,00   9260,60     0,00    57,94     0,98     3,08     3,08     0,00     1,75    55,86
>     sdj          0,00     0,00   433,15     0,00  12692,20     0,00    58,60     0,79     1,83     1,83     0,00     1,16    50,28
>     sdi          0,00     0,00   427,05     0,00  12322,20     0,00    57,71     0,93     2,17     2,17     0,00     1,23    52,58
>     sdk          0,00     0,00   314,95     0,00   9097,40     0,00    57,77     1,02     3,23     3,23     0,00     1,69    53,36
>     sdl          0,00     0,00   343,90     0,00   9631,20     0,00    56,01     0,69     1,99     1,99     0,00     1,17    40,40
>     sdm          0,00     0,00   489,70     0,00  14524,60     0,00    59,32     2,42     4,94     4,94     0,00     1,73    84,78
>     sdn          0,00     0,00   374,70     0,00  10642,80     0,00    56,81     0,82     2,18     2,18     0,00     1,34    50,08
>     sdo          0,00     0,00   352,65     0,00   9751,20     0,00    55,30     0,77     2,19     2,19     0,00     1,31    46,20
>     sdp          0,00     0,00   489,50     0,00  14493,20     0,00    59,22     1,96     4,00     4,00     0,00     1,59    77,64
>     sdr          0,00     0,00   272,70     0,00   7215,00     0,00    52,92     0,47     1,72     1,72     0,00     1,25    34,18
>     sdq          0,00     0,00   295,15     0,00   8074,40     0,00    54,71     0,56     1,90     1,90     0,00     1,26    37,32
> 
>     zpool status
>        pool: pool_z2_samba
>       state: ONLINE
>        scan: scrub canceled on Fri Dec  1 09:06:55 2017
>     config:
> 
>          NAME        STATE     READ WRITE CKSUM
>          pool_z2_samba  ONLINE       0     0     0
>            raidz2-0  ONLINE       0     0     0
>              sdg     ONLINE       0     0     0
>              sdh     ONLINE       0     0     0
>              sdi     ONLINE       0     0     0
>              sdj     ONLINE       0     0     0
>              sdk     ONLINE       0     0     0
>              sdl     ONLINE       0     0     0
>            raidz2-1  ONLINE       0     0     0
>              sdm     ONLINE       0     0     0
>              sdn     ONLINE       0     0     0
>              sdo     ONLINE       0     0     0
>              sdp     ONLINE       0     0     0
>              sdq     ONLINE       0     0     0
>              sdr     ONLINE       0     0     0
>          logs
>            mirror-2  ONLINE       0     0     0
>              sdc     ONLINE       0     0     0
>              sdd     ONLINE       0     0     0
>          cache
>            sde       ONLINE       0     0     0
>            sdf       ONLINE       0     0     0
> 
> 
> 
> 
> 
> 
>     -- 
>     ------------------------------------------------------------------------
>     Alberto Maria Fiaschi
>     Dept.: Tecnologie Informatiche e Sanitarie
>     Area: Infrastrutture Processi e Flussi
>     UOC: Infrastrutture
>     UOS: Infrastrutture Nord Ovest
> 
>     ESTAR
>     c/o Azienda Ospedaliero Universitaria Pisana
>     Presidio Ospedaliero Spedali Riuniti Santa Chiara
>     Via Roma, 67 - 56126 Pisa, Italy
>     Tel.: +39 050 99 3117
>     Fax: +39 050 99 3396
>     mail: alberto.fiaschi at estar.toscana.it
>     profile: http://it.linkedin.com/pub/alberto-fiaschi/38/783/a5
> 
>     _______________________________________________
>     zfs-discuss mailing list
>     zfs-discuss at list.zfsonlinux.org
>     http://list.zfsonlinux.org/cgi-bin/mailman/listinfo/zfs-discuss
> 
> 
> 
> 
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss at list.zfsonlinux.org
> http://list.zfsonlinux.org/cgi-bin/mailman/listinfo/zfs-discuss
> 

