[zfs-discuss] How to give higher priority to zfs receive than to other intensive backup operations?

Sam Van den Eynde svde.tech at gmail.com
Tue Apr 17 09:22:11 EDT 2018


Hi,

That would require me to change the I/O scheduler (ionice classes only take
effect with CFQ), which is not recommended by ZoL. And it does not really suit
my workflow, as I don't know beforehand which client systems will be used
concurrently to send backup data streams to our NAS. I would need to configure
different settings on those systems to account for that, and even then they
would still collide.

To be clear: I can accept that my storage backend has low performance; I just
want to avoid hitting it with too many concurrent incoming write streams.
Cheap SSDs make that easy to work around, so I imagined it had to be possible
somehow. I now use a 2-step approach which works very well.
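
Roughly, the 2-step approach looks like this (untested sketch; host, pool,
dataset and snapshot names are just placeholders):

    # step 1: during the backup window, the client lands its stream in a
    # file on an SSD-backed scratch pool on the NAS instead of the main pool
    zfs send -i tank/data@prev tank/data@now | \
        ssh nas "cat > /ssdscratch/streams/data_now.zstream"

    # step 2: once the backup has finished, replay the buffered streams
    # into the main pool one at a time
    zfs receive -F tank/backup/data < /ssdscratch/streams/data_now.zstream

The scratch pool on the SSDs absorbs the concurrent incoming streams, and the
replay into the main HDD pool is then a single sequential pass per stream.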

I am extremely happy with ZFS and I realize this might be an edge case. I
still think a configuration parameter to force every write to the ZIL/SLOG
(my initial thought process) would be a very relevant addition for a lot of
use cases involving hybrid pools nowadays.

You know what, I'm going to log a request for this tonight, just to see what
comes back.


Kind regards
Sam

On Mon, Apr 16, 2018 at 8:30 PM, Ted Cabeen <ted.cabeen at lscg.ucsb.edu>
wrote:

> Have you tried setting the I/O scheduling class and priority with ionice?
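>
> Something like this on the backup agent, for example (untested; the pid
> placeholder is whatever the Data Protector process runs as):
>
>     ionice -c3 -p <backup-agent-pid>      # idle class: only gets disk time when nothing else wants it
>     ionice -c2 -n7 -p <backup-agent-pid>  # or best-effort at the lowest priority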
>
> --Ted
>
> On 04/16/2018 03:57 AM, Sam Van den Eynde via zfs-discuss wrote:
>
>> Hi,
>>
>>
>> I experienced a similar issue on a small home setup with a limited
>> storage backend (1 mirrored VDEV). The snapshot maintenance on the
>> receiving side (query/create/destroy) seemed to be the issue.
>>
>> Alternative idea: since you have 2 SSDs for the ZIL (mirrored, I assume),
>> you could cache the incoming streams into files on the SSDs while the backup
>> is running (the ZIL typically does not use the whole SSD, so you could create
>> a largely idle pool on a second partition). When the backup is done, the
>> first incremental send/receive makes the cached files obsolete and you can
>> delete them. Or you could load them onto the pool sequentially (but not
>> during the backup; that would probably lead to the same issue you describe).
>> It is an easy, cheap solution, trivial to script, and it maintains redundancy
>> for all stages of the send/receive.
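>>
>> As a rough illustration of the scratch-pool idea (device, pool and dataset
>> names are only placeholders; sdc/sdd are your log SSDs, so the SLOG would
>> then have to sit on their first partitions rather than on the whole disks):
>>
>>     zpool create ssdscratch mirror /dev/sdc2 /dev/sdd2
>>     zfs create -o compression=lz4 ssdscratch/streams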
>>
>> I'll be following this thread for better solutions :) I'd love a generic
>> solution where I could just create a hybrid SSD/HDD pool (to avoid the term
>> ZIL) and force all incoming writes onto the SSDs. That would be a killer
>> feature for me.
>>
>>
>> Krgds
>> Sam
>>
>> On Mon, Apr 16, 2018 at 11:07 AM, Alberto Maria Fiaschi via zfs-discuss
>> <zfs-discuss at list.zfsonlinux.org> wrote:
>>
>>     I have two file servers configured, one master and one slave. Both
>>     servers have a ZFS pool with two RAIDZ2 vdevs of 6 disks each, two SSDs
>>     for the ZIL and two SSDs for cache. Dedup is on and compression is set
>>     to lz4. The ZFS version is 0.6.5.6-0ubuntu16 (official Ubuntu 16.04.4
>>     LTS).
>>     I have the following problem on the slave: while the HP Data Protector
>>     agent is running (a backup operation), the zfs receive operation gets
>>     stuck for hours.
>>     HP Data Protector does a lot of async reads, but runs with a process
>>     (nice) priority of 20. zfs receive, on the other hand, is started with
>>     a priority of -20.
>>     I tried changing the scheduler of the physical disks from noop to
>>     deadline, without any result.
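>>
>>     (For illustration, the scheduler is usually switched per disk via
>>     sysfs, repeated for each pool member, e.g.:
>>
>>         echo deadline > /sys/block/sdg/queue/scheduler
>>
>>     and verified with cat /sys/block/sdg/queue/scheduler.)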
>>
>>     Can you give me some advice?
>>
>>     Below is iostat output while HP Data Protector is running:
>>
>>     Device:   rrqm/s   wrqm/s      r/s      w/s    rkB/s    wkB/s avgrq-sz avgqu-sz    await  r_await  w_await    svctm    %util
>>     sdc         0,00     0,00     0,00     0,00     0,00     0,00     0,00     0,00     0,00     0,00     0,00     0,00     0,00
>>     sdd         0,00     0,00     0,00     0,00     0,00     0,00     0,00     0,00     0,00     0,00     0,00     0,00     0,00
>>     sde         0,00     0,00    66,25    12,50  2716,93   854,12    90,69     0,04     0,49     0,42     0,88     0,35     2,72
>>     sdf         0,00     0,00    62,90     7,15  2763,00   425,95    91,05     0,03     0,41     0,39     0,56     0,34     2,36
>>     md2         0,00     0,00     0,00     0,45     0,00     1,80     8,00     0,00     0,00     0,00     0,00     0,00     0,00
>>     md1         0,00     0,00     0,00     0,00     0,00     0,00     0,00     0,00     0,00     0,00     0,00     0,00     0,00
>>     md0         0,00     0,00     0,00     0,00     0,00     0,00     0,00     0,00     0,00     0,00     0,00     0,00     0,00
>>     sdg         0,00     0,00   354,90     0,00 10155,60     0,00    57,23     0,59     1,66     1,66     0,00     1,09    38,72
>>     sdh         0,00     0,00   319,65     0,00  9260,60     0,00    57,94     0,98     3,08     3,08     0,00     1,75    55,86
>>     sdj         0,00     0,00   433,15     0,00 12692,20     0,00    58,60     0,79     1,83     1,83     0,00     1,16    50,28
>>     sdi         0,00     0,00   427,05     0,00 12322,20     0,00    57,71     0,93     2,17     2,17     0,00     1,23    52,58
>>     sdk         0,00     0,00   314,95     0,00  9097,40     0,00    57,77     1,02     3,23     3,23     0,00     1,69    53,36
>>     sdl         0,00     0,00   343,90     0,00  9631,20     0,00    56,01     0,69     1,99     1,99     0,00     1,17    40,40
>>     sdm         0,00     0,00   489,70     0,00 14524,60     0,00    59,32     2,42     4,94     4,94     0,00     1,73    84,78
>>     sdn         0,00     0,00   374,70     0,00 10642,80     0,00    56,81     0,82     2,18     2,18     0,00     1,34    50,08
>>     sdo         0,00     0,00   352,65     0,00  9751,20     0,00    55,30     0,77     2,19     2,19     0,00     1,31    46,20
>>     sdp         0,00     0,00   489,50     0,00 14493,20     0,00    59,22     1,96     4,00     4,00     0,00     1,59    77,64
>>     sdr         0,00     0,00   272,70     0,00  7215,00     0,00    52,92     0,47     1,72     1,72     0,00     1,25    34,18
>>     sdq         0,00     0,00   295,15     0,00  8074,40     0,00    54,71     0,56     1,90     1,90     0,00     1,26    37,32
>>
>>     zpool status
>>        pool: pool_z2_samba
>>       state: ONLINE
>>        scan: scrub canceled on Fri Dec  1 09:06:55 2017
>>     config:
>>
>>          NAME        STATE     READ WRITE CKSUM
>>          pool_z2_samba  ONLINE       0     0     0
>>            raidz2-0  ONLINE       0     0     0
>>              sdg     ONLINE       0     0     0
>>              sdh     ONLINE       0     0     0
>>              sdi     ONLINE       0     0     0
>>              sdj     ONLINE       0     0     0
>>              sdk     ONLINE       0     0     0
>>              sdl     ONLINE       0     0     0
>>            raidz2-1  ONLINE       0     0     0
>>              sdm     ONLINE       0     0     0
>>              sdn     ONLINE       0     0     0
>>              sdo     ONLINE       0     0     0
>>              sdp     ONLINE       0     0     0
>>              sdq     ONLINE       0     0     0
>>              sdr     ONLINE       0     0     0
>>          logs
>>            mirror-2  ONLINE       0     0     0
>>              sdc     ONLINE       0     0     0
>>              sdd     ONLINE       0     0     0
>>          cache
>>            sde       ONLINE       0     0     0
>>            sdf       ONLINE       0     0     0
>>
>>
>>
>>
>>
>>
>>     --
>>     Alberto Maria Fiaschi
>>     Dept.: Tecnologie Informatiche e Sanitarie
>>     Area: Infrastrutture Processi e Flussi
>>     UOC: Infrastrutture
>>     UOS: Infrastrutture Nord Ovest
>>
>>     ESTAR
>>     c/o Azienda Ospedaliero Universitaria Pisana
>>     Presidio Ospedaliero Spedali Riuniti Santa Chiara
>>     Via Roma, 67 - 56126 Pisa, Italy
>>     Tel.: +39 050 99 3117
>>     Fax: +39 050 99 3396
>>     Mail: alberto.fiaschi at estar.toscana.it
>>     Profile: http://it.linkedin.com/pub/alberto-fiaschi/38/783/a5
>>

