[zfs-discuss] ZFS raidz2 sequential reads are CPU bound?

Alex Chekholko alex at calicolabs.com
Tue Dec 19 11:41:04 EST 2017


More generally, to avoid any caching effects, use a total test size of at
least double your RAM size.
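
For example (a sketch only; it assumes a Linux host with 64 GiB of RAM, so
a ~128 GiB aggregate test size split across the same 16 processes used
below -- adjust to your machine):

    # total RAM, in kB
    grep MemTotal /proc/meminfo
    # 16 processes x 8 GiB each = 128 GiB aggregate
    iozone -i 0 -i 1 -t 16 -+u -s 8g -r 1M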

On Tue, Dec 19, 2017 at 8:09 AM Bradley Merchant via zfs-discuss <
zfs-discuss at list.zfsonlinux.org> wrote:

> Because your total test size is only 8GB, the read portion is served
> entirely from cache. Make the test size at least 2x the ARC size.
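>
> For reference (assuming ZFS on Linux, where the ARC kstats are exposed
> under /proc/spl/kstat/zfs), the current ARC size in bytes can be read
> with:
>
>     awk '$1 == "size" {print $3}' /proc/spl/kstat/zfs/arcstats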
>
>
> ________________________________________
> From: zfs-discuss <zfs-discuss-bounces at list.zfsonlinux.org> on behalf of
> Bond Masuda via zfs-discuss <zfs-discuss at list.zfsonlinux.org>
> Sent: Tuesday, December 19, 2017 10:59 AM
> To: zfs-discuss
> Subject: [zfs-discuss] ZFS raidz2 sequential reads are CPU bound?
>
> I recently ran an iozone benchmark on a newly built ZFS pool: a single
> raidz2 vdev of 24x 8TB SATA drives. It was just a simple throughput
> test to get ballpark figures, but the first thing I noticed was the
> dramatic difference in CPU load between reads and writes, with reads
> being far more CPU intensive. The question is: WHY are reads on raidz2
> so much more CPU intensive? BTW, ashift is correct as far as I can
> tell from the zdb and smartctl output.
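>
> (For example -- these are only illustrative commands with placeholder
> names -- ashift can be confirmed with "zdb -C <poolname> | grep ashift"
> and the drives' logical/physical sector sizes with
> "smartctl -i /dev/sdX | grep 'Sector Size'".)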
>
> Bond
>
> iozone output below:
>
> Run began: Mon Dec 18 01:32:06 2017
>
>       CPU utilization Resolution = 0.000 seconds.
>       CPU utilization Excel chart enabled
>       File size set to 524288 kB
>       Record Size 1024 kB
>       Command line used: iozone -i 0 -i 1 -t 16 -+u -s 512M -r 1M -b /root/iozone-raidz2_24xHDD_r4k.xls
>       Output is in kBytes/sec
>       Time Resolution = 0.000001 seconds.
>       Processor cache size set to 1024 kBytes.
>       Processor cache line size set to 32 bytes.
>       File stride size set to 17 * record size.
>       Throughput test with 16 processes
>       Each process writes a 524288 kByte file in 1024 kByte records
>
>       Children see throughput for 16 initial writers     = 1653522.18 kB/sec
>       Parent sees throughput for 16 initial writers     = 863597.91 kB/sec
>       Min throughput per process             =   76611.81 kB/sec
>       Max throughput per process             =  120246.17 kB/sec
>       Avg throughput per process             =  103345.14 kB/sec
>       Min xfer                     =  334848.00 kB
>       CPU Utilization: Wall time    5.527    CPU time 4.519    CPU utilization  81.76 %
>
>
>       Children see throughput for 16 rewriters     = 1576915.59 kB/sec
>       Parent sees throughput for 16 rewriters     = 967602.80 kB/sec
>       Min throughput per process             =   89775.03 kB/sec
>       Max throughput per process             =  109601.74 kB/sec
>       Avg throughput per process             =   98557.22 kB/sec
>       Min xfer                     =  430080.00 kB
>       CPU utilization: Wall time    4.791    CPU time 4.586    CPU utilization  95.72 %
>
>
>       Children see throughput for 16 readers         = 15425965.56 kB/sec
>       Parent sees throughput for 16 readers         = 15395943.28 kB/sec
>       Min throughput per process             =  620682.69 kB/sec
>       Max throughput per process             = 1272234.88 kB/sec
>       Avg throughput per process             =  964122.85 kB/sec
>       Min xfer                     =  256000.00 kB
>       CPU utilization: Wall time    0.413    CPU time 6.589    CPU utilization 1595.50 %
>
>
>       Children see throughput for 16 re-readers     = 13513061.75 kB/sec
>       Parent sees throughput for 16 re-readers     = 13494052.43 kB/sec
>       Min throughput per process             =  639264.56 kB/sec
>       Max throughput per process             =  980692.50 kB/sec
>       Avg throughput per process             =  844566.36 kB/sec
>       Min xfer                     =  342016.00 kB
>       CPU utilization: Wall time    0.535    CPU time 8.523    CPU utilization 1592.41 %
>
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss at list.zfsonlinux.org
> http://list.zfsonlinux.org/cgi-bin/mailman/listinfo/zfs-discuss
>
-- 
Sent from a "phone".