[zfs-discuss] ZFS on Large SSD Arrays

Doug Dumitru dougdumitruredirect at gmail.com
Tue Oct 29 18:12:26 EDT 2013



On Tuesday, October 29, 2013 2:45:43 PM UTC-7, Niels de Carpentier wrote:
>
> > 
> > ashift is at 12 (4k).  I re-ran the test with logbias set to throughput
> > for the zvol and it had no impact.
>
> Weird. Another thing you can try is to set zil_slog_limit very small (I 
> don't know if 0 is allowed). That should prevent double writing of the 
> data. 
>

The parameter was accepted in /sys/.../parameters, but the test results
remained the same.
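
(For reference, a minimal sketch of the two knobs discussed above, logbias and
zil_slog_limit, assuming the stock ZoL sysfs path /sys/module/zfs/parameters
and a hypothetical zvol name tank/vol0; the 4096-byte value is only meant to
illustrate "very small":)

    # set logbias on the zvol and shrink zil_slog_limit at runtime
    zfs set logbias=throughput tank/vol0
    echo 4096 > /sys/module/zfs/parameters/zil_slog_limit
    # confirm the new value was accepted
    cat /sys/module/zfs/parameters/zil_slog_limit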
 

> > 
>
> > I expect 4x redundancy, but not 4x space usage.  This is supposed to be
> > "parity", so it should be data+3.
>
> If you write a 4kB block, it also needs to write 3 parity blocks of 4kB 
> (ashift = 12 means a minimum blocksize of 4kB). If you set ashift=9, the 
> overhead will be lower. ( 8 512B data blocks and 3 512B parity blocks). A 
> large raidz vdev with a high ashift and small block size is not a good 
> combination. 
>
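(Working through Niels' numbers for a single 4 kB write on raidz3, to make the
space overhead concrete:)

    ashift=12:  1 x 4 kB data  + 3 x 4 kB parity  = 16 kB on disk   (4x the data)
    ashift=9:   8 x 512 B data + 3 x 512 B parity = 5.5 kB on disk  (~1.4x the data)
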
> > 
> > I set this to 64 (from 10) in /sys/module/zfs/parameters and it "seemed"
> > to accept a new value.  No change in write performance or underlying
> > writes.  Oh well.
>
> What is the CPU load during the test? 
>
Pretty heavy, but no single thread is 100% saturated.  Here is a 10-second
'htop' capture:


  0  [|||||||||||||||||||||||||||||||||||||||            68.6%]     6  [||||||||||||||||||||||||||||                       46.9%]
  1  [|||||||||||||||||||||||||||||||||||||              64.8%]     7  [|||||||||||||||||||||||||||||                      48.6%]
  2  [|||||||||||||||||||||||||||||||||||||||            67.7%]     8  [|||||||||||||||||||||||||                          43.3%]
  3  [||||||||||||||||||||||||||||||||||||||             65.3%]     9  [||||||||||||||||||||||||||                         43.6%]
  4  [|||||||||||||||||||||||||||||||||                  57.4%]     10 [||||||||||||||||||||||||||||                       48.6%]
  5  [|||||||||||||||||||||||||||||||                    53.2%]     11 [|||||||||||||||||||||||||||||                      48.7%]
  Mem[|||||||||||||||||||||||||||||||||||||||||||56710/64401MB]     Tasks: 44, 4 thr, 269 kthr; 16 running
  Swp[                                                 0/647MB]     Load average: 14.51 3.33 1.14
                                                                    Uptime: 21:12:32

  PID USER      PRI  NI  VIRT   RES   SHR S CPU% MEM%   TIME+  Command
16578 root        0 -20     0     0     0 D 34.0  0.0  3:05.83 txg_sync
16474 root       39  19     0     0     0 S 21.0  0.0  2:22.25 z_wr_iss/4
16481 root       39  19     0     0     0 S 20.0  0.0  2:32.68 z_wr_iss/11
16477 root       39  19     0     0     0 S 19.0  0.0  2:30.06 z_wr_iss/7
16480 root       39  19     0     0     0 S 18.0  0.0  2:35.65 z_wr_iss/10
16478 root       39  19     0     0     0 S 17.0  0.0  2:30.42 z_wr_iss/8
16479 root       39  19     0     0     0 S 15.0  0.0  2:33.96 z_wr_iss/9
16496 root       39  19     0     0     0 D 15.0  0.0  1:46.70 z_wr_int/9
16497 root       39  19     0     0     0 D 15.0  0.0  1:46.01 z_wr_int/10
16500 root       39  19     0     0     0 R 15.0  0.0  1:22.57 z_wr_int/13
16472 root       39  19     0     0     0 S 14.0  0.0  1:58.21 z_wr_iss/2
16475 root       39  19     0     0     0 S 14.0  0.0  2:21.05 z_wr_iss/5
16490 root       39  19     0     0     0 R 14.0  0.0  1:21.04 z_wr_int/3
16493 root       39  19     0     0     0 R 14.0  0.0  1:47.74 z_wr_int/6
16495 root       39  19     0     0     0 D 14.0  0.0  1:52.40 z_wr_int/8
16498 root       39  19     0     0     0 D 14.0  0.0  1:43.26 z_wr_int/11
16476 root       39  19     0     0     0 S 13.0  0.0  2:32.47 z_wr_iss/6
16488 root       39  19     0     0     0 D 13.0  0.0  1:22.40 z_wr_int/1
16492 root       39  19     0     0     0 D 13.0  0.0  1:40.57 z_wr_int/5
16494 root       39  19     0     0     0 R 13.0  0.0  1:50.35 z_wr_int/7
16471 root       39  19     0     0     0 S 12.0  0.0  1:57.12 z_wr_iss/1
16473 root       39  19     0     0     0 S 12.0  0.0  2:00.79 z_wr_iss/3
16489 root       39  19     0     0     0 D 12.0  0.0  1:22.78 z_wr_int/2
16499 root       39  19     0     0     0 R 12.0  0.0  1:14.59 z_wr_int/12
16501 root       39  19     0     0     0 D 12.0  0.0  1:23.20 z_wr_int/14
16502 root       39  19     0     0     0 R 12.0  0.0  1:21.28 z_wr_int/15
16487 root       39  19     0     0     0 D 11.0  0.0  1:15.60 z_wr_int/0
16470 root       39  19     0     0     0 S 10.0  0.0  1:58.60 z_wr_iss/0
16491 root       39  19     0     0     0 R 10.0  0.0  1:38.90 z_wr_int/4
13920 root       39  19     0     0     0 S  6.0  0.0  2:30.65 spl_kmem_cache/
13942 root       39  19     0     0     0 D  5.0  0.0  0:28.65 zvol/6
13943 root       39  19     0     0     0 S  5.0  0.0  0:28.46 zvol/7
13944 root       39  19     0     0     0 D  5.0  0.0  0:30.19 zvol/8
13945 root       39  19     0     0     0 D  5.0  0.0  0:30.01 zvol/9
13947 root       39  19     0     0     0 S  5.0  0.0  0:29.42 zvol/11
13956 root       39  19     0     0     0 S  5.0  0.0  0:30.21 zvol/20
13957 root       39  19     0     0     0 S  5.0  0.0  0:29.58 zvol/21
 
The calling benchmark program is not even in the list.

> Niels
Doug Dumitru 
