spl/zfs-0.6.0-rc3

Manuel Amador (Rudd-O) rudd-o at rudd-o.com
Sun Apr 10 16:05:53 EDT 2011


Not only that: find on a large directory no longer hangs the machine quickly,
but rsync still does.  I will continue investigating this issue.
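
When it locks up again I plan to capture the state of the stuck tasks for the bug
report. A minimal sketch of what I intend to run, assuming SysRq is enabled and a
console is still responsive (the output filename is just an example):

    # enable all SysRq functions, then dump blocked (D-state) task stacks to the kernel log
    echo 1 > /proc/sys/kernel/sysrq
    echo w > /proc/sysrq-trigger
    dmesg | tail -n 200 > rsync-hang-stacks.txt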


On Sunday, April 10, 2011, Steve Costaras wrote:
> Have been trying this since last night with the PPA install under Ubuntu
> 10.04. Testing done on 6 drives (ST2000DL003-9VT1) in a raidz2
> configuration on a Supermicro X8DTH-6F w/ 2x X5680 CPUs @ 24GB RAM.
> 
> root at loki:/opt/bacula/var/bacula/spool# zpool history mpool
> History for 'mpool':
> 2011-04-09.21:18:48 zpool create -f -O atime=off -O mountpoint=none -O
> canmount=off -o version=23 mpool raidz2
> /dev/disk/by-path/pci-0000:84:00.0-scsi-0:0:15:0
> /dev/disk/by-path/pci-0000:84:00.0-scsi-0:0:15:1
> /dev/disk/by-path/pci-0000:84:00.0-scsi-0:0:15:2
> /dev/disk/by-path/pci-0000:84:00.0-scsi-0:0:15:3
> /dev/disk/by-path/pci-0000:84:00.0-scsi-0:0:15:4
> /dev/disk/by-path/pci-0000:84:00.0-scsi-0:0:15:5
> 2011-04-09.21:19:47 zfs create -o mountpoint=/mnt/test mpool/test
> 2011-04-09.21:19:50 zfs create -o mountpoint=/mnt/test/pub mpool/test/pub
> 
> Test file was created with dd if=/dev/urandom of=test0.dat bs=1M count=32768 on
> another array (md RAID-0 of 5 ST32000444SS drives).
> 
> I know performance is not the main goal at this stage; however, I have
> noticed some things. Scrub performance has greatly improved from what I
> can see (~250MB/s), but direct I/O & copy performance is still way down:
> 
> dd if=test0.dat of=/mnt/test/test0.dat bs=1M
> 
>                                        capacity     operations    bandwidth
> pool                                  alloc   free   read  write   read  write
> ----------------------------------    -----  -----  -----  -----  -----  -----
> mpool                                 59.5G  10.8T      0    185      0  19.2M
>   raidz2                              59.5G  10.8T      0    185      0  19.2M
>     pci-0000:84:00.0-scsi-0:0:15:0        -      -      0     58      0  4.80M
>     pci-0000:84:00.0-scsi-0:0:15:1        -      -      0     57      0  4.80M
>     pci-0000:84:00.0-scsi-0:0:15:2        -      -      0     58      0  4.81M
>     pci-0000:84:00.0-scsi-0:0:15:3        -      -      0     56      0  4.80M
>     pci-0000:84:00.0-scsi-0:0:15:4        -      -      0     55      0  4.80M
>     pci-0000:84:00.0-scsi-0:0:15:5        -      -      0     55      0  4.80M
> ----------------------------------    -----  -----  -----  -----  -----  -----
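
(A quick sanity check on the table above: the six leaf disks sum to roughly
6 x 4.8 = 28.8 MB/s while the pool row shows 19.2 MB/s, which matches the
4-of-6 data ratio you would expect from raidz2 parity overhead. So the
accounting looks consistent; it is the absolute per-disk rate of ~4.8 MB/s
that is the problem.)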
> 
> avg-cpu: %user %nice %system %iowait %steal %idle
>  0.05 0.00 1.37 0.40 0.00 98.18
> 
> Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
> sdc 0.14 0.00 0.14 0.00 0.04 0.00 512.00 0.00 30.00 30.00 0.43
> sdb 0.00 0.00 0.29 0.00 0.04 0.00 256.00 0.00 15.00 10.00 0.29
> sdd 0.00 0.00 0.29 0.00 0.04 0.00 256.00 0.01 20.00 10.00 0.29
> sda 0.00 0.00 0.29 0.00 0.04 0.00 256.00 0.00 15.00 10.00 0.29
> sdg 0.14 0.00 0.14 0.00 0.04 0.00 512.00 0.00 20.00 20.00 0.29
> sdh 0.00 0.00 0.29 0.00 0.04 0.00 256.00 0.01 20.00 10.00 0.29
> sdf 0.14 0.00 0.29 0.00 0.04 0.00 256.00 0.01 20.00 10.00 0.29
> sdj 5.29 0.00 30.57 0.00 3.66 0.00 245.16 0.07 2.20 1.12 3.43
> sdk 12.14 0.00 27.71 0.00 3.65 0.00 269.94 0.08 2.94 1.49 4.14
> sdi 0.43 0.00 2.14 0.00 0.57 0.00 546.13 0.02 8.00 1.33 0.29
> sdl 6.00 0.00 25.71 0.00 3.67 0.00 292.22 0.09 3.44 1.67 4.29
> sdn 10.71 0.00 26.57 0.00 3.68 0.00 283.31 0.11 4.19 1.88 5.00
> sdm 6.00 0.00 26.57 0.00 3.67 0.00 282.97 0.11 4.09 1.72 4.57
> sdo 0.00 5.71 0.00 55.00 0.00 4.81 179.15 4.59 84.31 16.75 92.14
> sdp 0.00 6.14 0.00 53.57 0.00 4.81 183.90 3.69 67.71 14.59 78.14
> sdr 0.00 6.14 0.00 51.43 0.00 4.81 191.61 3.24 61.03 13.61 70.00
> sds 0.00 8.29 0.00 49.14 0.00 4.81 200.45 3.17 62.65 13.63 67.00
> sdt 0.00 7.57 0.00 48.86 0.00 4.81 201.55 2.88 57.37 13.42 65.57
> sdq 0.00 8.14 0.00 51.14 0.00 4.81 192.67 3.21 60.20 13.44 68.71
> sde 0.14 0.00 0.14 0.00 0.04 0.00 512.00 0.00 20.00 20.00 0.29
> md0 0.00 0.00 177.29 0.00 18.33 0.00 211.73 0.00 0.00 0.00 0.00
> 
> 
> As you can see, drive utilization is well over the ~40% 'knee' for SATA
> drives even at this low performance level. Possible issue w/ ashift?
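
For what it's worth, one way to see what the drives are advertising (and hence
why ZFS picks ashift=9) is to check the reported sector sizes; sdo here is just
one pool member used as an example:

    # what the kernel thinks the sector sizes are
    cat /sys/block/sdo/queue/logical_block_size
    cat /sys/block/sdo/queue/physical_block_size

    # what the drive itself reports (if hdparm is new enough to show it)
    hdparm -I /dev/sdo | grep -i 'sector size'

The ST2000DL003 is, as far as I know, a 4K-sector drive with 512-byte
emulation, so if it reports 512 for both, then ashift=9 is the "correct"
automatic choice and only an explicit override would give 4K alignment.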
> 
> ---------------
> 
> dd if=/mnt/test/test0.dat of=/dev/null bs=1M
> 
>                                        capacity     operations    bandwidth
> pool                                  alloc   free   read  write   read  write
> ----------------------------------    -----  -----  -----  -----  -----  -----
> mpool                                  103G  10.8T  3.34K      0   427M      0
>   raidz2                               103G  10.8T  3.34K      0   427M      0
>     pci-0000:84:00.0-scsi-0:0:15:0        -      -      0      0      0      0
>     pci-0000:84:00.0-scsi-0:0:15:1        -      -    854      0   107M      0
>     pci-0000:84:00.0-scsi-0:0:15:2        -      -    854      0   107M      0
>     pci-0000:84:00.0-scsi-0:0:15:3        -      -    854      0   107M      0
>     pci-0000:84:00.0-scsi-0:0:15:4        -      -    854      0   107M      0
>     pci-0000:84:00.0-scsi-0:0:15:5        -      -      0      0      0      0
> ----------------------------------    -----  -----  -----  -----  -----  -----
> 
> avg-cpu: %user %nice %system %iowait %steal %idle
>  0.03 0.00 23.89 0.01 0.00 76.06
> 
> Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
> sdc 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
> sdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
> sdd 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
> sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
> sdg 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
> sdh 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
> sdf 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
> sdj 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
> sdk 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
> sdi 0.00 0.00 6.86 0.00 0.86 0.00 256.00 0.20 29.17 2.71 1.86
> sdl 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
> sdn 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
> sdm 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
> sdo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
> sdp 0.00 0.00 847.29 0.00 105.91 0.00 255.99 9.99 11.79 1.18 100.00
> sdr 0.00 0.00 848.57 0.00 106.07 0.00 255.99 9.98 11.77 1.18 100.00
> sds 0.00 0.00 847.86 0.00 105.98 0.00 255.99 9.98 11.77 1.18 100.00
> sdt 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
> sdq 0.00 0.00 847.29 0.00 105.91 0.00 255.99 9.98 11.79 1.18 100.00
> sde 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
> md0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
> 
> Read performance for streaming reads appears much better, at ~100MB/s per data
> drive & ~850 I/Os per drive at 128KiB each, as is to be expected.
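
(The arithmetic checks out: 850 I/Os/s x 128 KiB is about 106 MB/s per drive,
matching the ~106 rMB/s iostat shows, and the four active data drives at
~107 MB/s each account for the 427 MB/s reported at the pool level.)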
> 
> 
> 
> --------------------
> Also, it seems that 0.6.0-rc3 still needs a way to modify the ashift
> parameter for 4K drives, since it's still setting a value of 9 rather than 12:
> 
> zdb -C | grep -B1 -A9 "type: 'raidz'"
>     children[0]:
>         type: 'raidz'
>         id: 0
>         guid: 12951126416098951077
>         nparity: 2
>         metaslab_array: 23
>         metaslab_shift: 36
>         ashift: 9
>         asize: 12002308128768
>         is_log: 0
>         create_txg: 4
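
One likely fix (an assumption on my part about how this will be exposed, not
something rc3 documents) would be an ashift override at pool creation time,
something along the lines of

    zpool create -o ashift=12 mpool raidz2 <disks...>

with the device list as before. Since ashift is fixed when a vdev is created,
the pool would have to be recreated to pick up 4K alignment in any case.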
> 
> --------------
> 
> 
> 
> 
> 
> -----Original Message-----
> From: Brian Behlendorf [mailto:behlendorf1 at llnl.gov]
> Sent: Friday, April 8, 2011 10:09 PM
> To: 'zfs-discuss'
> Subject: spl/zfs-0.6.0-rc3
> 
> The spl/zfs-0.6.0-rc3 release candidate is available. It includes
> numerous bug fixes and performance improvements. Additionally, this
> release is compatible with Linux 2.6.26-2.6.38 kernels. Please give it
> a try and report any issues you encounter.
> 
> Thank you to everyone who provided feedback and worked with me to
> address issues in the last release candidate!
