ZFS on Gentoo Linux

sozo1976 sozo1976 at gmail.com
Thu Apr 7 09:30:06 EDT 2011


On Apr 7, 12:57 pm, Nils Bausch <nils.bau... at googlemail.com> wrote:
> Corrections:
> Reading is blazing fast; writing, not so much.
> I am accessing my ZFS pool via AFP/Netatalk, and reading over a 1 Gbit/s
> network connection tops out at 92 MiB/s. The write speed differs on
> filesystems with compression or deduplication enabled, varying from
> 3 MiB/s up to 70 MiB/s, sometimes stalling and continuing a bit later.
> Is there a way to improve this, e.g. by activating specific compression
> algorithms in the kernel or allowing more cache in a config file?
>
> cheers


What does 'iostat -x 5' show for disk and system utilization while you
run this workload?
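
On the "more cache in a config file" part of the question: in ZFS on
Linux the ARC size is a module parameter rather than a filesystem
setting. A rough sketch, assuming the zfs module is loaded and using an
illustrative 8 GiB cap (pick a value that fits your RAM):

# /etc/modprobe.d/zfs.conf -- applied at module load time
options zfs zfs_arc_max=8589934592

# or adjust it on a running system (same illustrative value):
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max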

I just put together a machine with a P8P67 motherboard, Core i5-2500K
CPU, 16GB RAM and 5x Hitachi 7K3000 2TB drives for home use. It runs
Debian Wheezy in VMware ESXi, with raw device mappings from the Intel
ICH10 onboard controller.

root at StorageBox:/usr/local/crashplan/log# zfs get compression tank/home
NAME       PROPERTY     VALUE     SOURCE
tank/home  compression  on        local
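
As for "activating specific compression algorithms in the kernel": the
algorithm is a per-dataset ZFS property, not a kernel option, so it can
be switched on the fly. A hedged example against my dataset above
(substitute your own; gzip trades more CPU per write for a better ratio
than the default lzjb):

zfs set compression=lzjb tank/home     # lightweight default
zfs set compression=gzip-6 tank/home   # better ratio, heavier on CPU
zfs get compression tank/home          # verify the active setting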


fio --filename=/zfs/home/file1 --rw=write --bs=1M --size=10G --numjobs=1 --iodepth=8 --runtime=160 --group_reporting --name=file1
file1: (g=0): rw=write, bs=1M-1M/1M-1M, ioengine=sync, iodepth=8
fio 1.50
Starting 1 process
Jobs: 1 (f=1): [W] [100.0% done] [0K/376.5M /s] [0 /367  iops] [eta 00m:00s]
file1: (groupid=0, jobs=1): err= 0: pid=3338
  write: io=10240MB, bw=371796KB/s, iops=363 , runt= 28203msec
    clat (usec): min=186 , max=1080.2K, avg=2752.45, stdev=42359.99
     lat (usec): min=186 , max=1080.2K, avg=2752.56, stdev=42359.99
    bw (KB/s) : min= 9102, max=2185216, per=108.02%, avg=401614.17, stdev=362921.34
  cpu          : usr=0.07%, sys=11.32%, ctx=1833, majf=0, minf=28
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w/d: total=0/10240/0, short=0/0/0
     lat (usec): 250=55.67%, 500=32.29%, 750=2.51%, 1000=4.83%
     lat (msec): 2=3.21%, 4=0.24%, 10=0.53%, 20=0.24%, 50=0.03%
     lat (msec): 100=0.08%, 250=0.05%, 500=0.07%, 750=0.09%, 1000=0.12%
     lat (msec): 2000=0.03%

Run status group 0 (all jobs):
  WRITE: io=10240MB, aggrb=371795KB/s, minb=380719KB/s, maxb=380719KB/s, mint=28203msec, maxt=28203msec
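
For comparison with the read numbers, a read-direction run of the same
job is just the mirrored command (untested here; note the file written
above may still sit in the ARC, which would inflate the result):

fio --filename=/zfs/home/file1 --rw=read --bs=1M --size=10G --numjobs=1 --iodepth=8 --runtime=160 --group_reporting --name=read1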


iostat -x 5 output:

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.05    0.00   46.83    0.00    0.00   53.12

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s     wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.00     0.00    0.00    0.20     0.00      0.80     8.00     0.00    0.00    0.00    0.00   0.00   0.00
sdd               0.00   556.20    0.80  298.80    26.20 101853.80   680.11     2.68    8.97   18.00    8.95   2.92  87.52
sdc               0.00   561.80    0.80  296.20    26.20 101969.70   686.84     2.94    9.86   13.00    9.86   3.30  98.08
sdf               0.00   563.80    0.40  297.60    25.60 103031.60   691.66     2.78    9.37   48.00    9.32   3.09  92.16
sdb               0.00   565.20    0.80  298.20    26.20 103057.60   689.52     2.65    8.88   21.00    8.85   2.95  88.08
sde               0.00   565.00    0.80  300.00    26.20 103312.20   687.09     2.75    9.16    7.00    9.17   2.99  89.92
sdg               0.00    14.00    0.00    6.00     0.00   2204.80   734.93     0.06    9.73    0.00    9.73   5.20   3.12
zd0               0.00     0.00    0.00    0.00     0.00      0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00



I can easily saturate the 1Gb connection for both reads and writes.
Even somewhat random writes (multiple streams with high average IO
sizes) produce similar write performance.
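
If the stalls you see line up with deduplication, one suspect is the
dedup table outgrowing the ARC rather than compression itself. A quick,
hedged way to watch the ARC while writing (this kstat path is where ZFS
on Linux exposes its counters; field names can differ between versions):

awk '$1 == "size" || $1 == "c_max" {print $1, $3}' /proc/spl/kstat/zfs/arcstats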


