[zfs-discuss] ZFSoL Performance

Gordan Bobic gordan.bobic at gmail.com
Wed Feb 3 15:18:58 EST 2016


So is your performance low only while the scrub is running in the
background?
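
For example (assuming the pool really is the "tank" shown in your output),
something like this will confirm whether a scrub is in progress, and you
can stop it temporarily to compare performance (note that it starts over
from the beginning if you scrub again later):

  # Is a scrub running, and how far along is it?
  zpool status tank | grep -A 2 scan

  # Stop the scrub, re-test, then kick it off again when convenient
  zpool scrub -s tank
  zpool scrub tank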

On Wed, Feb 3, 2016 at 8:16 PM, Callahan, Tom via zfs-discuss <
zfs-discuss at list.zfsonlinux.org> wrote:

> Currently, the system is doing a scrub, which is why the reads are so high.
>
> --
> Tom Callahan
> Director
> Infrastructure and Operations
> 410-229-1361  Tel
> 410-588-7605  Mobile
> CallahanT at TESSCO.com
> www.tessco.com
>
>
> -----Original Message-----
> From: Hendrik Visage [mailto:hvjunk at gmail.com]
> Sent: Wednesday, February 03, 2016 3:11 PM
> To: zfs-discuss at list.zfsonlinux.org; Callahan, Tom <CallahanT at tessco.com>
> Subject: Re: [zfs-discuss] ZFSoL Performance
>
> On Wed, Feb 3, 2016 at 8:03 PM, Callahan, Tom via zfs-discuss <
> zfs-discuss at list.zfsonlinux.org> wrote:
> > I’m running ZFSoL on an Ubuntu Precise server. The hardware is a
> > Supermicro system with 24 drives in the front bays (2TB WD Caviar
> > Black, 7200RPM drives). I’m seeing very slow performance, and need
> > some assistance in determining whether I should expect this
> > performance or whether something is wrong.
> >
> >
> >
> > Here is my pool setup:
> >
> >                                                  capacity     operations    bandwidth
> > pool                                           alloc   free   read  write   read  write
> > ---------------------------------------------  -----  -----  -----  -----  -----  -----
> > tank                                           13.2T  30.3T     21     84   818K   356K
> >   raidz1                                       1.65T  3.79T      2     10   102K  44.9K
> >     ata-WDC_WD2002FAEX-007BA0_WD-WMAY02575745      -      -      1      4  50.5K  23.2K
> >     ata-WDC_WD2002FAEX-007BA0_WD-WMAY02760460      -      -      1      4  50.5K  23.2K
> >     ata-WDC_WD2002FAEX-007BA0_WD-WMAY02803497      -      -      1      4  50.5K  23.2K
> >   raidz1                                       1.65T  3.79T      2     10   102K  44.7K
> >     ata-WDC_WD2002FAEX-007BA0_WD-WMAY02866345      -      -      1      4  50.6K  23.0K
> >     ata-WDC_WD2002FAEX-007BA0_WD-WMAY02992504      -      -      1      4  50.6K  23.1K
> >     ata-WDC_WD2002FAEX-007BA0_WD-WMAY02993103      -      -      1      4  50.6K  23.1K
> >   raidz1                                       1.65T  3.79T      2     10   102K  44.6K
> >     ata-WDC_WD2002FAEX-007BA0_WD-WMAY03085661      -      -      1      4  50.4K  23.0K
> >     ata-WDC_WD2002FAEX-007BA0_WD-WMAY03091649      -      -      1      4  50.4K  23.0K
> >     ata-WDC_WD2002FAEX-007BA0_WD-WMAY03291843      -      -      1      4  50.4K  23.0K
> >   raidz1                                       1.65T  3.79T      2     10   103K  44.8K
> >     ata-WDC_WD2002FAEX-007BA0_WD-WMAY03333265      -      -      1      4  50.8K  23.1K
> >     ata-WDC_WD2002FAEX-007BA0_WD-WMAY03336708      -      -      1      4  50.8K  23.1K
> >     ata-WDC_WD2002FAEX-007BA0_WD-WMAY03358213      -      -      1      4  50.8K  23.1K
> >   raidz1                                       1.65T  3.79T      2     10   103K  44.3K
> >     ata-WDC_WD2002FAEX-007BA0_WD-WMAY03360053      -      -      1      4  50.8K  22.8K
> >     ata-WDC_WD2002FAEX-007BA0_WD-WMAY03360479      -      -      1      4  50.8K  22.8K
> >     ata-WDC_WD2002FAEX-007BA0_WD-WMAY03373789      -      -      1      4  50.8K  22.8K
> >   raidz1                                       1.65T  3.79T      2     10   102K  43.9K
> >     ata-WDC_WD2002FAEX-007BA0_WD-WMAY03374608      -      -      1      4  50.6K  22.7K
> >     ata-WDC_WD2002FAEX-007BA0_WD-WMAY03375844      -      -      1      4  50.6K  22.7K
> >     ata-WDC_WD2002FAEX-007BA0_WD-WMAY03376009      -      -      1      4  50.6K  22.7K
> >   raidz1                                       1.65T  3.79T      2     10   102K  43.9K
> >     ata-WDC_WD2002FAEX-007BA0_WD-WMAY03376211      -      -      1      4  50.4K  22.7K
> >     ata-WDC_WD2002FAEX-007BA0_WD-WMAY03381316      -      -      1      4  50.4K  22.7K
> >     ata-WDC_WD2002FAEX-007BA0_WD-WMAY03395467      -      -      1      4  50.4K  22.7K
> >   raidz1                                       1.65T  3.79T      2     10   101K  44.7K
> >     ata-WDC_WD2002FAEX-007BA0_WD-WMAY03414868      -      -      1      4  50.1K  23.1K
> >     ata-WDC_WD2002FAEX-007BA0_WD-WMAY03423352      -      -      1      4  50.1K  23.1K
> >     ata-WDC_WD2002FAEX-007BA0_WD-WMAY03417590      -      -      1      4  50.1K  23.1K
> > ---------------------------------------------  -----  -----  -----  -----  -----  -----
> Hi there,
>
>  Being a newbie to this list, I'm just wondering: why 8x 3-disk RAID-Z1
> vdevs? I was under the impression that RAID-Z1's sweet spot is around
> 5 disks per vdev, isn't it?
>  (As with RAID5, the point of diminishing returns.)
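>  Back-of-the-envelope, and ignoring metadata and padding overhead, the
> space efficiency of the two layouts works out to:
>
>    3-disk RAID-Z1: 2 data + 1 parity  ->  2/3 ~= 67% of raw space usable
>    5-disk RAID-Z1: 4 data + 1 parity  ->  4/5  = 80% of raw space usable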
>
> > Deleting large single files (3-4GB) can take upwards of 10 minutes to
> > complete, which seems very poor. What info can I provide to help
> > diagnose this further?
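> For what it's worth, timing one such delete and capturing the dataset
> properties would be a useful starting point, e.g. (the path and dataset
> name below are just placeholders):
>
>   time rm /tank/some/large-file
>   zfs get recordsize,compression,dedup,atime tank
>   zpool list -v tank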
>
>
> On Wed, Feb 3, 2016 at 9:57 PM, Callahan, Tom via zfs-discuss <
> zfs-discuss at list.zfsonlinux.org> wrote:
> > Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
> > sda               0.00     0.00   71.50    0.00     4.29     0.00   122.91     1.34   18.80   18.80    0.00   7.61  54.40
> > sde               0.00     0.00   66.50    0.00     4.04     0.00   124.39     0.97   14.56   14.56    0.00   7.13  47.40
> > sdd               0.00     0.00   71.00    0.00     4.29     0.00   123.77     1.34   18.90   18.90    0.00   7.63  54.20
> > sdr               0.00     0.00   73.00    0.00     4.41     0.00   123.81     1.55   21.01   21.01    0.00   7.78  56.80
> > sdi               0.00     0.00   71.50    0.00     4.29     0.00   122.93     1.97   27.69   27.69    0.00   9.96  71.20
> > sdm               0.00     0.00   67.50    0.00     4.04     0.00   122.58     1.00   14.79   14.79    0.00   7.41  50.00
> > sdo               0.00     0.00   67.00    0.00     3.94     0.00   120.51     1.18   17.82   17.82    0.00   8.21  55.00
>
> Looking at the iostat output, you have a high read rate, so there are
> no other random reads taking place at the same time?
> What is your recordsize set to?
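> For example, assuming all the datasets live under the pool shown above,
> this should list it per dataset:
>
>   zfs get -r recordsize tank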
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss at list.zfsonlinux.org
> http://list.zfsonlinux.org/cgi-bin/mailman/listinfo/zfs-discuss
>