zpool scrub speed

devsk devsku at gmail.com
Sat May 7 18:09:54 EDT 2011


Ok, the scrub actually killed my box. While it was about 80% done, the
machine locked up hard. I have no idea what happened: the machine had
plenty of free memory and was mostly idle apart from the scrub, and
nothing showed up in /var/log/messages after I booted back up.

I think we still have some work to do with respect to lockups, but
without any info in /var/log/messages and no way to debug the system
directly, I think we are pretty much dependent on the zfs stress
testing suite, which we should make an effort to get going.

The good thing is that the scrub continued from where it left off and
finished faster than before (86 mins previously vs. 73 mins this time:
43 mins before the lockup plus 30 mins after). So it looks like scrub
speed is also determined by how fragmented memory is, i.e. on a
freshly booted box things are generally faster, and as time goes on
and zfs allocates and arc_reclaim repeatedly reclaims memory, things
begin to slow down.
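As a sanity check, the ETA figure in the `zpool status` output quoted
below (182G scanned out of 1.42T at 365M/s, "0h59m to go") can be
reproduced with simple arithmetic. This is just a sketch assuming
zpool reports binary units (GiB/TiB/MiB), which matches the numbers:

```python
# Figures taken from the quoted zpool status output; binary units assumed.
GIB = 1024 ** 3
MIB = 1024 ** 2

total_bytes = 1.42 * 1024 * GIB   # 1.42T of data to scan
scanned_bytes = 182 * GIB         # 182G scanned so far
rate = 365 * MIB                  # 365M/s current scan rate

eta_seconds = (total_bytes - scanned_bytes) / rate
h, m = divmod(int(eta_seconds // 60), 60)
print(f"{h}h{m}m to go")          # -> 0h59m to go
```

The remaining-data-over-current-rate estimate is optimistic, of
course, since (as noted below) the rate tends to drop as the scrub
progresses.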

-devsk


On May 7, 8:28 am, devsk <dev... at gmail.com> wrote:
> Just throwing in my numbers for RAIDZ2 in there. This is a zfs-fuse
> pool which I upgraded to version 26 today. The numbers seem better
> with version 26 compared to 23 (I had compared numbers for version 23
> for zfs-fuse and native zfs recently).
>
> # zpool status -v
>   pool: mydata
>  state: ONLINE
>    see: http://www.sun.com/msg/ZFS-8000-EY
>  scan: scrub in progress since Sat May  7 08:07:12 2011
>     182G scanned out of 1.42T at 365M/s, 0h59m to go
>     0 repaired, 12.44% done
>
> This will slow down I guess as time passes. It has typically taken
> about 3 hrs in the past.
>
> -devsk
>
> On May 6, 8:29 am, "Jason J. W. Williams" <jasonjwwilli... at gmail.com>
> wrote:
>
> > Depends on how fragmented the volume is and how much data is in the pool (it'll only scrub the amount of data in the pool, as opposed to scrubbing free space). Had to scrub an onv_131 system the other night... 22x 7200rpm disks with 247GB of data took about 19 mins.
>
> > I've seen a 14-drive (7200rpm) pool with 1.8TB of data (raid-z2) take over a day to scrub due to failing disks. If you start to see lots of checksum errors on the drives, you'll see a long scrub.
>
> > -J
>
> > On May 6, 2011, at 2:07, Gordan Bobic <gordan.bo... at gmail.com> wrote:
>
> > > What is the general performance expected from zfs scrub? I would have expected it to be in the region of the combined read speed of the data-bearing disks (i.e. excluding the parity disks). But my 13-disk array only gets about 170MB/s on a scrub. Is this normal, i.e. explained by the disk seeking required?
>
> > > Gordan
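The expectation in the question above can be put into numbers. As a
rough sketch, assuming the 13-disk array is raidz2 (like the other
pools in this thread) and assuming ~100 MB/s sequential throughput per
7200rpm disk (an assumed figure, not a measurement):

```python
# Back-of-the-envelope: ideal streaming scrub rate vs. the observed one.
# Both the raidz2 layout and the per-disk rate are assumptions here.
data_disks = 13 - 2            # raidz2: two parity disks don't add read bandwidth
per_disk_mb_s = 100            # assumed sequential rate for a 7200rpm disk
ideal_mb_s = data_disks * per_disk_mb_s

observed_mb_s = 170            # figure from the question
fraction = observed_mb_s / ideal_mb_s
print(ideal_mb_s, round(fraction, 2))   # -> 1100 0.15
```

Getting only ~15% of the ideal streaming rate is consistent with a
seek-bound scrub: scrub walks blocks in pool order rather than disk
order, so on a fragmented pool the disks spend most of their time
seeking instead of streaming.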
