[zfs-discuss] Replacing multiple drives at once - performance

Gordan Bobic gordan.bobic at gmail.com
Wed Sep 2 05:24:06 EDT 2015

I believe this is the case when you are resilvering multiple disks that
did not start at nearly the same time. If one disk is half resilvered and
you then kick off the second resilver, I believe each of the two will run
at roughly half speed.

If two resilvers get kicked off at essentially the same time (seconds
apart is fine, minutes probably not, depending on the size of your
caches), the second one will be able to fetch all the data it needs from
the ARC, and thus the healthy disks only get hit once (when the first
resilver traverses that data). If you kick off the second resilver a long
time later, when the data the first one fetched has already been churned
out of the caches, it will have to re-read everything from the healthy
disks, hitting them once per resilver.

Or at least that's what I think is happening from a cursory investigation
of such circumstances in the past, but I could be wrong.
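In practice that means issuing both replacements back to back rather than
waiting for the first resilver to finish. Something like the following
(the pool name "tank" and the device names here are made up; substitute
your own):

```shell
# Hypothetical pool and device names -- adjust for your own setup.
# Kick off both replacements back to back, so the second resilver's
# reads are still warm in the ARC from the first one:
zpool replace tank /dev/sda /dev/sde
zpool replace tank /dev/sdb /dev/sdf

# Then watch the resilver progress on both new devices:
zpool status -v tank
```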


On Tue, Sep 1, 2015 at 3:59 PM, Chris Siebenmann via zfs-discuss <
zfs-discuss at list.zfsonlinux.org> wrote:

> > My question is about the performance. Since the resilvering takes a
> > couple of weeks to complete (replacing one drive), would replacing
> > two drives at once be slower or faster than replacing them one after
> > another?
>  Replacing multiple drives at once will generally resilver at
> essentially the same speed as replacing a single drive. Resilvers are
> generally limited by how fast your disks can seek (generally your
> existing disks), not by how much data you can read and write at once.
> (To simplify a lot, resilvers work by walking down the ZFS tree of
> objects, identifying objects that need to be written to the new disks,
> and then writing them there. Walking down the tree requires random IO
> all over your existing disks and that's generally the limiting part.  If
> you have only a little space used, resilvers and scrubs are much faster
> than normal software RAID bulk copying; if you have a lot of space used,
> they can be significantly slower on spinning rust HDs. Pervasive SSDs
> will make this a lot better.)
>         - cks
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss at list.zfsonlinux.org
> http://list.zfsonlinux.org/cgi-bin/mailman/listinfo/zfs-discuss