[zfs-discuss] Unstable ZFS setup on big-ass setup

Sander Smeenk ssmeenk at freshdot.net
Mon Aug 5 10:16:29 EDT 2013

Quoting Sander Klein (roedie at roedie.nl):

> After some testing I found out that the complete pool was faster
> than using some SSD's as ZIL. While pushing a lot of data the load
> of the system also became quite high. I found out that disabling
> sync was the remedy for this. We could push more data into the pool
> and the load wouldn't become high. And since I don't mind losing up
> to the last 5 sec's I decided to disable it.

OK. I don't care about a little loss either, so I disabled it.
Haven't had the time yet to check what this has done for my system.
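For anyone following along, the property in question is `sync`. A minimal
sketch, assuming the pool is named `backup` as in the status output further
down:

```shell
# Disable synchronous write semantics on the whole pool. Acknowledged
# writes from roughly the last transaction-group interval (~5 s) can be
# lost on power failure or crash.
zfs set sync=disabled backup

# Verify the property took effect
zfs get sync backup
```

Note this applies pool-wide unless overridden per dataset; it can be set
back with `zfs set sync=standard backup` at any time.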

> >I'm running a 26.5T pool, provided by two storage servers as two iSCSI
> >LUNs (the pool is a mirror vdev over these two LUNs). The server has
> >two 8-core CPUs (16 cores, 32 threads) with 192GB of memory.
> Could it be that the connection between your ZFS server and the
> storage server becomes saturated?

No, 10Gbit/s links to both. No way they are full.

> Am I right that you have 2 big ass vdev's which you mirror? If so, you
> have some kind of hardware raid on the two storage servers? If that's
> the case I would export every disk as an iscsi device and create
> multiple mirrors in one pool using zfs.

I've had that tip before. Currently the storage servers do the RAID, and
offer one LUN each to the ZFS box:

| NAME                                      STATE     READ WRITE CKSUM
| backup                                    ONLINE       0     0 0
|  mirror-0                                 ONLINE       0     0 0
|   scsi-3600000e00d1100000011239000000000  ONLINE       0     0 0
|   scsi-3600000e00d1100000011228d00000000  ONLINE       0     0 0

Although I understand this would mainly affect the performance of the
pool, as ZFS is better at scheduling and grouping writes when all the
individual disks are handed to it directly. (the zfs_vdev_max_pending
tunable comes to mind?)
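To make the suggested layout concrete: instead of one mirror over two
hardware-RAID LUNs, each disk would be exported as its own iSCSI device and
the pool built from many small mirrors, each spanning both storage servers.
A hypothetical sketch; the scsi-* device names below are placeholders, not
real identifiers:

```shell
# One mirror vdev per disk pair, each leg on a different storage server.
# ZFS then stripes across the mirrors and schedules I/O per disk.
zpool create backup \
  mirror scsi-<server1-disk1> scsi-<server2-disk1> \
  mirror scsi-<server1-disk2> scsi-<server2-disk2> \
  mirror scsi-<server1-disk3> scsi-<server2-disk3>
```

The trade-off is that the iSCSI layer then carries one session per disk
instead of one per LUN, but ZFS gains per-disk queueing and can repair from
its own checksums rather than trusting the remote RAID controllers.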

| Did Noah keep his bees in archives?
| 4096R/20CC6CD2 - 6D40 1A20 B9AA 87D4 84C7  FBD6 F3A9 9442 20CC 6CD2
