[zfs-discuss] ZFS on RAID-5 array

Pascal Meunier pmeunier at purdue.edu
Fri Apr 13 13:23:01 EDT 2018


On Fri, 2018-04-13 at 12:31 +0000, Edward Ned Harvey (zfsonlinux) via zfs-discuss
wrote:
> Allow zfs to toggle the disk's cache on (give zfs the whole disk, not just a
> partition). Truly, the dumber the hba, the better. Raw, JBOD, disk access is the
> best for zfs.
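
For anyone searching the archive later, a minimal sketch of that whole-disk approach (the pool name "tank" and the raidz2 layout here are my own assumptions, not from this thread):

# Passing bare disks rather than partitions lets ZFS lay out its own
# partition table and manage each disk's write cache itself:
# zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf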

When does that toggling happen?  I gave whole disks to the zpool create operation,
but sdparm reports the write cache as disabled on every drive:

# sdparm --get=WCE /dev/sd[a-z]
    /dev/sda: HGST      HUH721010AL5200   LS03
WCE         0  [cha: y, def:  0, sav:  0]
    /dev/sdb: HGST      HUH721010AL5200   LS03
WCE         0  [cha: y, def:  0, sav:  0]
    /dev/sdc: HGST      HUH721010AL5200   LS03
WCE         0  [cha: y, def:  0, sav:  0]
    /dev/sdd: HGST      HUH721010AL5200   LS03
WCE         0  [cha: y, def:  0, sav:  0]
    /dev/sde: HGST      HUH721010AL5200   LS03
WCE         0  [cha: y, def:  0, sav:  0]
    /dev/sdf: HGST      HUH721010AL5200   LS03
WCE         0  [cha: y, def:  0, sav:  0]
...

This is on a Dell server with an HBA330 (no RAID support; reported by omreport as
"HBA330 Mini" and by lspci as an LSI SAS3008).  In the thread "Slow read
performance", it was suggested to force the write cache on with sdparm.  Is it
better to let ZFS manage the cache itself rather than setting it manually with
sdparm?
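
For reference, my reading of that suggestion is a command along these lines (this is
my reconstruction, not a quote from that thread; --save writes the saved mode page
so the setting survives a power cycle, on drives whose "sav" column shows it is
supported):

# sdparm --set=WCE --save /dev/sd[a-f]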

Thanks,
Pascal

