[zfs-discuss] ZFS on RAID-5 array

Gordan Bobic gordan.bobic at gmail.com
Fri Apr 13 08:59:45 EDT 2018

On Fri, Apr 13, 2018 at 1:53 PM, Edward Ned Harvey (zfsonlinux) <
zfsonlinux at nedharvey.com> wrote:

> > From: zfs-discuss <zfs-discuss-bounces at list.zfsonlinux.org> On Behalf Of
> > Gordan Bobic via zfs-discuss
> >
> >> Allow zfs to toggle the disk's cache on (give zfs the whole disk, not
> just a
> >> partition).
> >
> > This may be a little misleading. Even when zpool is told to use a whole
> disk as
> > a vdev it will create two partitions, 1 and 9. Partition 9 is 8MB, and
> not used,
> > but it is useful when you discover that the replacement disk has a few
> > sectors fewer than the original one.
> I've never seen zpool create a partition on a disk. I give zpool the whole
> disk, and it uses the whole disk. Not sure how it can end up the way you've
> described... But even assuming that's correct and not a mistake... The
> point is for zpool to know it has control of the whole disk, so zpool will
> enable the on-disk cache. When the situation happens that you've described,
> it sounds like zpool should still know it has control over the whole disk
> and enable the cache, right?

The last implementation I have seen that leaves the disk unpartitioned is
zfs-fuse. I don't recall ZoL ever using the raw, unpartitioned disk, and I
don't know whether other OpenZFS implementations do.

Try it: create a sparse file, attach it to a loop device, and run zpool
create on it. You'll find it has created a GPT partition table with the two
partitions I described.
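The experiment above can be sketched roughly as follows (a hedged sketch, not a verified recipe: it assumes a Linux box with root access and the ZFS userland tools installed, and the file, pool, and device names are just examples):

```shell
# Create a sparse backing file and attach it to a free loop device.
truncate -s 1G /tmp/zfs-test.img
LOOPDEV=$(sudo losetup --find --show /tmp/zfs-test.img)

# Hand zpool the "whole disk".
sudo zpool create testpool "$LOOPDEV"

# Inspect the partition table zpool wrote on the device; on ZoL you
# would expect a GPT label with partitions 1 (data) and 9 (reserved).
sudo parted -s "$LOOPDEV" print

# Clean up.
sudo zpool destroy testpool
sudo losetup -d "$LOOPDEV"
rm /tmp/zfs-test.img
```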

In any case, dmesg shows that the write caches on my disks are enabled by
the kernel before the ZFS modules even load.

I don't think this is an issue, because enabling disk caches and disks
ignoring barrier commands are entirely different things - the latter is
always broken behaviour. As long as the disk honours barrier/flush commands
and the fs/application issues them appropriately, enabling the write cache
should always be safe.
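For reference, the write-cache state the kernel has negotiated can be inspected directly (a sketch assuming a Linux system; /dev/sda is a placeholder device name, and the hdparm/sdparm commands apply to ATA and SCSI/SAS disks respectively):

```shell
# Block-layer view: "write back" means the volatile write cache is on.
cat /sys/block/sda/queue/write_cache

# ATA disks: W=1 in the output means the write cache is enabled.
sudo hdparm -W /dev/sda

# SCSI/SAS disks: query the WCE (Writeback Cache Enable) mode-page bit.
sudo sdparm --get=WCE /dev/sda
```

Whether barriers/flushes are actually honoured is a separate question from whether the cache is enabled, which is the distinction made above.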
