[zfs-discuss] Re: first real zfs deployment -sanity check

Richard Laager rlaager at wiktel.com
Sun Aug 11 15:52:19 EDT 2013


With big drives, resilver times are long enough that a second failure
during a rebuild is a real risk, so I would suggest you stick with
raidz2 or raidz3; don't go down to raidz1.

I assume you're going to re-use disks that were previously in an MD
array in your new pool. Make sure to mdadm --zero-superblock the disks
*after* they are out of the md array and *before* you put them in the
zpool. You don't want md to autodetect an array later and cause
corruption; there have been people on IRC who have had exactly that
happen.
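
For example (the md device and member disks here are just placeholders;
substitute your own):
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sda /dev/sdb /dev/sdc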

Use lz4 instead of lzjb. It's faster and compresses better. There is no
downside, unless you're concerned about compatibility with older ZFS
implementations (which you're probably not).
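
If the pool already exists (and has the lz4_compress feature), you can
switch at any time; only data written after the change gets lz4, e.g.:
zfs set compression=lz4 tank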

I always recommend atime=off just like I recommend noatime on non-ZFS
filesystems. Almost nothing cares about atime, and turning it off is a
performance win.
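
On an existing pool that's just:
zfs set atime=off tank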

On ZFS, I like normalization=formD, which I've discussed before.
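
One caveat: normalization can only be set when a dataset is created, so
it has to go on the create command line (as it does below); you can't
change it later with zfs set. For a single dataset that would be e.g.:
zfs create -o normalization=formD tank/somedataset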

I discourage using the top-level dataset (the one with the same name
as the pool itself), since you can't rename it later to restructure
things. So I like to set canmount=off on it and a mountpoint of "/".
That way, the inherited mountpoint property works to your advantage
(tank/home mounts at /home without you needing to specify it).
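
You can confirm the inheritance afterward; the SOURCE column of this
should show the mountpoint as inherited from tank:
zfs get mountpoint tank/home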

For small numbers of users, you might as well use separate datasets for
each home directory.

The devices=off and setuid=off settings below are equivalent to the
nodev and nosuid mount options, respectively; they are optional tweaks
that may slightly enhance system security.

That leaves you with something like this:
zpool create -o ashift=12 \
    -O atime=off -O canmount=off -O compression=lz4 \
    -O mountpoint=/ -O normalization=formD -O devices=off \
    tank raidz3 /dev/disk/by-id/...
zfs create -o setuid=off tank/home
zfs create tank/home/matthew
zfs create tank/home/matthews-wifes-username
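
Once that's done, something like this is a quick sanity check that the
layout and inherited properties are what you expect:
zfs list -r -o name,mountpoint,compression,atime tank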

I have a couple of monitoring scripts you can adapt if needed:
wget -q -O /etc/cron.hourly/zfs-check https://github.com/rlaager/zfs/wiki/zfs-check.sh
wget -q -O /etc/cron.monthly/zfs-scrub https://github.com/rlaager/zfs/wiki/zfs-scrub.sh
chmod 755 /etc/cron.hourly/zfs-check /etc/cron.monthly/zfs-scrub

You might also want to enable the read-only commands through sudo for
all users:

wget -q -O /etc/sudoers.d/zfs https://github.com/rlaager/zfs/wiki/zfs.sudoers
chmod 440 /etc/sudoers.d/zfs

Also, if zfs-auto-snapshot is packaged for CentOS, you might want to use
that instead of rolling your own scripts.
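
If it isn't packaged and you do end up rolling your own, the core of it
is just a recursive snapshot from cron; a minimal sketch (no pruning,
and the naming is up to you) dropped into /etc/cron.daily would be:
zfs snapshot -r tank@auto-$(date +%Y-%m-%d_%H.%M.%S)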

Finally, I would personally configure NFS and Samba manually the
"normal" way rather than using the ZFS share attributes. Part of that is
personal preference. Part of it is because people seem to keep needing
NFS options that can be set in /etc/exports but can't be set through
ZFS's sharenfs. Part of it is because if I were using Samba I'd want to
set up the mapping of ZFS snapshots to Windows' Previous Versions
feature. On the latter, Googling suggests setting the following options
on the share, though I haven't yet tested it:
    vfs objects = shadow_copy2
    shadow: format = auto-%Y-%m-%d_%H.%M.%S--28d
    shadow: sort = desc
    shadow: snapdir = .zfs/snapshot
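
On the NFS side, "the normal way" just means ordinary /etc/exports
entries plus exportfs -ra; something like this (the network and option
list are only illustrative, with crossmnt so the per-user child
datasets are reachable under the one export):
/home    192.168.1.0/24(rw,root_squash,no_subtree_check,crossmnt)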

-- 
Richard