[zfs-discuss] ashift=12 on 512 bytes disks?

Richard Elling richard.elling at richardelling.com
Tue Dec 19 10:44:54 EST 2017



> On Dec 17, 2017, at 1:05 AM, Gionatan Danti via zfs-discuss <zfs-discuss at list.zfsonlinux.org> wrote:
> 
> Hi all,
> I recently created a new ZFS pool using 4x WD Gold 2TB (configured in mirrored pairs) and 2x Samsung 850 EVO 500GB for caching and SLOG.
> 
> The main pool disks (WD Gold) have 512-byte sectors and I left ashift at the default value (ashift=9). Cache and SLOG were added with "-o ashift=12" to align at 4K boundaries.

SLOG workloads are 4k anyway, so setting ashift doesn't do much for 512e devices.

For cache devices, it depends on the release. In the bad old days, cache only worked well for 512e/512n.
Proper 4K support was added in https://github.com/zfsonlinux/zfs/commit/82710e993a3481b2c3cdefb6f5fc31f65e1c6798
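FWIW, the add-time "-o ashift=12" you used is the right way to force it. Roughly (untested sketch -- the pool name "tank" and the by-id paths are placeholders, substitute your own):

    # force 4K alignment on the log and cache vdevs at add time
    # (pool name and device paths below are placeholders)
    zpool add -o ashift=12 tank log /dev/disk/by-id/ata-Samsung_SSD_850_EVO_500GB_SLOG-part1
    zpool add -o ashift=12 tank cache /dev/disk/by-id/ata-Samsung_SSD_850_EVO_500GB_L2ARC-part1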
> 
> Now I wonder if the original choice to use ashift=9 on the main disks was the better one, or if it can bite me in the future. Suppose a disk fails and I cannot replace it with another 512-byte disk, but instead have to use a 4K drive. I know I can manually set the correct ashift value (12) when replacing; however, I wonder if this "mixed" mode (ashift=9 on some disks, ashift=12 on the new 4K one) can lead to lower performance and/or other problems.

Mixed mode is very common. Ideally, it should be handled automatically and do the right thing.
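For the replace case specifically, you can pin the ashift on the new disk and then sanity-check what each top-level vdev ended up with. Something like (sketch -- "tank" and the device names are placeholders; zdb output details vary by release):

    # replace the failed 512n disk with a 4K drive, forcing ashift=12 on it
    zpool replace -o ashift=12 tank OLD_DISK /dev/disk/by-id/NEW_4K_DISK
    # the per-vdev ashift should show up in the cached pool config
    zdb -C tank | grep ashift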

> 
> The obvious solution would be to recreate the entire pool with ashift=12, treating all disks as 4K ones. I know this implies some space loss, but it should not be too severe.
> 
> Any suggestions?

Personally, I wouldn't bother recreating the pool. Eventually these things tend to migrate toward
more modern hardware, and I would do the migration when ready to decommission the old hardware.
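If you do rebuild on newer hardware later, forcing 4K across the board is just a create-time option; roughly (sketch -- pool name and device paths are placeholders):

    # treat every disk as 4K from the start when the pool is recreated
    zpool create -o ashift=12 tank \
        mirror /dev/disk/by-id/DISK1 /dev/disk/by-id/DISK2 \
        mirror /dev/disk/by-id/DISK3 /dev/disk/by-id/DISK4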
 -- richard

