[zfs-discuss] zfs periodically writes to disk on a quiescent system

Phil Harman phil.harman at gmail.com
Sun Feb 7 03:48:51 EST 2016


... and, of course, the default filesystem behaviour is sync=standard, which means all writes that are not explicitly synchronous will be issued by zfs daemon threads, not the writing thread.
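For reference, the relevant properties can be inspected per dataset. A minimal sketch, assuming a hypothetical pool named "tank" (substitute your own):

```shell
# Show the sync and atime properties for every dataset in the pool.
# With sync=standard, only O_SYNC/O_DSYNC or fsync'd writes are committed
# from the caller's context; everything else is flushed later by ZFS threads.
# "tank" is a placeholder pool name; replace it with your own.
if command -v zfs >/dev/null 2>&1; then
    zfs get -r sync,atime tank
else
    echo "zfs not installed; skipping"
fi
```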

You could use fuser or similar to find out who has what open for writing. With tools like dtrace or systemtap, you could find out who is calling zfs_write etc.
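A quick sketch of that check, assuming the pool is mounted at a hypothetical /tank:

```shell
# List processes with files open under the mountpoint (-m), verbosely (-v).
# In fuser's ACCESS column, "F" means the file is open for writing.
# /tank is a placeholder mountpoint; substitute your own.
if command -v fuser >/dev/null 2>&1 && [ -d /tank ]; then
    fuser -v -m /tank || true   # fuser exits nonzero when nothing is found
else
    echo "fuser unavailable or /tank not mounted; skipping"
fi
```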

> On 7 Feb 2016, at 08:34, Phil Harman <phil.harman at gmail.com> wrote:
> 
> As well as zpool status, it would help to see some representative "zfs get all" and "zpool get all" output.
> 
> Have you turned atime updates off? (e.g. "zfs set atime=off <dataset>" for each mounted filesystem) If not, it only takes one thread issuing just one read (even from cache) to trigger multiple writes.
> 
>> On 7 Feb 2016, at 03:39, user3871075 via zfs-discuss <zfs-discuss at list.zfsonlinux.org> wrote:
>> 
>> Hi everyone,
>> 
>> I'm setting up a home router/firewall/fileserver with Ubuntu 14.04.  The system boots directly into a ZFS filesystem.  This server will be idle a large percentage of the time, so I'd like to spin down the hard drives during extended idle periods.  I've changed /tmp to tmpfs and moved /var/log onto a flash drive.  However, I'm still seeing a lot of disk activity during periods when the system is quiescent.
>> 
>> By following these excellent instructions http://askubuntu.com/questions/216594/investigate-disk-writes-further-to-find-out-which-process-writes-to-my-ssd, it appears that zfs is doing the writes on its own behalf.  I'll attach the full output, but here's a sample:
>> 
>> TASK                   PID      TOTAL       READ      WRITE      DIRTY DEVICES
>> z_null_iss             966       1776          0       1776          0 sdb3, sda3, sdc3, sdd3
>> z_wr_iss              1073       1169          0       1169          0 sda3, sdb3, sdc3, sdd3
>> z_wr_iss              1074        865          0        865          0 sdb3, sda3, sdc3, sdd3
>> z_wr_iss              1072        826          0        826          0 sdd3, sda3, sdb3, sdc3
>> segctord              2591        819          0        819          0 sde1
>> z_wr_int_0            1083        434          0        434          0 sda3, sdb3, sdc3, sdd3
>> z_wr_int_0            1080        429          0        429          0 sdb3, sda3, sdc3, sdd3
>> z_wr_int_7            1169        424          0        424          0 sdc3, sda3, sdb3, sdd3
>> z_wr_int_4            1132        423          0        423          0 sdc3, sda3, sdb3, sdd3
>> z_wr_int_6            1158        422          0        422          0 sda3, sdb3, sdc3, sdd3
>> z_wr_int_6            1160        417          0        417          0 sdd3, sdc3, sdb3, sda3
>> z_wr_int_5            1141        417          0        417          0 sda3, sdb3, sdc3, sdd3
>> z_wr_int_7            1172        415          0        415          0 sdc3, sdb3, sda3, sdd3
>> z_wr_int_6            1161        414          0        414          0 sdd3, sdc3, sda3, sdb3
>> z_wr_int_3            1121        414          0        414          0 sdd3, sdc3, sda3, sdb3
>> z_wr_int_4            1131        414          0        414          0 sdd3, sdc3, sda3, sdb3
>> z_wr_int_5            1149        413          0        413          0 sdb3, sda3, sdc3, sdd3
>> z_wr_int_5            1148        411          0        411          0 sda3, sdb3, sdc3, sdd3
>> 
>> 
>> The segctord thread belongs to nilfs on my flash drive.  All the rest are these z_* tasks, and they're exclusively writing to disk, not reading.  I checked zpool status, and there isn't a scrub or resilver going on.  (I've attached that output as well, just in case...)
>> 
>> Does anyone know what's going on, and more importantly, whether it's possible to disable this behavior?
>> 
>> Thanks!
>> <iodump.txt>
>> <zpool_status.txt>
>> _______________________________________________
>> zfs-discuss mailing list
>> zfs-discuss at list.zfsonlinux.org
>> http://list.zfsonlinux.org/cgi-bin/mailman/listinfo/zfs-discuss