a difficult-to-describe problem involving long write times and enormous load levels

Brian Behlendorf behlendorf1 at llnl.gov
Tue May 24 12:52:15 EDT 2011

Typically the first place to look for ZFS debugging information is the
console log.  If your system is still responsive you can view the
console log by running 'dmesg'.  Please post any stack traces or other
debugging output into the GitHub issue tracker so we can track the
problem and get it fixed.
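As a minimal sketch of that first step: the command below pulls likely
hung-task warnings and stack traces out of the kernel log and saves them
to a file you can attach to an issue.  The grep patterns are only a
starting point, not an exhaustive list of what ZFS/SPL may print.

```shell
# Capture hung-task warnings and nearby stack-trace lines from dmesg.
# "blocked for more than" and "Call Trace" are standard kernel hung-task
# markers; the SPL:/ZFS: prefixes are a guess at module log tags.
dmesg 2>/dev/null \
  | grep -B 2 -A 30 -E 'blocked for more than|Call Trace|SPL:|ZFS:' \
  > zfs-traces.txt || true
```

Attach zfs-traces.txt (or paste its contents) to the GitHub issue.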

For additional debugging you can always build the spl/zfs code with the
--enable-debug option.  This option enables additional run-time checks
to verify that everything is working properly.
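A sketch of that debug build, assuming a standard autotools checkout of
the spl and zfs source trees (directory names and the --with-spl path
are illustrative, adjust them to your layout):

```shell
# Build SPL first with debugging enabled, then build ZFS against it.
cd spl
./configure --enable-debug && make && sudo make install
cd ../zfs
./configure --enable-debug --with-spl=../spl && make && sudo make install
```

With the debug build loaded, assertion failures and extra sanity checks
show up on the console, which makes the resulting dmesg output far more
useful in a bug report.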


On Mon, 2011-05-23 at 21:01 -0700, Daniel Brooks wrote:
> On Sun, May 22, 2011 at 11:54 PM, Fajar A. Nugraha <list at fajar.net>
> wrote:
>         On Mon, May 23, 2011 at 2:08 AM, Daniel Brooks
>         <dlb48x at gmail.com> wrote:
>         > Ok, this morning the load is only 19, but everything that
>         > interacts with a zfs filesystem is locked up. iostat says that
>         > there is zero activity on those disks:
>         That definitely looks like a bug.
>         If there's additional data (like the deadlock on
>         https://github.com/behlendorf/zfs/issues/232) you should file
>         a bug report.
> Can you recommend any other ways of collecting more data? Can I build
> with more debug options or something?
>         > Although as you can see it can still read from sda, which
>         > isn't being used by zfs. The drives themselves still work; I
>         > can interrogate them with hdparm, for instance.
>         Have you tried limiting the max ARC size? I use zfs on a
>         relatively busy mysql server (several hundred write tps on
>         InnoDB, sync after every transaction, SSD), with
>         zfs_arc_max=134217728 and so far it's working well.
> It's enabled now; we'll see if it has any effect.
> db48x
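For reference, the ARC cap mentioned in the quoted reply is a zfs module
parameter.  A sketch of how it is typically set persistently (the
/etc/modprobe.d path is the conventional location and may differ by
distro):

```shell
# /etc/modprobe.d/zfs.conf -- read when the zfs module is loaded
options zfs zfs_arc_max=134217728    # 128 MiB = 128*1024*1024 bytes
```

The same parameter is usually also exposed at runtime under
/sys/module/zfs/parameters/zfs_arc_max, so it can be changed without
reloading the module.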

More information about the zfs-discuss mailing list