[zfs-discuss] "du -ksh ." taking hours to complete

Durval Menezes durval.menezes at gmail.com
Wed Nov 14 06:31:48 EST 2018


Hello Kash,

On Wed, Nov 14, 2018 at 12:21 AM Kash Pande via zfs-discuss <
zfs-discuss at list.zfsonlinux.org> wrote:

> On 2018-11-13 7:18 p.m., Luki Goldschmidt via zfs-discuss wrote:
> > On 11/13/18 4:06 PM, Welison B. Floriano via zfs-discuss wrote:
> >>   We have a Dell Power Edge R730 connected to a HGST JBOD 4U60 G1 via
> >> a HBA LSI 9400-8e. We are running Red Hat 7.6, zfs 0.7.11-1, 5x
> >> raidz2(12 x 10TB HDs each). 296TB used, 107TB free. Read and write
> >> seems to be fine, but when we use "du -ksh ." in directories with lots
> >> of files, it takes hours to get a result. Any help will be appreciated.
> > That sounds about right if you have tens of millions of files and have
> > spinning disks in raidz[123]. Suggestions:
> >
> > 1) Use separate datasets and monitor usage with zfs list
> > 2) Use zfs userspace to get a per-user utilization for a dataset
> > 3) Add a L2ARC SSD for metadata caching (secondarycache=metadata)
> >
> >
>
> #4 - use 0.8.0-rc2 with the metadata vdev allocation feature (special
> vdev).
>
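
For anyone following along, suggestions 1-3 above roughly translate into the
commands below. Pool and dataset names (tank, tank/projects) and the device
path are hypothetical, for illustration only:

```shell
# 1) Per-dataset usage is tracked in pool metadata, so "zfs list"
#    answers instantly -- no directory walk like "du":
zfs list -o name,used,avail,refer tank/projects

# 2) Per-user space consumption within a dataset:
zfs userspace tank/projects

# 3) Add an SSD L2ARC and restrict it to metadata, so stat()-heavy
#    walks hit cache instead of the raidz2 spindles:
zpool add tank cache /dev/disk/by-id/ata-SSD-EXAMPLE
zfs set secondarycache=metadata tank/projects
```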

This https://github.com/zfsonlinux/zfs/pull/5182 (via
https://github.com/zfsonlinux/zfs/issues/3779), correct?

Interesting.

What is supposed to be the main advantage over an L2ARC on SSD plus
"secondarycache=metadata"? Just that it keeps the L2ARC
available for normal (non-metadata) data?
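
If I read the PR right, a special vdev would be attached along these lines
(device paths hypothetical; syntax as of the 0.8.0 release candidates):

```shell
# Mirror the special vdev: unlike an L2ARC device, losing it
# means losing the pool, since it holds on-disk metadata.
zpool add tank special mirror /dev/disk/by-id/ssd-A /dev/disk/by-id/ssd-B

# Optionally steer small data blocks there too, per dataset:
zfs set special_small_blocks=32K tank/projects
```

One apparent difference from L2ARC: the special vdev stores metadata
persistently rather than caching it, so it is warm immediately after import.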

Cheers,
-- 
   Durval.




>
> Kash
>
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss at list.zfsonlinux.org
> http://list.zfsonlinux.org/cgi-bin/mailman/listinfo/zfs-discuss
>
