[zfs-discuss] Very slow when accessing many files

John Goerzen jgoerzen at complete.org
Thu Jan 23 09:40:41 EST 2014


Hello,

Since recently switching to ZFS on my workstation (Debian wheezy, 
previously running ext4) and laptop (Debian jessie, previously running 
btrfs), I have noticed generally good performance.  However, two 
situations in particular are quite slow -- what used to take maybe 20 
seconds now takes a minute or two: the "Reading database" phase of a 
dpkg invocation, and digikam startup.

The common thread through both of these is accessing many small files.  
In the dpkg case, it is probably opening each of the 13,000 or so files 
in /var/lib/dpkg/info for reading.  In the digikam case, it is probably 
walking the directory tree and calling stat() on about 53,000 entries, 
without opening the files.
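
For anyone who wants to approximate these two access patterns without 
dpkg or digikam, something along these lines should be close (bash and 
GNU find assumed; the photo directory path is just a placeholder):

# dpkg-like pattern: open and read every file in /var/lib/dpkg/info
time find /var/lib/dpkg/info -type f | xargs cat > /dev/null

# digikam-like pattern: stat() every entry without opening it
# (-printf '%s' forces a stat per entry to get the size)
time find /path/to/photos -printf '%s\n' > /dev/null

The first exercises open()+read() on many small files; the second is 
pure metadata traversal.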

I believe I am seeing at least two issues here:

1) reading from disk is very slow

2) subsequent runs are not much faster; the ARC doesn't seem to be 
doing its job

More concerning is what happens the second time I call dpkg -i: it can 
be just as slow as the first.  With ext4 or btrfs, the data would be in 
the page cache in this situation, and the second run would start 
virtually instantly.

I did some experimentation to build a small, self-contained test case.  
I have a directory tree containing 10,456 entries, and in it I simply 
ran time find . | wc -l.

The first run produced:

real    0m7.257s
user    0m0.024s
sys    0m0.164s

Second and subsequent runs look like this:

real    0m5.088s
user    0m0.044s
sys    0m0.152s
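
(These are single runs; a quick loop like the following, bash assumed, 
shows the same pattern and makes the warm-cache behavior easier to 
compare:)

for i in 1 2 3 4 5; do
    time find . | wc -l
done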

I've included below the arcstats from immediately before and immediately 
after running this command.  Note that the system was otherwise idle, 
zfs_arc_max is set to 1GB, and memory use was as follows:

$ free
             total       used       free     shared    buffers     cached
Mem:       8180424    7455780     724644          0     231188    1277108
-/+ buffers/cache:    5947484    2232940
Swap:      1949692       1308    1948384


So memory pressure should not be an issue here.
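
For repeat runs, rather than eyeballing two full dumps, the counters of 
interest can be pulled out and diffed directly -- a rough sketch, 
assuming bash, grep, and diff, and using the counter names from the 
dumps below:

# snapshot the demand metadata counters, run the test, snapshot again, diff
grep -E 'demand_metadata_(hits|misses)' /proc/spl/kstat/zfs/arcstats > /tmp/arc.before
find . | wc -l > /dev/null
grep -E 'demand_metadata_(hits|misses)' /proc/spl/kstat/zfs/arcstats > /tmp/arc.after
diff /tmp/arc.before /tmp/arc.after

(The zfs_arc_max setting itself can be read back from 
/sys/module/zfs/parameters/zfs_arc_max.)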

Strangely, I can reproduce this behavior 100% of the time right now, and 
could reproduce it 100% of the time yesterday afternoon.  However, 
yesterday evening, after the system ran a dist-upgrade involving quite a 
lot of wheezy updates and then sat idle for a few minutes, the first run 
was slow and the subsequent runs were fast cache hits.  I cannot explain 
the difference.

Any ideas?

# before find
jgoerzen at hephaestus:~$ cat /proc/spl/kstat/zfs/arcstats
4 1 0x01 84 4032 4281300378 575603467436676
name                            type data
hits                            4    26657597
misses                          4    4930677
demand_data_hits                4    15723874
demand_data_misses              4    596515
demand_metadata_hits            4    9461555
demand_metadata_misses          4    888887
prefetch_data_hits              4    139350
prefetch_data_misses            4    3204642
prefetch_metadata_hits          4    1332818
prefetch_metadata_misses        4    240633
mru_hits                        4    8628133
mru_ghost_hits                  4    602747
mfu_hits                        4    16557377
mfu_ghost_hits                  4    507572
deleted                         4    4136586
recycle_miss                    4    817232
mutex_miss                      4    564
evict_skip                      4    145425071
evict_l2_cached                 4    0
evict_l2_eligible               4    409797097984
evict_l2_ineligible             4    73999825408
hash_elements                   4    177190
hash_elements_max               4    199763
hash_collisions                 4    7726002
hash_chains                     4    51271
hash_chain_max                  4    10
p                               4    1247122833
c                               4    2147483648
c_min                           4    523547136
c_max                           4    2147483648
size                            4    2146281904
hdr_size                        4    54513728
data_size                       4    1371088384
other_size                      4    720679792
anon_size                       4    614400
anon_evict_data                 4    0
anon_evict_metadata             4    0
mru_size                        4    1248394240
mru_evict_data                  4    1050892800
mru_evict_metadata              4    53248
mru_ghost_size                  4    899074048
mru_ghost_evict_data            4    0
mru_ghost_evict_metadata        4    899074048
mfu_size                        4    122079744
mfu_evict_data                  4    48306688
mfu_evict_metadata              4    1406976
mfu_ghost_size                  4    1229147136
mfu_ghost_evict_data            4    905634816
mfu_ghost_evict_metadata        4    323512320
l2_hits                         4    0
l2_misses                       4    0
l2_feeds                        4    0
l2_rw_clash                     4    0
l2_read_bytes                   4    0
l2_write_bytes                  4    0
l2_writes_sent                  4    0
l2_writes_done                  4    0
l2_writes_error                 4    0
l2_writes_hdr_miss              4    0
l2_evict_lock_retry             4    0
l2_evict_reading                4    0
l2_free_on_write                4    0
l2_abort_lowmem                 4    0
l2_cksum_bad                    4    0
l2_io_error                     4    0
l2_size                         4    0
l2_asize                        4    0
l2_hdr_size                     4    0
l2_compress_successes           4    0
l2_compress_zeros               4    0
l2_compress_failures            4    0
memory_throttle_count           4    0
duplicate_buffers               4    0
duplicate_buffers_size          4    0
duplicate_reads                 4    0
memory_direct_count             4    32
memory_indirect_count           4    34954
arc_no_grow                     4    0
arc_tempreserve                 4    0
arc_loaned_bytes                4    0
arc_prune                       4    4264
arc_meta_used                   4    1047082416
arc_meta_limit                  4    1047094272
arc_meta_max                    4    1059818128

# after find
jgoerzen at hephaestus:~$ cat /proc/spl/kstat/zfs/arcstats
4 1 0x01 84 4032 4281300378 575615556131316
name                            type data
hits                            4    26658981
misses                          4    4932333
demand_data_hits                4    15723907
demand_data_misses              4    596515
demand_metadata_hits            4    9462900
demand_metadata_misses          4    890285
prefetch_data_hits              4    139350
prefetch_data_misses            4    3204642
prefetch_metadata_hits          4    1332824
prefetch_metadata_misses        4    240891
mru_hits                        4    8628164
mru_ghost_hits                  4    603012
mfu_hits                        4    16558724
mfu_ghost_hits                  4    508944
deleted                         4    4136657
recycle_miss                    4    818687
mutex_miss                      4    565
evict_skip                      4    145566443
evict_l2_cached                 4    0
evict_l2_eligible               4    409804549632
evict_l2_ineligible             4    74004068864
hash_elements                   4    177149
hash_elements_max               4    199763
hash_collisions                 4    7726167
hash_chains                     4    51272
hash_chain_max                  4    10
p                               4    1246563217
c                               4    2147483648
c_min                           4    523547136
c_max                           4    2147483648
size                            4    2144320688
hdr_size                        4    54513728
data_size                       4    1369132032
other_size                      4    720674928
anon_size                       4    691712
anon_evict_data                 4    0
anon_evict_metadata             4    0
mru_size                        4    1246742016
mru_evict_data                  4    1049326592
mru_evict_metadata              4    0
mru_ghost_size                  4    900621824
mru_ghost_evict_data            4    1073664
mru_ghost_evict_metadata        4    899548160
mfu_size                        4    121698304
mfu_evict_data                  4    47782400
mfu_evict_metadata              4    1517056
mfu_ghost_size                  4    1229406720
mfu_ghost_evict_data            4    906070016
mfu_ghost_evict_metadata        4    323336704
l2_hits                         4    0
l2_misses                       4    0
l2_feeds                        4    0
l2_rw_clash                     4    0
l2_read_bytes                   4    0
l2_write_bytes                  4    0
l2_writes_sent                  4    0
l2_writes_done                  4    0
l2_writes_error                 4    0
l2_writes_hdr_miss              4    0
l2_evict_lock_retry             4    0
l2_evict_reading                4    0
l2_free_on_write                4    0
l2_abort_lowmem                 4    0
l2_cksum_bad                    4    0
l2_io_error                     4    0
l2_size                         4    0
l2_asize                        4    0
l2_hdr_size                     4    0
l2_compress_successes           4    0
l2_compress_zeros               4    0
l2_compress_failures            4    0
memory_throttle_count           4    0
duplicate_buffers               4    0
duplicate_buffers_size          4    0
duplicate_reads                 4    0
memory_direct_count             4    32
memory_indirect_count           4    34954
arc_no_grow                     4    0
arc_tempreserve                 4    0
arc_loaned_bytes                4    0
arc_prune                       4    4264
arc_meta_used                   4    1047082160
arc_meta_limit                  4    1047094272
arc_meta_max                    4    1059818128

