[zfs-discuss] Re: Crash bug related to cache

nazir.adeel at gmail.com nazir.adeel at gmail.com
Tue Mar 20 20:40:53 EDT 2012


Hi Brian,
   The only zfs option I ever set was zfs_arc_max, and I set it to
2400M, so I have no idea how zfs is arriving at its value for c_max.
As I mentioned, this is a single-purpose NAS/SAN box, so I have no
problem giving all 4 GB to zfs. I don't plan on using de-duplication,
and will probably toss in a small SSD I have to act as an L2ARC cache.
I don't see any tuning parameter for c_max in the modinfo zfs output,
so what's calculating it to be so low?
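For reference, on ZFS-on-Linux the value the module actually accepted should be visible under /sys/module/zfs/parameters, so it can be compared against the c_max kstat directly. A quick sketch (standard ZoL paths; 2516582400 is just 2400M expressed in bytes):

```shell
# Inspect the ARC cap the zfs module is actually using.
# zfs_arc_max=0 means "use the built-in default", which on a 32-bit
# kernel with a small vmalloc arena can end up far below physical RAM.
cat /sys/module/zfs/parameters/zfs_arc_max
grep '^c_max' /proc/spl/kstat/zfs/arcstats

# Pin the cap at module load time; the value is in bytes, so
# 2400M = 2400 * 1024 * 1024 = 2516582400.
echo "options zfs zfs_arc_max=2516582400" > /etc/modprobe.d/zfs.conf
```

If zfs_arc_max reads back as 0 or as something other than the intended byte count, the module never accepted the setting, which would explain a c_max this low.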

On Mar 19, 12:23 pm, Brian Behlendorf <behlendo... at llnl.gov> wrote:
> Nazir,
>
> According to the arcstats you posted, you've set zfs_arc_c_max to 64 MB.
> That may be good for stability on a 32-bit system with limited virtual
> address space, but it also explains why you didn't see zfs doing much
> caching.
>
> >>> c_max                           4    67108864
>
> So my guess is the weird behavior you described was caused by this
> rather than the proposed patch.
>
> --
> Thanks,
> Brian
>
> On Fri, 2012-03-16 at 19:06 -0700, nazir.ad... at gmail.com wrote:
> > Hi Brian,
> >    No problems with the testing; I'm hoping to have this box go into
> > semi-production ASAP, so I'm happy to have issues reported and
> > resolved. By the way, kudos on the great work thus far.
>
> > On a freshly booted box here are the stats, including the
> > /proc/spl/kstat/zfs/{arcstats,dmu_tx} values:
>
> > zfsFiler ~ # uptime && free -m && cat /proc/meminfo
> >  17:01:34 up 1 min,  1 user,  load average: 0.23, 0.10, 0.04
> >              total       used       free     shared    buffers     cached
> > Mem:          4022         48       3973          0          8         12
> > -/+ buffers/cache:         28       3994
> > Swap:            0          0          0
> > MemTotal:        4119308 kB
> > MemFree:         4068932 kB
> > Buffers:            8300 kB
> > Cached:            12908 kB
> > SwapCached:            0 kB
> > Active:            10696 kB
> > Inactive:          14100 kB
> > Active(anon):       3648 kB
> > Inactive(anon):      156 kB
> > Active(file):       7048 kB
> > Inactive(file):    13944 kB
> > Unevictable:           0 kB
> > Mlocked:               0 kB
> > HighTotal:       3648968 kB
> > HighFree:        3625036 kB
> > LowTotal:         470340 kB
> > LowFree:          443896 kB
> > SwapTotal:             0 kB
> > SwapFree:              0 kB
> > Dirty:                44 kB
> > Writeback:             0 kB
> > AnonPages:          3632 kB
> > Mapped:             3620 kB
> > Shmem:               212 kB
> > Slab:              10464 kB
> > SReclaimable:       3208 kB
> > SUnreclaim:         7256 kB
> > KernelStack:        1320 kB
> > PageTables:          292 kB
> > NFS_Unstable:          0 kB
> > Bounce:                0 kB
> > WritebackTmp:          0 kB
> > CommitLimit:     2059652 kB
> > Committed_AS:     411024 kB
> > VmallocTotal:    2621440 kB
> > VmallocUsed:        6264 kB
> > VmallocChunk:    2555940 kB
> > HardwareCorrupted:     0 kB
> > AnonHugePages:         0 kB
> > HugePages_Total:       0
> > HugePages_Free:        0
> > HugePages_Rsvd:        0
> > HugePages_Surp:        0
> > Hugepagesize:       2048 kB
> > DirectMap4k:        8184 kB
> > DirectMap2M:      501760 kB
>
> > zfsFiler ~ # cat  /proc/spl/kstat/zfs/{arcstats,dmu_tx}
> > 4 1 0x01 77 3696 5839009571 133578365208
> > name                            type data
> > hits                            4    14620
> > misses                          4    68
> > demand_data_hits                4    0
> > demand_data_misses              4    0
> > demand_metadata_hits            4    14620
> > demand_metadata_misses          4    64
> > prefetch_data_hits              4    0
> > prefetch_data_misses            4    0
> > prefetch_metadata_hits          4    0
> > prefetch_metadata_misses        4    4
> > mru_hits                        4    1254
> > mru_ghost_hits                  4    0
> > mfu_hits                        4    13366
> > mfu_ghost_hits                  4    0
> > deleted                         4    6
> > recycle_miss                    4    0
> > mutex_miss                      4    0
> > evict_skip                      4    0
> > evict_l2_cached                 4    0
> > evict_l2_eligible               4    0
> > evict_l2_ineligible             4    2048
> > hash_elements                   4    65
> > hash_elements_max               4    65
> > hash_collisions                 4    1
> > hash_chains                     4    1
> > hash_chain_max                  4    1
> > p                               4    33554432
> > c                               4    67108864
> > c_min                           4    67108864
> > c_max                           4    67108864
> > size                            4    663320
> > hdr_size                        4    57600
> > data_size                       4    523776
> > other_size                      4    81944
> > anon_size                       4    16384
> > anon_evict_data                 4    0
> > anon_evict_metadata             4    0
> > mru_size                        4    425472
> > mru_evict_data                  4    0
> > mru_evict_metadata              4    142848
> > mru_ghost_size                  4    12288
> > mru_ghost_evict_data            4    0
> > mru_ghost_evict_metadata        4    12288
> > mfu_size                        4    81920
> > mfu_evict_data                  4    0
> > mfu_evict_metadata              4    79872
> > mfu_ghost_size                  4    12288
> > mfu_ghost_evict_data            4    0
> > mfu_ghost_evict_metadata        4    12288
> > l2_hits                         4    0
> > l2_misses                       4    0
> > l2_feeds                        4    0
> > l2_rw_clash                     4    0
> > l2_read_bytes                   4    0
> > l2_write_bytes                  4    0
> > l2_writes_sent                  4    0
> > l2_writes_done                  4    0
> > l2_writes_error                 4    0
> > l2_writes_hdr_miss              4    0
> > l2_evict_lock_retry             4    0
> > l2_evict_reading                4    0
> > l2_free_on_write                4    0
> > l2_abort_lowmem                 4    0
> > l2_cksum_bad                    4    0
> > l2_io_error                     4    0
> > l2_size                         4    0
> > l2_hdr_size                     4    0
> > memory_throttle_count           4    0
> > memory_direct_count             4    0
> > memory_indirect_count           4    0
> > arc_no_grow                     4    0
> > arc_tempreserve                 4    0
> > arc_loaned_bytes                4    0
> > arc_prune                       4    0
> > arc_meta_used                   4    663320
> > arc_meta_limit                  4    16777216
> > arc_meta_max                    4    703700
>
> > 3 1 0x01 12 576 5838935101 133578698373
> > name                            type data
> > dmu_tx_assigned                 4    1
> > dmu_tx_delay                    4    0
> > dmu_tx_error                    4    0
> > dmu_tx_suspended                4    0
> > dmu_tx_group                    4    0
> > dmu_tx_how                      4    0
> > dmu_tx_memory_reserve           4    0
> > dmu_tx_memory_reclaim           4    0
> > dmu_tx_memory_inflight          4    0
> > dmu_tx_dirty_throttle           4    0
> > dmu_tx_write_limit              4    0
> > dmu_tx_quota                    4    0
>
> > Once I start: rsync -rax --exclude /proc --exclude /dev --exclude /sys
> > --exclude /tmp --exclude /mnt / /mnt/gentoo
>
> > (I'm also including the /proc/vmallocinfo output)
>
> > zfsFiler ~ # uptime && free -m && cat /proc/meminfo && cat /proc/vmallocinfo
> >  17:04:13 up 3 min,  3 users,  load average: 1.39, 0.42, 0.16
> >              total       used       free     shared    buffers     cached
> > Mem:          4022        468       3554          0         47         26
> > -/+ buffers/cache:        393       3629
> > Swap:            0          0          0
> > MemTotal:        4119308 kB
> > MemFree:         3640012 kB
> > Buffers:           49004 kB
> > Cached:            27264 kB
> > SwapCached:            0 kB
> > Active:            92724 kB
> > Inactive:          57532 kB
> > Active(anon):      74044 kB
> > Inactive(anon):      156 kB
> > Active(file):      18680 kB
> > Inactive(file):    57376 kB
> > Unevictable:           0 kB
> > Mlocked:               0 kB
> > HighTotal:       3648968 kB
> > HighFree:        3311072 kB
> > LowTotal:         470340 kB
> > LowFree:          328940 kB
> > SwapTotal:             0 kB
> > SwapFree:              0 kB
> > Dirty:                 0 kB
> > Writeback:             0 kB
> > AnonPages:         76056 kB
> > Mapped:             4412 kB
> > Shmem:               212 kB
> > Slab:              38980 kB
> > SReclaimable:      27612 kB
> > SUnreclaim:        11368 kB
> > KernelStack:        1400 kB
> > PageTables:          764 kB
> > NFS_Unstable:          0 kB
> > Bounce:                0 kB
> > WritebackTmp:          0 kB
> > CommitLimit:     2059652 kB
> > Committed_AS:     580808 kB
> > VmallocTotal:    2621440 kB
> > VmallocUsed:      234696 kB
> > VmallocChunk:    2235484 kB
> > HardwareCorrupted:     0 kB
> > AnonHugePages:     59392 kB
> > HugePages_Total:       0
> > HugePages_Free:        0
> > HugePages_Rsvd:        0
> > HugePages_Surp:        0
> > Hugepagesize:       2048 kB
> > DirectMap4k:        8184 kB
> > DirectMap2M:      501760 kB
> > 0x5f9fe000-0x5fa00000    8192 acpi_os_map_memory+0xa0/0x103 phys=f7ef8000 ioremap
> > 0x5fa00000-0x5fa05000   20480 acpi_os_map_memory+0xa0/0x103 phys=f7ef4000 ioremap
> > 0x5fa05000-0x5fa11000   49152 zisofs_init+0xd/0x1c pages=11 vmalloc N0=11
> > 0x5fa14000-0x5fa16000    8192 mod_init+0x131/0x1b8 phys=ffbc0000 ioremap
> > 0x5fa16000-0x5fa18000    8192 twa_probe+0x3e4/0x762 phys=fb200000 ioremap
> > 0x5fa18000-0x5fa1a000    8192 twa_probe+0x3e4/0x762 phys=fb500000 ioremap
> > 0x5fa23000-0x5fa28000   20480 module_alloc_update_bounds+0xc/0x4b pages=4 vmalloc N0=4
> > 0x5fa37000-0x5fa3d000   24576 module_alloc_update_bounds+0xc/0x4b pages=5 vmalloc N0=5
> > 0x5fa4b000-0x5fa51000   24576 module_alloc_update_bounds+0xc/0x4b pages=5 vmalloc N0=5
> > 0x5fa54000-0x5fa56000    8192 module_alloc_update_bounds+0xc/0x4b pages=1 vmalloc N0=1
> > 0x5fa80000-0x5fa98000   98304 module_alloc_update_bounds+0xc/0x4b pages=23 vmalloc N0=23
> > 0x5fab1000-0x5faba000   36864 module_alloc_update_bounds+0xc/0x4b pages=8 vmalloc N0=8
> > 0x5fad3000-0x5fadc000   36864 module_alloc_update_bounds+0xc/0x4b pages=8 vmalloc N0=8
> > 0x5faeb000-0x5faf0000   20480 module_alloc_update_bounds+0xc/0x4b pages=4 vmalloc N0=4
> > 0x5fafb000-0x5fb00000   20480 module_alloc_update_bounds+0xc/0x4b pages=4 vmalloc N0=4
> > 0x5fb10000-0x5fb18000   32768 module_alloc_update_bounds+0xc/0x4b pages=7 vmalloc N0=7
> > 0x5fb1e000-0x5fb20000    8192 module_alloc_update_bounds+0xc/0x4b pages=1 vmalloc N0=1
> > 0x5fb26000-0x5fb28000    8192 module_alloc_update_bounds+0xc/0x4b
>
> ...
>
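One more note for the archive: the arcstats sizes quoted in the thread are raw byte counts, so they decode with plain shell arithmetic. Notably, arc_meta_limit in the dump is exactly a quarter of c_max, which matches the usual default ratio, so both numbers point at the same undersized cap:

```shell
# Decode the two ARC limits quoted in this thread from bytes to MiB
# using POSIX shell arithmetic:
echo "$((67108864 / 1024 / 1024))"   # c_max          -> 64 MiB
echo "$((16777216 / 1024 / 1024))"   # arc_meta_limit -> 16 MiB (c_max / 4)
```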



More information about the zfs-discuss mailing list