[zfs-discuss] Re: Extreme memory usage during zfs send

Prakash Surya surya1 at llnl.gov
Tue Apr 10 12:16:21 EDT 2012


Well, I can't say for sure, but I don't think this has anything to do
with the VM changes. Can you open a bug on the GitHub project tracker
with this information? Brian might be able to better diagnose this, and
would know whether it is already an open issue.

-- 
Cheers, Prakash

On Fri, Apr 06, 2012 at 05:03:41PM -0700, Chun-Yu wrote:
> Uh-oh, not sure if this is related to the VM changes, but I just
> encountered this:
> 
> [34778.609112] ------------[ cut here ]------------
> [34778.609117] WARNING: at fs/inode.c:966 unlock_new_inode+0x2a/0x4f()
> [34778.609119] Hardware name: 4286CTO
> [34778.609120] Modules linked in: vmnet(O) vmblock(O) vsock(O) vmci(O)
> vmmon(O) zfs(PO) zcommon(PO) znvpair(PO) zavl(PO) zunicode(PO) spl(O)
> zlib_deflate xts gf128mul dm_crypt dm_mod rtc_cmos i2c_i801 iwlwifi
> [last unloaded: vmmon]
> [34778.609132] Pid: 31271, comm: tar Tainted: P           O 3.3.1 #5
> [34778.609133] Call Trace:
> [34778.609138]  [<ffffffff810322ef>] ? warn_slowpath_common+0x78/0x8c
> [34778.609140]  [<ffffffff810dde11>] ? unlock_new_inode+0x2a/0x4f
> [34778.609153]  [<ffffffffa0175e33>] ? zfs_znode_alloc+0x4a5/0x4c2
> [zfs]
> [34778.609170]  [<ffffffffa01768ab>] ? zfs_mknode+0xa5b/0xaf2 [zfs]
> [34778.609179]  [<ffffffffa0172645>] ? zfs_create+0x40d/0x655 [zfs]
> [34778.609182]  [<ffffffff810c2e44>] ? __kmalloc+0x64/0xd9
> [34778.609190]  [<ffffffffa018297a>] ? zpl_create+0x93/0xbd [zfs]
> [34778.609192]  [<ffffffff810d4f26>] ? vfs_create+0x89/0xe0
> [34778.609195]  [<ffffffff810d687f>] ? do_last+0x3ab/0x775
> [34778.609198]  [<ffffffff810d6d3f>] ? path_openat+0xcf/0x37c
> [34778.609201]  [<ffffffff810d70ae>] ? do_filp_open+0x2a/0x6e
> [34778.609205]  [<ffffffff810ad8ab>] ? __split_vma+0x175/0x1f5
> [34778.609208]  [<ffffffff810c289a>] ? kmem_cache_free+0x11/0x87
> [34778.609212]  [<ffffffff810e020e>] ? alloc_fd+0x64/0x109
> [34778.609215]  [<ffffffff810ca80e>] ? do_sys_open+0xf8/0x17f
> [34778.609219]  [<ffffffff81521262>] ? system_call_fastpath+0x16/0x1b
> [34778.609222] ---[ end trace fb3507cd3738427f ]---
> [34778.609229] general protection fault: 0000 [#1] SMP
> [34778.609258] CPU 0
> [34778.609267] Modules linked in: vmnet(O) vmblock(O) vsock(O) vmci(O)
> vmmon(O) zfs(PO) zcommon(PO) znvpair(PO) zavl(PO) zunicode(PO) spl(O)
> zlib_deflate xts gf128mul dm_crypt dm_mod rtc_cmos i2c_i801 iwlwifi
> [last unloaded: vmmon]
> [34778.609372]
> [34778.609380] Pid: 31271, comm: tar Tainted: P        W  O 3.3.1 #5
> LENOVO 4286CTO/4286CTO
> [34778.609406] RIP: 0010:[<ffffffffa01757ad>]  [<ffffffffa01757ad>]
> zfs_inode_destroy+0x50/0xc0 [zfs]
> [34778.609415] RSP: 0018:ffff8801458c18c8  EFLAGS: 00010282
> [34778.609416] RAX: ffff8803d9f5c9b0 RBX: ffff8803d9f5c9d8 RCX:
> dead000000100100
> [34778.609417] RDX: dead000000200200 RSI: ffff8803d9f5ca70 RDI:
> ffff8802ba175470
> [34778.609418] RBP: ffff8802ba175000 R08: 00000000fffffffe R09:
> 00000000fffffffe
> [34778.609420] R10: ffff8803d9f5cad0 R11: 0000000000000000 R12:
> ffff8803d9f5c830
> [34778.609421] R13: ffff8802ba175470 R14: ffff880335177ce8 R15:
> ffff880046b32240
> [34778.609422] FS:  00007f10e5317700(0000) GS:ffff88041e200000(0000)
> knlGS:0000000000000000
> [34778.609424] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> [34778.609425] CR2: 00007f10e52e4000 CR3: 000000013bace000 CR4:
> 00000000000406f0
> [34778.609426] DR0: 0000000000000000 DR1: 0000000000000000 DR2:
> 0000000000000000
> [34778.609427] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7:
> 0000000000000400
> [34778.609429] Process tar (pid: 31271, threadinfo ffff8801458c0000,
> task ffff8802ff5a19c0)
> [34778.609430] Stack:
> [34778.609431]  ffff8803d9f5ca70 ffff8802ba175000 ffff8803d9f5c9d8
> 0000000000000000
> [34778.609432]  ffff88040bcd4970 ffffffffa0175e3b 0000000000000000
> ffff88010000002c
> [34778.609434]  ffff880400000000 000000000008b407 0000000000000002
> ffff8803d9f5c950
> [34778.609436] Call Trace:
> [34778.609443]  [<ffffffffa0175e3b>] ? zfs_znode_alloc+0x4ad/0x4c2
> [zfs]
> [34778.609451]  [<ffffffffa01768ab>] ? zfs_mknode+0xa5b/0xaf2 [zfs]
> [34778.609459]  [<ffffffffa0172645>] ? zfs_create+0x40d/0x655 [zfs]
> [34778.609461]  [<ffffffff810c2e44>] ? __kmalloc+0x64/0xd9
> [34778.609467]  [<ffffffffa018297a>] ? zpl_create+0x93/0xbd [zfs]
> [34778.609469]  [<ffffffff810d4f26>] ? vfs_create+0x89/0xe0
> [34778.609471]  [<ffffffff810d687f>] ? do_last+0x3ab/0x775
> [34778.609473]  [<ffffffff810d6d3f>] ? path_openat+0xcf/0x37c
> [34778.609475]  [<ffffffff810d70ae>] ? do_filp_open+0x2a/0x6e
> [34778.609477]  [<ffffffff810ad8ab>] ? __split_vma+0x175/0x1f5
> [34778.609479]  [<ffffffff810c289a>] ? kmem_cache_free+0x11/0x87
> [34778.609480]  [<ffffffff810e020e>] ? alloc_fd+0x64/0x109
> [34778.609482]  [<ffffffff810ca80e>] ? do_sys_open+0xf8/0x17f
> [34778.609484]  [<ffffffff81521262>] ? system_call_fastpath+0x16/0x1b
> [34778.609485] Code: 48 89 df e8 f6 6f fe ff 4c 8d ad 70 04 00 00 4c
> 89 ef e8 4d 97 3a e1 4c 89 e0 4c 89 ef 48 03 85 50 04 00 00 48 8b 08
> 48 8b 50 08 <48> 89 51 08 48 89 0a 48 ba 00 01 10 00 00 00 ad de 48 b9
> 00 02
> [34778.609496] RIP  [<ffffffffa01757ad>] zfs_inode_destroy+0x50/0xc0
> [zfs]
> [34778.609504]  RSP <ffff8801458c18c8>
> [34778.999055] ---[ end trace fb3507cd37384280 ]---
> [34779.000891] ------------[ cut here ]------------
> [34779.000897] WARNING: at fs/inode.c:966 unlock_new_inode+0x2a/0x4f()
> [34779.000898] Hardware name: 4286CTO
> [34779.000899] Modules linked in: vmnet(O) vmblock(O) vsock(O) vmci(O)
> vmmon(O) zfs(PO) zcommon(PO) znvpair(PO) zavl(PO) zunicode(PO) spl(O)
> zlib_deflate xts gf128mul dm_crypt dm_mod rtc_cmos i2c_i801 iwlwifi
> [last unloaded: vmmon]
> [34779.000912] Pid: 31273, comm: ebuild.sh Tainted: P      D W  O
> 3.3.1 #5
> [34779.000914] Call Trace:
> [34779.000918]  [<ffffffff810322ef>] ? warn_slowpath_common+0x78/0x8c
> [34779.000920]  [<ffffffff810dde11>] ? unlock_new_inode+0x2a/0x4f
> [34779.000933]  [<ffffffffa0175e33>] ? zfs_znode_alloc+0x4a5/0x4c2
> [zfs]
> [34779.000942]  [<ffffffffa01768ab>] ? zfs_mknode+0xa5b/0xaf2 [zfs]
> [34779.000950]  [<ffffffffa0172645>] ? zfs_create+0x40d/0x655 [zfs]
> [34779.000954]  [<ffffffff810c2e6e>] ? __kmalloc+0x8e/0xd9
> [34779.000961]  [<ffffffffa018297a>] ? zpl_create+0x93/0xbd [zfs]
> [34779.000964]  [<ffffffff810d4f26>] ? vfs_create+0x89/0xe0
> [34779.000966]  [<ffffffff810d687f>] ? do_last+0x3ab/0x775
> [34779.000968]  [<ffffffff810d6d3f>] ? path_openat+0xcf/0x37c
> [34779.000971]  [<ffffffff810d70ae>] ? do_filp_open+0x2a/0x6e
> [34779.000973]  [<ffffffff810ad8ab>] ? __split_vma+0x175/0x1f5
> [34779.000975]  [<ffffffff810c289a>] ? kmem_cache_free+0x11/0x87
> [34779.000978]  [<ffffffff810e020e>] ? alloc_fd+0x64/0x109
> [34779.000980]  [<ffffffff810ca80e>] ? do_sys_open+0xf8/0x17f
> [34779.000982]  [<ffffffff815224df>] ? sysenter_dispatch+0x7/0x1a
> [34779.000984] ---[ end trace fb3507cd37384281 ]---
> 
> Chun-Yu
> 
> On Apr 4, 8:27 pm, Prakash Surya <sur... at llnl.gov> wrote:
> > Thanks for the feedback! :) I'm glad to hear that patch is behaving
> > itself.
> >
> > --
> > Cheers,
> > Prakash
> >
> > On Wed, Apr 04, 2012 at 08:26:48AM -0700, Chun-Yu wrote:
> > > Great, thanks!  It turns out that even with swap enabled, I was still
> > > hitting occasional issues with the Intel graphics driver, but I've now
> > > been running your vm branch for 3 days now with no problems… I'll keep
> > > an eye on things and let you know if any more problems crop up!
> >
> > > Chun-Yu
> >
> > > On Apr 2, 1:15 pm, Brian Behlendorf <behlendo... at llnl.gov> wrote:
> > > > Hi Chun-Yu,
> >
> > > > Yes, if memory is available, ZFS will attempt to use it for caching.
> > > > It will be released to other applications as needed; however, this
> > > > code is still being refined.
> >
> > > > I've had a side branch for a while now where I've been testing some VM
> > > > improvements.  I'd be very interested to hear if the changes on this
> > > > branch help with your Intel graphics driver issue.
> >
> > > > https://github.com/behlendorf/zfs/tree/vm
> >
> > > > --
> > > > Thanks,
> > > > Brian
> >
> > > > On Sat, 2012-03-31 at 17:33 -0700, Chun-Yu wrote:
> > > > > While doing a large ZFS send, ZFS (0.6.0-rc8) seems to eat up almost
> > > > > all available memory before finally releasing most of it.  I don't
> > > > > have any module tuning parameters set up, so everything is at the
> > > > > default settings.  Here's a screenshot of Gnome's system monitor when
> > > > > it happens:
> >
> > > > > http://i.imgur.com/ie3cC.png
> >
> > > > > I actually had to add a tiny bit of swap, since every time this
> > > > > happened, my Intel Linux graphics driver would crash (unrecoverable
> > > > > without a reboot) and leave some error in dmesg.
> >
> > > > > Btw, thanks to Brian and everyone else for all their hard work on ZFS…
> > > > > it's been fantastic to finally be able to convert my home server from
> > > > > Solaris to Linux!
> >
> > > > > Chun-Yu
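
For anyone hitting the same ARC growth during a large zfs send, one
common mitigation is to cap the ARC via the zfs_arc_max module
parameter. The sketch below is illustrative only and is not taken from
this thread; the 4 GiB figure is an arbitrary example, and the limit
should be sized to the machine's workload:

```shell
# Sketch: cap the ZFS ARC at 4 GiB so caching cannot consume nearly
# all RAM. The 4 GiB value is an arbitrary example for illustration.
ARC_MAX=$((4 * 1024 * 1024 * 1024))   # 4 GiB, expressed in bytes

# Emit a modprobe.d line that would make the cap persistent, e.g. in
# /etc/modprobe.d/zfs.conf:
echo "options zfs zfs_arc_max=${ARC_MAX}"

# On a live system (as root), the same limit can be applied at runtime:
#   echo ${ARC_MAX} > /sys/module/zfs/parameters/zfs_arc_max
# Current ARC usage can be inspected with:
#   grep -E '^(size|c_max)' /proc/spl/kstat/zfs/arcstats
```

The runtime write takes effect immediately, while the modprobe.d line
only applies the next time the zfs module is loaded.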
