List: illumos-discuss
Subject: Re: [discuss] Re: increasing kernel memory use
From: "Garrett D'Amore" <garrett () damore ! org>
Date: 2015-10-26 22:38:13
Message-ID: 003745E8-02A7-4B2A-AEF6-EFF189236895 () damore ! org
There is still ordinary buffer cache, such as that used by any ufs or tmpfs filesystems.
Sent from my iPhone
> On Oct 26, 2015, at 12:38 PM, Richard PALO <richard@netbsd.org> wrote:
>
> Le 26/10/15 17:42, Robert Mustacchi a écrit :
> > On 10/26/15 9:39 , Richard PALO wrote:
> > > I just set kmem_flags = 0xf in /etc/system and rebooted.
> > >
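As context for the line above, a sketch of the usual way to enable full kmem debugging (0xf combines KMF_AUDIT, KMF_DEADBEEF, KMF_REDZONE and KMF_CONTENTS) is a single tunable in /etc/system, which takes effect at the next boot:

```
* /etc/system -- enable full kmem debugging (requires a reboot)
set kmem_flags = 0xf
```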
> > > Now, immediately after reboot I already see:
> > > > richard@omnis:/home/richard$ echo ::kmem_verify|pfexec mdb -k
> > > > Cache Name Addr Cache Integrity
> > > > ...
> > > > kmem_bufctl_audit_cache ffffff0900424a88 8 corrupt buffers
> > > > ...
> > > > streams_dblk_80 ffffff090043ea88 1 corrupt buffer
> > > > ...
> >
> > While I haven't looked at the exact implementation of ::kmem_verify, I
> > am rather skeptical it could ever work for a live system with mdb -k.
> > It's not like the kernel is stopping when you're reading something like
> > this, so it's clearly going to be racy. If you dumped the system and
> > then looked at the ::kmem_verify when the system was static, or you used
> > kmdb to pause the system and run the ::kmem_verify, I think you'll get a
> > much more realistic answer.
> >
> > Robert
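A minimal post-mortem workflow along the lines Robert suggests (paths and dump numbers here are illustrative; savecore is normally run at boot per dumpadm configuration) would be:

```
# reboot -d              # panic the system and take a crash dump
# savecore -v            # if the dump was not already extracted at boot
# mdb -k unix.0 vmcore.0 # debug the static crash-dump image
> ::kmem_verify          # now runs against a frozen snapshot, not a live kernel
```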
>
> Okay, yeah I see these are more post-mortem.
> When my gate build finished, I did a reboot -d, and this time no corruption was found...
> Naturally the memstat usage remains.
>
> I'm still at a loss for the steadily growing kmem usage... especially since
> ZFS data bufs are accounted for separately.
>
> > > ::memstat
> > Page Summary Pages MB %Tot
> > ------------ ---------------- ---------------- ----
> > Kernel 1792379 7001 21%
> > ZFS File Data 3234435 12634 39%
> > Anon 203391 794 2%
> > Exec and libs 3195 12 0%
> > Page cache 32765 127 0%
> > Free (cachelist) 120602 471 1%
> > Free (freelist) 2998307 11712 36%
> >
> > Total 8385074 32754
> > Physical 8385072 32754
> > > ::kmastat
> > ...
> > ------------------------------ ----- --------- --------- ------ ---------- -----
> > Total [hat_memload] 67.3M 418681942 0
> > Total [kmem_msb] 1.36G 28747926 0
> > Total [kmem_firewall] 812M 10899172 0
> > Total [kmem_va] 1.21G 278065 0
> > Total [kmem_default] 1.33G 1620719363 0
> > Total [kmem_io_64G] 76M 9728 0
> > Total [kmem_io_4G] 44K 35 0
> > Total [kmem_io_2G] 12K 5 0
> > Total [umem_np] 0 475 0
> > Total [id32] 4K 81 0
> > Total [zfs_file_data] 773M 51574 0
> > Total [zfs_file_data_buf] 12.3G 1189691 0
> > Total [segkp] 640K 675115 0
> > Total [ip_minor_arena_sa] 64 2089 0
> > Total [ip_minor_arena_la] 64 1692 0
> > Total [spdsock] 0 1 0
> > Total [namefs_inodes] 64 260 0
> > ------------------------------ ----- --------- --------- ------ ---------- -----
> >
> > vmem memory memory memory alloc alloc
> > name in use total import succeed fail
> > ------------------------------ --------- ---------- --------- ---------- -----
> > heap 6.01G 987G 0 10981607 0
> > vmem_metadata 620M 620M 620M 38864 0
> > vmem_seg 605M 605M 605M 38730 0
> > vmem_hash 14.1M 14.1M 14.1M 69 0
> > vmem_vmem 288K 320K 284K 92 0
> > static 0 0 0 0 0
> > static_alloc 0 0 0 0 0
> > hat_memload 67.3M 67.3M 67.3M 18283 0
> > kstat 786K 824K 760K 2424 0
> > kmem_metadata 1.40G 1.40G 1.40G 356335 0
> > kmem_msb 1.36G 1.36G 1.36G 355878 0
> > kmem_cache 586K 604K 604K 520 0
> > kmem_hash 37.8M 37.8M 37.8M 802 0
> > kmem_log 1.23G 1.23G 1.23G 12 0
> > kmem_firewall_va 1.04G 1.04G 1.04G 10899421 0
> > kmem_firewall 812M 812M 812M 10899175 0
> > kmem_oversize 254M 254M 254M 247 0
> > mod_sysfile 358 4K 4K 10 0
> > kmem_va 1.33G 1.33G 1.33G 11126 0
> > kmem_default 1.33G 1.33G 1.33G 279223 0
> > kmem_io_64G 76M 76M 76M 9728 0
> > kmem_io_4G 44K 44K 44K 11 0
> > kmem_io_2G 68K 68K 68K 80 0
> > kmem_io_16M 0 0 0 0 0
> > bp_map 0 0 0 150 0
> > umem_np 0 0 0 398 0
> > ksyms 2.13M 2.43M 2.43M 498 0
> > ctf 964K 1.09M 1.09M 492 0
> > heap_core 2.02M 888M 0 61 0
> > heaptext 9.71M 64M 0 197 0
> > module_text 11.4M 11.7M 9.71M 497 0
> > id32 4K 4K 4K 1 0
> > module_data 1.22M 2.30M 2.01M 636 0
> > logminor_space 28 256K 0 36 0
> > taskq_id_arena 122 2.00G 0 204 0
> > zfs_file_data 12.4G 32.0G 0 107884 0
> > zfs_file_data_buf 12.3G 12.3G 12.3G 153241 0
> > device 1.65M 1G 0 33002 0
> > segkp 32.9M 2G 0 25872 0
> > mac_minor_ids 8 127K 0 9 0
> > rctl_ids 41 32.0K 0 41 0
> > zoneid_space 0 9.76K 0 0 0
> > taskid_space 40 977K 0 88 0
> > pool_ids 1 977K 0 1 0
> > contracts 43 2.00G 0 94 0
> > ddi_periodic 0 1023 0 0 0
> > ip_minor_arena_sa 64 256K 0 17 0
> > ip_minor_arena_la 64 4.00G 0 14 0
> > lport-instances 0 64K 0 0 0
> > rport-instances 0 64K 0 0 0
> > ibcm_local_sid 0 4.00G 0 0 0
> > ibcm_ip_sid 0 64.0K 0 0 0
> > lib_va_32 7.68M 1.99G 0 20 0
> > tl_minor_space 308 256K 0 1433 0
> > keysock 0 4.00G 0 0 0
> > spdsock 0 4.00G 0 1 0
> > namefs_inodes 64 64K 0 1 0
> > lib_va_64 105M 125T 0 630 0
> > Hex0xffffff09611a4488_minor 0 4.00G 0 0 0
> > Hex0xffffff09611a4490_minor 0 4.00G 0 0 0
> > devfsadm_event_channel 1 101 0 1 0
> > devfsadm_event_channel 1 2 0 1 0
> > syseventd_channel 1 101 0 1 0
> > syseventd_channel 1 2 0 1 0
> > syseventconfd_door 0 101 0 0 0
> > syseventconfd_door 1 2 0 1 0
> > dtrace 68 4.00G 0 40830 0
> > dtrace_minor 0 4.00G 0 0 0
> > ipf_minor 0 4.00G 0 0 0
> > ipmi_id_space 1 127 0 3 0
> > eventfd_minor 80 4.00G 0 219 0
> > logdmux_minor 0 256 0 0 0
> > ptms_minor 2 16 0 2 0
> > Client_id_space 0 128K 0 0 0
> > ClntIP_id_space 0 1M 0 0 0
> > OpenOwner_id_space 0 1M 0 0 0
> > OpenStateID_id_space 0 1M 0 0 0
> > LockStateID_id_space 0 1M 0 0 0
> > Lockowner_id_space 0 1M 0 0 0
> > DelegStateID_id_space 0 1M 0 0 0
> > shmids 0 64 0 67 0
> > > ::findleaks
> > BYTES LEAKED VMEM_SEG CALLER
> > 393256 1 ffffff098082bde8 modinstall+0x113
> > ------------------------------------------------------------------------
> > Total 1 kmem_oversize leak, 393256 bytes
> >
> > CACHE LEAKED BUFCTL CALLER
> > ffffff0900436548 1 ffffff097d62d298 devi_attach+0x9e
> > ------------------------------------------------------------------------
> > Total 1 buffer, 8192 bytes
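When findleaks flags a buffer and kmem auditing is enabled, the BUFCTL address it prints can be fed back to mdb to recover the full allocation stack; a sketch, using an address from the output above:

```
> ffffff097d62d298::bufctl -v    # show the timestamp, thread, and
                                 # call stack of the leaked allocation
```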
>
> Nobody's concerned about 20 to over 40% kernel memory usage on a 32G system other
> than me?
> I guess running a DEBUG kernel can explain 1.27G in the audit cache (roughly 15% of
> the kernel memory usage). I'll run another bulk build to get to the +40% and see if
> it's this cache.
> I'm curious, though: if it's only a history, why isn't it written out to disk so
> that it doesn't take up [so much] live memory? The limit could even be
> configurable...
> --
> Richard PALO
>