Useful /proc entries for memory analysis: /proc/slabinfo, /proc/buddyinfo, /proc/zoneinfo, /proc/meminfo

# slabtop
 Active / Total Objects (% used)    : 347039 / 361203 (96.1%)
 Active / Total Slabs (% used)      : 24490 / 24490 (100.0%)
 Active / Total Caches (% used)     : 88 / 170 (51.8%)
 Active / Total Size (% used)       : 98059.38K / 99927.38K (98.1%)
 Minimum / Average / Maximum Object : 0.02K / 0.28K / 4096.00K

  OBJS  ACTIVE   USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
115625  115344   99%    0.10K   3125       37     12500K buffer_head
 73880   73437   99%    0.19K   3694       20     14776K dentry
 42184   42180   99%    0.99K  10546        4     42184K ext4_inode_cache
 20827   20384   97%    0.06K    353       59      1412K size-64
 16709   13418   80%    0.05K    217       77       868K anon_vma_chain
 15792   15708   99%    0.03K    141      112       564K size-32
 11267   10323   91%    0.20K    593       19      2372K vm_area_struct
 10806   10689   98%    0.64K   1801        6      7204K proc_inode_cache
  9384    5232   55%    0.04K    102       92       408K anon_vma
  7155    7146   99%    0.07K    135       53       540K selinux_inode_security
  7070    7070  100%    0.55K   1010        7      4040K radix_tree_node
  6444    6443   99%    0.58K   1074        6      4296K inode_cache
  5778    5773   99%    0.14K    214       27       856K sysfs_dir_cache
  3816    3765   98%    0.07K     72       53       288K Acpi-Operand
  2208    2199   99%    0.04K     24       92        96K Acpi-Namespace
  1860    1830   98%    0.12K     62       30       248K size-128
  1440    1177   81%    0.19K     72       20       288K size-192
  1220     699   57%    0.19K     61       20       244K filp
   660     599   90%    1.00K    165        4       660K size-1024

# cat /proc/meminfo | grep HugePage
AnonHugePages:      2048 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0

Enabling huge pages:
1. vi /etc/sysctl.conf and add: vm.nr_hugepages = 10
2. sysctl -p
# cat /proc/meminfo | grep Huge
AnonHugePages:      2048 kB
HugePages_Total:      10
HugePages_Free:       10
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
3. Make the pages available to the application (a C sketch of the mmap path follows below):
# mkdir /hugepages
# mount -t hugetlbfs none /hugepages
# dd if=/dev/zero of=/hugepages/a.out bs=1M count=5
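The following is a minimal C sketch of the mmap path, not taken from the original notes: it creates a file on the hugetlbfs mount set up in step 3 (the name /hugepages/mapfile is arbitrary) and maps one 2 MB huge page; the mapping length must be a multiple of Hugepagesize.

/* map_hugepage.c - minimal sketch, assuming hugetlbfs is mounted at /hugepages */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define HUGEPAGE_SIZE (2 * 1024 * 1024)   /* matches Hugepagesize: 2048 kB above */

int main(void)
{
    /* the file lives on the hugetlbfs mount; it is accessed through mmap */
    int fd = open("/hugepages/mapfile", O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("open"); return 1; }

    /* length must be a multiple of the huge page size */
    void *addr = mmap(NULL, HUGEPAGE_SIZE, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
    if (addr == MAP_FAILED) { perror("mmap"); return 1; }

    memset(addr, 0, HUGEPAGE_SIZE);       /* touching the page consumes one huge page */
    printf("mapped %d bytes at %p backed by a huge page\n", HUGEPAGE_SIZE, addr);

    munmap(addr, HUGEPAGE_SIZE);
    close(fd);
    return 0;
}

While the program holds the mapping, HugePages_Free and HugePages_Rsvd in /proc/meminfo change accordingly.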
Huge pages (hugetlbfs):
Hugetlbfs support is built on top of the multiple page size support provided by most modern architectures.
Applications can use huge pages in the Linux kernel either through the mmap system call or through the standard SysV shared memory system calls (shmget, shmat); a sketch of the SysV path follows below.
Check the current state with: cat /proc/meminfo | grep HugePage
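For the SysV shared memory path, a minimal sketch along these lines (my addition, not from the original notes) requests a segment backed by huge pages with SHM_HUGETLB; the segment size of one 2 MB huge page is an assumption, and no hugetlbfs mount is needed for this call.

/* shm_hugepage.c - minimal sketch of shmget/shmat with SHM_HUGETLB */
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

#ifndef SHM_HUGETLB
#define SHM_HUGETLB 04000      /* value from <linux/shm.h>, in case libc headers omit it */
#endif

#define SEG_SIZE (2 * 1024 * 1024)   /* one 2 MB huge page (assumed Hugepagesize) */

int main(void)
{
    /* SHM_HUGETLB asks the kernel to back the segment with huge pages */
    int shmid = shmget(IPC_PRIVATE, SEG_SIZE, SHM_HUGETLB | IPC_CREAT | 0600);
    if (shmid < 0) { perror("shmget"); return 1; }

    void *addr = shmat(shmid, NULL, 0);
    if (addr == (void *)-1) { perror("shmat"); return 1; }

    memset(addr, 0, SEG_SIZE);            /* touch it so the huge pages are allocated */
    printf("huge-page shared segment attached at %p\n", addr);

    shmdt(addr);
    shmctl(shmid, IPC_RMID, NULL);        /* remove the segment when done */
    return 0;
}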
Improving TLB performance:
The kernel must usually flush TLB entries on a context switch.
Use free, contiguous physical pages:
  automatically, via the buddy allocator (/proc/buddyinfo)
  manually, via huge pages (which are not pageable)
Linux supports large pages through the hugepages mechanism, sometimes known as bigpages, largepages or the hugetlbfs filesystem.
Consequences:
  a TLB cache hit is more likely
  fewer page table entries (PTEs) need to be visited
Tuning TLB performance:
Check the TLB and huge page sizes:
  x86info -a | grep "Data TLB"
  dmesg
  cat /proc/meminfo
Enable huge pages:
  1. In /etc/sysctl.conf: vm.nr_hugepages = n
  2. Or as a kernel parameter passed when the operating system boots: hugepages=n
Configure hugetlbfs if needed by the application:
  The mmap system call requires that hugetlbfs is mounted:
    mkdir /hugepages
    mount -t hugetlbfs none /hugepages
  The shmat and shmget system calls do not require hugetlbfs.
strace command:
Trace every system call made by a program:
  strace -o /tmp/strace.out -p PID
  grep mmap /tmp/strace.out
Summarize system calls:
  strace -c -p PID, or strace -c COMMAND
Other uses:
  investigate lock contention
  identify problems caused by improper file permissions
  pinpoint IO problems
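To have something concrete for the grep above, here is a throwaway test program of my own (not part of the original notes) that performs one explicit mmap; tracing it with strace shows that call alongside the loader's own mappings.

/* mmap_trace.c - tiny program whose mmap call can be observed with strace */
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    /* one anonymous 1 MB mapping; strace shows the mmap syscall and its flags */
    void *p = mmap(NULL, 1 << 20, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }
    printf("mapped 1 MB at %p\n", p);
    munmap(p, 1 << 20);
    return 0;
}

Example run: strace -o /tmp/strace.out ./mmap_trace && grep mmap /tmp/strace.out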
Strategies for using memory (memory optimization):
1. Reduce overhead for tiny memory objects:
   slab cache: cat /proc/slabinfo
2. Reduce or defer service time for slower subsystems:
   filesystem metadata: buffer cache (slab cache), which caches file metadata
   disk IO: page cache, which caches file data
   interprocess communication: shared memory
   network IO: buffer cache, ARP cache, connection tracking
3. Considerations when tuning memory:
   How should pages be reclaimed to avoid memory pressure?
   Larger writes are usually more efficient due to re-sorting.
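Several of these caches appear as fields in /proc/meminfo, so a small sketch like the following (my addition, using only the standard /proc/meminfo field names) can be used for quick monitoring of the buffer cache, page cache and slab usage.

/* meminfo_caches.c - print the Buffers, Cached and Slab lines from /proc/meminfo */
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("/proc/meminfo", "r");
    if (!f) { perror("/proc/meminfo"); return 1; }

    char line[256];
    while (fgets(line, sizeof(line), f)) {
        /* keep only the cache-related fields discussed above */
        if (strncmp(line, "Buffers:", 8) == 0 ||
            strncmp(line, "Cached:", 7) == 0 ||
            strncmp(line, "Slab:", 5) == 0)
            fputs(line, stdout);
    }
    fclose(f);
    return 0;
}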
Memory parameter: vm.min_free_kbytes
1. If memory is completely exhausted, the system can crash.
2. The kernel therefore keeps a reserve of free memory; when a process requests an allocation and free memory runs short, other pages are swapped out to SWAP to make enough room for the request.
Tuning vm.min_free_kbytes should only be necessary when an application regularly needs to allocate a large block of memory and then frees that same memory.
When it applies: it may well be the case that the system has too little disk bandwidth, too little CPU power, or too little memory to handle its load.
Linux exposes min_free_kbytes as the threshold at which the system starts reclaiming memory, i.e. it controls how much free memory is kept. The higher the value, the earlier the kernel starts reclaiming and the more free memory remains. (A small sketch for checking the current value follows below.)
http://www.cnblogs.com/itfriend/archive/2011/12/14/2287160.html
Consequences:
  reduces service time for demand paging
  that memory is not available for other usage
  can cause pressure on ZONE_NORMAL
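A small sketch of my own (no values taken from the original) that reads the current reserve from /proc/sys/vm/min_free_kbytes and compares it with MemFree from /proc/meminfo; the permanent setting still belongs in /etc/sysctl.conf.

/* min_free_check.c - compare vm.min_free_kbytes with the current MemFree */
#include <stdio.h>

int main(void)
{
    long min_free = 0, mem_free = 0;
    char line[256];

    FILE *f = fopen("/proc/sys/vm/min_free_kbytes", "r");
    if (!f) { perror("min_free_kbytes"); return 1; }
    if (fscanf(f, "%ld", &min_free) != 1) { fclose(f); return 1; }
    fclose(f);

    f = fopen("/proc/meminfo", "r");
    if (!f) { perror("/proc/meminfo"); return 1; }
    while (fgets(line, sizeof(line), f))
        if (sscanf(line, "MemFree: %ld kB", &mem_free) == 1)
            break;
    fclose(f);

    printf("vm.min_free_kbytes = %ld kB, MemFree = %ld kB\n", min_free, mem_free);
    if (mem_free < 2 * min_free)      /* arbitrary warning margin, not a kernel rule */
        printf("warning: free memory is close to the reclaim threshold\n");
    return 0;
}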
Case study: memory usage on a Linux server exceeded the alert threshold.

Troubleshooting:
First, look at overall memory usage with free:
             total       used       free     shared    buffers     cached
Mem:      24675796   24587144      88652          0     357012    1612488
-/+ buffers/cache:   22617644    2058152
Swap:      2096472     108224    1988248
Total memory is 24675796 KB, 22617644 KB is used, and only 2058152 KB is left.

Next, in top (shift + M to sort by memory) the largest process uses only about 18 GB and all other processes are negligible. So where did the remaining memory (22617644 KB minus 18 GB, roughly 4 GB) go?

cat /proc/meminfo shows nearly 4 GB (3688732 KB) of slab memory:
......
Mapped:          25212 kB
Slab:          3688732 kB
PageTables:      43524 kB
......

Slab holds caches of kernel data structures; slabtop shows how this memory is used:
    OBJS   ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
13926348 13926348 100%    0.21K 773686       18   3494744K dentry_cache
  334040   262056  78%    0.09K   8351       40     33404K buffer_head
  151040   150537  99%    0.74K  30208        5    120832K ext3_inode_cache
Most of it (about 3.5 GB) is dentry_cache.

Resolution (a C version of the same procedure follows below):
1. Write to /proc/sys/vm/drop_caches to release the cache memory held by the slab (from the official drop_caches documentation):
Writing to this will cause the kernel to drop clean caches, dentries and inodes from memory, causing that memory to become free.
To free pagecache:
  echo 1 > /proc/sys/vm/drop_caches
To free dentries and inodes:
  echo 2 > /proc/sys/vm/drop_caches
To free pagecache, dentries and inodes:
  echo 3 > /proc/sys/vm/drop_caches
As this is a non-destructive operation, and dirty objects are not freeable, the user should run "sync" first in order to make sure all cached objects are freed.
This tunable was added in 2.6.16.
2. Method 1 requires root. Without root but with sudo privileges, the same can be done through sysctl:
$ sync
$ sudo sysctl -w vm.drop_caches=3
$ sudo sysctl -w vm.drop_caches=0   # restore drop_caches
Afterwards, sudo sysctl -a | grep drop_caches confirms whether the setting took effect.
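The same procedure can also be scripted; this is a minimal sketch of my own that mirrors the commands above: it syncs dirty data first, then drops pagecache, dentries and inodes, and it has to run as root.

/* drop_caches.c - sync, then drop pagecache, dentries and inodes (run as root) */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    sync();                              /* flush dirty data first, as the documentation advises */

    FILE *f = fopen("/proc/sys/vm/drop_caches", "w");
    if (!f) { perror("drop_caches (root required?)"); return 1; }

    fputs("3\n", f);                     /* 3 = pagecache + dentries + inodes */
    fclose(f);

    printf("clean caches dropped; compare Slab and MemFree in /proc/meminfo before and after\n");
    return 0;
}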