SLUB: Support for performance statistics (commit 8ff12cfc00, Christoph Lameter, 2008-02-07)
The statistics provided here allow the monitoring of allocator behavior at
the cost of some (minimal) loss of performance. The counters are placed in
SLUB's per-cpu data structure. Adding the statistics may grow that structure
beyond one cacheline, which increases the cache footprint of SLUB.
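
As a rough sketch of that layout (illustrative, not the verbatim kernel
definitions; the authoritative versions live in include/linux/slub_def.h and
mm/slub.c, and only a few of the event names are shown here):

enum stat_item {
        ALLOC_FASTPATH,         /* allocation served from the cpu slab */
        ALLOC_SLOWPATH,         /* allocation that had to take the slow path */
        FREE_FASTPATH,          /* free to the cpu slab */
        FREE_SLOWPATH,          /* free that had to take the slow path */
        /* ... one entry per event reported by slabinfo ... */
        NR_SLUB_STAT_ITEMS
};

struct kmem_cache_cpu {
        void **freelist;        /* pointer to the next free object */
        struct page *page;      /* the slab we are allocating from */
        int node;
        unsigned int offset;
        unsigned int objsize;
#ifdef CONFIG_SLUB_STATS
        unsigned stat[NR_SLUB_STAT_ITEMS];      /* the per-cpu event counters */
#endif
};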

There is a compile option to enable/disable the inclusion of the runtime
statistics; it is off by default.
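
The option is CONFIG_SLUB_STATS (added to lib/Kconfig.debug). A minimal
sketch of how the counter update compiles away entirely when the option is
off, modeled on SLUB's stat() helper (treat the exact form as illustrative):

static inline void stat(struct kmem_cache_cpu *c, enum stat_item si)
{
#ifdef CONFIG_SLUB_STATS
        c->stat[si]++;          /* counter bump, only when statistics are compiled in */
#endif
}

The hot paths can then call stat(c, ALLOC_FASTPATH) and friends at no cost
in a !CONFIG_SLUB_STATS build.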

The slabinfo tool is enhanced to support these statistics via the following
options:

-D 	Switches the line of information displayed for a slab from size
	mode to activity mode.

-A	Sorts the slabs displayed by activity. This allows the display of
	the slabs most important to the performance of a certain load.

-r	Reports detailed statistics for a slab (implied when a cache name
	is given, see below).

Example (tbench load):

slabinfo -AD		-> Shows the most active slabs

Name                   Objects    Alloc     Free  %Fast (alloc free)
skbuff_fclone_cache         33 111953835 111953835  99  99
:0000192                  2666  5283688  5281047  99  99
:0001024                   849  5247230  5246389  83  83
vm_area_struct            1349   119642   118355  91  22
:0004096                    15    66753    66751  98  98
:0000064                  2067    25297    23383  98  78
dentry                   10259    28635    18464  91  45
:0000080                 11004    18950     8089  98  98
:0000096                  1703    12358    10784  99  98
:0000128                   762    10582     9875  94  18
:0000512                   184     9807     9647  95  81
:0002048                   479     9669     9195  83  65
anon_vma                   777     9461     9002  99  71
kmalloc-8                 6492     9981     5624  99  97
:0000768                   258     7174     6931  58  15

So the skbuff_fclone_cache is of highest importance for the tbench load.
There is also a pretty high load on the 192-byte slab. Look for its aliases:

slabinfo -a | grep 000192
:0000192     <- xfs_btree_cur filp kmalloc-192 uid_cache tw_sock_TCP
	request_sock_TCPv6 tw_sock_TCPv6 skbuff_head_cache xfs_ili

Likely skbuff_head_cache.


The statistics of the skbuff_fclone_cache can be inspected with

slabinfo skbuff_fclone_cache	-> -r option implied when a cache name is given


.... Usual output ...

Slab Perf Counter       Alloc     Free %Al %Fr
--------------------------------------------------
Fastpath             111953360 111946981  99  99
Slowpath                 1044     7423   0   0
Page Alloc                272      264   0   0
Add partial                25      325   0   0
Remove partial             86      264   0   0
RemoteObj/SlabFrozen      350     4832   0   0
Total                111954404 111954404

Flushes       49 Refill        0
Deactivate Full=325(92%) Empty=0(0%) ToHead=24(6%) ToTail=1(0%)

Looks good because the fastpath is overwhelmingly taken.


skbuff_head_cache:

Slab Perf Counter       Alloc     Free %Al %Fr
--------------------------------------------------
Fastpath              5297262  5259882  99  99
Slowpath                 4477    39586   0   0
Page Alloc                937      824   0   0
Add partial                 0     2515   0   0
Remove partial           1691      824   0   0
RemoteObj/SlabFrozen     2621     9684   0   0
Total                 5301739  5299468

Deactivate Full=2620(100%) Empty=0(0%) ToHead=0(0%) ToTail=0(0%)


Descriptions of the output:

Total:		The total number of allocations and frees that occurred for a
		slab.

Fastpath:	The number of allocations/frees that used the fastpath.

Slowpath:	Allocations/frees that had to take the slow path instead.

Page Alloc:	Number of calls to the page allocator as a result of slowpath
		processing

Add Partial:	Number of slabs added to the partial list through free or
		alloc (occurs during cpuslab flushes)

Remove Partial:	Number of slabs removed from the partial list as a result of
		allocations retrieving a partial slab or by a free freeing
		the last object of a slab.

RemoteObj/Froz:	RemoteObj: how many times remotely freed objects were
		encountered when a slab was about to be deactivated.
		Frozen: how many times a free could skip list processing
		because the slab was in use as the cpuslab of another
		processor.

Flushes:	Number of times the cpuslab was flushed on request
		(kmem_cache_shrink, may result from races in __slab_alloc)

Refill:		Number of times we were able to refill the cpuslab from
		remotely freed objects for the same slab.

Deactivate:	Statistics on how slabs were deactivated. Shows how they were
		put onto the partial list.
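
To connect these counter names to the code, here is a heavily condensed,
illustrative sketch of the alloc/free pair in the style of mm/slub.c
(interrupt handling, debug checks, node handling and the exact placement of
the slow-path counters are omitted or simplified):

static void *slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node)
{
        struct kmem_cache_cpu *c = get_cpu_slab(s, smp_processor_id());
        void **object = c->freelist;

        if (likely(object)) {
                c->freelist = object[c->offset];        /* pop from the cpu freelist */
                stat(c, ALLOC_FASTPATH);
        } else {
                /*
                 * Refill from a partial slab or the page allocator; the slow
                 * path accounts for Slowpath, Page Alloc, Remove partial etc.
                 */
                object = __slab_alloc(s, gfpflags, node, c);
        }
        return object;
}

static void slab_free(struct kmem_cache *s, struct page *page, void *x)
{
        struct kmem_cache_cpu *c = get_cpu_slab(s, smp_processor_id());
        void **object = x;

        if (likely(page == c->page)) {
                object[c->offset] = c->freelist;        /* push onto the cpu freelist */
                c->freelist = object;
                stat(c, FREE_FASTPATH);
        } else {
                /*
                 * Remote or slow free; Slowpath, Add partial and the
                 * deactivation events are counted in here.
                 */
                __slab_free(s, page, x);
        }
}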

In general, a dominant fastpath is very good. Slowpath handling that avoids
the partial list is also desirable. Any touching of the partial list uses
node-specific locks, which may potentially cause list-lock contention.
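
The partial list mentioned above is per NUMA node and guarded by a per-node
spinlock, which is where such contention would show up. An abridged sketch of
the structures involved (field names follow SLUB, definitions shortened):

struct kmem_cache_node {
        spinlock_t list_lock;           /* protects nr_partial and the list */
        unsigned long nr_partial;
        struct list_head partial;       /* slabs with free objects left */
};

static void add_partial(struct kmem_cache_node *n, struct page *page, int tail)
{
        spin_lock(&n->list_lock);       /* node-wide lock: the contention point */
        n->nr_partial++;
        if (tail)
                list_add_tail(&page->lru, &n->partial); /* queued at the tail */
        else
                list_add(&page->lru, &n->partial);      /* queued at the head */
        spin_unlock(&n->list_lock);
}

Every such insertion or removal is what the Add partial / Remove partial
counters record, and each one takes the node's list_lock; when a cpu slab is
deactivated, the ToHead/ToTail statistics record which end it was queued at.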

Signed-off-by: Christoph Lameter <clameter@sgi.com>