author     Richard Braun <rbraun@sceen.net>  2013-04-21 01:30:27 +0200
committer  Richard Braun <rbraun@sceen.net>  2013-04-21 01:30:27 +0200
commit     24832de763ad58be6afdcff6c761b54ccee42667 (patch)
tree       b6858a1d7ccd7ed8948a86085b130623e83190df /kern/slab.h
parent     b54c7e38d8871a0c8b7694e7fd02062cf7ca988a (diff)
Rework slab lists handling
Don't enforce strong ordering of partial slabs. Separating partial slabs
from free slabs is already effective against fragmentation, and sorting
would sometimes cause pathological scalability issues. In addition, store
new slabs (whether free or partial) in LIFO order for better cache usage.
* kern/slab.c (kmem_cache_grow): Insert new slab at the head of the slabs list.
(kmem_cache_alloc_from_slab): Likewise. In addition, don't sort partial slabs.
(kmem_cache_free_to_slab): Likewise.
* kern/slab.h: Remove comment about partial slabs sorting.
Diffstat (limited to 'kern/slab.h')
 kern/slab.h | 7 -------
 1 file changed, 0 insertions(+), 7 deletions(-)
diff --git a/kern/slab.h b/kern/slab.h
index 47bef218..b842fb74 100644
--- a/kern/slab.h
+++ b/kern/slab.h
@@ -155,13 +155,6 @@ typedef void (*kmem_slab_free_fn_t)(vm_offset_t, vm_size_t);
  * Cache of objects.
  *
  * Locking order : cpu_pool -> cache. CPU pools locking is ordered by CPU ID.
- *
- * The partial slabs list is sorted by slab references. Slabs with a high
- * number of references are placed first on the list to reduce fragmentation.
- * Sorting occurs at insertion/removal of buffers in a slab. As the list
- * is maintained sorted, and the number of references only changes by one,
- * this is a very cheap operation in the average case and the worst (linear)
- * case is very unlikely.
  */
 struct kmem_cache {
 #if SLAB_USE_CPU_POOLS