author		Richard Braun <rbraun@sceen.net>	2016-02-20 00:48:38 +0100
committer	Richard Braun <rbraun@sceen.net>	2016-02-20 00:48:38 +0100
commit		e3cdb6f6ad3f2ef690cc5822178efb3bde93fa9a (patch)
tree		711130cd6f53d0ae641ea70c4699bd11d7361082 /vm
parent		f9ac76867be8c7f6943ca42d93521e5ad97e42a4 (diff)
Avoid slab allocation failures caused by memory fragmentation
Since the slab allocator was changed to sit directly on top of the
physical allocator, allocation failures caused by fragmentation have
been observed, as one could expect. This change makes the slab allocator
fall back to kernel virtual memory when allocating larger-than-page
slabs. This solution avoids the complexity of alternatives such as page
mobility, and it suits a microkernel: unlike a monolithic kernel with
loadable modules, a microkernel cannot be extended with arbitrary,
uncontrolled usage patterns. Large objects are therefore rare and
infrequently allocated, which is compatible with serving them from
kernel virtual memory.
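The core of the change is a simple dispatch: single-page slabs keep
coming straight from the physical allocator through the direct mapping,
while larger-than-page slabs are served from kernel virtual memory,
where physically discontiguous pages can back one slab. A minimal
sketch of what this dispatch might look like, using the names from the
change log below (the signature and body are assumptions, not the
actual gnumach code):

	/* Dispatch on the cache flag: direct-mapped physical pages
	   for single-page slabs, kernel virtual memory otherwise. */
	static vm_offset_t
	kmem_pagealloc(vm_size_t size, int flags)
	{
		return (flags & KMEM_CF_DIRECTMAP)
		       ? kmem_pagealloc_directmap(size)
		       : kmem_pagealloc_virtual(size);
	}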
* kern/slab.c: Update module description.
(KMEM_CF_SLAB_EXTERNAL, KMEM_CF_VERIFY): Update values.
(KMEM_CF_DIRECT): Remove macro.
(KMEM_CF_DIRECTMAP): New macro.
(kmem_pagealloc_directmap, kmem_pagefree_directmap,
kmem_pagealloc_virtual, kmem_pagefree_virtual): New functions; a sketch
of the two allocation paths follows this log.
(kmem_pagealloc, kmem_pagefree, kmem_slab_create, kmem_slab_destroy,
kalloc, kfree): Update to use the new pagealloc functions.
(kmem_cache_compute_sizes): Update the algorithm used to determine slab
size and other cache properties.
(kmem_slab_use_tree, kmem_cache_free_to_slab, host_slab_info): Update to
correctly use the cache flags.
(slab_init): Add KMEM_CACHE_DIRECTMAP to the kmem_slab_cache init flags.
* kern/slab.h (KMEM_CACHE_VERIFY): Change value.
(KMEM_CACHE_DIRECTMAP): New macro.
* vm/vm_map.c (vm_map_init): Add KMEM_CACHE_DIRECTMAP to the
vm_map_entry_cache init flags.
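A plausible shape for the two allocation paths named above, as a sketch
only: kmem_alloc_wired(), kernel_map, round_page() and phystokv() are
existing gnumach interfaces, but vm_page_grab(), vm_page_to_pa() and
both function bodies are simplified stand-ins for what the commit
actually does:

	#include <kern/assert.h>
	#include <mach/kern_return.h>
	#include <vm/vm_kern.h>
	#include <vm/vm_page.h>

	/* Single page, addressed through the kernel direct mapping;
	   no kernel virtual memory is consumed. */
	static vm_offset_t
	kmem_pagealloc_directmap(vm_size_t size)
	{
		struct vm_page *page;

		assert(size == PAGE_SIZE);
		page = vm_page_grab();	/* assumed physical-allocator call */
		return (page == NULL) ? 0 : phystokv(vm_page_to_pa(page));
	}

	/* Larger-than-page slab: wired kernel virtual memory, so
	   physically discontiguous pages can back one slab. */
	static vm_offset_t
	kmem_pagealloc_virtual(vm_size_t size)
	{
		vm_offset_t addr;

		assert(size > PAGE_SIZE);

		if (kmem_alloc_wired(kernel_map, &addr, round_page(size))
		    != KERN_SUCCESS)
			return 0;

		return addr;
	}

The directmap path can only fail when no free page is left at all,
while the virtual path trades a mapping operation for immunity to
physical fragmentation, which is acceptable because large objects are
rare.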
Diffstat (limited to 'vm')
-rw-r--r--	vm/vm_map.c	8
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/vm/vm_map.c b/vm/vm_map.c
index 0d610621..6c0232e4 100644
--- a/vm/vm_map.c
+++ b/vm/vm_map.c
@@ -146,10 +146,13 @@ vm_object_t vm_submap_object = &vm_submap_object_store;
  *	Map and entry structures are allocated from caches -- we must
  *	initialize those caches.
  *
- *	There are three caches of interest:
+ *	There are two caches of interest:
  *
  *	vm_map_cache:		used to allocate maps.
  *	vm_map_entry_cache:	used to allocate map entries.
+ *
+ *	We make sure the map entry cache allocates memory directly from the
+ *	physical allocator to avoid recursion with this module.
  */
 
 void vm_map_init(void)
@@ -157,7 +160,8 @@ void vm_map_init(void)
 	kmem_cache_init(&vm_map_cache, "vm_map", sizeof(struct vm_map), 0,
 			NULL, 0);
 	kmem_cache_init(&vm_map_entry_cache, "vm_map_entry",
-			sizeof(struct vm_map_entry), 0, NULL, 0);
+			sizeof(struct vm_map_entry), 0, NULL,
+			KMEM_CACHE_DIRECTMAP);
 	kmem_cache_init(&vm_map_copy_cache, "vm_map_copy",
 			sizeof(struct vm_map_copy), 0, NULL, 0);
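The comment added in the first hunk is the heart of this file's change:
kmem_alloc_wired() must create a vm_map_entry to describe any new
kernel range, so a map entry cache whose slabs came from kernel virtual
memory could end up needing a map entry in order to allocate a map
entry. A simplified (and assumed) version of the cycle the flag breaks:

	kmem_cache_alloc(&vm_map_entry_cache)
	  -> kmem_pagealloc_virtual()
	       -> kmem_alloc_wired(kernel_map, ...)
	            -> vm_map_find_entry()	/* needs a map entry */
	                 -> kmem_cache_alloc(&vm_map_entry_cache)	/* recursion */

With KMEM_CACHE_DIRECTMAP, the entry cache takes its pages straight
from the physical allocator and never reenters the vm_map module.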