author     Richard Braun <rbraun@sceen.net>   2016-09-07 00:11:08 +0200
committer  Richard Braun <rbraun@sceen.net>   2016-09-07 00:11:08 +0200
commit     e5c7d1c1dda40f8f262e26fed911bfe03027993b
tree       1e5f9bfb86ef80c2fafdce7a1a9214ba955e9f5f /vm/vm_kern.h
parent     efcecd06abb8f7342723a8916917842840e9264f
Remove map entry pageability property.
Since the replacement of the zone allocator, kernel objects have been
wired in memory. In addition, as of 5e9f6f (Stack the slab allocator
directly on top of the physical allocator), a single cache is used to
allocate map entries.
These changes make the pageability attribute of VM maps irrelevant.
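
To illustrate, the change at call sites is purely mechanical: the trailing
pageability flag disappears. The sketch below is hypothetical (the wrapper
and variable names are not taken from ds_routines.c); only the kmem_submap()
prototype, shown in the diff further down, comes from the source.

#include <vm/vm_kern.h>        /* kmem_submap(), kernel_map */

/* Hypothetical submap initialization illustrating the API change. */
static void example_submap_init(vm_map_t submap, vm_size_t size)
{
        vm_offset_t min, max;

        /*
         * Before this commit, callers had to pass a boolean_t `pageable'
         * argument (in practice FALSE, since kernel objects are wired):
         *
         *      kmem_submap(submap, kernel_map, &min, &max, size, FALSE);
         */

        /* After: the flag is gone; map entries are always wired. */
        kmem_submap(submap, kernel_map, &min, &max, size);
}
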
* device/ds_routines.c (mach_device_init): Update call to kmem_submap.
* ipc/ipc_init.c (ipc_init): Likewise.
* kern/task.c (task_create): Update call to vm_map_create.
* vm/vm_kern.c (kmem_submap): Remove `pageable' argument. Update call
to vm_map_setup.
(kmem_init): Update call to vm_map_setup.
* vm/vm_kern.h (kmem_submap): Update declaration.
* vm/vm_map.c (vm_map_setup): Remove `pageable' argument. Don't set
`entries_pageable' member.
(vm_map_create): Likewise.
(vm_map_copyout): Don't bother creating copies of map entries with
the right pageability.
(vm_map_copyin): Don't set `entries_pageable' member.
(vm_map_fork): Update call to vm_map_create.
* vm/vm_map.h (struct vm_map_header): Remove `entries_pageable' member.
(vm_map_setup, vm_map_create): Remove `pageable' argument.
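
Taken together, the vm_map-level edits amount to dropping one field and one
parameter. The following is a rough sketch of the post-change declarations,
based only on the ChangeLog above; the remaining struct fields and the
parameter names are illustrative, not copied from vm/vm_map.h.

/* Rough post-change shape of the map header (fields illustrative). */
struct vm_map_header {
        struct vm_map_links     links;          /* entry list head/tail, address bounds */
        int                     nentries;       /* number of entries in the map */
        /* boolean_t entries_pageable; -- removed: map entries are always wired */
};

/* Constructors without the old `pageable' flag (parameter names illustrative). */
extern void     vm_map_setup(vm_map_t map, pmap_t pmap,
                             vm_offset_t min, vm_offset_t max);
extern vm_map_t vm_map_create(pmap_t pmap,
                              vm_offset_t min, vm_offset_t max);
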
Diffstat (limited to 'vm/vm_kern.h')
-rw-r--r--  vm/vm_kern.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/vm/vm_kern.h b/vm/vm_kern.h
index fb8ac7f8..4bd89c49 100644
--- a/vm/vm_kern.h
+++ b/vm/vm_kern.h
@@ -57,7 +57,7 @@ extern kern_return_t kmem_alloc_aligned(vm_map_t, vm_offset_t *, vm_size_t);
 extern void kmem_free(vm_map_t, vm_offset_t, vm_size_t);
 
 extern void kmem_submap(vm_map_t, vm_map_t, vm_offset_t *,
-                        vm_offset_t *, vm_size_t, boolean_t);
+                        vm_offset_t *, vm_size_t);
 
 extern kern_return_t kmem_io_map_copyout(vm_map_t, vm_offset_t *,
                                          vm_offset_t *, vm_size_t *,