| Commit message | Author | Age | Files | Lines |
| |
This will prevent calling vm_map_delete without the map locked
unless ref_count is zero.
Message-ID: <20240223081505.458240-1-damien@zamaudio.com>
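
The invariant can be pictured as an assertion at the top of the delete path. A minimal stand-alone sketch, with made-up types and a hypothetical have_map_lock() helper rather than the actual gnumach code:

```c
#include <assert.h>

/* Illustrative only: struct my_map and have_map_lock() are invented here
 * to show the invariant; they are not the gnumach API. */
struct my_map {
    int ref_count;
    int locked;
};

static int have_map_lock(const struct my_map *map)
{
    return map->locked;
}

static void my_map_delete(struct my_map *map)
{
    /* The caller must hold the map lock, unless nobody else can reach the
     * map any more (ref_count == 0, e.g. while the map is being destroyed). */
    assert(map->ref_count == 0 || have_map_lock(map));
    /* ... entry removal would happen here ... */
}
```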
| |
This adds a parameter called keep_map_locked to vm_map_lookup()
that allows the function to return with the map locked.
This is to prepare for fixing a bug with gsync where the map
is locked twice by mistake.
Co-Authored-By: Sergey Bugaev <bugaevc@gmail.com>
Message-ID: <20240222082410.422869-3-damien@zamaudio.com>
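
A minimal sketch of the idea, using simplified stand-in types (not the real vm_map_lookup() signature): the new flag simply skips the unlock on the way out, so a caller such as the gsync code can keep operating on the map without taking the lock a second time.

```c
#include <stdbool.h>

/* Stand-in types for illustration only. */
struct my_map   { int locked; };
struct my_entry { unsigned long start, end; };

static void my_map_lock(struct my_map *m)   { m->locked = 1; }
static void my_map_unlock(struct my_map *m) { m->locked = 0; }

static int my_map_lookup(struct my_map *map, unsigned long addr,
                         bool keep_map_locked, struct my_entry *out)
{
    my_map_lock(map);
    /* ... look up the entry covering addr and fill in *out ... */
    (void)addr; (void)out;
    if (!keep_map_locked)
        my_map_unlock(map);
    return 0;   /* success; the map is still locked iff keep_map_locked */
}
```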
| |
* vm/vm_map.c: use actual limits instead of min/max boundaries to
change pageability of the currently mapped memory.
Using the min/max boundaries caused the initial vm_wire_all(host, task,
VM_WIRE_ALL) in glibc startup to fail with KERN_NO_SPACE.
Message-ID: <20240111210907.419689-5-luca@orpolo.org>
| |
When
- extending an existing entry,
- changing protection or inheritance of a range of entries,
we can get several entries that could be coalesced. Attempt to do that.
Message-ID: <20230705141639.85792-4-bugaevc@gmail.com>
| |
This function attempts to coalesce a VM map entry with its preceding
entry. It wraps vm_object_coalesce.
Message-ID: <20230705141639.85792-3-bugaevc@gmail.com>
| |
vm_object_coalesce() callers used to rely on the fact that it always
merged the next_object into prev_object, potentially destroying
next_object and leaving prev_object the result of the whole operation.
After ee65849bec5da261be90f565bee096abb4117bdd
"vm: Allow coalescing null object with an internal object", this is no
longer true, since in case of prev_object == VM_OBJECT_NULL and
next_object != VM_OBJECT_NULL, the overall result is next_object, not
prev_object. The current callers are prepared to deal with this since
they handle this case separately anyway, but the following commit will
introduce another caller that handles both cases in the same code path.
So, declare the way vm_object_coalesce() coalesces the two objects its
implementation detail, and make it return the resulting object and the
offset into it explicitly. This simplifies the callers, too.
Message-Id: <20230705141639.85792-2-bugaevc@gmail.com>
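
A sketch of the resulting calling convention, with invented types standing in for vm_object_t and vm_offset_t; only the shape of the interface (result object and offset returned explicitly) follows the description above:

```c
struct obj { int internal; };

/* Returns nonzero on success and hands back whichever object survived the
 * merge, plus the offset to use into it; the caller no longer assumes that
 * prev always absorbs next. */
static int my_object_coalesce(struct obj *prev, struct obj *next,
                              unsigned long prev_offset,
                              unsigned long next_offset,
                              struct obj **result_object,
                              unsigned long *result_offset)
{
    if (prev == 0 && next != 0) {
        /* Null object coalesced with an internal object: next survives. */
        *result_object = next;
        *result_offset = next_offset;
        return 1;
    }
    /* ... the other cases would merge next into prev ... */
    *result_object = prev;
    *result_offset = prev_offset;
    return 1;
}
```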
| |
If a deallocated VM map entry refers to an object that only has a single
reference and doesn't have a pager port, we can eagerly release any
physical pages that were contained in the deallocated range.
This is not a 100% solution: it is still possible to "leak" physical
pages that can never appear in virtual memory again by creating several
references to a memory object (perhaps by forking a VM map with
VM_INHERIT_SHARE) and deallocating the pages from all the maps referring
to the object. That being said, it should help to release the pages in
the common case sooner.
Message-Id: <20230626112656.435622-6-bugaevc@gmail.com>
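
The gating condition can be summarized with a small sketch (invented types; the real check lives in the entry-deletion path):

```c
struct my_object {
    int   ref_count;   /* references from map entries, copies, ...       */
    void *pager;       /* 0 when no memory object port was ever created  */
};

/* Pages in the deallocated range may be freed immediately only when this
 * mapping holds the sole reference and no external pager could still be
 * asked for the data later. */
static int can_release_pages_eagerly(const struct my_object *obj)
{
    return obj->ref_count == 1 && obj->pager == 0;
}
```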
| |
When entering an object into a map, try to extend the next entry
backward, in addition to the previously existing attempt to extend the
previous entry forward.
Message-Id: <20230626112656.435622-5-bugaevc@gmail.com>
| |
If a mapping of an object is made right next to another mapping of the
same object that has the same properties (protection, inheritance, etc.),
Mach will now expand the previous VM map entry to cover the new address
range instead of creating a new entry.
Message-Id: <20230626112656.435622-3-bugaevc@gmail.com>
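
A simplified sketch of the adjacency test, with invented types; the real code compares the full set of entry properties before extending the previous entry:

```c
struct map_entry {
    unsigned long start, end;   /* virtual address range               */
    unsigned long offset;       /* offset into the backing object      */
    int object_id;              /* stand-in for the vm_object pointer  */
    int prot, max_prot, inheritance, wired;
};

/* Extend prev to cover [new_start, new_end) instead of creating a new
 * entry, when the two mappings are adjacent and fully compatible. */
static int extend_prev_entry(struct map_entry *prev,
                             unsigned long new_start, unsigned long new_end,
                             int object_id, unsigned long new_offset,
                             int prot, int max_prot, int inheritance)
{
    if (prev->end != new_start
        || prev->object_id != object_id
        || prev->offset + (prev->end - prev->start) != new_offset
        || prev->prot != prot
        || prev->max_prot != max_prot
        || prev->inheritance != inheritance
        || prev->wired)
        return 0;               /* fall back to a separate entry */

    prev->end = new_end;
    return 1;
}
```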
| |
When generating stubs, Mig will take the vm_size_array_t and define the
input request struct using rpc_vm_size_t since the size is variable. This in
turn causes a mismatch between types (vm_size_t* vs rpc_vm_size_t*). We
could also ask Mig to produce
a prototype by using rpc_vm_size_t*, however we would need to change the implementation
of the RPC to use rpc_* types anyway since we want to avoid another allocation
of the array.
Message-Id: <Y9iwScHpmsgY3V0N@jupiter.tail36e24.ts.net>
| |
Message-Id: <Y8mYd/pt/og4Tj5I@mercury.tail36e24.ts.net>
| |
This also reverts 566c227636481b246d928772ebeaacbc7c37145b and
963b1794d7117064cee8ab5638b329db51dad854
Message-Id: <Y8d75KSqNL4FFInm@mercury.tail36e24.ts.net>
| |
Marked some functions as static (private) as needed and added missing
includes.
This also revealed some dead code which was removed.
Note that -Wmissing-prototypes is not enabled here since there are a
bunch more warnings.
Message-Id: <Y6j72lWRL9rsYy4j@mars>
| |
Most of the changes include defining and using proper function type
declarations (with argument types declared) and avoiding using the
K&R style of function declarations.
Message-Id: <Y6Jazsuis1QA0lXI@mars>
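
For illustration (not actual gnumach code), this is the kind of rewrite involved: a K&R-style definition, which leaves argument types unchecked at call sites, becomes a declared prototype plus a prototyped definition.

```c
/* Old K&R style: the compiler cannot check arguments at call sites. */
int old_sum(a, b)
    int a;
    int b;
{
    return a + b;
}

/* New style: a visible prototype plus a prototyped definition. */
int new_sum(int a, int b);

int new_sum(int a, int b)
{
    return a + b;
}
```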
| |
We already use this built-in in other places and this will move us closer to
being able to build the kernel without libc.
Message-Id: <Y5l80/VUFvJYZTjy@jupiter.tail36e24.ts.net>
| |
This allows *printf to use %zd/%zu/%zx to print vm_size_t and
vm_offset_t. Warnings using the incorrect specifiers were fixed.
Note that MACH_PORT_NULL became just 0 because GCC thought that we were
comparing a pointer to a character (due to it being an unsigned int), so
I removed the explicit cast.
Message-Id: <Y47UNdcUF35Ag4Vw@reue>
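
A user-space illustration of the specifier change, with size_t-wide stand-ins for the kernel types:

```c
#include <stdio.h>
#include <stddef.h>

typedef size_t vm_size_like_t;     /* stand-in for vm_size_t   */
typedef size_t vm_offset_like_t;   /* stand-in for vm_offset_t */

int main(void)
{
    vm_size_like_t   size   = 4096;
    vm_offset_like_t offset = 0x1000;

    /* %zu / %zx match the width of the values, so -Wformat stays quiet
     * without casts. */
    printf("size %zu bytes, offset 0x%zx\n", size, offset);
    return 0;
}
```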
| |
If a "wire_required" process calls vm_map_protect(0), the
memory gets unwired as expected. But if the process then calls
vm_map_protect(VM_PROT_READ) again, we need to wire that memory.
(This happens to be exactly what glibc does for its heap)
This fixes Hurd hangs on lack of memory, during which mach was swapping
pieces of mach-defpager out.
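
From user space the sequence looks roughly like this (hedged sketch; it assumes the task's map has wiring_required set, and it uses the standard Mach vm_protect RPC):

```c
#include <mach.h>

void protect_roundtrip(vm_address_t addr, vm_size_t size)
{
    /* Dropping all access unwires the range. */
    vm_protect(mach_task_self(), addr, size, 0 /* set_maximum */,
               VM_PROT_NONE);

    /* Restoring access: with this fix the kernel wires the range again
     * instead of leaving it pageable. */
    vm_protect(mach_task_self(), addr, size, 0 /* set_maximum */,
               VM_PROT_READ);
}
```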
| |
Signed-off-by: Luca Dariz <luca@orpolo.org>
Message-Id: <20220628101054.446126-13-luca@orpolo.org>
| |
For coherency with memory_object_create_proxy.
| |
* vm/vm_map.c (vm_region_get_proxy):
- Return KERN_INVALID_ARGUMENT when the entry is a submap.
- Create a pager for the vm_object when the entry doesn't
have any yet, since it's an anonymous mapping.
Message-Id: <20211106081333.10366-3-jlledom@mailfence.com>
| |
To get a proxy to the region a given address belongs to, with protection
and range limited to those of the region.
* include/mach/mach4.defs: vm_region_get_proxy RPC declaration
* vm/vm_map.c: vm_region_get_proxy implementation
Message-Id: <20211106081333.10366-2-jlledom@mailfence.com>
| |
vm_page_grab was systematically using the VM_PAGE_SEL_DIRECTMAP selector
to play safe with existing code.
This adds a flags parameter to let callers of vm_page_grab specify their
constraints.
Linux drivers need 32-bit DMA; Xen drivers use kvtophys to clear some
data. Callers of kmem_pagealloc_physmem and vm_page_grab_phys_addr also use
kvtophys. Otherwise allocations can go to highmem.
This fixes the allocation jam in the directmap segment.
* vm/vm_page.h (VM_PAGE_DMA, VM_PAGE_DMA32, VM_PAGE_DIRECTMAP,
VM_PAGE_HIGHMEM): New macros.
(vm_page_grab): Add flags parameter.
* vm/vm_resident.c (vm_page_grab): Choose allocation selector according
to flags parameter.
(vm_page_convert, vm_page_alloc): Pass VM_PAGE_HIGHMEM to vm_page_grab.
(vm_page_grab_phys_addr): Pass VM_PAGE_DIRECTMAP to vm_page_grab.
* vm/vm_fault.c (vm_fault_page): Pass VM_PAGE_HIGHMEM to vm_page_grab.
* vm/vm_map.c (vm_map_copy_steal_pages): Pass VM_PAGE_HIGHMEM to vm_page_grab.
* kern/slab.c (kmem_pagealloc_physmem): Pass VM_PAGE_DIRECTMAP to vm_page_grab.
* i386/intel/pmap.c (pmap_page_table_page_alloc): Pass VM_PAGE_DIRECTMAP to
vm_page_grab.
* xen/block.c (device_read): Pass VM_PAGE_DIRECTMAP to vm_page_grab.
* linux/dev/glue/block.c (alloc_buffer): Pass VM_PAGE_DMA32 to vm_page_grab.
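
The effect on callers can be sketched as follows (stand-alone illustration: the macro values and the stub are invented, only the selector names come from the change above):

```c
#define VM_PAGE_DMA        0x01   /* ISA DMA range                      */
#define VM_PAGE_DMA32      0x02   /* memory reachable by 32-bit DMA     */
#define VM_PAGE_DIRECTMAP  0x04   /* caller needs kvtophys on the page  */
#define VM_PAGE_HIGHMEM    0x08   /* no constraint at all               */

struct vm_page;                                /* opaque in this sketch */
struct vm_page *vm_page_grab(unsigned flags);  /* assumed call shape    */

/* The Linux block glue only needs 32-bit DMA memory, so it no longer has
 * to compete for the scarce directmap segment. */
static struct vm_page *alloc_buffer_page(void)
{
    return vm_page_grab(VM_PAGE_DMA32);
}
```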
| |
This does not make sense, and produces incorrect results (since
vme_end is 0, etc.)
* vm/vm_map.h (_vm_map_clip_start, _vm_map_clip_end): Add link_gap
parameter.
* vm/vm_map.c (_vm_map_entry_link): Add link_gap parameter, do not call
vm_map_gap_insert if it is 0.
(vm_map_entry_link): Set link_gap to 1 in _vm_map_entry_link call.
(_vm_map_clip_start): Add link_gap parameter, pass it to
_vm_map_entry_link call.
(vm_map_clip_start): Set link_gap to 1 in _vm_map_clip_start call.
(vm_map_copy_entry_link): Set link_gap to 0 in _vm_map_entry_link call.
(vm_map_copy_clip_start): Set link_gap to 0 in _vm_map_clip_start call.
(_vm_map_entry_unlink): Add unlink_gap parameter, do not call
vm_map_gap_remove if it is 0.
(vm_map_entry_unlink): Set unlink_gap to 1 in _vm_map_entry_unlink call.
(_vm_map_clip_end): Add link_gap parameter, pass it to
_vm_map_entry_link call.
(vm_map_clip_end): Set link_gap to 1 in _vm_map_clip_end call.
(vm_map_copy_entry_unlink): Set unlink_gap to 0 in _vm_map_entry_unlink call.
(vm_map_copy_clip_end): Set link_gap to 0 in _vm_map_clip_end call.
* vm/vm_kern.c (projected_buffer_deallocate): Set link_gap to 1 in
_vm_map_clip_start and _vm_map_clip_end calls.
| |
* vm/vm_map.c (vm_map_find_entry_anywhere): Print warning when max_size
gets smaller than size.
| |
glibc's sysdeps/mach/hurd/dl-sysdep.c has been wanting to use this for
decades.
* include/string.h (ffs): New declaration.
* vm/vm_map.c: Include <string.h>.
(vm_map_find_entry_anywhere): Separate out high bits from mask, to
compute the maximum offset instead of map->max_offset.
* doc/mach.texi (vm_map): Update documentation accordingly.
| |
* vm/vm_map.c (vm_map_msync): Explicitly group the first condition.
| |
* vm/vm_map.c (vm_map_fork): Use VM_MAP_NULL instead of PMAP_NULL when
comparing with new_map.
| |
* vm/vm_map.c (vm_map_msync): Add missing return keyword.
| |
* vm/vm_map.c (vm_map_fork): Check for `new_map` being non-NULL, and not
for `new_pmap` a second time.
| |
* include/mach/vm_sync.h: New file.
* include/mach/mach_types.h: Include <mach/vm_sync.h>
* Makefrag.am (include_mach_HEADERS): Add include/mach/vm_sync.h.
* include/mach/mach_types.defs (vm_sync_t): Add type.
* include/mach/gnumach.defs (vm_object_sync, vm_msync): Add RPCs.
* vm/vm_map.h: Include <mach/vm_sync.h>.
(vm_map_msync): New declaration.
* vm/vm_map.c (vm_map_msync): New function.
* vm/vm_user.c: Include <mach/vm_sync.h> and <kern/mach.server.h>.
(vm_object_sync, vm_msync): New functions.
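
A hedged usage sketch of the new interface; it assumes the MIG-generated user stub for vm_msync takes (task, address, size, sync_flags), mirroring vm_map_msync, and that VM_SYNC_SYNCHRONOUS is among the vm_sync_t flags provided by the new <mach/vm_sync.h>:

```c
#include <mach.h>
#include <mach/vm_sync.h>

/* Ask the kernel to write any dirty pages in [addr, addr+len) back to
 * their memory object before returning (flag name assumed, see above). */
kern_return_t flush_range(vm_address_t addr, vm_size_t len)
{
    return vm_msync(mach_task_self(), addr, len, VM_SYNC_SYNCHRONOUS);
}
```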
| |
* vm/vm_map.c (vm_map_find_entry_anywhere): Also check that (min + mask) &
~mask remains bigger than min.
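
A small stand-alone demonstration of why the extra check matters: when min sits near the top of the address space, the aligned value can wrap around and end up below min.

```c
#include <stdio.h>

int main(void)
{
    unsigned long mask = 0xfffUL;                  /* page-alignment mask */
    unsigned long min  = ~0UL - 0x7ffUL;           /* near the top of VA  */
    unsigned long aligned = (min + mask) & ~mask;  /* wraps around to 0   */

    if (aligned < min)
        printf("overflow: aligned=%#lx is below min=%#lx\n", aligned, min);
    return 0;
}
```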
| |
* vm/vm_map.c (vm_map_copyout): Fix panic format.
| |
* vm/vm_map.c (vm_map_create): Gracefully handle resource exhaustion.
(vm_map_fork): Likewise at the callsite.
| |
This call maps the POSIX mlockall and munlockall calls.
* Makefrag.am (include_mach_HEADERS): Add include/mach/vm_wire.h.
* include/mach/gnumach.defs (vm_wire_t): New type.
(vm_wire_all): New routine.
* include/mach/mach_types.h: Include mach/vm_wire.h.
* vm/vm_map.c: Likewise.
(vm_map_enter): Automatically wire new entries if requested.
(vm_map_copyout): Likewise.
(vm_map_pageable_all): New function.
* vm/vm_map.h: Include mach/vm_wire.h.
(struct vm_map): Update description of member `wiring_required'.
(vm_map_pageable_all): New function.
* vm/vm_user.c (vm_wire_all): New function.
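
On the user side this is what the RPC is meant to back; a hedged example of the corresponding POSIX calls (the exact glibc plumbing from mlockall down to vm_wire_all is an assumption here, not part of this change):

```c
#include <sys/mman.h>
#include <stdio.h>

int main(void)
{
    /* Wire everything currently mapped and everything mapped later ... */
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
        perror("mlockall");

    /* ... and make it all pageable again. */
    if (munlockall() != 0)
        perror("munlockall");
    return 0;
}
```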
| |
First, user wiring is removed, simply because it has never been used.
Second, make the VM system track wiring requests to better handle
protection. This change makes it possible to wire entries with
VM_PROT_NONE protection without actually reserving any page for
them until protection changes, and even make those pages pageable
if protection is downgraded to VM_PROT_NONE.
* ddb/db_ext_symtab.c: Update call to vm_map_pageable.
* i386/i386/user_ldt.c: Likewise.
* ipc/mach_port.c: Likewise.
* vm/vm_debug.c (mach_vm_region_info): Update values returned
as appropriate.
* vm/vm_map.c (vm_map_entry_copy): Update operation as appropriate.
(vm_map_setup): Update member names as appropriate.
(vm_map_find_entry): Update to account for map member variable changes.
(vm_map_enter): Likewise.
(vm_map_entry_inc_wired): New function.
(vm_map_entry_reset_wired): Likewise.
(vm_map_pageable_scan): Likewise.
(vm_map_protect): Update wired access, call vm_map_pageable_scan.
(vm_map_pageable_common): Rename to ...
(vm_map_pageable): ... and rewrite to use vm_map_pageable_scan.
(vm_map_entry_delete): Fix unwiring.
(vm_map_copy_overwrite): Replace inline code with a call to
vm_map_entry_reset_wired.
(vm_map_copyin_page_list): Likewise.
(vm_map_print): Likewise. Also print map size and wired size.
(vm_map_copyout_page_list): Update to account for map member variable
changes.
* vm/vm_map.h (struct vm_map_entry): Remove `user_wired_count' member,
add `wired_access' member.
(struct vm_map): Rename `user_wired' member to `size_wired'.
(vm_map_pageable_common): Remove function.
(vm_map_pageable_user): Remove macro.
(vm_map_pageable): Replace macro with function declaration.
* vm/vm_user.c (vm_wire): Update call to vm_map_pageable.
| |
* doc/mach.texi: Update return codes.
* vm/vm_map.c (vm_map_pageable_common): Return KERN_NO_SPACE instead
of KERN_FAILURE if some of the specified address range does not
correspond to mapped pages. Skip unwired entries instead of failing
when unwiring.
| |
* vm/vm_map.c (vm_map_print): Print name of the map.
| |
* kern/task.c (task_create): Gracefully handle pmap allocation
failures.
* vm/vm_map.c (vm_map_fork): Likewise.
| |
Instead of a "page considered external", which apparently takes into
account whether a page is dirty or not, redefine this property to
reliably mean "is in an external object".
This commit mostly deals with the impact of this change on the page
allocation interface.
* i386/intel/pmap.c (pmap_page_table_page_alloc): Update call to
vm_page_grab.
* kern/slab.c (kmem_pagealloc_physmem): Use vm_page_grab instead of
vm_page_grab_contig.
(kmem_pagefree_physmem): Use vm_page_release instead of
vm_page_free_contig.
* linux/dev/glue/block.c (alloc_buffer, device_read): Update call
to vm_page_grab.
* vm/vm_fault.c (vm_fault_page): Update calls to vm_page_grab and
vm_page_convert.
* vm/vm_map.c (vm_map_copy_steal_pages): Update call to vm_page_grab.
* vm/vm_page.h (struct vm_page): Remove `extcounted' member.
(vm_page_external_limit, vm_page_external_count): Remove extern
declarations.
(vm_page_convert, vm_page_grab): Update declarations.
(vm_page_release, vm_page_grab_phys_addr): New function declarations.
* vm/vm_pageout.c (VM_PAGE_EXTERNAL_LIMIT): Remove macro.
(VM_PAGE_EXTERNAL_TARGET): Likewise.
(vm_page_external_target): Remove variable.
(vm_pageout_scan): Remove specific handling of external pages.
(vm_pageout): Don't set vm_page_external_limit and
vm_page_external_target.
* vm/vm_resident.c (vm_page_external_limit): Remove variable.
(vm_page_insert, vm_page_replace, vm_page_remove): Update external
page tracking.
(vm_page_convert): Remove `external' parameter.
(vm_page_grab): Likewise. Remove specific handling of external pages.
(vm_page_grab_phys_addr): Update call to vm_page_grab.
(vm_page_release): Remove `external' parameter and remove specific
handling of external pages.
(vm_page_wait): Remove specific handling of external pages.
(vm_page_alloc): Update call to vm_page_grab.
(vm_page_free): Update call to vm_page_release.
* xen/block.c (device_read): Update call to vm_page_grab.
* xen/net.c (device_write): Likewise.
| |
Commit 5dd4f67522ad0d49a2cecdb9b109251f546d4dd1 made VM map entry
allocation be done with VM privilege, so that a VM map isn't held locked
while physical allocations are paused, which may block the default
pager during page eviction, causing a system-wide deadlock.
First, it turns out that map entries aren't the only buffers allocated,
and second, their number can't be easily determined, which makes a
preallocation strategy very hard to implement.
This change generalizes the strategy of VM privilege increase when a
VM map is locked.
* device/ds_routines.c (io_done_thread): Use integer values instead
of booleans when setting VM privilege.
* kern/thread.c (thread_init, thread_wire): Likewise.
* vm/vm_pageout.c (vm_pageout): Likewise.
* kern/thread.h (struct thread): Turn member `vm_privilege' into an
unsigned integer.
* vm/vm_map.c (vm_map_lock): New function, where VM privilege is
temporarily increased.
(vm_map_unlock): New function, where VM privilege is decreased.
(_vm_map_entry_create): Remove VM privilege workaround from this
function.
* vm/vm_map.h (vm_map_lock, vm_map_unlock): Turn into functions.
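
The scheme can be sketched as follows with invented types (the real implementation uses the kernel lock primitives and the current thread accessor):

```c
struct my_thread { unsigned int vm_privilege; };   /* counter, not boolean */
struct my_map    { int write_locked; };

static struct my_thread *cur;   /* stand-in for current_thread() */

static void my_map_lock(struct my_map *map)
{
    /* Raise VM privilege for as long as the map is held locked, so that
     * allocations made under the lock cannot pause on memory shortage and
     * deadlock against the default pager. */
    cur->vm_privilege++;
    map->write_locked = 1;
}

static void my_map_unlock(struct my_map *map)
{
    map->write_locked = 0;
    cur->vm_privilege--;
}
```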
| |
Since the replacement of the zone allocator, kernel objects have been
wired in memory. Besides, as of 5e9f6f (Stack the slab allocator
directly on top of the physical allocator), there is a single cache
used to allocate map entries.
Those changes make the pageability attribute of VM maps irrelevant.
* device/ds_routines.c (mach_device_init): Update call to kmem_submap.
* ipc/ipc_init.c (ipc_init): Likewise.
* kern/task.c (task_create): Update call to vm_map_create.
* vm/vm_kern.c (kmem_submap): Remove `pageable' argument. Update call
to vm_map_setup.
(kmem_init): Update call to vm_map_setup.
* vm/vm_kern.h (kmem_submap): Update declaration.
* vm/vm_map.c (vm_map_setup): Remove `pageable' argument. Don't set
`entries_pageable' member.
(vm_map_create): Likewise.
(vm_map_copyout): Don't bother creating copies of page entries with
the right pageability.
(vm_map_copyin): Don't set `entries_pageable' member.
(vm_map_fork): Update call to vm_map_create.
* vm/vm_map.h (struct vm_map_header): Remove `entries_pageable' member.
(vm_map_setup, vm_map_create): Remove `pageable' argument.
| |
* vm/vm_map.c (_vm_map_entry_create): Make sure there is a thread
before accessing VM privilege.
| |
* vm/vm_map.c (_vm_map_entry_create): Temporarily set the current thread
as VM privileged.
| |
This change improves the clarity of "no more room for ..." VM map
allocation errors.
* kern/task.c (task_init): Call vm_map_set_name for the kernel map.
(task_create): Call vm_map_set_name where appropriate.
* vm/vm_map.c (vm_map_setup): Set map name to NULL.
(vm_map_find_entry_anywhere): Update error message to include map name.
* vm/vm_map.h (struct vm_map): New `name' member.
(vm_map_set_name): New inline function.
| |
* vm/vm_map.c (vm_map_copyin, vm_map_copyin_page_list): Check overflow
before page alignment of source data.
| |
* vm/vm_map.c (vm_map_copyout_page_list): Fix call to
vm_map_find_entry_anywhere to avoid relocking VM map.
| |
This change augments VM maps with a gap tree, sorted by gap size, to
use for non-fixed allocations.
* vm/vm_map.c: Include kern/list.h.
(vm_map_entry_gap_cmp_lookup, vm_map_entry_gap_cmp_insert,
vm_map_gap_valid, vm_map_gap_compute, vm_map_gap_insert_single,
vm_map_gap_remove_single, vm_map_gap_update, vm_map_gap_insert,
vm_map_gap_remove, vm_map_find_entry_anywhere): New functions.
(vm_map_setup): Initialize gap tree.
(_vm_map_entry_link): Call vm_map_gap_insert.
(_vm_map_entry_unlink): Call vm_map_gap_remove.
(vm_map_find_entry, vm_map_enter, vm_map_copyout,
vm_map_copyout_page_list, vm_map_copyin): Replace look up loop with
a call to vm_map_find_entry_anywhere. Call vm_map_gap_update and
initialize gap tree where relevant.
(vm_map_copy_insert): Turn macro into an inline function and rewrite.
(vm_map_simplify): Reorder call to vm_map_entry_unlink so that previous
entry is suitable for use with gap management functions.
* vm/vm_map.h: Include kern/list.h.
(struct vm_map_entry): New members `gap_node`, `gap_list`,
`gap_size` and `in_gap_tree`.
(struct vm_map_header): New member `gap_tree`.
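
The allocation side of the idea, as a stand-alone sketch (an array scan stands in for the O(log n) tree lookup, and the types are invented):

```c
struct gap {
    unsigned long start;
    unsigned long size;
};

/* Return the first recorded gap large enough for a non-fixed allocation of
 * `size` bytes, or 0 if none exists (a KERN_NO_SPACE-style failure). */
static const struct gap *find_gap(const struct gap *gaps, int n,
                                  unsigned long size)
{
    for (int i = 0; i < n; i++)
        if (gaps[i].size >= size)
            return &gaps[i];
    return 0;
}
```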
| |
The slab allocator has grown to use multiple ways to allocate slabs
as well as track them, which got a little messy. One consequence is
the breaking of the KMEM_CF_VERIFY option. In order to make the code
less confusing, this change expresses all options as explicit cache
flags and clearly defines their relationships.
The special kmem_slab and vm_map_entry caches are initialized
accordingly.
* kern/slab.c (KMEM_CF_DIRECTMAP): Rename to ...
(KMEM_CF_PHYSMEM): ... this new macro.
(KMEM_CF_DIRECT): Restore macro.
(KMEM_CF_USE_TREE, KMEM_CF_USE_PAGE): New macros.
(KMEM_CF_VERIFY): Update value.
(kmem_pagealloc_directmap): Rename to...
(kmem_pagealloc_physmem): ... this new function.
(kmem_pagefree_directmap): Rename to ...
(kmem_pagefree_physmem): ... this new function.
(kmem_pagealloc, kmem_pagefree): Update macro names.
(kmem_slab_use_tree): Remove function.
(kmem_slab_create, kmem_slab_destroy): Update according to the new
cache flags.
(kmem_cache_compute_sizes): Rename to ...
(kmem_cache_compute_properties): ... this new function, and update
to properly set cache flags.
(kmem_cache_init): Update call to kmem_cache_compute_properties.
(kmem_cache_alloc_from_slab): Check KMEM_CF_USE_TREE instead of
calling the defunct kmem_slab_use_tree function.
(kmem_cache_free_to_slab): Update according to the new cache flags.
(kmem_cache_free_verify): Add assertion.
(slab_init): Update initialization of kmem_slab_cache.
* kern/slab.h (KMEM_CACHE_DIRECTMAP): Rename to ...
(KMEM_CACHE_PHYSMEM): ... this new macro.
* vm/vm_map.c (vm_map_init): Update initialization of vm_map_entry_cache.