vm_size_t needs to be able to hold sizes greater than 4 GiB.
* i386/include/mach/i386/vm_types.h (vm_size_t): Set type to unsigned
long.
* vm/vm_user.c (vm_read, vm_write): Fix type according to RPC.
* i386/i386at/model_dep.c (c_boot_entry): Fix format.
* device/dev_pager.c (device_pager_data_request): Fix format.
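
A minimal sketch of the type change described above, assuming the header
otherwise keeps its existing layout:

    /* i386/include/mach/i386/vm_types.h (sketch) */
    typedef unsigned long vm_size_t;  /* was a 32-bit type; must hold > 4 GiB */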
Suggested by guy fleury iteriteka <gfleury@disroot.org>
* vm/vm_object.c (vm_object_copy_call): Make sure the vm_object_enter call
succeeds.
* vm/vm_map.c (vm_map_msync): Explicitly group the first condition.
* vm/vm_map.c (vm_map_fork): Use VM_MAP_NULL instead of PMAP_NULL when
comparing with new_map.
* vm/vm_map.c (vm_map_msync): Add missing return keyword.
* vm/vm_map.c (vm_map_fork): Check for `new_map` being non-NULL, and not
for `new_pmap` a second time.
* include/mach/vm_sync.h: New file.
* include/mach/mach_types.h: Include <mach/vm_sync.h>
* Makefrag.am (include_mach_HEADERS): Add include/mach/vm_sync.h.
* include/mach/mach_types.defs (vm_sync_t): Add type.
* include/mach/gnumach.defs (vm_object_sync, vm_msync): Add RPCs.
* vm/vm_map.h: Include <mach/vm_sync.h>.
(vm_map_msync): New declaration.
* vm/vm_map.c (vm_map_msync): New function.
* vm/vm_user.c: Include <mach/vm_sync.h> and <kern/mach.server.h>.
(vm_object_sync, vm_msync): New functions.
* vm/vm_map.c (vm_map_find_entry_anywhere): Also check that (min + mask) &
~mask remains bigger than min.
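
Illustratively, the added check guards against wraparound when aligning the
start address; the variable names below are assumptions, not the actual code:

    vm_offset_t start = (min + mask) & ~mask;

    if (start < min)
        /* (min + mask) overflowed: no properly aligned range exists */
        return NULL;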
* vm/vm_map.c (vm_map_copyout): Fix panic format.
* i386/intel/pmap.c: Drop the register qualifier.
* ipc/ipc_kmsg.h: Likewise.
* kern/bootstrap.c: Likewise.
* kern/profile.c: Likewise.
* kern/thread.c: Likewise.
* vm/vm_object.c: Likewise.
* vm/vm_object.c (vm_object_accept_old_init_protocol): Remove.
(vm_object_enter): Adapt.
* vm/vm_map.c (vm_map_create): Gracefully handle resource exhaustion.
(vm_map_fork): Likewise at the callsite.
* vm/vm_fault.c (vm_fault_page): Mute paging error message if the
object's pager is NULL. This happens when a pager is destroyed,
e.g. at system shutdown time when the root filesystem terminates.
Commit eb07428ffb0009085fcd01dd1b79d9953af8e0ad does fix pageout of
external objects backed by the default pager, but the way it's done
has a vicious side effect: because they're considered external, the
pageout daemon can keep evicting them even though the external pagers
haven't released them, unlike internal pages which must all be
released before the pageout daemon can make progress. This can lead
to a situation where too many pages become wired, the default pager
cannot allocate memory to process new requests, and the pageout
daemon cannot recycle any more pages, causing a panic.
This change makes the pageout daemon use the same strategy for both
internal pages and external pages sent to the default pager: use
the laundry bit and wait for all laundry pages to be released,
thereby completely synchronizing the pageout daemon and the default
pager.
* vm/vm_page.c (vm_page_can_move): Allow external laundry pages to
be moved.
(vm_page_seg_evict): Don't alter the `external_laundry' bit, merely
disable double paging for external pages sent to the default pager.
* vm/vm_pageout.c: Include vm/memory_object.h.
(vm_pageout_setup): Don't check whether the `external_laundry' bit
is set, but handle external pages sent to the default pager the same
as internal pages.
This routine maps to the POSIX mlockall and munlockall calls.
* Makefrag.am (include_mach_HEADERS): Add include/mach/vm_wire.h.
* include/mach/gnumach.defs (vm_wire_t): New type.
(vm_wire_all): New routine.
* include/mach/mach_types.h: Include mach/vm_wire.h.
* vm/vm_map.c: Likewise.
(vm_map_enter): Automatically wire new entries if requested.
(vm_map_copyout): Likewise.
(vm_map_pageable_all): New function.
* vm/vm_map.h: Include mach/vm_wire.h.
(struct vm_map): Update description of member `wiring_required'.
(vm_map_pageable_all): New function.
* vm/vm_user.c (vm_wire_all): New function.
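
A hypothetical user-space sketch of how mlockall() could map onto the new
routine; the flag names and the privileged host port argument are assumptions
based on the description above, not confirmed interface details:

    /* Wire all current and future mappings of the given task,
       roughly mlockall(MCL_CURRENT | MCL_FUTURE). */
    kern_return_t
    lock_everything(mach_port_t host_priv, task_t task)
    {
        return vm_wire_all(host_priv, task,
                           VM_WIRE_CURRENT | VM_WIRE_FUTURE);
    }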
First, user wiring is removed, simply because it has never been used.
Second, make the VM system track wiring requests to better handle
protection. This change makes it possible to wire entries with
VM_PROT_NONE protection without actually reserving any page for
them until protection changes, and even make those pages pageable
if protection is downgraded to VM_PROT_NONE.
* ddb/db_ext_symtab.c: Update call to vm_map_pageable.
* i386/i386/user_ldt.c: Likewise.
* ipc/mach_port.c: Likewise.
* vm/vm_debug.c (mach_vm_region_info): Update values returned
as appropriate.
* vm/vm_map.c (vm_map_entry_copy): Update operation as appropriate.
(vm_map_setup): Update member names as appropriate.
(vm_map_find_entry): Update to account for map member variable changes.
(vm_map_enter): Likewise.
(vm_map_entry_inc_wired): New function.
(vm_map_entry_reset_wired): Likewise.
(vm_map_pageable_scan): Likewise.
(vm_map_protect): Update wired access, call vm_map_pageable_scan.
(vm_map_pageable_common): Rename to ...
(vm_map_pageable): ... and rewrite to use vm_map_pageable_scan.
(vm_map_entry_delete): Fix unwiring.
(vm_map_copy_overwrite): Replace inline code with a call to
vm_map_entry_reset_wired.
(vm_map_copyin_page_list): Likewise.
(vm_map_print): Likewise. Also print map size and wired size.
(vm_map_copyout_page_list): Update to account for map member variable
changes.
* vm/vm_map.h (struct vm_map_entry): Remove `user_wired_count' member,
add `wired_access' member.
(struct vm_map): Rename `user_wired' member to `size_wired'.
(vm_map_pageable_common): Remove function.
(vm_map_pageable_user): Remove macro.
(vm_map_pageable): Replace macro with function declaration.
* vm/vm_user.c (vm_wire): Update call to vm_map_pageable.
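
A sketch of what the two new helpers might look like, inferred purely from
this entry; the `size_wired' member comes from the log, the bodies and the
`vme_start'/`vme_end' accessors are assumptions:

    static void
    vm_map_entry_inc_wired(vm_map_t map, vm_map_entry_t entry)
    {
        /* Account the entry's range as wired on the 0 -> 1 transition. */
        if (entry->wired_count == 0)
            map->size_wired += entry->vme_end - entry->vme_start;
        entry->wired_count++;
    }

    static void
    vm_map_entry_reset_wired(vm_map_t map, vm_map_entry_t entry)
    {
        /* Drop the accounting when fully unwiring the entry. */
        if (entry->wired_count != 0)
            map->size_wired -= entry->vme_end - entry->vme_start;
        entry->wired_count = 0;
    }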
Double paging on such objects causes deadlocks.
* vm/vm_page.c: Include <vm/memory_object.h>.
(vm_page_seg_evict): Rename laundry to double_paging to increase
clarity. Set the `external_laundry' bit when evicting a page
from an external object backed by the default pager.
* vm/vm_pageout.c (vm_pageout_setup): Wire page if the
`external_laundry' bit is set.
Unlike laundry pages sent to the default pager, pages marked with the
`external_laundry' bit remain in the page queues and must be filtered
out by the pageability check.
* vm/vm_page.c (vm_page_can_move): Check the `external_laundry' bit.
The interval parameter to the thread_set_timeout function is actually
in ticks.
* vm/vm_pageout.c (vm_pageout): Fix call to thread_set_timeout.
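
Illustratively, a caller that thinks in milliseconds must convert to ticks
first; `hz' is the clock frequency in ticks per second. This is a sketch of
the required conversion, not the actual pageout code:

    /* thread_set_timeout expects ticks, not milliseconds. */
    thread_set_timeout((timeout_msec * hz) / 1000);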
* doc/mach.texi: Update return codes.
* vm/vm_map.c (vm_map_pageable_common): Return KERN_NO_SPACE instead
of KERN_FAILURE if some of the specified address range does not
correspond to mapped pages. Skip unwired entries instead of failing
when unwiring.
Since the VM system has been tracking whether pages belong to internal
or external objects, pageout throttling to external pagers has simply
not been working. The reason is that, on pageout, requests for external
pages are correctly tracked, but on page release (which is used to
acknowledge the request), external pages are not marked external
any more. This is because the external bit tracks whether a page
belongs to an external object, and all pages, including external
ones, are moved to an internal object during pageout.
To solve this issue, a new "external_laundry" bit is added. It has
the same purpose as the laundry bit, but for external pagers.
* vm/vm_page.c (vm_page_seg_min_page_available): Function unused, remove.
(vm_page_seg_evict): Use vm_page_external_laundry_count instead of
vm_page_external_pagedout. Add an assertion about double paging.
(vm_page_check_usable): Use vm_page_external_laundry_count instead of
vm_page_external_pagedout.
(vm_page_evict): Likewise.
* vm/vm_page.h (struct vm_page): New `external_laundry' member.
(vm_page_external_pagedout): Rename to ...
(vm_page_external_laundry_count): ... this.
* vm/vm_pageout.c: Include kern/printf.h.
(DEBUG): New macro.
(VM_PAGEOUT_TIMEOUT): Likewise.
(vm_pageout_setup): Use vm_page_external_laundry_count instead of
vm_page_external_pagedout. Set `external_laundry' where appropriate.
(vm_pageout): Use VM_PAGEOUT_TIMEOUT with thread_set_timeout.
Add debugging code, commented out by default.
* vm/vm_resident.c (vm_page_external_pagedout): Rename to ...
(vm_page_external_laundry_count): ... this.
(vm_page_init_template): Set `external_laundry' member to FALSE.
(vm_page_release): Rename external parameter to external_laundry.
Slightly change pageout resuming.
(vm_page_free): Rename external variable to external_laundry.
Instead of determining if memory is low, directly use the
vm_page_alloc_paused variable, which is true when memory has reached
a minimum threshold until it gets back above the high thresholds.
This makes sure double paging is used when external pagers are unable
to allocate memory.
* vm/vm_page.c (vm_page_seg_evict): Rename low_memory to alloc_paused.
(vm_page_evict_once): Remove low_memory and its computation. Blindly
pass the new alloc_paused argument instead.
(vm_page_evict): Pass the value of vm_page_alloc_paused to
vm_page_evict_once.
* vm/vm_page.c (vm_page_evict): Test both vm_page_external_pagedout
and vm_page_laundry_count in order to determine whether there was "no pageout".
When checking whether to continue paging out or not, the pageout daemon
only considers the high free page threshold of a segment. But if e.g.
the default pager had to allocate reserved pages during a previous
pageout cycle, it could have exhausted a segment (this is currently
only seen with the DMA segment). In that case, the high threshold
cannot be reached because the segment currently has no pageable pages.
This change makes the pageout daemon identify this condition and
consider the segment as usable in order to make progress. The segment
will simply be ignored on the allocation path for unprivileged threads,
and if this happens with too many segments, the system will fail at
allocation time.
* vm/vm_page.c (vm_page_seg_usable): Report usable if the segment has
no pageable page.
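
A sketch of the adjusted check, under the assumption that the segment
counters are named as in the log entries above (`nr_free_pages' in
particular is an assumption):

    static boolean_t
    vm_page_seg_usable(const struct vm_page_seg *seg)
    {
        /* A segment with no pageable page can never reach its high
           threshold; report it usable so the pageout daemon can stop. */
        if ((seg->nr_active_pages + seg->nr_inactive_pages) == 0)
            return TRUE;

        return seg->nr_free_pages >= seg->high_free_pages;
    }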
* vm/vm_map.c (vm_map_print): Print name of the map.
* kern/task.c (task_create): Gracefully handle pmap allocation
failures.
* vm/vm_map.c (vm_map_fork): Likewise.
* vm/vm_kern.c (kmem_alloc): Print map names in case of failures.
(kmem_alloc_wired): Likewise.
(kmem_alloc_aligned): Likewise.
(kmem_alloc_pageable): Likewise.
* NEWS: Update.
* device/dev_pager.c (device_pager_data_request): Prune unused branch.
(device_pager_data_request_done): Remove function.
(device_pager_data_write): Likewise.
(device_pager_data_write_done): Likewise.
(device_pager_copy): Use 'memory_object_ready'.
* device/dev_pager.h (device_pager_data_write_done): Remove prototype.
* device/device_pager.srv (memory_object_data_write): Remove macro.
* doc/mach.texi: Update documentation.
* include/mach/mach.defs (memory_object_data_provided): Drop RPC.
(memory_object_set_attributes): Likewise.
* include/mach/memory_object.defs: Update comments.
(memory_object_data_write): Drop RPC.
* include/mach/memory_object_default.defs: Update comments.
* include/mach_debug/vm_info.h (VOI_STATE_USE_OLD_PAGEOUT): Drop
macro.
* vm/memory_object.c (memory_object_data_provided): Remove function.
(memory_object_data_error): Simplify.
(memory_object_set_attributes_common): Make static, remove unused
parameters, simplify.
(memory_object_change_attributes): Update callsite.
(memory_object_set_attributes): Remove function.
(memory_object_ready): Update callsite.
* vm/vm_debug.c (mach_vm_object_info): Adapt to the changes.
* vm/vm_object.c (vm_object_bootstrap): Likewise.
* vm/vm_object.h (struct vm_object): Drop flag 'use_old_pageout'.
* vm/vm_pageout.c: Update comments.
(vm_pageout_page): Simplify.
* i386/i386at/biosmem.c (biosmem_setup): Load the HIGHMEM segment if
present.
(biosmem_free_usable): Report high memory as usable.
* vm/vm_page.c (vm_page_boot_table_size, vm_page_table_size,
vm_page_mem_size, vm_page_mem_free): Scan all segments.
* vm/vm_resident.c (vm_page_grab): Describe allocation strategy
with regard to the HIGHMEM segment.
As we're about to use a new HIGHMEM segment, potentially much larger
than the existing DMA and DIRECTMAP ones, it's now compulsory to make
the pageout daemon aware of those segments.
And while we're at it, let's fix some of the defects that have been
plaguing pageout forever, such as throttling, and pageout of internal
versus external pages (this commit notably introduces a hardcoded
policy in which as many external pages as possible are selected before
considering internal pages).
* kern/slab.c (kmem_pagefree_physmem): Update call to vm_page_release.
* vm/vm_page.c: Include <kern/counters.h> and <vm/vm_pageout.h>.
(VM_PAGE_SEG_THRESHOLD_MIN_NUM, VM_PAGE_SEG_THRESHOLD_MIN_DENOM,
VM_PAGE_SEG_THRESHOLD_MIN, VM_PAGE_SEG_THRESHOLD_LOW_NUM,
VM_PAGE_SEG_THRESHOLD_LOW_DENOM, VM_PAGE_SEG_THRESHOLD_LOW,
VM_PAGE_SEG_THRESHOLD_HIGH_NUM, VM_PAGE_SEG_THRESHOLD_HIGH_DENOM,
VM_PAGE_SEG_THRESHOLD_HIGH, VM_PAGE_SEG_MIN_PAGES,
VM_PAGE_HIGH_ACTIVE_PAGE_NUM, VM_PAGE_HIGH_ACTIVE_PAGE_DENOM): New macros.
(struct vm_page_queue): New type.
(struct vm_page_seg): Add new members `min_free_pages', `low_free_pages',
`high_free_pages', `active_pages', `nr_active_pages', `high_active_pages',
`inactive_pages', `nr_inactive_pages'.
(vm_page_alloc_paused): New variable.
(vm_page_pageable, vm_page_can_move, vm_page_remove_mappings): New functions.
(vm_page_seg_alloc_from_buddy): Pause allocations and start the pageout
daemon as appropriate.
(vm_page_queue_init, vm_page_queue_push, vm_page_queue_remove,
vm_page_queue_first, vm_page_seg_get, vm_page_seg_index,
vm_page_seg_compute_pageout_thresholds): New functions.
(vm_page_seg_init): Initialize the new segment members.
(vm_page_seg_add_active_page, vm_page_seg_remove_active_page,
vm_page_seg_add_inactive_page, vm_page_seg_remove_inactive_page,
vm_page_seg_pull_active_page, vm_page_seg_pull_inactive_page,
vm_page_seg_pull_cache_page): New functions.
(vm_page_seg_min_page_available, vm_page_seg_page_available,
vm_page_seg_usable, vm_page_seg_double_lock, vm_page_seg_double_unlock,
vm_page_seg_balance_page, vm_page_seg_balance, vm_page_seg_evict,
vm_page_seg_compute_high_active_page, vm_page_seg_refill_inactive,
vm_page_lookup_seg, vm_page_check): New functions.
(vm_page_alloc_pa): Handle allocation failure from VM privileged thread.
(vm_page_info_all): Display additional segment properties.
(vm_page_wire, vm_page_unwire, vm_page_deactivate, vm_page_activate,
vm_page_wait): Move from vm/vm_resident.c and rewrite to use segments.
(vm_page_queues_remove, vm_page_check_usable, vm_page_may_balance,
vm_page_balance_once, vm_page_balance, vm_page_evict_once): New functions.
(VM_PAGE_MAX_LAUNDRY, VM_PAGE_MAX_EVICTIONS): New macros.
(vm_page_evict, vm_page_refill_inactive): New functions.
* vm/vm_page.h: Include <kern/list.h>.
(struct vm_page): Remove member `pageq', reuse the `node' member instead,
move the `listq' and `next' members above `vm_page_header'.
(VM_PAGE_CHECK): Define as an alias to vm_page_check.
(vm_page_check): New function declaration.
(vm_page_queue_fictitious, vm_page_queue_active, vm_page_queue_inactive,
vm_page_free_target, vm_page_free_min, vm_page_inactive_target,
vm_page_free_reserved, vm_page_free_wanted): Remove extern declarations.
(vm_page_external_pagedout): New extern declaration.
(vm_page_release): Update declaration.
(VM_PAGE_QUEUES_REMOVE): Define as an alias to vm_page_queues_remove.
(VM_PT_PMAP, VM_PT_KMEM, VM_PT_STACK): Remove macros.
(VM_PT_KERNEL): Update value.
(vm_page_queues_remove, vm_page_balance, vm_page_evict,
vm_page_refill_inactive): New function declarations.
* vm/vm_pageout.c (VM_PAGEOUT_BURST_MAX, VM_PAGEOUT_BURST_MIN,
VM_PAGEOUT_BURST_WAIT, VM_PAGEOUT_EMPTY_WAIT, VM_PAGEOUT_PAUSE_MAX,
VM_PAGE_INACTIVE_TARGET, VM_PAGE_FREE_TARGET, VM_PAGE_FREE_MIN,
VM_PAGE_FREE_RESERVED, VM_PAGEOUT_RESERVED_INTERNAL,
VM_PAGEOUT_RESERVED_REALLY): Remove macros.
(vm_pageout_reserved_internal, vm_pageout_reserved_really,
vm_pageout_burst_max, vm_pageout_burst_min, vm_pageout_burst_wait,
vm_pageout_empty_wait, vm_pageout_pause_count, vm_pageout_pause_max,
vm_pageout_active, vm_pageout_inactive, vm_pageout_inactive_nolock,
vm_pageout_inactive_busy, vm_pageout_inactive_absent,
vm_pageout_inactive_used, vm_pageout_inactive_clean,
vm_pageout_inactive_dirty, vm_pageout_inactive_double,
vm_pageout_inactive_cleaned_external): Remove variables.
(vm_pageout_requested, vm_pageout_continue): New variables.
(vm_pageout_setup): Wait for page allocation to succeed instead of
falling back to flush, update double paging protocol with caller,
add pageout throttling setup.
(vm_pageout_scan): Rewrite to use the new vm_page balancing,
eviction and inactive queue refill functions.
(vm_pageout_scan_continue, vm_pageout_continue): Remove functions.
(vm_pageout): Rewrite.
(vm_pageout_start, vm_pageout_resume): New functions.
* vm/vm_pageout.h (vm_pageout_continue, vm_pageout_scan_continue): Remove
function declarations.
(vm_pageout_start, vm_pageout_resume): New function declarations.
* vm/vm_resident.c: Include <kern/list.h>.
(vm_page_queue_fictitious): Define as a struct list.
(vm_page_free_wanted, vm_page_external_count, vm_page_free_avail,
vm_page_queue_active, vm_page_queue_inactive, vm_page_free_target,
vm_page_free_min, vm_page_inactive_target, vm_page_free_reserved):
Remove variables.
(vm_page_external_pagedout): New variable.
(vm_page_bootstrap): Don't initialize removed variable, update
initialization of vm_page_queue_fictitious.
(vm_page_replace): Call VM_PAGE_QUEUES_REMOVE where appropriate.
(vm_page_remove): Likewise.
(vm_page_grab_fictitious): Update to use list_xxx functions.
(vm_page_release_fictitious): Likewise.
(vm_page_grab): Remove pageout related code.
(vm_page_release): Add `laundry' and `external' parameters for
pageout throttling.
(vm_page_grab_contig): Remove pageout related code.
(vm_page_free_contig): Likewise.
(vm_page_free): Remove pageout related code, update call to
vm_page_release.
(vm_page_wait, vm_page_wire, vm_page_unwire, vm_page_deactivate,
vm_page_activate): Move to vm/vm_page.c.
Instead of a "page considered external", which apparently takes into
account whether a page is dirty or not, redefine this property to
reliably mean "is in an external object".
This commit mostly deals with the impact of this change on the page
allocation interface.
* i386/intel/pmap.c (pmap_page_table_page_alloc): Update call to
vm_page_grab.
* kern/slab.c (kmem_pagealloc_physmem): Use vm_page_grab instead of
vm_page_grab_contig.
(kmem_pagefree_physmem): Use vm_page_release instead of
vm_page_free_contig.
* linux/dev/glue/block.c (alloc_buffer, device_read): Update call
to vm_page_grab.
* vm/vm_fault.c (vm_fault_page): Update calls to vm_page_grab and
vm_page_convert.
* vm/vm_map.c (vm_map_copy_steal_pages): Update call to vm_page_grab.
* vm/vm_page.h (struct vm_page): Remove `extcounted' member.
(vm_page_external_limit, vm_page_external_count): Remove extern
declarations.
(vm_page_convert, vm_page_grab): Update declarations.
(vm_page_release, vm_page_grab_phys_addr): New function declarations.
* vm/vm_pageout.c (VM_PAGE_EXTERNAL_LIMIT): Remove macro.
(VM_PAGE_EXTERNAL_TARGET): Likewise.
(vm_page_external_target): Remove variable.
(vm_pageout_scan): Remove specific handling of external pages.
(vm_pageout): Don't set vm_page_external_limit and
vm_page_external_target.
* vm/vm_resident.c (vm_page_external_limit): Remove variable.
(vm_page_insert, vm_page_replace, vm_page_remove): Update external
page tracking.
(vm_page_convert): Remove `external' parameter.
(vm_page_grab): Likewise. Remove specific handling of external pages.
(vm_page_grab_phys_addr): Update call to vm_page_grab.
(vm_page_release): Remove `external' parameter and remove specific
handling of external pages.
(vm_page_wait): Remove specific handling of external pages.
(vm_page_alloc): Update call to vm_page_grab.
(vm_page_free): Update call to vm_page_release.
* xen/block.c (device_read): Update call to vm_page_grab.
* xen/net.c (device_write): Likewise.
* i386/i386/phys.c (pmap_zero_page, pmap_copy_page, copy_to_phys,
copy_from_phys, kvtophys): Use the phys_addr_t type for physical
addresses.
* i386/intel/pmap.c (pmap_map, pmap_map_bd, pmap_destroy,
pmap_remove_range, pmap_page_protect, pmap_enter, pmap_extract,
pmap_collect, phys_attribute_clear, phys_attribute_test,
pmap_clear_modify, pmap_is_modified, pmap_clear_reference,
pmap_is_referenced): Likewise.
* i386/intel/pmap.h (pt_entry_t): Unconditionally define as a
phys_addr_t.
(pmap_zero_page, pmap_copy_page, kvtophys): Use the phys_addr_t
type for physical addresses.
* vm/pmap.h (pmap_enter, pmap_page_protect, pmap_clear_reference,
pmap_is_referenced, pmap_clear_modify, pmap_is_modified,
pmap_extract, pmap_map_bd): Likewise.
* vm/vm_page.h (vm_page_fictitious_addr): Declare as a phys_addr_t.
* vm/vm_resident.c (vm_page_fictitious_addr): Likewise.
(vm_page_grab_phys_addr): Change return type to phys_addr_t.
The old assumption that all physical memory is directly mapped in
kernel space is about to go away. Those variables are directly linked
to that assumption.
* i386/i386/model_dep.h (phys_first_addr): Remove extern declaration.
(phys_last_addr): Likewise.
* i386/i386/phys.c (pmap_zero_page): Use VM_PAGE_DIRECTMAP_LIMIT
instead of phys_last_addr.
(pmap_copy_page, copy_to_phys, copy_from_phys): Likewise.
* i386/i386/trap.c (user_trap): Remove check against phys_last_addr.
* i386/i386at/biosmem.c (biosmem_bootstrap_common): Don't set
phys_last_addr.
* i386/i386at/mem.c (memmmap): Use vm_page_lookup_pa to determine if
a physical address references physical memory.
* i386/i386at/model_dep.c (phys_first_addr): Remove variable.
(phys_last_addr): Likewise.
(pmap_free_pages, pmap_valid_page): Remove functions.
* i386/intel/pmap.c: Include i386at/biosmem.h.
(pa_index): Turn into an alias for vm_page_table_index.
(pmap_bootstrap): Replace uses of phys_first_addr and phys_last_addr
as appropriate.
(pmap_virtual_space): Use vm_page_table_size instead of phys_first_addr
and phys_last_addr to obtain the number of physical pages.
(pmap_verify_free): Remove function.
(valid_page): Turn this macro into an inline function and rewrite
using vm_page_lookup_pa.
(pmap_page_table_page_alloc): Build the pmap VM object using
vm_page_table_size to determine its size.
(pmap_remove_range, pmap_page_protect, phys_attribute_clear,
phys_attribute_test): Turn page indexes into unsigned long integers.
(pmap_enter): Likewise. In addition, use either vm_page_lookup_pa or
biosmem_directmap_end to determine if a physical address references
physical memory.
* i386/xen/xen.c (hyp_p2m_init): Use vm_page_table_size instead of
phys_last_addr to obtain the number of physical pages.
* kern/startup.c (phys_first_addr): Remove extern declaration.
(phys_last_addr): Likewise.
* linux/dev/init/main.c (linux_init): Use vm_page_seg_end with the
appropriate segment selector instead of phys_last_addr to determine
where high memory starts.
* vm/pmap.h: Update requirements description.
(pmap_free_pages, pmap_valid_page): Remove declarations.
* vm/vm_page.c (vm_page_seg_end, vm_page_boot_table_size,
vm_page_table_size, vm_page_table_index): New functions.
* vm/vm_page.h (vm_page_seg_end, vm_page_table_size,
vm_page_table_index): New function declarations.
* vm/vm_resident.c (vm_page_bucket_count, vm_page_hash_mask): Define
as unsigned long integers.
(vm_page_bootstrap): Compute VP table size based on the page table
size instead of the value returned by pmap_free_pages.
The vm_page_direct_va, vm_page_direct_pa and vm_page_direct_ptr
functions were imported along with the new vm_page module, but
never actually used since the kernel already has phystokv and
kvtophys functions.
Commit 5dd4f67522ad0d49a2cecdb9b109251f546d4dd1 makes VM map entry
allocation done with VM privilege, so that a VM map isn't held locked
while physical allocations are paused, which may block the default
pager during page eviction, causing a system-wide deadlock.
First, it turns out that map entries aren't the only buffers allocated,
and second, their number can't be easily determined, which makes a
preallocation strategy very hard to implement.
This change generalizes the strategy of VM privilege increase when a
VM map is locked.
* device/ds_routines.c (io_done_thread): Use integer values instead
of booleans when setting VM privilege.
* kern/thread.c (thread_init, thread_wire): Likewise.
* vm/vm_pageout.c (vm_pageout): Likewise.
* kern/thread.h (struct thread): Turn member `vm_privilege' into an
unsigned integer.
* vm/vm_map.c (vm_map_lock): New function, where VM privilege is
temporarily increased.
(vm_map_unlock): New function, where VM privilege is decreased.
(_vm_map_entry_create): Remove VM privilege workaround from this
function.
* vm/vm_map.h (vm_map_lock, vm_map_unlock): Turn into functions.
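
A sketch of the two new functions under the stated design; the bodies are
assumptions built from this entry and the unsigned `vm_privilege' counter
described above:

    void
    vm_map_lock(struct vm_map *map)
    {
        lock_write(&map->lock);
        /* Raise VM privilege while the map is locked so allocations
           cannot pause (and deadlock the default pager) with it held. */
        if (current_thread())
            current_thread()->vm_privilege++;
    }

    void
    vm_map_unlock(struct vm_map *map)
    {
        if (current_thread())
            current_thread()->vm_privilege--;
        lock_write_done(&map->lock);
    }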
Since the replacement of the zone allocator, kernel objects have been
wired in memory. Besides, as of 5e9f6f (Stack the slab allocator
directly on top of the physical allocator), there is a single cache
used to allocate map entries.
Those changes make the pageability attribute of VM maps irrelevant.
* device/ds_routines.c (mach_device_init): Update call to kmem_submap.
* ipc/ipc_init.c (ipc_init): Likewise.
* kern/task.c (task_create): Update call to vm_map_create.
* vm/vm_kern.c (kmem_submap): Remove `pageable' argument. Update call
to vm_map_setup.
(kmem_init): Update call to vm_map_setup.
* vm/vm_kern.h (kmem_submap): Update declaration.
* vm/vm_map.c (vm_map_setup): Remove `pageable' argument. Don't set
`entries_pageable' member.
(vm_map_create): Likewise.
(vm_map_copyout): Don't bother creating copies of page entries with
the right pageability.
(vm_map_copyin): Don't set `entries_pageable' member.
(vm_map_fork): Update call to vm_map_create.
* vm/vm_map.h (struct vm_map_header): Remove `entries_pageable' member.
(vm_map_setup, vm_map_create): Remove `pageable' argument.
Import upstream biosmem and vm_page changes, and adjust for local
modifications.
Specifically, the biosmem module was mistakenly loading physical
segments that did not clip with the heap as completely available.
This change makes it load them as completely unavailable during
startup, and once the VM system is ready, additional pages are
loaded.
* i386/i386at/biosmem.c (DEBUG): New macro.
(struct biosmem_segment): Remove members `avail_start' and `avail_end'.
(biosmem_heap_cur): Remove variable.
(biosmem_heap_bottom, biosmem_heap_top, biosmem_heap_topdown): New variables.
(biosmem_find_boot_data_update, biosmem_find_boot_data): Remove functions.
(biosmem_find_heap_clip, biosmem_find_heap): New functions.
(biosmem_setup_allocator): Rewritten to use the new biosmem_find_heap
function.
(biosmem_bootalloc): Support both bottom-up and top-down allocations.
(biosmem_directmap_size): Renamed to ...
(biosmem_directmap_end): ... this function.
(biosmem_load_segment): Fix segment loading.
(biosmem_setup): Restrict usable memory to the directmap segment.
(biosmem_free_usable_range): Add checks on input parameters.
(biosmem_free_usable_update_start, biosmem_free_usable_start,
biosmem_free_usable_reserved, biosmem_free_usable_end): Remove functions.
(biosmem_free_usable_entry): Rewritten to use the new biosmem_find_heap
function.
(biosmem_free_usable): Restrict usable memory to the directmap segment.
* i386/i386at/biosmem.h (biosmem_bootalloc): Update description.
(biosmem_directmap_size): Renamed to ...
(biosmem_directmap_end): ... this function.
(biosmem_free_usable): Update declaration.
* i386/i386at/model_dep.c (machine_init): Call biosmem_free_usable.
* vm/vm_page.c (DEBUG): New macro.
(struct vm_page_seg): New member `heap_present'.
(vm_page_load): Remove heap related parameters.
(vm_page_load_heap): New function.
* vm/vm_page.h (vm_page_load): Remove heap related parameters. Update
description.
(vm_page_load_heap): New function.
* vm/vm_map.c (_vm_map_entry_create): Make sure there is a thread
before accessing VM privilege.
* vm/vm_map.c (_vm_map_entry_create): Temporarily set the current thread
as VM privileged.
This change improves the clarity of "no more room for ..." VM map
allocation errors.
* kern/task.c (task_init): Call vm_map_set_name for the kernel map.
(task_create): Call vm_map_set_name where appropriate.
* vm/vm_map.c (vm_map_setup): Set map name to NULL.
(vm_map_find_entry_anywhere): Update error message to include map name.
* vm/vm_map.h (struct vm_map): New `name' member.
(vm_map_set_name): New inline function.
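
The new inline helper is probably as small as its description suggests; a
sketch:

    static inline void
    vm_map_set_name(vm_map_t map, const char *name)
    {
        map->name = name;
    }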
Instead of reporting statistics about unreferenced objects (the object
cache), report statistics about external objects (the page cache).
* vm/vm_object.c (vm_object_cached_count): Remove variable.
(vm_object_cache_add): Remove object cache stats updates.
(vm_object_cache_remove): Likewise.
(vm_object_terminate): Update page cache stats.
* vm/vm_object.h (vm_object_cached_count): Remove variable.
(vm_object_cached_pages): Likewise.
(vm_object_cached_pages_lock_data): Likewise.
(vm_object_cached_pages_update): Remove macro.
(vm_object_external_count): New extern variable.
(vm_object_external_pages): Likewise.
* vm/vm_resident.c (vm_object_external_count): New variable.
(vm_object_external_pages): Likewise.
(vm_page_insert): Remove object cache stats updates and
update page cache stats.
(vm_page_replace): Likewise.
(vm_page_remove): Likewise.
* vm/vm_user.c (vm_cache_statistics): Report page cache stats instead
of object cache stats.
* vm/vm_map.c (vm_map_copyin, vm_map_copyin_page_list): Check for overflow
before page alignment of the source data.
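
Illustratively, the overflow has to be caught before rounding, since
round_page() on a near-end address can itself wrap; the variable names here
are assumptions:

    if ((src_addr + len) < src_addr)       /* end address wrapped around */
        return KERN_INVALID_ADDRESS;

    src_start = trunc_page(src_addr);
    src_end   = round_page(src_addr + len);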
* vm/vm_map.c (vm_map_copyout_page_list): Fix call to
vm_map_find_entry_anywhere to avoid relocking VM map.
This change augments VM maps with a gap tree, sorted by gap size, to
use for non-fixed allocations.
* vm/vm_map.c: Include kern/list.h.
(vm_map_entry_gap_cmp_lookup, vm_map_entry_gap_cmp_insert,
vm_map_gap_valid, vm_map_gap_compute, vm_map_gap_insert_single,
vm_map_gap_remove_single, vm_map_gap_update, vm_map_gap_insert,
vm_map_gap_remove, vm_map_find_entry_anywhere): New functions.
(vm_map_setup): Initialize gap tree.
(_vm_map_entry_link): Call vm_map_gap_insert.
(_vm_map_entry_unlink): Call vm_map_gap_remove.
(vm_map_find_entry, vm_map_enter, vm_map_copyout,
vm_map_copyout_page_list, vm_map_copyin): Replace look up loop with
a call to vm_map_find_entry_anywhere. Call vm_map_gap_update and
initialize gap tree where relevant.
(vm_map_copy_insert): Turn macro into an inline function and rewrite.
(vm_map_simplify): Reorder call to vm_map_entry_unlink so that previous
entry is suitable for use with gap management functions.
* vm/vm_map.h: Include kern/list.h.
(struct vm_map_entry): New members `gap_node', `gap_list',
`gap_size' and `in_gap_tree'.
(struct vm_map_header): New member `gap_tree'.
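
Conceptually, the gap recorded for an entry is the unused space between it
and its successor; a sketch under assumed member and function names:

    /* Size of the hole between this entry and the next one. */
    static vm_size_t
    vm_map_entry_gap_size(struct vm_map_entry *entry,
                          struct vm_map_entry *next)
    {
        return (next != NULL) ? next->vme_start - entry->vme_end : 0;
    }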
* ddb/db_elf.c (elf_db_sym_init): Turn `i' into unsigned.
* device/ds_routines.c (ds_device_open, device_writev_trap): Likewise.
* i386/i386/user_ldt.c (i386_set_ldt): Likewise for `i', `min_selector', and
`first_desc'.
(i386_get_ldt): Likewise for `ldt_count'.
(user_ldt_free): Likewise for `i'.
* i386/i386/xen.h (hyp_set_ldt): Turn `count' into unsigned long.
* i386/intel/pmap.c (pmap_bootstrap): Turn `i', `j' and 'n' into unsigned.
(pmap_clear_bootstrap_pagetable): Likewise for `i' and `j'.
* ipc/ipc_kmsg.c (ipc_msg_print): Turn `i' and `numwords' into unsigned.
* kern/boot_script.c (boot_script_parse_line): Likewise for `i'.
* kern/bootstrap.c (bootstrap_create): Likewise for `n' and `i'.
* kern/host.c (host_processors): Likewise for `i'.
* kern/ipc_tt.c (mach_ports_register): Likewise.
* kern/mach_clock.c (tickadj, bigadj): Turn into unsigned.
* kern/processor.c (processor_set_things): Turn `i' into unsigned.
* kern/task.c (task_threads): Likewise.
* kern/thread.c (consider_thread_collect, stack_init): Likewise.
* kern/strings.c (memset): Turn `i' into size_t.
* vm/memory_object.c (memory_object_lock_request): Turn `i' into unsigned.
* xen/block.c (hyp_block_init): Use %u format for evt.
(device_open): Drop unused err variable.
(device_write): Turn `copy_npages', `i', `nbpages', and `j' into unsigned.
* xen/console.c (hypcnread, hypcnwrite, hypcnclose): Turn `dev' into a dev_t.
(hypcnclose): Return void.
* xen/console.h (hypcnread, hypcnwrite, hypcnclose): Fix prototypes
accordingly.
* xen/evt.c (form_int_mask): Turn `i' into int.
* xen/net.c (hyp_net_init): Use %u format for evt.
(device_open): Remove unused `err' variable.
The pageout daemon uses small, internal, temporary objects to transport
the data out to memory managers, which are expected to release the data
once written out to backing store. Releasing this data is done with a
vm_deallocate call. The problem with this is that vm_map is allowed to
merge these objects, in which case vm_deallocate will only remove a
reference instead of releasing the underlying pages, causing the pageout
daemon to deadlock.
This change makes the pageout daemon mark these objects so that they
don't get merged.
* vm/vm_object.c (vm_object_bootstrap): Update template.
(vm_object_coalesce): Don't coalesce if an object is used for pageout.
* vm/vm_object.h (struct vm_object): New `used_for_pageout' member.
* vm/vm_pageout.c (vm_pageout_page): Mark new objects for pageout.
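
The coalescing guard implied by this entry could be as simple as the
following sketch (the locking detail is an assumption):

    /* In vm_object_coalesce: never merge a pageout transport object,
       or vm_deallocate would drop a reference instead of freeing pages. */
    if (prev_object->used_for_pageout) {
        vm_object_unlock(prev_object);
        return FALSE;
    }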
* i386/i386/hardclock.c (hardclock): Use '0' instead of 'NULL'.
* vm/vm_fault.c (vm_fault_cleanup): Likewise.
* NEWS: Advertise feature.
* configfrag.ac (--enable-kernsample): Add option.
* kern/pc_sample.h (take_pc_sample): Add usermode and pc parameter.
(take_pc_sample_macro): Take usermode and pc parameters, pass as such to
take_pc_sample.
* kern/pc_sample.c (take_pc_sample): Use pc parameter when usermode is 1.
* kern/mach_clock.c (clock_interrupt): Add pc parameter. Pass usermode and
pc to take_pc_sample_macro call.
* i386/i386/hardclock.c (hardclock): Pass regs->eip to the clock_interrupt call
on normal interrupts, and NULL when the clock interrupt has itself interrupted
another interrupt.
* vm/vm_fault.c (vm_fault_cleanup): Set usermode to 1 and pc to NULL in
take_pc_sample_macro call.