...
* Update the NEWS file  (Justus Winter, 2016-12-09; 1 file changed, -1/+24)

* rbtree: minor change  (Richard Braun, 2016-12-09; 1 file changed, -1/+1)

    * kern/rbtree.h (rbtree_for_each_remove): Remove trailing slash.

* VM: fix pageout throttling to external pagers  (Richard Braun, 0 min. ago; 4 files changed, -26/+44)

    Since the VM system has been tracking whether pages belong to internal
    or external objects, pageout throttling to external pagers has simply
    not been working. The reason is that, on pageout, requests for external
    pages are correctly tracked, but on page release (which is used to
    acknowledge the request), external pages are not marked external any
    more. This is because the external bit tracks whether a page belongs to
    an external object, and all pages, including external ones, are moved
    to an internal object during pageout. To solve this issue, a new
    "external_laundry" bit is added. It has the same purpose as the laundry
    bit, but for external pagers.

    * vm/vm_page.c (vm_page_seg_min_page_available): Function unused, remove.
    (vm_page_seg_evict): Use vm_page_external_laundry_count instead of
    vm_page_external_pagedout. Add an assertion about double paging.
    (vm_page_check_usable): Use vm_page_external_laundry_count instead of
    vm_page_external_pagedout.
    (vm_page_evict): Likewise.
    * vm/vm_page.h (struct vm_page): New `external_laundry' member.
    (vm_page_external_pagedout): Rename to ...
    (vm_page_external_laundry_count): ... this.
    * vm/vm_pageout.c: Include kern/printf.h.
    (DEBUG): New macro.
    (VM_PAGEOUT_TIMEOUT): Likewise.
    (vm_pageout_setup): Use vm_page_external_laundry_count instead of
    vm_page_external_pagedout. Set `external_laundry' where appropriate.
    (vm_pageout): Use VM_PAGEOUT_TIMEOUT with thread_set_timeout.
    Add debugging code, commented out by default.
    * vm/vm_resident.c (vm_page_external_pagedout): Rename to ...
    (vm_page_external_laundry_count): ... this.
    (vm_page_init_template): Set `external_laundry' member to FALSE.
    (vm_page_release): Rename external parameter to external_laundry.
    Slightly change pageout resuming.
    (vm_page_free): Rename external variable to external_laundry.

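The mechanism is easier to see in a reduced form. Below is a minimal, compilable C sketch of the acknowledgement path, with simplified types and counter names modeled on the entry above; it is an illustration, not the kernel code:

    #include <stdbool.h>
    #include <stdio.h>

    /* Simplified page descriptor: the laundry bit throttles pageout to the
       default (internal) pager, external_laundry does the same for external
       memory managers.  Unlike a plain "is external" bit, it survives the
       page being moved to an internal object during pageout. */
    struct page {
        bool laundry;            /* pageout via the default pager in progress */
        bool external_laundry;   /* pageout via an external pager in progress */
    };

    static unsigned int laundry_count;           /* pages queued to the default pager */
    static unsigned int external_laundry_count;  /* pages queued to external pagers */

    /* Called when a page handed to a pager is released, i.e. the pageout is
       acknowledged: decrement the counter that was used for throttling. */
    static void page_release(struct page *p)
    {
        if (p->external_laundry) {
            p->external_laundry = false;
            external_laundry_count--;   /* may resume a throttled pageout */
        } else if (p->laundry) {
            p->laundry = false;
            laundry_count--;
        }
    }

    int main(void)
    {
        struct page p = { .laundry = false, .external_laundry = true };

        external_laundry_count = 1;
        page_release(&p);
        printf("external laundry pages pending: %u\n", external_laundry_count);
        return 0;
    }
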
* VM: fix pageout on low memory  (Richard Braun, 2016-11-30; 1 file changed, -37/+9)

    Instead of determining if memory is low, directly use the
    vm_page_alloc_paused variable, which is true when memory has reached a
    minimum threshold until it gets back above the high thresholds. This
    makes sure double paging is used when external pagers are unable to
    allocate memory.

    * vm/vm_page.c (vm_page_seg_evict): Rename low_memory to alloc_paused.
    (vm_page_evict_once): Remove low_memory and its computation. Blindly
    pass the new alloc_paused argument instead.
    (vm_page_evict): Pass the value of vm_page_alloc_paused to
    vm_page_evict_once.

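A small sketch of the decision this describes; evict_once and the two pager helpers are invented names standing in for the real eviction path:

    #include <stdbool.h>
    #include <stdio.h>

    /* Set when an allocation hit the minimum threshold; cleared only once
       the free level climbs back above the high threshold (simplified). */
    static bool vm_page_alloc_paused;

    static void page_out_to_external_pager(void) { puts("pageout via external pager"); }
    static void use_double_paging(void)          { puts("double paging via default pager"); }

    /* When allocations are paused, external pagers may themselves be unable
       to allocate memory, so fall back to double paging. */
    static void evict_once(bool alloc_paused)
    {
        if (alloc_paused)
            use_double_paging();
        else
            page_out_to_external_pager();
    }

    int main(void)
    {
        vm_page_alloc_paused = true;   /* allocations hit the minimum threshold */
        /* Pass the global directly instead of recomputing a "low memory"
           estimate, as the commit above does. */
        evict_once(vm_page_alloc_paused);
        return 0;
    }
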
* VM: fix eviction logic error  (Richard Braun, 2016-11-30; 1 file changed, -1/+1)

    * vm/vm_page.c (vm_page_evict): Test both vm_page_external_pagedout and
    vm_page_laundry_count in order to determine there was "no pageout".

* VM: fix pageout stop condition  (Richard Braun, 2016-11-30; 1 file changed, -0/+5)

    When checking whether to continue paging out or not, the pageout daemon
    only considers the high free page threshold of a segment. But if e.g.
    the default pager had to allocate reserved pages during a previous
    pageout cycle, it could have exhausted a segment (this is currently
    only seen with the DMA segment). In that case, the high threshold
    cannot be reached because the segment has currently no pageable page.
    This change makes the pageout daemon identify this condition and
    consider the segment as usable in order to make progress. The segment
    will simply be ignored on the allocation path for unprivileged threads,
    and if this happens with too many segments, the system will fail at
    allocation time.

    * vm/vm_page.c (vm_page_seg_usable): Report usable if the segment has
    no pageable page.

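A compact sketch of the adjusted usability check, with an illustrative segment structure (the field names are assumptions):

    #include <stdbool.h>

    struct segment {
        unsigned long nr_free_pages;   /* currently free pages */
        unsigned long nr_pageable;     /* active + inactive (pageable) pages */
        unsigned long high_threshold;  /* target free level for this segment */
    };

    /* A segment with no pageable page can never reach its high threshold
       through pageout, so report it as usable to let the daemon make
       progress; unprivileged allocations will simply skip it. */
    bool segment_usable(const struct segment *seg)
    {
        if (seg->nr_pageable == 0)
            return true;

        return seg->nr_free_pages >= seg->high_threshold;
    }
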
* gsync: Avoid NULL pointer dereference  (Brent Baccala, 2016-11-10; 1 file changed, -9/+12)

    * kern/gsync.c (gsync_wait, gsync_wake, gsync_requeue): Return
    immediately if task argument is TASK_NULL.

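The guard itself is a one-liner; here is a hedged, self-contained sketch with stub types (the KERN_INVALID_ARGUMENT return code is an assumption):

    /* Stand-in types so the sketch compiles outside the kernel. */
    typedef int kern_return_t;
    typedef struct task *task_t;
    #define TASK_NULL ((task_t) 0)
    #define KERN_SUCCESS          0
    #define KERN_INVALID_ARGUMENT 4

    kern_return_t gsync_wait_sketch(task_t task, unsigned long addr)
    {
        /* Bail out before touching task->map or any other field. */
        if (task == TASK_NULL)
            return KERN_INVALID_ARGUMENT;

        /* ... proceed to look up addr in the task's map and wait ... */
        (void) addr;
        return KERN_SUCCESS;
    }
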
* Revert "i386: use ACPI to power off the machine"Justus Winter2016-11-0619-2198/+0
| | | | | | | | | This reverts commit c031b41b783cc99c0bd5aac7d14c1d6e34520397. Adding the ACPI parser from GRUB was a mistake as its license conflicts with the one used by the legacy Linux drivers. Furthermore, adding support for ACPI to the kernel violates the minimality principle.
* vm: Print names of maps in the debugger.  (Justus Winter, 2016-11-04; 1 file changed, -2/+2)

    * vm/vm_map.c (vm_map_print): Print name of the map.

* include: Fix new task notifications.  (Justus Winter, 2016-11-01; 1 file changed, -2/+5)

    Instead of copying the send right, move it. This fixes a send-right
    leak.

    * include/mach/task_notify.defs (task_move_t): New type.
    (mach_notify_new_task): Use the new type.

* i386: Use discontiguous page directories when using PAE.  (Justus Winter, 2016-11-01; 4 files changed, -131/+205)

    Previously, we used contiguous page directories four pages in length
    when using PAE. To prevent physical memory fragmentation, we need to
    use virtual memory for objects spanning multiple pages. Virtual kernel
    memory, however, is a scarce commodity.

    * i386/intel/pmap.h (lin2pdenum): Never include the page directory
    pointer table index.
    (lin2pdenum_cont): New macro which does include said index.
    (struct pmap): Remove the directory base pointer when using PAE.
    * i386/intel/pmap.c (pmap_pde): Fix lookup.
    (pmap_pte): Fix check for uninitialized pmap.
    (pmap_bootstrap): Do not store the page directory base if PAE.
    (pmap_init): Reduce size of page directories to one page, use
    direct-mapped memory.
    (pmap_create): Allocate four page directories per pmap.
    (pmap_destroy): Adapt to the discontinuous directories.
    (pmap_collect): Likewise.
    * i386/i386/xen.h (hyp_mmu_update_la): Adapt code manipulating the
    kernel's page directory.
    * i386/i386at/model_dep.c (i386at_init): Likewise.

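A sketch of what the two index macros compute for a 32-bit linear address under PAE (2-bit page directory pointer index, 9-bit directory index, 9-bit table index, 12-bit page offset); the macro bodies below are derived from that layout, not copied from the tree:

    #include <stdio.h>

    #define PDESHIFT 21   /* bits covered by one page directory entry under PAE */

    /* Index within one (single-page, 512-entry) page directory. */
    #define lin2pdenum(la)       (((la) >> PDESHIFT) & 0x1ff)

    /* Index as if the four directories formed one contiguous array,
       i.e. including the page directory pointer table index. */
    #define lin2pdenum_cont(la)  (((la) >> PDESHIFT) & 0x7ff)

    int main(void)
    {
        unsigned long la = 0xc0100000UL;  /* some kernel linear address */

        /* With discontiguous directories, first pick the directory page via
           the PDPT index, then index into it with lin2pdenum(). */
        printf("pdp %lu, pde %lu (contiguous index %lu)\n",
               la >> 30, lin2pdenum(la), lin2pdenum_cont(la));
        return 0;
    }
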
* gsync: fix licence  (Samuel Thibault, 2016-10-31; 2 files changed, -2/+2)

    Agustina relicenced her work.

    * kern/gsync.c: Relicence to GPL 2+.
    * kern/gsync.h: Relicence to GPL 2+.

* gsync: Fix crash when task is not current task  (Samuel Thibault, 2016-10-31; 1 file changed, -0/+9)

    * kern/gsync.c (gsync_wait, gsync_wake, gsync_requeue): Return
    KERN_FAILURE when task != current_task().

* gsync: Fix assertion failure with MACH_LDEBUG  (Samuel Thibault, 2016-10-31; 1 file changed, -5/+5)

    vm_map_lock_read calls check_simple_locks(), so we need to lock hbp
    after taking the vm_map read lock.

    * kern/gsync.c (gsync_wait): Call vm_map_lock_read before locking
    &hbp->lock.
    (gsync_wake): Likewise.

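The ordering constraint, sketched with pthread primitives standing in for the vm_map lock and the hash bucket simple lock (purely illustrative):

    #include <pthread.h>

    static pthread_rwlock_t map_lock = PTHREAD_RWLOCK_INITIALIZER;  /* vm_map lock */
    static pthread_mutex_t bucket_lock = PTHREAD_MUTEX_INITIALIZER; /* hbp->lock */

    void wait_path(void)
    {
        /* Take the map read lock first: with MACH_LDEBUG, vm_map_lock_read()
           asserts that no simple locks are held, so acquiring the bucket
           lock before it trips the assertion. */
        pthread_rwlock_rdlock(&map_lock);
        pthread_mutex_lock(&bucket_lock);

        /* ... look up the address, queue the waiter ... */

        pthread_mutex_unlock(&bucket_lock);
        pthread_rwlock_unlock(&map_lock);
    }
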
* Make multiboot cmdline and modules non-permanent reservations  (Samuel Thibault, 2016-10-31; 1 file changed, -7/+8)

    * i386/i386at/model_dep.c (register_boot_data): For multiboot cmdline,
    module structure, module data and module strings, set
    biosmem_register_boot_data temporary parameter to TRUE.

* Fix taking LDFLAGS into account  (Samuel Thibault, 2016-10-24; 1 file changed, -1/+1)

    * Makefile.am (clib-routines.o): Add $(LDFLAGS) to link command.

* Fix taking LDFLAGS into account  (Samuel Thibault, 2016-10-24; 1 file changed, -2/+2)

    * Makefile.am (gnumach_o_LINK, gnumach_LINK): Add $(LDFLAGS).

* Fix warnings  (Samuel Thibault, 2016-10-24; 2 files changed, -3/+3)

    * i386/i386/seg.h (fill_descriptor): Fix format for vm_offset_t.
    * i386/i386/xen.h (hyp_free_mfn, hyp_free_page): Fix format for
    unsigned long.

* i386: Allocate page directories using the slab allocator.  (Justus Winter, 2016-10-22; 1 file changed, -11/+28)

    * i386/intel/pmap.c (pd_cache): New variable.
    (pdp_cache): Likewise.
    (pmap_init): Initialize new caches.
    (pmap_create): Use the caches.
    (pmap_destroy): Free to the caches.

* Gracefully handle pmap allocation failures.  (Justus Winter, 2016-10-21; 2 files changed, -3/+17)

    * kern/task.c (task_create): Gracefully handle pmap allocation failures.
    * vm/vm_map.c (vm_map_fork): Likewise.

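A sketch of the pattern: check the pmap allocation and propagate the failure instead of continuing with a null pmap (stub types, and the KERN_RESOURCE_SHORTAGE return code is an assumption):

    typedef int kern_return_t;
    typedef struct pmap *pmap_t;
    typedef struct task *task_t;
    #define PMAP_NULL ((pmap_t) 0)
    #define KERN_SUCCESS            0
    #define KERN_RESOURCE_SHORTAGE  6

    pmap_t pmap_create(unsigned long size);   /* may return PMAP_NULL */

    kern_return_t task_create_sketch(task_t *child)
    {
        pmap_t pmap = pmap_create(0);

        if (pmap == PMAP_NULL)
            return KERN_RESOURCE_SHORTAGE;   /* fail cleanly instead of crashing later */

        /* ... build the task's address space on top of pmap ... */
        (void) child;
        return KERN_SUCCESS;
    }
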
* vm: Print map names in case of failures.  (Justus Winter, 2016-10-21; 1 file changed, -4/+8)

    * vm/vm_kern.c (kmem_alloc): Print map names in case of failures.
    (kmem_alloc_wired): Likewise.
    (kmem_alloc_aligned): Likewise.
    (kmem_alloc_pageable): Likewise.

* Make task notification ports mutable.  (Justus Winter, 2016-10-13; 1 file changed, -1/+21)

    * include/mach/task_notify.defs (task_notify_port_t): New type.
    (mach_notify_new_task): Use the specialized type.

* linux: Remove incompatible local prototype.  (Justus Winter, 2016-10-12; 1 file changed, -4/+3)

    * linux/dev/glue/kmem.c (vremap): Remove prototype, include the right
    header instead.

* Remove deprecated external memory management interface.  (Justus Winter, 2016-10-03; 14 files changed, -411/+68)

    * NEWS: Update.
    * device/dev_pager.c (device_pager_data_request): Prune unused branch.
    (device_pager_data_request_done): Remove function.
    (device_pager_data_write): Likewise.
    (device_pager_data_write_done): Likewise.
    (device_pager_copy): Use 'memory_object_ready'.
    * device/dev_pager.h (device_pager_data_write_done): Remove prototype.
    * device/device_pager.srv (memory_object_data_write): Remove macro.
    * doc/mach.texi: Update documentation.
    * include/mach/mach.defs (memory_object_data_provided): Drop RPC.
    (memory_object_set_attributes): Likewise.
    * include/mach/memory_object.defs: Update comments.
    (memory_object_data_write): Drop RPC.
    * include/mach/memory_object_default.defs: Update comments.
    * include/mach_debug/vm_info.h (VOI_STATE_USE_OLD_PAGEOUT): Drop macro.
    * vm/memory_object.c (memory_object_data_provided): Remove function.
    (memory_object_data_error): Simplify.
    (memory_object_set_attributes_common): Make static, remove unused
    parameters, simplify.
    (memory_object_change_attributes): Update callsite.
    (memory_object_set_attributes): Remove function.
    (memory_object_ready): Update callsite.
    * vm/vm_debug.c (mach_vm_object_info): Adapt to the changes.
    * vm/vm_object.c (vm_object_bootstrap): Likewise.
    * vm/vm_object.h (struct vm_object): Drop flag 'use_old_pageout'.
    * vm/vm_pageout.c: Update comments.
    (vm_pageout_page): Simplify.

* Fix format security  (David Michael, 2016-10-02; 1 file changed, -2/+2)

    * i386/i386at/biosmem.c (boot_panic): Use %s format instead of passing
    the string directly to `panic'.
    (biosmem_unregister_boot_data): Use %s format instead of passing
    `biosmem_panic_inval_boot_data' directly to `panic'.

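The general pattern behind such -Wformat-security fixes, shown with printf standing in for panic:

    #include <stdio.h>

    int main(void)
    {
        const char *msg = "invalid boot data";  /* could contain '%' characters */

        /* Unsafe: if msg ever contains format specifiers, the call
           misbehaves, and -Werror=format-security rejects it outright. */
        /* printf(msg); */

        /* Safe: the string is passed as data, not as a format. */
        printf("%s\n", msg);
        return 0;
    }
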
* kern: Improve panic messages from the scheduler.  (Justus Winter, 2016-10-01; 1 file changed, -9/+14)

    * kern/sched_prim.c (state_panic): Turn into macro, print symbolic
    values of thread state.

* kern: Improve assertions and panics.  (Justus Winter, 2016-10-01; 4 files changed, -14/+21)

    * kern/assert.h (Assert): Add function argument.
    (assert): Supply function argument.
    * kern/debug.c (Assert): Add function argument. Unify message format.
    (panic): Rename to 'Panic', add location information.
    * kern/debug.h (panic): Rename, and add a macro version that supplies
    the location.
    * linux/dev/include/linux/kernel.h: Use the new panic macro.

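A sketch of the macro technique described: a lower-case panic macro supplies file, line and function to an upper-case implementation. The names mirror the entry above; the bodies are illustrative:

    #include <stdarg.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Implementation: prefixes every panic with its source location. */
    static void Panic(const char *file, int line, const char *fun,
                      const char *fmt, ...)
    {
        va_list ap;

        fprintf(stderr, "panic: %s:%d: %s: ", file, line, fun);
        va_start(ap, fmt);
        vfprintf(stderr, fmt, ap);
        va_end(ap);
        fputc('\n', stderr);
        abort();
    }

    /* Callers keep writing panic("..."); the location is added for free. */
    #define panic(...) Panic(__FILE__, __LINE__, __func__, __VA_ARGS__)

    int main(void)
    {
        panic("thread state %d unexpected", 42);
    }
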
* biosmem: show memory map at boot  (Samuel Thibault, 2016-09-22; 1 file changed, -6/+2)

    * i386/i386at/biosmem.c (biosmem_type_desc, biosmem_map_show): Define
    even if DEBUG is 0. But show biosmem heap addresses only if DEBUG is 1.

* Enable high memory  (Richard Braun, 2016-09-21; 3 files changed, -22/+12)

    * i386/i386at/biosmem.c (biosmem_setup): Load the HIGHMEM segment if
    present.
    (biosmem_free_usable): Report high memory as usable.
    * vm/vm_page.c (vm_page_boot_table_size, vm_page_table_size,
    vm_page_mem_size, vm_page_mem_free): Scan all segments.
    * vm/vm_resident.c (vm_page_grab): Describe allocation strategy with
    regard to the HIGHMEM segment.

* Update device drivers for highmem support  (Richard Braun, 2016-09-21; 3 files changed, -93/+68)

    Unconditionally use bounce buffers for now.

    * linux/dev/glue/net.c (device_write): Unconditionally use a bounce
    buffer.
    * xen/block.c (device_write): Likewise.
    * xen/net.c: Include <device/ds_routines.h>.
    (device_write): Unconditionally use a bounce buffer.

* Update Linux block layer glue code  (Richard Braun, 2016-09-21; 1 file changed, -15/+16)

    The Linux block layer glue code needs to access page nodes through the
    appropriate interface now that they have been redefined as struct list.

    * linux/dev/glue/block.c: Include <kern/list.h>.
    (struct temp_data): Define member `pages' as a struct list.
    (alloc_buffer): Update to use list_xxx functions.
    (free_buffer, INIT_DATA, device_open, device_read): Likewise.

* Rework pageout to handle multiple segments  (Richard Braun, 2016-09-21; 6 files changed, -872/+1457)

    As we're about to use a new HIGHMEM segment, potentially much larger
    than the existing DMA and DIRECTMAP ones, it's now compulsory to make
    the pageout daemon aware of those segments. And while we're at it,
    let's fix some of the defects that have been plaguing pageout forever,
    such as throttling, and pageout of internal versus external pages
    (this commit notably introduces a hardcoded policy in which as many
    external pages are selected before considering internal pages).

    * kern/slab.c (kmem_pagefree_physmem): Update call to vm_page_release.
    * vm/vm_page.c: Include <kern/counters.h> and <vm/vm_pageout.h>.
    (VM_PAGE_SEG_THRESHOLD_MIN_NUM, VM_PAGE_SEG_THRESHOLD_MIN_DENOM,
    VM_PAGE_SEG_THRESHOLD_MIN, VM_PAGE_SEG_THRESHOLD_LOW_NUM,
    VM_PAGE_SEG_THRESHOLD_LOW_DENOM, VM_PAGE_SEG_THRESHOLD_LOW,
    VM_PAGE_SEG_THRESHOLD_HIGH_NUM, VM_PAGE_SEG_THRESHOLD_HIGH_DENOM,
    VM_PAGE_SEG_THRESHOLD_HIGH, VM_PAGE_SEG_MIN_PAGES,
    VM_PAGE_HIGH_ACTIVE_PAGE_NUM, VM_PAGE_HIGH_ACTIVE_PAGE_DENOM): New
    macros.
    (struct vm_page_queue): New type.
    (struct vm_page_seg): Add new members `min_free_pages',
    `low_free_pages', `high_free_pages', `active_pages', `nr_active_pages',
    `high_active_pages', `inactive_pages', `nr_inactive_pages'.
    (vm_page_alloc_paused): New variable.
    (vm_page_pageable, vm_page_can_move, vm_page_remove_mappings): New
    functions.
    (vm_page_seg_alloc_from_buddy): Pause allocations and start the pageout
    daemon as appropriate.
    (vm_page_queue_init, vm_page_queue_push, vm_page_queue_remove,
    vm_page_queue_first, vm_page_seg_get, vm_page_seg_index,
    vm_page_seg_compute_pageout_thresholds): New functions.
    (vm_page_seg_init): Initialize the new segment members.
    (vm_page_seg_add_active_page, vm_page_seg_remove_active_page,
    vm_page_seg_add_inactive_page, vm_page_seg_remove_inactive_page,
    vm_page_seg_pull_active_page, vm_page_seg_pull_inactive_page,
    vm_page_seg_pull_cache_page): New functions.
    (vm_page_seg_min_page_available, vm_page_seg_page_available,
    vm_page_seg_usable, vm_page_seg_double_lock, vm_page_seg_double_unlock,
    vm_page_seg_balance_page, vm_page_seg_balance, vm_page_seg_evict,
    vm_page_seg_compute_high_active_page, vm_page_seg_refill_inactive,
    vm_page_lookup_seg, vm_page_check): New functions.
    (vm_page_alloc_pa): Handle allocation failure from VM privileged thread.
    (vm_page_info_all): Display additional segment properties.
    (vm_page_wire, vm_page_unwire, vm_page_deactivate, vm_page_activate,
    vm_page_wait): Move from vm/vm_resident.c and rewrite to use segments.
    (vm_page_queues_remove, vm_page_check_usable, vm_page_may_balance,
    vm_page_balance_once, vm_page_balance, vm_page_evict_once): New
    functions.
    (VM_PAGE_MAX_LAUNDRY, VM_PAGE_MAX_EVICTIONS): New macros.
    (vm_page_evict, vm_page_refill_inactive): New functions.
    * vm/vm_page.h: Include <kern/list.h>.
    (struct vm_page): Remove member `pageq', reuse the `node' member
    instead, move the `listq' and `next' members above `vm_page_header'.
    (VM_PAGE_CHECK): Define as an alias to vm_page_check.
    (vm_page_check): New function declaration.
    (vm_page_queue_fictitious, vm_page_queue_active,
    vm_page_queue_inactive, vm_page_free_target, vm_page_free_min,
    vm_page_inactive_target, vm_page_free_reserved, vm_page_free_wanted):
    Remove extern declarations.
    (vm_page_external_pagedout): New extern declaration.
    (vm_page_release): Update declaration.
    (VM_PAGE_QUEUES_REMOVE): Define as an alias to vm_page_queues_remove.
    (VM_PT_PMAP, VM_PT_KMEM, VM_PT_STACK): Remove macros.
    (VM_PT_KERNEL): Update value.
    (vm_page_queues_remove, vm_page_balance, vm_page_evict,
    vm_page_refill_inactive): New function declarations.
    * vm/vm_pageout.c (VM_PAGEOUT_BURST_MAX, VM_PAGEOUT_BURST_MIN,
    VM_PAGEOUT_BURST_WAIT, VM_PAGEOUT_EMPTY_WAIT, VM_PAGEOUT_PAUSE_MAX,
    VM_PAGE_INACTIVE_TARGET, VM_PAGE_FREE_TARGET, VM_PAGE_FREE_MIN,
    VM_PAGE_FREE_RESERVED, VM_PAGEOUT_RESERVED_INTERNAL,
    VM_PAGEOUT_RESERVED_REALLY): Remove macros.
    (vm_pageout_reserved_internal, vm_pageout_reserved_really,
    vm_pageout_burst_max, vm_pageout_burst_min, vm_pageout_burst_wait,
    vm_pageout_empty_wait, vm_pageout_pause_count, vm_pageout_pause_max,
    vm_pageout_active, vm_pageout_inactive, vm_pageout_inactive_nolock,
    vm_pageout_inactive_busy, vm_pageout_inactive_absent,
    vm_pageout_inactive_used, vm_pageout_inactive_clean,
    vm_pageout_inactive_dirty, vm_pageout_inactive_double,
    vm_pageout_inactive_cleaned_external): Remove variables.
    (vm_pageout_requested, vm_pageout_continue): New variables.
    (vm_pageout_setup): Wait for page allocation to succeed instead of
    falling back to flush, update double paging protocol with caller, add
    pageout throttling setup.
    (vm_pageout_scan): Rewrite to use the new vm_page balancing, eviction
    and inactive queue refill functions.
    (vm_pageout_scan_continue, vm_pageout_continue): Remove functions.
    (vm_pageout): Rewrite.
    (vm_pageout_start, vm_pageout_resume): New functions.
    * vm/vm_pageout.h (vm_pageout_continue, vm_pageout_scan_continue):
    Remove function declarations.
    (vm_pageout_start, vm_pageout_resume): New function declarations.
    * vm/vm_resident.c: Include <kern/list.h>.
    (vm_page_queue_fictitious): Define as a struct list.
    (vm_page_free_wanted, vm_page_external_count, vm_page_free_avail,
    vm_page_queue_active, vm_page_queue_inactive, vm_page_free_target,
    vm_page_free_min, vm_page_inactive_target, vm_page_free_reserved):
    Remove variables.
    (vm_page_external_pagedout): New variable.
    (vm_page_bootstrap): Don't initialize removed variable, update
    initialization of vm_page_queue_fictitious.
    (vm_page_replace): Call VM_PAGE_QUEUES_REMOVE where appropriate.
    (vm_page_remove): Likewise.
    (vm_page_grab_fictitious): Update to use list_xxx functions.
    (vm_page_release_fictitious): Likewise.
    (vm_page_grab): Remove pageout related code.
    (vm_page_release): Add `laundry' and `external' parameters for pageout
    throttling.
    (vm_page_grab_contig): Remove pageout related code.
    (vm_page_free_contig): Likewise.
    (vm_page_free): Remove pageout related code, update call to
    vm_page_release.
    (vm_page_wait, vm_page_wire, vm_page_unwire, vm_page_deactivate,
    vm_page_activate): Move to vm/vm_page.c.

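A toy illustration of the selection policy mentioned above (external pages are considered before internal ones), using a simplified page descriptor and list:

    #include <stdbool.h>
    #include <stddef.h>

    struct page {
        bool external;      /* belongs to an external (pager-backed) object */
        struct page *next;
    };

    /* Pull the next eviction candidate from an inactive list: prefer pages
       of external objects, and fall back to internal pages only when no
       external page is left.  This mirrors the hardcoded policy described
       in the commit above. */
    struct page *pull_candidate(struct page *inactive)
    {
        struct page *p, *internal = NULL;

        for (p = inactive; p != NULL; p = p->next) {
            if (p->external)
                return p;
            if (internal == NULL)
                internal = p;
        }

        return internal;
    }
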
* Redefine what an external page is  (Richard Braun, 2016-09-21; 10 files changed, -103/+47)

    Instead of a "page considered external", which apparently takes into
    account whether a page is dirty or not, redefine this property to
    reliably mean "is in an external object". This commit mostly deals with
    the impact of this change on the page allocation interface.

    * i386/intel/pmap.c (pmap_page_table_page_alloc): Update call to
    vm_page_grab.
    * kern/slab.c (kmem_pagealloc_physmem): Use vm_page_grab instead of
    vm_page_grab_contig.
    (kmem_pagefree_physmem): Use vm_page_release instead of
    vm_page_free_contig.
    * linux/dev/glue/block.c (alloc_buffer, device_read): Update call to
    vm_page_grab.
    * vm/vm_fault.c (vm_fault_page): Update calls to vm_page_grab and
    vm_page_convert.
    * vm/vm_map.c (vm_map_copy_steal_pages): Update call to vm_page_grab.
    * vm/vm_page.h (struct vm_page): Remove `extcounted' member.
    (vm_page_external_limit, vm_page_external_count): Remove extern
    declarations.
    (vm_page_convert, vm_page_grab): Update declarations.
    (vm_page_release, vm_page_grab_phys_addr): New function declarations.
    * vm/vm_pageout.c (VM_PAGE_EXTERNAL_LIMIT): Remove macro.
    (VM_PAGE_EXTERNAL_TARGET): Likewise.
    (vm_page_external_target): Remove variable.
    (vm_pageout_scan): Remove specific handling of external pages.
    (vm_pageout): Don't set vm_page_external_limit and
    vm_page_external_target.
    * vm/vm_resident.c (vm_page_external_limit): Remove variable.
    (vm_page_insert, vm_page_replace, vm_page_remove): Update external page
    tracking.
    (vm_page_convert): Remove `external' parameter.
    (vm_page_grab): Likewise. Remove specific handling of external pages.
    (vm_page_grab_phys_addr): Update call to vm_page_grab.
    (vm_page_release): Remove `external' parameter and remove specific
    handling of external pages.
    (vm_page_wait): Remove specific handling of external pages.
    (vm_page_alloc): Update call to vm_page_grab.
    (vm_page_free): Update call to vm_page_release.
    * xen/block.c (device_read): Update call to vm_page_grab.
    * xen/net.c (device_write): Likewise.

* Replace vm_offset_t with phys_addr_t where appropriate  (Richard Braun, 2016-09-21; 6 files changed, -46/+42)

    * i386/i386/phys.c (pmap_zero_page, pmap_copy_page, copy_to_phys,
    copy_from_phys, kvtophys): Use the phys_addr_t type for physical
    addresses.
    * i386/intel/pmap.c (pmap_map, pmap_map_bd, pmap_destroy,
    pmap_remove_range, pmap_page_protect, pmap_enter, pmap_extract,
    pmap_collect, phys_attribute_clear, phys_attribute_test,
    pmap_clear_modify, pmap_is_modified, pmap_clear_reference,
    pmap_is_referenced): Likewise.
    * i386/intel/pmap.h (pt_entry_t): Unconditionally define as a
    phys_addr_t.
    (pmap_zero_page, pmap_copy_page, kvtophys): Use the phys_addr_t type
    for physical addresses.
    * vm/pmap.h (pmap_enter, pmap_page_protect, pmap_clear_reference,
    pmap_is_referenced, pmap_clear_modify, pmap_is_modified, pmap_extract,
    pmap_map_bd): Likewise.
    * vm/vm_page.h (vm_page_fictitious_addr): Declare as a phys_addr_t.
    * vm/vm_resident.c (vm_page_fictitious_addr): Likewise.
    (vm_page_grab_phys_addr): Change return type to phys_addr_t.

* Remove phys_first_addr and phys_last_addr global variables  (Richard Braun, 2016-09-21; 14 files changed, -119/+152)

    The old assumption that all physical memory is directly mapped in
    kernel space is about to go away. Those variables are directly linked
    to that assumption.

    * i386/i386/model_dep.h (phys_first_addr): Remove extern declaration.
    (phys_last_addr): Likewise.
    * i386/i386/phys.c (pmap_zero_page): Use VM_PAGE_DIRECTMAP_LIMIT
    instead of phys_last_addr.
    (pmap_copy_page, copy_to_phys, copy_from_phys): Likewise.
    * i386/i386/trap.c (user_trap): Remove check against phys_last_addr.
    * i386/i386at/biosmem.c (biosmem_bootstrap_common): Don't set
    phys_last_addr.
    * i386/i386at/mem.c (memmmap): Use vm_page_lookup_pa to determine if a
    physical address references physical memory.
    * i386/i386at/model_dep.c (phys_first_addr): Remove variable.
    (phys_last_addr): Likewise.
    (pmap_free_pages, pmap_valid_page): Remove functions.
    * i386/intel/pmap.c: Include i386at/biosmem.h.
    (pa_index): Turn into an alias for vm_page_table_index.
    (pmap_bootstrap): Replace uses of phys_first_addr and phys_last_addr
    as appropriate.
    (pmap_virtual_space): Use vm_page_table_size instead of phys_first_addr
    and phys_last_addr to obtain the number of physical pages.
    (pmap_verify_free): Remove function.
    (valid_page): Turn this macro into an inline function and rewrite
    using vm_page_lookup_pa.
    (pmap_page_table_page_alloc): Build the pmap VM object using
    vm_page_table_size to determine its size.
    (pmap_remove_range, pmap_page_protect, phys_attribute_clear,
    phys_attribute_test): Turn page indexes into unsigned long integers.
    (pmap_enter): Likewise. In addition, use either vm_page_lookup_pa or
    biosmem_directmap_end to determine if a physical address references
    physical memory.
    * i386/xen/xen.c (hyp_p2m_init): Use vm_page_table_size instead of
    phys_last_addr to obtain the number of physical pages.
    * kern/startup.c (phys_first_addr): Remove extern declaration.
    (phys_last_addr): Likewise.
    * linux/dev/init/main.c (linux_init): Use vm_page_seg_end with the
    appropriate segment selector instead of phys_last_addr to determine
    where high memory starts.
    * vm/pmap.h: Update requirements description.
    (pmap_free_pages, pmap_valid_page): Remove declarations.
    * vm/vm_page.c (vm_page_seg_end, vm_page_boot_table_size,
    vm_page_table_size, vm_page_table_index): New functions.
    * vm/vm_page.h (vm_page_seg_end, vm_page_table_size,
    vm_page_table_index): New function declarations.
    * vm/vm_resident.c (vm_page_bucket_count, vm_page_hash_mask): Define as
    unsigned long integers.
    (vm_page_bootstrap): Compute VP table size based on the page table size
    instead of the value returned by pmap_free_pages.

* VM: remove commented out code  (Richard Braun, 2016-09-20; 1 file changed, -23/+0)

    The vm_page_direct_va, vm_page_direct_pa and vm_page_direct_ptr
    functions were imported along with the new vm_page module, but never
    actually used since the kernel already has phystokv and kvtophys
    functions.

* VM: improve pageout deadlock workaround  (Richard Braun, 2016-09-16; 6 files changed, -37/+49)

    Commit 5dd4f67522ad0d49a2cecdb9b109251f546d4dd1 makes VM map entry
    allocation done with VM privilege, so that a VM map isn't held locked
    while physical allocations are paused, which may block the default
    pager during page eviction, causing a system-wide deadlock.

    First, it turns out that map entries aren't the only buffers allocated,
    and second, their number can't be easily determined, which makes a
    preallocation strategy very hard to implement. This change generalizes
    the strategy of VM privilege increase when a VM map is locked.

    * device/ds_routines.c (io_done_thread): Use integer values instead of
    booleans when setting VM privilege.
    * kern/thread.c (thread_init, thread_wire): Likewise.
    * vm/vm_pageout.c (vm_pageout): Likewise.
    * kern/thread.h (struct thread): Turn member `vm_privilege' into an
    unsigned integer.
    * vm/vm_map.c (vm_map_lock): New function, where VM privilege is
    temporarily increased.
    (vm_map_unlock): New function, where VM privilege is decreased.
    (_vm_map_entry_create): Remove VM privilege workaround from this
    function.
    * vm/vm_map.h (vm_map_lock, vm_map_unlock): Turn into functions.

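A sketch of the generalized workaround: the map lock wrappers bump a per-thread privilege counter so allocations performed while any map is locked may dip into reserved pages. Only vm_map_lock/vm_map_unlock and the vm_privilege member come from the entry above; everything else is stubbed:

    /* Simplified model of the counter-based VM privilege. */
    struct thread {
        unsigned int vm_privilege;   /* > 0: may allocate from reserved pages */
    };

    static struct thread *current_thread_stub;  /* stands in for current_thread() */

    struct vm_map { int dummy; /* ... lock, entries ... */ };

    static void lock_write(struct vm_map *map)   { (void) map; /* take the real lock */ }
    static void unlock_write(struct vm_map *map) { (void) map; }

    void vm_map_lock(struct vm_map *map)
    {
        lock_write(map);
        /* A counter, not a boolean: map locks can nest, and the privilege
           must only drop once the outermost lock is released. */
        if (current_thread_stub)
            current_thread_stub->vm_privilege++;
    }

    void vm_map_unlock(struct vm_map *map)
    {
        if (current_thread_stub)
            current_thread_stub->vm_privilege--;
        unlock_write(map);
    }
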
* Fix spurious warning  (Samuel Thibault, 2016-09-11; 1 file changed, -1/+1)

    * i386/i386/db_trace.c (db_i386_stack_trace): Do not check for frame
    validity if it is 0.

* Fix size of functions interrupt and syscall  (Samuel Thibault, 2016-09-11; 2 files changed, -0/+3)

    * i386/i386/locore.S (syscall): Add END(syscall).
    * i386/i386at/interrupt.S (interrupt): Add END(interrupt).

* Set function type on symbols created by ENTRY macro  (Samuel Thibault, 2016-09-11; 1 file changed, -7/+7)

    * i386/include/mach/i386/asm.h (ENTRY, ENTRY2, ASENTRY, Entry): Use
    .type @function on created entries.

* Close parenthesis  (Samuel Thibault, 2016-09-11; 1 file changed, -1/+3)

    * i386/i386/db_trace.c (db_i386_stack_trace): When stopping on zero
    frame, close parameters parenthesis.

* Fix exploring stack trace up to assembly  (Samuel Thibault, 2016-09-11; 1 file changed, -7/+8)

    * i386/i386/db_trace.c (db_i386_stack_trace): Do not stop as soon as
    frame is 0; look up the PC first, and stop only before accessing the
    frame content.

* ipc: Fix crash in debug code.  (Justus Winter, 2016-09-11; 1 file changed, -0/+3)

    * ipc/mach_debug.c (mach_port_kernel_object): Check that the receiver
    is valid.

* Remove map entry pageability property.  (Richard Braun, 2016-09-07; 7 files changed, -71/+15)

    Since the replacement of the zone allocator, kernel objects have been
    wired in memory. Besides, as of 5e9f6f (Stack the slab allocator
    directly on top of the physical allocator), there is a single cache
    used to allocate map entries. Those changes make the pageability
    attribute of VM maps irrelevant.

    * device/ds_routines.c (mach_device_init): Update call to kmem_submap.
    * ipc/ipc_init.c (ipc_init): Likewise.
    * kern/task.c (task_create): Update call to vm_map_create.
    * vm/vm_kern.c (kmem_submap): Remove `pageable' argument. Update call
    to vm_map_setup.
    (kmem_init): Update call to vm_map_setup.
    * vm/vm_kern.h (kmem_submap): Update declaration.
    * vm/vm_map.c (vm_map_setup): Remove `pageable' argument. Don't set
    `entries_pageable' member.
    (vm_map_create): Likewise.
    (vm_map_copyout): Don't bother creating copies of page entries with
    the right pageability.
    (vm_map_copyin): Don't set `entries_pageable' member.
    (vm_map_fork): Update call to vm_map_create.
    * vm/vm_map.h (struct vm_map_header): Remove `entries_pageable' member.
    (vm_map_setup, vm_map_create): Remove `pageable' argument.

* Fix registration of strings in boot data  (Richard Braun, 2016-09-06; 1 file changed, -2/+4)

    * i386/i386at/model_dep.c (register_boot_data): Use phystokv on strings
    when computing their length.

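The gist in a hedged fragment: phystokv translates a physical address into the kernel's direct-mapped virtual address, and must be applied before the string is dereferenced. The phystokv definition below is an illustrative stand-in, not the kernel's:

    #include <string.h>

    /* Illustrative stand-in for the kernel's phystokv(): physical
       addresses found in boot structures must be converted to kernel
       virtual addresses before being dereferenced. */
    #define KERNEL_MAP_BASE 0xc0000000UL
    #define phystokv(pa)    ((char *) ((unsigned long) (pa) + KERNEL_MAP_BASE))

    unsigned long boot_string_size(unsigned long phys_addr)
    {
        /* Wrong: strlen((char *) phys_addr) would read an unmapped address.
           Right: translate to a kernel virtual address first. */
        return strlen(phystokv(phys_addr)) + 1;
    }
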
* Make early physical page allocation truly reliable  (Richard Braun, 2016-09-06; 3 files changed, -194/+316)

    Import upstream biosmem changes and adjust for local modifications.
    Specifically, this change makes the biosmem module reliably track all
    boot data by storing their addresses in a sorted array. This allows
    both the early page allocator and the biosmem_free_usable function to
    accurately find any range of free pages.

    * i386/i386at/biosmem.c: Remove inclusion of <i386at/elf.h>.
    (_start, _end): Remove variable declarations.
    (BIOSMEM_MAX_BOOT_DATA): New macro.
    (struct biosmem_boot_data): New type.
    (biosmem_boot_data_array, biosmem_nr_boot_data): New variables.
    (biosmem_heap_start, biosmem_heap_bottom, biosmem_heap_top,
    biosmem_heap_end): Change type to phys_addr_t.
    (biosmem_panic_inval_boot_data): New variable.
    (biosmem_panic_too_many_boot_data): Likewise.
    (biosmem_panic_toobig_msg): Variable renamed ...
    (biosmem_panic_too_big_msg): ... to this.
    (biosmem_register_boot_data): New function.
    (biosmem_unregister_boot_data): Likewise.
    (biosmem_map_adjust): Update reference to panic message.
    (biosmem_map_find_avail): Add detailed description.
    (biosmem_save_cmdline_sizes): Remove function.
    (biosmem_find_heap_clip): Likewise.
    (biosmem_find_heap): Likewise.
    (biosmem_find_avail_clip, biosmem_find_avail): New functions.
    (biosmem_setup_allocator): Receive const multiboot info, replace calls
    to biosmem_find_heap with calls to biosmem_find_avail and update
    accordingly. Register the heap as boot data.
    (biosmem_xen_bootstrap): Register the Xen boot info and the heap as
    boot data.
    (biosmem_bootstrap): Receive const multiboot information. Remove call
    to biosmem_save_cmdline_sizes.
    (biosmem_bootalloc): Remove assertion on the VM system state.
    (biosmem_type_desc, biosmem_map_show): Build only if DEBUG is true.
    (biosmem_unregister_temporary_boot_data): New function.
    (biosmem_free_usable_range): Change address range format.
    (biosmem_free_usable_entry): Rewrite to use biosmem_find_avail without
    abusing it.
    (biosmem_free_usable): Call biosmem_unregister_temporary_boot_data,
    update call to biosmem_free_usable_entry.
    * i386/i386at/biosmem.h (biosmem_register_boot_data): New function.
    (biosmem_bootalloc): Update description.
    (biosmem_bootstrap): Update description and declaration.
    (biosmem_free_usable): Likewise.
    * i386/i386at/model_dep.c: Include <i386at/elf.h>.
    (machine_init): Update call to biosmem_free_usable.
    (register_boot_data): New function.
    (i386at_init): Call register_boot_data where appropriate.

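A self-contained sketch of the sorted boot data array idea: registered ranges are kept ordered by start address so a single ordered scan can find the free gaps between them. The structure, limits and names are made up for the example:

    #include <stdint.h>
    #include <string.h>

    #define MAX_BOOT_DATA 64

    struct boot_data {
        uint64_t start;
        uint64_t end;
        int temporary;   /* may be released once the VM system is up */
    };

    static struct boot_data boot_data_array[MAX_BOOT_DATA];
    static unsigned int nr_boot_data;

    /* Insert a range, keeping the array sorted by start address, so that
       finding available memory is a single ordered scan over the gaps. */
    int register_boot_data(uint64_t start, uint64_t end, int temporary)
    {
        unsigned int i;

        if (nr_boot_data == MAX_BOOT_DATA)
            return -1;   /* the kernel would panic here instead */

        for (i = 0; i < nr_boot_data; i++)
            if (start < boot_data_array[i].start)
                break;

        memmove(&boot_data_array[i + 1], &boot_data_array[i],
                (nr_boot_data - i) * sizeof(boot_data_array[0]));
        boot_data_array[i] = (struct boot_data){ start, end, temporary };
        nr_boot_data++;
        return 0;
    }
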
* Fix early physical page allocation  (Richard Braun, 2016-09-03; 5 files changed, -184/+280)

    Import upstream biosmem and vm_page changes, and adjust for local
    modifications. Specifically, the biosmem module was mistakenly loading
    physical segments that did not clip with the heap as completely
    available. This change makes it load them as completely unavailable
    during startup, and once the VM system is ready, additional pages are
    loaded.

    * i386/i386at/biosmem.c (DEBUG): New macro.
    (struct biosmem_segment): Remove members `avail_start' and `avail_end'.
    (biosmem_heap_cur): Remove variable.
    (biosmem_heap_bottom, biosmem_heap_top, biosmem_heap_topdown): New
    variables.
    (biosmem_find_boot_data_update, biosmem_find_boot_data): Remove
    functions.
    (biosmem_find_heap_clip, biosmem_find_heap): New functions.
    (biosmem_setup_allocator): Rewritten to use the new biosmem_find_heap
    function.
    (biosmem_bootalloc): Support both bottom-up and top-down allocations.
    (biosmem_directmap_size): Renamed to ...
    (biosmem_directmap_end): ... this function.
    (biosmem_load_segment): Fix segment loading.
    (biosmem_setup): Restrict usable memory to the directmap segment.
    (biosmem_free_usable_range): Add checks on input parameters.
    (biosmem_free_usable_update_start, biosmem_free_usable_start,
    biosmem_free_usable_reserved, biosmem_free_usable_end): Remove
    functions.
    (biosmem_free_usable_entry): Rewritten to use the new biosmem_find_heap
    function.
    (biosmem_free_usable): Restrict usable memory to the directmap segment.
    * i386/i386at/biosmem.h (biosmem_bootalloc): Update description.
    (biosmem_directmap_size): Renamed to ...
    (biosmem_directmap_end): ... this function.
    (biosmem_free_usable): Update declaration.
    * i386/i386at/model_dep.c (machine_init): Call biosmem_free_usable.
    * vm/vm_page.c (DEBUG): New macro.
    (struct vm_page_seg): New member `heap_present'.
    (vm_page_load): Remove heap related parameters.
    (vm_page_load_heap): New function.
    * vm/vm_page.h (vm_page_load): Remove heap related parameters. Update
    description.
    (vm_page_load_heap): New function.

* pmap: fix map window creation on xen  (Richard Braun, 2016-09-01; 1 file changed, -0/+12)

    * i386/intel/pmap.c (pmap_get_mapwindow, pmap_put_mapwindow): Use the
    appropriate xen hypercall if building for paravirtualized page table
    management.

* Avoid using non-ascii source encoding  (Samuel Thibault, 2016-08-31; 1 file changed, -2/+2)

    * xen/console.c (hypcnintr): Replace latin1 £ character with the 0xA3
    number.

* vm: fix boot on xen  (Richard Braun, 2016-08-29; 1 file changed, -3/+10)

    * vm/vm_map.c (_vm_map_entry_create): Make sure there is a thread
    before accessing VM privilege.
