path: root/vm/vm_page.c
Commit message | Author | Age | Files | Lines
* db_show_vmstat: Show segment name rather than its number | Samuel Thibault | 2021-08-23 | 1 | -1/+1
* db_show_vmstat: Also show segment size | Samuel Thibault | 2021-08-22 | 1 | -0/+2
  * vm/vm_page.c (db_show_vmstat): Add printing the segment size.
* db_show_vmstat: Drop duplicate output | Samuel Thibault | 2021-08-22 | 1 | -3/+0
  * vm/vm_page.c (db_show_vmstat): Drop displaying cache numbers a second time.
* db show vmstat: also show segments stats | Samuel Thibault | 2021-08-21 | 1 | -0/+23
  * vm/vm_page.c (db_show_vmstat)
* db: Add show vmstat command | Samuel Thibault | 2021-08-21 | 1 | -0/+43
  with an output similar to the userland vmstat command
  * vm/vm_page.c (db_show_vmstat): New function.
  * vm/vm_page.h (db_show_vmstat): New prototype.
  * ddb/db_command.c (db_show_cmds): Add vmstat command.
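The entry above adds a ddb command modelled on the userland vmstat output, registered through the db_show_cmds table in ddb/db_command.c. The following is a minimal sketch of such a command, assuming the usual ddb command signature; it only uses counters named elsewhere in this log (vm_page_mem_size, vm_page_mem_free), and the printed fields, their units and the exact output format are assumptions rather than the commit's actual code.

    #include <ddb/db_output.h>      /* db_printf */
    #include <vm/vm_page.h>         /* vm_page_mem_size, vm_page_mem_free */

    /* Sketch only: signature follows the common ddb command convention. */
    void db_show_vmstat(db_expr_t addr, boolean_t have_addr,
                        db_expr_t count, const char *modif)
    {
        /* Units are assumptions: mem_size in bytes, mem_free in pages. */
        db_printf("%lu pages of physical memory\n",
                  (unsigned long) (vm_page_mem_size() / PAGE_SIZE));
        db_printf("%lu free pages\n",
                  (unsigned long) vm_page_mem_free());
    }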
* VM: really fix pageout of external objects backed by the default pager | Richard Braun | 2016-12-27 | 1 | -15/+7
  Commit eb07428ffb0009085fcd01dd1b79d9953af8e0ad does fix pageout of external objects backed by the default pager, but the way it's done has a vicious side effect: because they're considered external, the pageout daemon can keep evicting them even though the external pagers haven't released them, unlike internal pages, which must all be released before the pageout daemon can make progress. This can lead to a situation where too many pages become wired, the default pager cannot allocate memory to process new requests, and the pageout daemon cannot recycle any more pages, causing a panic. This change makes the pageout daemon use the same strategy for both internal pages and external pages sent to the default pager: use the laundry bit and wait for all laundry pages to be released, thereby completely synchronizing the pageout daemon and the default pager.
  * vm/vm_page.c (vm_page_can_move): Allow external laundry pages to be moved. (vm_page_seg_evict): Don't alter the `external_laundry' bit, merely disable double paging for external pages sent to the default pager.
  * vm/vm_pageout.c: Include vm/memory_object.h. (vm_pageout_setup): Don't check whether the `external_laundry' bit is set, but handle external pages sent to the default pager the same as internal pages.
* VM: fix pageout of external objects backed by the default pager | Richard Braun | 2016-12-24 | 1 | -9/+28
  Double paging on such objects causes deadlocks.
  * vm/vm_page.c: Include <vm/memory_object.h>. (vm_page_seg_evict): Rename laundry to double_paging to increase clarity. Set the `external_laundry' bit when evicting a page from an external object backed by the default pager.
  * vm/vm_pageout.c (vm_pageout_setup): Wire page if the `external_laundry' bit is set.
* VM: fix pageability check | Richard Braun | 2016-12-24 | 1 | -0/+1
  Unlike laundry pages sent to the default pager, pages marked with the `external_laundry' bit remain in the page queues and must be filtered out by the pageability check.
  * vm/vm_page.c (vm_page_can_move): Check the `external_laundry' bit.
* VM: fix pageout throttling to external pagers | Richard Braun | 0 min. | 1 | -9/+5
  Since the VM system has been tracking whether pages belong to internal or external objects, pageout throttling to external pagers has simply not been working. The reason is that, on pageout, requests for external pages are correctly tracked, but on page release (which is used to acknowledge the request), external pages are not marked external any more. This is because the external bit tracks whether a page belongs to an external object, and all pages, including external ones, are moved to an internal object during pageout. To solve this issue, a new "external_laundry" bit is added. It has the same purpose as the laundry bit, but for external pagers.
  * vm/vm_page.c (vm_page_seg_min_page_available): Function unused, remove. (vm_page_seg_evict): Use vm_page_external_laundry_count instead of vm_page_external_pagedout. Add an assertion about double paging. (vm_page_check_usable): Use vm_page_external_laundry_count instead of vm_page_external_pagedout. (vm_page_evict): Likewise.
  * vm/vm_page.h (struct vm_page): New `external_laundry' member. (vm_page_external_pagedout): Rename to ... (vm_page_external_laundry_count): ... this.
  * vm/vm_pageout.c: Include kern/printf.h. (DEBUG): New macro. (VM_PAGEOUT_TIMEOUT): Likewise. (vm_pageout_setup): Use vm_page_external_laundry_count instead of vm_page_external_pagedout. Set `external_laundry' where appropriate. (vm_pageout): Use VM_PAGEOUT_TIMEOUT with thread_set_timeout. Add debugging code, commented out by default.
  * vm/vm_resident.c (vm_page_external_pagedout): Rename to ... (vm_page_external_laundry_count): ... this. (vm_page_init_template): Set `external_laundry' member to FALSE. (vm_page_release): Rename external parameter to external_laundry. Slightly change pageout resuming. (vm_page_free): Rename external variable to external_laundry.
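As a rough illustration of the mechanism described above: the new bit mirrors the existing laundry bit, and the count it feeds is what the throttled pageout daemon waits on. Only the names `laundry', `external_laundry' and vm_page_external_laundry_count come from the entry; the field widths, the helper name and the resume mechanism in the sketch below are assumptions.

    #include <kern/assert.h>
    #include <vm/vm_page.h>

    /* Two parallel laundry bits (widths assumed). */
    struct vm_page_laundry_bits_sketch {
        unsigned laundry : 1;           /* pageout to the default pager in flight */
        unsigned external_laundry : 1;  /* pageout to an external pager in flight */
    };

    /* Sketch of the release-side acknowledgement for external pageouts,
       conceptually part of vm_page_release(). */
    static void vm_page_ack_external_pageout(void)
    {
        assert(vm_page_external_laundry_count > 0);
        vm_page_external_laundry_count--;

        if (vm_page_external_laundry_count == 0) {
            /* Resume the throttled pageout daemon here; the exact
               mechanism is an assumption. */
        }
    }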
* VM: fix pageout on low memory | Richard Braun | 2016-11-30 | 1 | -37/+9
  Instead of determining if memory is low, directly use the vm_page_alloc_paused variable, which is set when memory reaches a minimum threshold and stays set until free memory gets back above the high thresholds. This makes sure double paging is used when external pagers are unable to allocate memory.
  * vm/vm_page.c (vm_page_seg_evict): Rename low_memory to alloc_paused. (vm_page_evict_once): Remove low_memory and its computation. Blindly pass the new alloc_paused argument instead. (vm_page_evict): Pass the value of vm_page_alloc_paused to vm_page_evict_once.
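A hedged sketch of the hysteresis described above follows. The threshold member names are taken from the "Rework pageout to handle multiple segments" entry further down; the `nr_free_pages' member, the helper name and the argument-less vm_pageout_start() call are assumptions.

    /* Sketch: toggling vm_page_alloc_paused with hysteresis. */
    static void vm_page_seg_update_alloc_paused(struct vm_page_seg *seg)
    {
        if (seg->nr_free_pages <= seg->min_free_pages) {
            vm_page_alloc_paused = TRUE;    /* set on the allocation path */
            vm_pageout_start();             /* wake the pageout daemon */
        } else if (seg->nr_free_pages > seg->high_free_pages) {
            vm_page_alloc_paused = FALSE;   /* cleared once pageout has caught up */
        }
    }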
* VM: fix eviction logic error | Richard Braun | 2016-11-30 | 1 | -1/+1
  * vm/vm_page.c (vm_page_evict): Test both vm_page_external_pagedout and vm_page_laundry_count in order to determine there was "no pageout".
* VM: fix pageout stop condition | Richard Braun | 2016-11-30 | 1 | -0/+5
  When checking whether to continue paging out or not, the pageout daemon only considers the high free page threshold of a segment. But if e.g. the default pager had to allocate reserved pages during a previous pageout cycle, it could have exhausted a segment (this is currently only seen with the DMA segment). In that case, the high threshold cannot be reached because the segment currently has no pageable page. This change makes the pageout daemon identify this condition and consider the segment as usable in order to make progress. The segment will simply be ignored on the allocation path for unprivileged threads, and if this happens with too many segments, the system will fail at allocation time.
  * vm/vm_page.c (vm_page_seg_usable): Report usable if the segment has no pageable page.
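A hedged sketch of the added check: a segment with no pageable page is reported usable so the daemon can stop waiting on it. The members nr_active_pages and nr_inactive_pages are named in the "Rework pageout" entry below; the `nr_free_pages'/`high_free_pages' comparison standing in for the original condition is an assumption.

    static boolean_t vm_page_seg_usable(const struct vm_page_seg *seg)
    {
        /* Added check: with no pageable page, the high threshold can never
           be reached, so report the segment usable to let pageout finish. */
        if ((seg->nr_active_pages + seg->nr_inactive_pages) == 0)
            return TRUE;

        /* Pre-existing condition (sketched). */
        return seg->nr_free_pages >= seg->high_free_pages;
    }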
* Enable high memory | Richard Braun | 2016-09-21 | 1 | -16/+0
  * i386/i386at/biosmem.c (biosmem_setup): Load the HIGHMEM segment if present. (biosmem_free_usable): Report high memory as usable.
  * vm/vm_page.c (vm_page_boot_table_size, vm_page_table_size, vm_page_mem_size, vm_page_mem_free): Scan all segments.
  * vm/vm_resident.c (vm_page_grab): Describe allocation strategy with regard to the HIGHMEM segment.
* Rework pageout to handle multiple segments | Richard Braun | 2016-09-21 | 1 | -3/+1241
  As we're about to use a new HIGHMEM segment, potentially much larger than the existing DMA and DIRECTMAP ones, it's now compulsory to make the pageout daemon aware of those segments. And while we're at it, let's fix some of the defects that have been plaguing pageout forever, such as throttling, and pageout of internal versus external pages (this commit notably introduces a hardcoded policy in which as many external pages as possible are selected before considering internal pages).
  * kern/slab.c (kmem_pagefree_physmem): Update call to vm_page_release.
  * vm/vm_page.c: Include <kern/counters.h> and <vm/vm_pageout.h>. (VM_PAGE_SEG_THRESHOLD_MIN_NUM, VM_PAGE_SEG_THRESHOLD_MIN_DENOM, VM_PAGE_SEG_THRESHOLD_MIN, VM_PAGE_SEG_THRESHOLD_LOW_NUM, VM_PAGE_SEG_THRESHOLD_LOW_DENOM, VM_PAGE_SEG_THRESHOLD_LOW, VM_PAGE_SEG_THRESHOLD_HIGH_NUM, VM_PAGE_SEG_THRESHOLD_HIGH_DENOM, VM_PAGE_SEG_THRESHOLD_HIGH, VM_PAGE_SEG_MIN_PAGES, VM_PAGE_HIGH_ACTIVE_PAGE_NUM, VM_PAGE_HIGH_ACTIVE_PAGE_DENOM): New macros. (struct vm_page_queue): New type. (struct vm_page_seg): Add new members `min_free_pages', `low_free_pages', `high_free_pages', `active_pages', `nr_active_pages', `high_active_pages', `inactive_pages', `nr_inactive_pages'. (vm_page_alloc_paused): New variable. (vm_page_pageable, vm_page_can_move, vm_page_remove_mappings): New functions. (vm_page_seg_alloc_from_buddy): Pause allocations and start the pageout daemon as appropriate. (vm_page_queue_init, vm_page_queue_push, vm_page_queue_remove, vm_page_queue_first, vm_page_seg_get, vm_page_seg_index, vm_page_seg_compute_pageout_thresholds): New functions. (vm_page_seg_init): Initialize the new segment members. (vm_page_seg_add_active_page, vm_page_seg_remove_active_page, vm_page_seg_add_inactive_page, vm_page_seg_remove_inactive_page, vm_page_seg_pull_active_page, vm_page_seg_pull_inactive_page, vm_page_seg_pull_cache_page): New functions. (vm_page_seg_min_page_available, vm_page_seg_page_available, vm_page_seg_usable, vm_page_seg_double_lock, vm_page_seg_double_unlock, vm_page_seg_balance_page, vm_page_seg_balance, vm_page_seg_evict, vm_page_seg_compute_high_active_page, vm_page_seg_refill_inactive, vm_page_lookup_seg, vm_page_check): New functions. (vm_page_alloc_pa): Handle allocation failure from VM privileged thread. (vm_page_info_all): Display additional segment properties. (vm_page_wire, vm_page_unwire, vm_page_deactivate, vm_page_activate, vm_page_wait): Move from vm/vm_resident.c and rewrite to use segments. (vm_page_queues_remove, vm_page_check_usable, vm_page_may_balance, vm_page_balance_once, vm_page_balance, vm_page_evict_once): New functions. (VM_PAGE_MAX_LAUNDRY, VM_PAGE_MAX_EVICTIONS): New macros. (vm_page_evict, vm_page_refill_inactive): New functions.
  * vm/vm_page.h: Include <kern/list.h>. (struct vm_page): Remove member `pageq', reuse the `node' member instead, move the `listq' and `next' members above `vm_page_header'. (VM_PAGE_CHECK): Define as an alias to vm_page_check. (vm_page_check): New function declaration. (vm_page_queue_fictitious, vm_page_queue_active, vm_page_queue_inactive, vm_page_free_target, vm_page_free_min, vm_page_inactive_target, vm_page_free_reserved, vm_page_free_wanted): Remove extern declarations. (vm_page_external_pagedout): New extern declaration. (vm_page_release): Update declaration. (VM_PAGE_QUEUES_REMOVE): Define as an alias to vm_page_queues_remove. (VM_PT_PMAP, VM_PT_KMEM, VM_PT_STACK): Remove macros. (VM_PT_KERNEL): Update value. (vm_page_queues_remove, vm_page_balance, vm_page_evict, vm_page_refill_inactive): New function declarations.
  * vm/vm_pageout.c (VM_PAGEOUT_BURST_MAX, VM_PAGEOUT_BURST_MIN, VM_PAGEOUT_BURST_WAIT, VM_PAGEOUT_EMPTY_WAIT, VM_PAGEOUT_PAUSE_MAX, VM_PAGE_INACTIVE_TARGET, VM_PAGE_FREE_TARGET, VM_PAGE_FREE_MIN, VM_PAGE_FREE_RESERVED, VM_PAGEOUT_RESERVED_INTERNAL, VM_PAGEOUT_RESERVED_REALLY): Remove macros. (vm_pageout_reserved_internal, vm_pageout_reserved_really, vm_pageout_burst_max, vm_pageout_burst_min, vm_pageout_burst_wait, vm_pageout_empty_wait, vm_pageout_pause_count, vm_pageout_pause_max, vm_pageout_active, vm_pageout_inactive, vm_pageout_inactive_nolock, vm_pageout_inactive_busy, vm_pageout_inactive_absent, vm_pageout_inactive_used, vm_pageout_inactive_clean, vm_pageout_inactive_dirty, vm_pageout_inactive_double, vm_pageout_inactive_cleaned_external): Remove variables. (vm_pageout_requested, vm_pageout_continue): New variables. (vm_pageout_setup): Wait for page allocation to succeed instead of falling back to flush, update double paging protocol with caller, add pageout throttling setup. (vm_pageout_scan): Rewrite to use the new vm_page balancing, eviction and inactive queue refill functions. (vm_pageout_scan_continue, vm_pageout_continue): Remove functions. (vm_pageout): Rewrite. (vm_pageout_start, vm_pageout_resume): New functions.
  * vm/vm_pageout.h (vm_pageout_continue, vm_pageout_scan_continue): Remove function declarations. (vm_pageout_start, vm_pageout_resume): New function declarations.
  * vm/vm_resident.c: Include <kern/list.h>. (vm_page_queue_fictitious): Define as a struct list. (vm_page_free_wanted, vm_page_external_count, vm_page_free_avail, vm_page_queue_active, vm_page_queue_inactive, vm_page_free_target, vm_page_free_min, vm_page_inactive_target, vm_page_free_reserved): Remove variables. (vm_page_external_pagedout): New variable. (vm_page_bootstrap): Don't initialize removed variable, update initialization of vm_page_queue_fictitious. (vm_page_replace): Call VM_PAGE_QUEUES_REMOVE where appropriate. (vm_page_remove): Likewise. (vm_page_grab_fictitious): Update to use list_xxx functions. (vm_page_release_fictitious): Likewise. (vm_page_grab): Remove pageout related code. (vm_page_release): Add `laundry' and `external' parameters for pageout throttling. (vm_page_grab_contig): Remove pageout related code. (vm_page_free_contig): Likewise. (vm_page_free): Remove pageout related code, update call to vm_page_release. (vm_page_wait, vm_page_wire, vm_page_unwire, vm_page_deactivate, vm_page_activate): Move to vm/vm_page.c.
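To make the NUM/DENOM threshold macros listed above concrete, here is a hedged sketch of how vm_page_seg_compute_pageout_thresholds could derive per-segment values from the segment size; the `start'/`end' members and the use of PAGE_SIZE are assumptions, only the macro and member names come from the entry.

    static void
    vm_page_seg_compute_pageout_thresholds(struct vm_page_seg *seg)
    {
        /* Number of pages managed by the segment (boundary members assumed). */
        unsigned long pages = (seg->end - seg->start) / PAGE_SIZE;

        seg->min_free_pages  = pages * VM_PAGE_SEG_THRESHOLD_MIN_NUM
                               / VM_PAGE_SEG_THRESHOLD_MIN_DENOM;
        seg->low_free_pages  = pages * VM_PAGE_SEG_THRESHOLD_LOW_NUM
                               / VM_PAGE_SEG_THRESHOLD_LOW_DENOM;
        seg->high_free_pages = pages * VM_PAGE_SEG_THRESHOLD_HIGH_NUM
                               / VM_PAGE_SEG_THRESHOLD_HIGH_DENOM;
    }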
* Remove phys_first_addr and phys_last_addr global variables | Richard Braun | 2016-09-21 | 1 | -0/+70
  The old assumption that all physical memory is directly mapped in kernel space is about to go away. Those variables are directly linked to that assumption.
  * i386/i386/model_dep.h (phys_first_addr): Remove extern declaration. (phys_last_addr): Likewise.
  * i386/i386/phys.c (pmap_zero_page): Use VM_PAGE_DIRECTMAP_LIMIT instead of phys_last_addr. (pmap_copy_page, copy_to_phys, copy_from_phys): Likewise.
  * i386/i386/trap.c (user_trap): Remove check against phys_last_addr.
  * i386/i386at/biosmem.c (biosmem_bootstrap_common): Don't set phys_last_addr.
  * i386/i386at/mem.c (memmmap): Use vm_page_lookup_pa to determine if a physical address references physical memory.
  * i386/i386at/model_dep.c (phys_first_addr): Remove variable. (phys_last_addr): Likewise. (pmap_free_pages, pmap_valid_page): Remove functions.
  * i386/intel/pmap.c: Include i386at/biosmem.h. (pa_index): Turn into an alias for vm_page_table_index. (pmap_bootstrap): Replace uses of phys_first_addr and phys_last_addr as appropriate. (pmap_virtual_space): Use vm_page_table_size instead of phys_first_addr and phys_last_addr to obtain the number of physical pages. (pmap_verify_free): Remove function. (valid_page): Turn this macro into an inline function and rewrite using vm_page_lookup_pa. (pmap_page_table_page_alloc): Build the pmap VM object using vm_page_table_size to determine its size. (pmap_remove_range, pmap_page_protect, phys_attribute_clear, phys_attribute_test): Turn page indexes into unsigned long integers. (pmap_enter): Likewise. In addition, use either vm_page_lookup_pa or biosmem_directmap_end to determine if a physical address references physical memory.
  * i386/xen/xen.c (hyp_p2m_init): Use vm_page_table_size instead of phys_last_addr to obtain the number of physical pages.
  * kern/startup.c (phys_first_addr): Remove extern declaration. (phys_last_addr): Likewise.
  * linux/dev/init/main.c (linux_init): Use vm_page_seg_end with the appropriate segment selector instead of phys_last_addr to determine where high memory starts.
  * vm/pmap.h: Update requirements description. (pmap_free_pages, pmap_valid_page): Remove declarations.
  * vm/vm_page.c (vm_page_seg_end, vm_page_boot_table_size, vm_page_table_size, vm_page_table_index): New functions.
  * vm/vm_page.h (vm_page_seg_end, vm_page_table_size, vm_page_table_index): New function declarations.
  * vm/vm_resident.c (vm_page_bucket_count, vm_page_hash_mask): Define as unsigned long integers. (vm_page_bootstrap): Compute VP table size based on the page table size instead of the value returned by pmap_free_pages.
* Fix early physical page allocation | Richard Braun | 2016-09-03 | 1 | -8/+37
  Import upstream biosmem and vm_page changes, and adjust for local modifications. Specifically, the biosmem module was mistakenly loading physical segments that did not clip with the heap as completely available. This change makes it load them as completely unavailable during startup, and once the VM system is ready, additional pages are loaded.
  * i386/i386at/biosmem.c (DEBUG): New macro. (struct biosmem_segment): Remove members `avail_start' and `avail_end'. (biosmem_heap_cur): Remove variable. (biosmem_heap_bottom, biosmem_heap_top, biosmem_heap_topdown): New variables. (biosmem_find_boot_data_update, biosmem_find_boot_data): Remove functions. (biosmem_find_heap_clip, biosmem_find_heap): New functions. (biosmem_setup_allocator): Rewritten to use the new biosmem_find_heap function. (biosmem_bootalloc): Support both bottom-up and top-down allocations. (biosmem_directmap_size): Renamed to ... (biosmem_directmap_end): ... this function. (biosmem_load_segment): Fix segment loading. (biosmem_setup): Restrict usable memory to the directmap segment. (biosmem_free_usable_range): Add checks on input parameters. (biosmem_free_usable_update_start, biosmem_free_usable_start, biosmem_free_usable_reserved, biosmem_free_usable_end): Remove functions. (biosmem_free_usable_entry): Rewritten to use the new biosmem_find_heap function. (biosmem_free_usable): Restrict usable memory to the directmap segment.
  * i386/i386at/biosmem.h (biosmem_bootalloc): Update description. (biosmem_directmap_size): Renamed to ... (biosmem_directmap_end): ... this function. (biosmem_free_usable): Update declaration.
  * i386/i386at/model_dep.c (machine_init): Call biosmem_free_usable.
  * vm/vm_page.c (DEBUG): New macro. (struct vm_page_seg): New member `heap_present'. (vm_page_load): Remove heap related parameters. (vm_page_load_heap): New function.
  * vm/vm_page.h (vm_page_load): Remove heap related parameters. Update description. (vm_page_load_heap): New function.
* Optimize slab lookup on the free path | Richard Braun | 2016-02-22 | 1 | -0/+1
  Caches that use external slab data but allocate slabs from the direct physical mapping can look up slab data in constant time by associating the slab data directly with the underlying page.
  * kern/slab.c (kmem_slab_use_tree): Take KMEM_CF_DIRECTMAP into account. (kmem_slab_create): Set page private data if relevant. (kmem_slab_destroy): Clear page private data if relevant. (kmem_cache_free_to_slab): Use page private data if relevant.
  * vm/vm_page.c (vm_page_init_pa): Set `priv' member to NULL.
  * vm/vm_page.h (vm_page_set_priv, vm_page_get_priv): New functions.
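The entry above introduces vm_page_set_priv and vm_page_get_priv; a minimal sketch of such accessors and of the constant-time free-path lookup they enable is shown below. The `priv' member name comes from the entry itself; the kmem_* call sites in the trailing comment are illustrative only.

    static inline void vm_page_set_priv(struct vm_page *page, void *priv)
    {
        page->priv = priv;
    }

    static inline void * vm_page_get_priv(const struct vm_page *page)
    {
        return page->priv;
    }

    /* Illustrative usage: kmem_slab_create() can attach the slab data to the
     * page backing a direct-mapped buffer,
     *     vm_page_set_priv(page, slab);
     * and kmem_cache_free_to_slab() can then find it again in constant time,
     *     slab = vm_page_get_priv(vm_page_lookup_pa(pa));
     */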
* Fix various memory managment errors | Richard Braun | 2016-02-02 | 1 | -4/+23
  A few errors were introduced in the latest changes.
  o Add VM_PAGE_WAIT calls around physical allocation attempts in case of memory exhaustion.
  o Fix stack release.
  o Fix memory exhaustion report.
  o Fix free page accounting.
  * kern/slab.c (kmem_pagealloc, kmem_pagefree): New functions. (kmem_slab_create, kmem_slab_destroy, kalloc, kfree): Use kmem_pagealloc and kmem_pagefree instead of the raw page allocation functions. (kmem_cache_compute_sizes): Don't store slab order.
  * kern/slab.h (struct kmem_cache): Remove `slab_order' member.
  * kern/thread.c (stack_alloc): Call VM_PAGE_WAIT in case of memory exhaustion. (stack_collect): Call vm_page_free_contig instead of kmem_free to release pages.
  * vm/vm_page.c (vm_page_seg_alloc): Fix memory exhaustion report. (vm_page_setup): Don't update vm_page_free_count. (vm_page_free_pa): Check page parameter. (vm_page_mem_free): New function.
  * vm/vm_page.h (vm_page_free_count): Remove extern declaration. (vm_page_mem_free): New prototype.
  * vm/vm_pageout.c: Update comments not to refer to vm_page_free_count. (vm_pageout_scan, vm_pageout_continue, vm_pageout): Use vm_page_mem_free instead of vm_page_free_count, update types accordingly.
  * vm/vm_resident.c (vm_page_free_count, vm_page_free_count_minimum): Remove variables. (vm_page_free_avail): New variable. (vm_page_bootstrap, vm_page_grab, vm_page_release, vm_page_grab_contig, vm_page_free_contig, vm_page_wait): Use vm_page_mem_free instead of vm_page_free_count, update types accordingly, don't set vm_page_free_count_minimum.
  * vm/vm_user.c (vm_statistics): Likewise.
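The first "o" item above is the interesting one for callers: allocation attempts are wrapped so that memory exhaustion blocks instead of failing. Below is a hedged sketch of such a kmem_pagealloc() wrapper; only the names kmem_pagealloc, vm_page_grab_contig and VM_PAGE_WAIT appear in the log, while the segment selector, the address conversion and the exact signatures are assumptions.

    /* Sketch of a blocking page allocation wrapper for the slab allocator. */
    static vm_offset_t kmem_pagealloc(vm_size_t size)
    {
        struct vm_page *page;

        for (;;) {
            page = vm_page_grab_contig(size, VM_PAGE_SEL_DIRECTMAP);

            if (page != NULL)
                break;

            VM_PAGE_WAIT(0);    /* sleep until the pageout daemon frees memory */
        }

        return phystokv(vm_page_to_pa(page));
    }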
* Import the vm_page module from X15 and relicense to GPLv2+ | Richard Braun | 2016-01-23 | 1 | -0/+762
  * vm/vm_page.c: New file.
  * vm/vm_page.h: Merge vm_page.h from X15. (struct vm_page): New members: node, type, seg_index, order, vm_page_header. Turn phys_addr into a phys_addr_t.