path: root/vm
* Avoid panics on physical memory exhaustion (Richard Braun, 2016-03-13, 1 file, -2/+4)
    * vm/vm_resident.c (vm_page_grab_contig): Return NULL instead of
      calling panic on memory exhaustion.
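    A minimal sketch of the calling convention this establishes; the
    signature is simplified, and only the NULL-on-exhaustion contract
    comes from the log:

        struct vm_page;
        /* Assumed to return NULL, rather than panic, on exhaustion. */
        extern struct vm_page *vm_page_grab_contig(unsigned long size,
                                                   unsigned int selector);

        static int grab_example(unsigned long size, unsigned int selector,
                                struct vm_page **pagep)
        {
            struct vm_page *page = vm_page_grab_contig(size, selector);

            if (page == NULL)
                return -1;  /* report shortage; recover instead of panicking */
            *pagep = page;
            return 0;
        }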
* Merge remote-tracking branch 'remotes/origin/rbraun/vm_cache_policy' (Richard Braun, 2016-03-11, 4 files, -145/+108)
    Finally ;-).
* Fix page cache accounting (Richard Braun, 2016-02-07, 2 files, -31/+42)
    * vm/vm_object.c (vm_object_bootstrap): Set template object `cached'
      member to FALSE.
      (vm_object_cache_add, vm_object_cache_remove): New functions.
      (vm_object_collect, vm_object_deallocate, vm_object_lookup,
      vm_object_lookup_name, vm_object_destroy): Use new cache management
      functions.
      (vm_object_terminate, vm_object_collapse): Make sure object isn't
      cached.
    * vm/vm_object.h (struct vm_object): New `cached' member.
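    A minimal sketch of the accounting discipline the `cached' flag
    enforces; the list linkage and locking are simplified, and only the
    member and function names come from the log:

        #include <assert.h>

        struct vm_object {
            int cached;                   /* new member: object in the cache? */
            struct vm_object *cache_next; /* simplified cache-list linkage */
        };

        static struct vm_object *cache_head;
        static unsigned long cached_objects;  /* the count being kept accurate */

        static void vm_object_cache_add(struct vm_object *object)
        {
            assert(!object->cached);      /* never count an object twice */
            object->cached = 1;
            object->cache_next = cache_head;
            cache_head = object;
            cached_objects++;
        }

        static void vm_object_cache_remove(struct vm_object *object)
        {
            assert(object->cached);       /* never remove what was never added */
            object->cached = 0;
            cached_objects--;
            /* unlinking from the list is elided for brevity */
        }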
* VM cache policy change (Richard Braun, 2013-10-09, 4 files, -120/+63)
    This patch lets the kernel unconditionally cache non-empty
    unreferenced objects instead of using a fixed arbitrary limit. As the
    pageout daemon evicts pages, it collects cached objects that have
    become empty. The effective result is a graceful adjustment of the
    number of objects related to memory management (virtual memory
    objects, their associated ports, and potentially objects maintained
    in the external memory managers). Physical memory can now be almost
    entirely filled up with cached pages. In addition, these cached pages
    are not automatically deactivated, as objects can quickly be
    referenced again.

    There are problems with this patch, however. The first is that, on
    machines with a large amount of physical memory (above 1 GiB, though
    it also depends on usage patterns), scalability issues are exposed.
    For example, file systems which don't throttle their writeback
    requests can create thread storms, strongly reducing system
    responsiveness. Other issues, such as linear scans of memory objects,
    also add visible CPU overhead. The second is that, as most memory is
    used, the chances of swapping deadlocks increase. Applications that
    map large objects and quickly cause lots of page faults can still
    easily bring the system to its knees.
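    The decision point of the policy, as a hedged sketch; reference
    counting and locking are heavily simplified, and only
    vm_object_cache_add is named in the log:

        struct vm_object;
        extern void vm_object_cache_add(struct vm_object *);
        extern void vm_object_terminate(struct vm_object *);
        /* illustrative accessors, not the real field names */
        extern int object_resident_pages(struct vm_object *);

        static void last_reference_released(struct vm_object *object)
        {
            if (object_resident_pages(object) > 0)
                vm_object_cache_add(object); /* keep: may be referenced again */
            else
                vm_object_terminate(object); /* empty: nothing worth caching */
        }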
* Fix slab allocator option handling (Richard Braun, 2016-02-22, 1 file, -1/+1)
    The slab allocator has grown to use multiple ways to allocate slabs
    as well as to track them, which got a little messy. One consequence
    is the breaking of the KMEM_CF_VERIFY option. In order to make the
    code less confusing, this change expresses all options as explicit
    cache flags and clearly defines their relationships. The special
    kmem_slab and vm_map_entry caches are initialized accordingly.

    * kern/slab.c (KMEM_CF_DIRECTMAP): Rename to ...
      (KMEM_CF_PHYSMEM): ... this new macro.
      (KMEM_CF_DIRECT): Restore macro.
      (KMEM_CF_USE_TREE, KMEM_CF_USE_PAGE): New macros.
      (KMEM_CF_VERIFY): Update value.
      (kmem_pagealloc_directmap): Rename to ...
      (kmem_pagealloc_physmem): ... this new function.
      (kmem_pagefree_directmap): Rename to ...
      (kmem_pagefree_physmem): ... this new function.
      (kmem_pagealloc, kmem_pagefree): Update macro names.
      (kmem_slab_use_tree): Remove function.
      (kmem_slab_create, kmem_slab_destroy): Update according to the new
      cache flags.
      (kmem_cache_compute_sizes): Rename to ...
      (kmem_cache_compute_properties): ... this new function, and update
      to properly set cache flags.
      (kmem_cache_init): Update call to kmem_cache_compute_properties.
      (kmem_cache_alloc_from_slab): Check KMEM_CF_USE_TREE instead of
      calling the defunct kmem_slab_use_tree function.
      (kmem_cache_free_to_slab): Update according to the new cache flags.
      (kmem_cache_free_verify): Add assertion.
      (slab_init): Update initialization of kmem_slab_cache.
    * kern/slab.h (KMEM_CACHE_DIRECTMAP): Rename to ...
      (KMEM_CACHE_PHYSMEM): ... this new macro.
    * vm/vm_map.c (vm_map_init): Update initialization of
      vm_map_entry_cache.
* Optimize slab lookup on the free path (Richard Braun, 2016-02-22, 2 files, -0/+17)
    Caches that use external slab data but allocate slabs from the direct
    physical mapping can look up slab data in constant time by
    associating the slab data directly with the underlying page.

    * kern/slab.c (kmem_slab_use_tree): Take KMEM_CF_DIRECTMAP into
      account.
      (kmem_slab_create): Set page private data if relevant.
      (kmem_slab_destroy): Clear page private data if relevant.
      (kmem_cache_free_to_slab): Use page private data if relevant.
    * vm/vm_page.c (vm_page_init_pa): Set `priv' member to NULL.
    * vm/vm_page.h (vm_page_set_priv, vm_page_get_priv): New functions.
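    A sketch of the constant-time lookup: the slab pointer is stashed in
    the underlying page when the slab is created and read back on the
    free path, replacing a tree lookup. vm_page_set_priv and
    vm_page_get_priv are from the log; the rest is illustrative:

        #include <stddef.h>

        struct kmem_slab;

        struct vm_page {
            void *priv;  /* per-page private data, initialized to NULL */
        };

        static inline void vm_page_set_priv(struct vm_page *page, void *priv)
        {
            page->priv = priv;
        }

        static inline void *vm_page_get_priv(const struct vm_page *page)
        {
            return page->priv;
        }

        /* Free path: recover the slab from the page in O(1) instead of
         * walking the red-black tree of external slab data. */
        static struct kmem_slab *slab_of(const struct vm_page *page)
        {
            return (struct kmem_slab *)vm_page_get_priv(page);
        }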
* Avoid slab allocation failures caused by memory fragmentation (Richard Braun, 2016-02-20, 1 file, -2/+6)
    Since the slab allocator has been changed to sit directly on top of
    the physical allocator, failures caused by fragmentation have been
    observed, as one could expect. This change makes the slab allocator
    revert to kernel virtual memory when allocating larger-than-page
    slabs. This solution is motivated in part by the desire to avoid the
    complexity of other solutions such as page mobility, and also
    because, unlike a monolithic kernel with loadable modules, a
    microkernel cannot be extended to arbitrary new, uncontrolled usage
    patterns. As such, large objects are rare, and their use infrequent,
    which is compatible with the use of kernel virtual memory.

    * kern/slab.c: Update module description.
      (KMEM_CF_SLAB_EXTERNAL, KMEM_CF_VERIFY): Update values.
      (KMEM_CF_DIRECT): Remove macro.
      (KMEM_CF_DIRECTMAP): New macro.
      (kmem_pagealloc_directmap, kmem_pagefree_directmap,
      kmem_pagealloc_virtual, kmem_pagefree_virtual): New functions.
      (kmem_pagealloc, kmem_pagefree, kmem_slab_create,
      kmem_slab_destroy, kalloc, kfree): Update to use the new pagealloc
      functions.
      (kmem_cache_compute_sizes): Update the algorithm used to determine
      slab size and other cache properties.
      (kmem_slab_use_tree, kmem_cache_free_to_slab, host_slab_info):
      Update to correctly use the cache flags.
      (slab_init): Add KMEM_CACHE_DIRECTMAP to the kmem_slab_cache init
      flags.
    * kern/slab.h (KMEM_CACHE_VERIFY): Change value.
      (KMEM_CACHE_DIRECTMAP): New macro.
    * vm/vm_map.c (vm_map_init): Add KMEM_CACHE_DIRECTMAP to the
      vm_map_entry_cache init flags.
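    The resulting allocation dispatch, sketched under the assumption that
    the split is purely size-based; the function names appear in the log,
    but the bodies and threshold are illustrative:

        #define PAGE_SIZE 4096UL  /* illustrative; the real value is per-arch */

        extern void *kmem_pagealloc_directmap(unsigned long size);
        extern void *kmem_pagealloc_virtual(unsigned long size);

        static void *kmem_pagealloc(unsigned long size)
        {
            /* Single-page slabs come straight from the physical
             * allocator; larger-than-page slabs use kernel virtual
             * memory, which stays usable even when physical memory is
             * fragmented. */
            return (size <= PAGE_SIZE)
                ? kmem_pagealloc_directmap(size)
                : kmem_pagealloc_virtual(size);
        }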
* Avoid panics on physical memory exhaustion (Richard Braun, 2016-02-16, 1 file, -2/+4)
    * vm/vm_resident.c (vm_page_grab): Return NULL instead of calling
      panic on memory exhaustion.
* vm: initialize external maps (Justus Winter, 2016-02-07, 1 file, -0/+2)
    * vm/vm_external.c (vm_external_create): Initialize allocated maps.
* vm: allocate a large map for all objects larger than SMALL_SIZE (Justus Winter, 2016-02-07, 1 file, -1/+1)
    * vm/vm_external.c (vm_external_create): Allocate a large map for all
      objects larger than SMALL_SIZE. 'vm_external_state_{g,s}et' can
      deal with offsets larger than 'LARGE_SIZE', so currently objects
      larger than 'LARGE_SIZE' are missing out on the optimization.
* vm: remove unused field from struct vm_external (Justus Winter, 2016-02-07, 1 file, -0/+5)
    * vm/vm_external.h (struct vm_external): Remove unused field
      'existence_count'.
* Fix various memory management errors (Richard Braun, 2016-02-02, 5 files, -43/+65)
    A few errors were introduced in the latest changes.

    o Add VM_PAGE_WAIT calls around physical allocation attempts in case
      of memory exhaustion.
    o Fix stack release.
    o Fix memory exhaustion report.
    o Fix free page accounting.

    * kern/slab.c (kmem_pagealloc, kmem_pagefree): New functions.
      (kmem_slab_create, kmem_slab_destroy, kalloc, kfree): Use
      kmem_pagealloc and kmem_pagefree instead of the raw page allocation
      functions.
      (kmem_cache_compute_sizes): Don't store slab order.
    * kern/slab.h (struct kmem_cache): Remove `slab_order' member.
    * kern/thread.c (stack_alloc): Call VM_PAGE_WAIT in case of memory
      exhaustion.
      (stack_collect): Call vm_page_free_contig instead of kmem_free to
      release pages.
    * vm/vm_page.c (vm_page_seg_alloc): Fix memory exhaustion report.
      (vm_page_setup): Don't update vm_page_free_count.
      (vm_page_free_pa): Check page parameter.
      (vm_page_mem_free): New function.
    * vm/vm_page.h (vm_page_free_count): Remove extern declaration.
      (vm_page_mem_free): New prototype.
    * vm/vm_pageout.c: Update comments not to refer to
      vm_page_free_count.
      (vm_pageout_scan, vm_pageout_continue, vm_pageout): Use
      vm_page_mem_free instead of vm_page_free_count; update types
      accordingly.
    * vm/vm_resident.c (vm_page_free_count, vm_page_free_count_minimum):
      Remove variables.
      (vm_page_free_avail): New variable.
      (vm_page_bootstrap, vm_page_grab, vm_page_release,
      vm_page_grab_contig, vm_page_free_contig, vm_page_wait): Use
      vm_page_mem_free instead of vm_page_free_count, update types
      accordingly, and don't set vm_page_free_count_minimum.
    * vm/vm_user.c (vm_statistics): Likewise.
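    A sketch of the wait-and-retry wrapper described by the first bullet;
    VM_PAGE_WAIT and the raw allocator exist in the source, but the
    signatures here are illustrative stand-ins:

        struct vm_page;
        extern struct vm_page *vm_page_alloc_raw(unsigned long npages);
        extern void vm_page_wait(void);  /* blocks until pages are freed */

        static struct vm_page *kmem_pagealloc_sketch(unsigned long npages)
        {
            struct vm_page *page;

            /* On exhaustion, sleep and retry rather than failing the
             * kernel allocation outright. */
            while ((page = vm_page_alloc_raw(npages)) == NULL)
                vm_page_wait();

            return page;
        }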
* Stack the slab allocator directly on top of the physical allocator (Richard Braun, 2016-02-02, 8 files, -70/+16)
    In order to increase the amount of memory available for kernel
    objects without reducing the amount of memory available for user
    processes, a new allocation strategy is introduced in this change.
    Instead of allocating kernel objects out of kernel virtual memory,
    the slab allocator directly uses the direct mapping of physical
    memory as its backend. This largely increases the kernel heap and
    removes the need for address translation updates. In order to allow
    this strategy, an assumption made by the interrupt code had to be
    removed. In addition, kernel stacks are now also allocated directly
    from the physical allocator.

    * i386/i386/db_trace.c: Include i386at/model_dep.h.
      (db_i386_reg_value): Update stack check.
    * i386/i386/locore.S (trap_from_kernel, all_intrs,
      int_from_intstack): Update interrupt handling.
    * i386/i386at/model_dep.c: Include kern/macros.h.
      (int_stack, int_stack_base): New variables.
      (int_stack_high): Remove variable.
      (i386at_init): Update interrupt stack initialization.
    * i386/i386at/model_dep.h: Include i386/vm_param.h.
      (int_stack_top, int_stack_base): New extern declarations.
      (ON_INT_STACK): New macro.
    * kern/slab.c: Include vm/vm_page.h.
      (KMEM_CF_NO_CPU_POOL, KMEM_CF_NO_RECLAIM): Remove macros.
      (kmem_pagealloc, kmem_pagefree, kalloc_pagealloc, kalloc_pagefree):
      Remove functions.
      (kmem_slab_create): Allocate slab pages directly from the physical
      allocator.
      (kmem_slab_destroy): Release slab pages directly to the physical
      allocator.
      (kmem_cache_compute_sizes): Update the slab size computation
      algorithm to return a power-of-two suitable for the physical
      allocator.
      (kmem_cache_init): Remove custom allocation function pointers.
      (kmem_cache_reap): Remove check on KMEM_CF_NO_RECLAIM.
      (slab_init, kalloc_init): Update calls to kmem_cache_init.
      (kalloc, kfree): Directly fall back on the physical allocator for
      big allocation sizes.
      (host_slab_info): Remove checks on defunct flags.
    * kern/slab.h (kmem_slab_alloc_fn_t, kmem_slab_free_fn_t): Remove
      types.
      (struct kmem_cache): Add `slab_order' member, remove
      `slab_alloc_fn' and `slab_free_fn' members.
      (KMEM_CACHE_NOCPUPOOL, KMEM_CACHE_NORECLAIM): Remove macros.
      (kmem_cache_init): Update prototype, remove custom allocation
      functions.
    * kern/thread.c (stack_alloc): Allocate stacks from the physical
      allocator.
    * vm/vm_map.c (vm_map_kentry_cache, kentry_data, kentry_data_size):
      Remove variables.
      (kentry_pagealloc): Remove function.
      (vm_map_init): Update calls to kmem_cache_init, remove
      initialization of vm_map_kentry_cache.
      (vm_map_create, _vm_map_entry_dispose, vm_map_copyout):
      Unconditionally use vm_map_entry_cache.
    * vm/vm_map.h (kentry_data, kentry_data_size, kentry_count): Remove
      extern declarations.
    * vm/vm_page.h (VM_PT_STACK): New page type.
    * device/dev_lookup.c (dev_lookup_init): Update calls to
      kmem_cache_init.
    * device/dev_pager.c (dev_pager_hash_init, device_pager_init):
      Likewise.
    * device/ds_routines.c (mach_device_init, mach_device_trap_init):
      Likewise.
    * device/net_io.c (net_io_init): Likewise.
    * i386/i386/fpu.c (fpu_module_init): Likewise.
    * i386/i386/machine_task.c (machine_task_module_init): Likewise.
    * i386/i386/pcb.c (pcb_module_init): Likewise.
    * i386/intel/pmap.c (pmap_init): Likewise.
    * ipc/ipc_init.c (ipc_bootstrap): Likewise.
    * ipc/ipc_marequest.c (ipc_marequest_init): Likewise.
    * kern/act.c (global_act_init): Likewise.
    * kern/processor.c (pset_sys_init): Likewise.
    * kern/rdxtree.c (rdxtree_cache_init): Likewise.
    * kern/task.c (task_init): Likewise.
    * vm/memory_object_proxy.c (memory_object_proxy_init): Likewise.
    * vm/vm_external.c (vm_external_module_initialize): Likewise.
    * vm/vm_fault.c (vm_fault_init): Likewise.
    * vm/vm_object.c (vm_object_bootstrap): Likewise.
    * vm/vm_resident.c (vm_page_module_init): Likewise.
      (vm_page_bootstrap): Remove initialization of kentry_data.
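    Why using the direct physical mapping removes address translation
    updates: every physical page is already mapped at a fixed virtual
    offset, so allocating kernel memory never touches the page tables. A
    sketch with a purely hypothetical offset value:

        #define DIRECTMAP_OFFSET 0xc0000000UL  /* hypothetical base */

        static inline unsigned long phys_to_virt(unsigned long pa)
        {
            return pa + DIRECTMAP_OFFSET;  /* no pmap update, no TLB work */
        }

        static inline unsigned long virt_to_phys(unsigned long va)
        {
            return va - DIRECTMAP_OFFSET;
        }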
* Use vm_page as the physical memory allocator (Richard Braun, 2016-01-23, 6 files, -411/+172)
    This change replaces the historical page allocator with a buddy
    allocator implemented in vm/vm_page.c. This allocator allows easy
    contiguous allocations and also manages memory inside segments. In a
    future change, these segments will be used to service requests with
    special constraints, such as "usable for 16-bit DMA" or "must be part
    of the direct physical mapping".

    * Makefrag.am (libkernel_a_SOURCES): Add vm/vm_page.c.
    * i386/Makefrag.am (libkernel_a_SOURCES): Add
      i386/i386at/biosmem.{c,h}.
    * i386/i386/vm_param.h: Include kern/macros.h.
      (VM_PAGE_DMA_LIMIT, VM_PAGE_MAX_SEGS, VM_PAGE_DMA32_LIMIT,
      VM_PAGE_DIRECTMAP_LIMIT, VM_PAGE_HIGHMEM_LIMIT, VM_PAGE_SEG_DMA,
      VM_PAGE_SEG_DMA32, VM_PAGE_SEG_DIRECTMAP, VM_PAGE_SEG_HIGHMEM): New
      macros.
    * i386/i386at/model_dep.c: Include i386at/biosmem.h.
      (avail_next, avail_remaining): Remove variables.
      (mem_size_init): Remove function.
      (i386at_init): Initialize and use the biosmem module for early
      physical memory management.
      (pmap_free_pages): Return phys_last_addr instead of
      avail_remaining.
      (init_alloc_aligned): Turn into a wrapper for biosmem_bootalloc.
      (pmap_grab_page): Directly call init_alloc_aligned instead of
      pmap_next_page.
    * i386/include/mach/i386/vm_types.h (phys_addr_t): New type.
    * kern/bootstrap.c (free_bootstrap_pages): New function.
      (bootstrap_create): Call free_bootstrap_pages instead of
      vm_page_create.
    * kern/cpu_number.h (CPU_L1_SIZE): New macro.
    * kern/slab.h: Include kern/cpu_number.h.
      (CPU_L1_SIZE): Remove macro, moved to kern/cpu_number.h.
    * kern/startup.c (setup_main): Change the value of
      machine_info.memory_size.
    * linux/dev/glue/glue.h (alloc_contig_mem, free_contig_mem): Update
      prototypes.
    * linux/dev/glue/kmem.c (linux_kmem_init): Don't use defunct page
      queue.
    * linux/dev/init/main.c (linux_init): Don't free unused memory.
      (alloc_contig_mem, free_contig_mem): Turn into wrappers for the
      vm_page allocator.
    * linux/pcmcia-cs/glue/ds.c (PAGE_SHIFT): Don't undefine.
    * vm/pmap.h (pmap_startup, pmap_next_page): Remove prototypes.
    * vm/vm_fault.c (vm_fault_page): Update calls to vm_page_convert.
    * vm/vm_init.c (vm_mem_init): Call vm_page_info_all.
    * vm/vm_object.c (vm_object_page_map): Update call to vm_page_init.
    * vm/vm_page.h (vm_page_queue_free): Remove variable declaration.
      (vm_page_create, vm_page_release_fictitious, vm_page_release):
      Remove declarations.
      (vm_page_convert, vm_page_init): Update prototypes.
      (vm_page_grab_contig, vm_page_free_contig): New prototypes.
    * vm/vm_resident.c (vm_page_template, vm_page_queue_free,
      vm_page_big_pagenum): Remove variables.
      (vm_page_bootstrap): Update and call vm_page_setup.
      (pmap_steal_memory): Update and call vm_page_bootalloc.
      (pmap_startup, vm_page_create, vm_page_grab_contiguous_pages):
      Remove functions.
      (vm_page_init_template, vm_page_grab_contig, vm_page_free_contig):
      New functions.
      (vm_page_init): Update and call vm_page_init_template.
      (vm_page_release_fictitious): Make static.
      (vm_page_more_fictitious): Update call to vm_page_init.
      (vm_page_convert): Rewrite to comply with vm_page.
      (vm_page_grab): Update and call vm_page_alloc_pa.
      (vm_page_release): Update and call vm_page_free_pa.
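    The two core moves of a buddy allocator like the one described, in a
    hedged sketch; the real vm_page.c tracks per-segment free lists and
    considerably more state:

        /* Round a request up to a power-of-two "order": a request for
         * npages pages is served from the free list of 2^order-page
         * blocks. */
        static unsigned int vm_page_order(unsigned long npages)
        {
            unsigned int order = 0;

            while ((1UL << order) < npages)
                order++;
            return order;
        }

        /* A block's buddy is found by flipping the order bit of its
         * page index; when both halves are free, they coalesce into a
         * block of order + 1. */
        static unsigned long buddy_index(unsigned long page_index,
                                         unsigned int order)
        {
            return page_index ^ (1UL << order);
        }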
* Import the vm_page module from X15 and relicense to GPLv2+ (Richard Braun, 2016-01-23, 2 files, -3/+965)
    * vm/vm_page.c: New file.
    * vm/vm_page.h: Merge vm_page.h from X15.
      (struct vm_page): New members: node, type, seg_index, order,
      vm_page_header. Turn phys_addr into a phys_addr_t.
* Fix object page list type (Richard Braun, 2016-01-02, 1 file, -1/+1)
    * vm/vm_object.h (struct vm_object): Use queue_head_t instead of
      queue_chain_t as the page list type.
* Slightly improve map debugging readability (Richard Braun, 2016-01-01, 1 file, -1/+2)
    * vm/vm_object.c (vm_object_print): Break line so all debugging
      output fits in a page.
* Improve map debugging readability (Richard Braun, 2015-12-29, 2 files, -16/+16)
    * vm/vm_map.c (vm_map_print): Reduce indentation, break lines
      earlier.
      (vm_map_copy_print): Likewise.
    * vm/vm_object.c (vm_object_print): Likewise.
* Improve VM map debugging (Richard Braun, 2015-12-29, 2 files, -2/+9)
    * vm/vm_map.c (vm_map_print): Update arguments to conform to the ddb
      protocol.
    * vm/vm_print.h (vm_map_print): Likewise for prototype.
* Fix vm_map_copyout (Richard Braun, 2015-12-29, 1 file, -0/+1)
    * vm/vm_map.c (vm_map_copyout): Reinitialize the copy map's red-black
      tree.
* Nicer out-of-memory condition reporting (Samuel Thibault, 2015-11-29, 1 file, -0/+4)
    * vm/vm_object.c (_vm_object_allocate): Return 0 immediately when
      kmem_cache_alloc returns 0.
      (vm_object_allocate): Panic when _vm_object_allocate returns 0.
* Fix wired accounting (Samuel Thibault, 2015-11-27, 1 file, -0/+2)
    * vm/vm_map.c (vm_map_pageable_common): Put the wired_count decrement
      back inside the user_wired_count test.
* vm: collapse unreachable branch into assertion (Justus Winter, 2015-08-18, 1 file, -28/+3)
    * vm/vm_object.c (vm_object_collapse): Collapse unreachable branch
      into assertion.
* vm: fix compiler warning (Justus Winter, 2015-08-15, 1 file, -3/+0)
    * vm/vm_user.c (vm_wire): Drop unused but set variable `host'.
* vm: enable extra assertions (Justus Winter, 2015-08-15, 1 file, -2/+0)
    * vm/vm_fault.c (vm_fault_page): Enable extra assertions.
* vm: really fix traversing the list of inactive pages (Justus Winter, 2015-07-12, 1 file, -1/+1)
    Previously, the pageout code traversed the list of pages in an object
    instead of the list of inactive pages.

    * vm/vm_pageout.c (vm_pageout_scan): Fix traversing the list of
      inactive pages.
* vm: fix traversing the list of inactive pages (Justus Winter, 2015-07-11, 1 file, -1/+1)
    Previously, the pageout code traversed the hash table chain instead
    of the list of inactive pages. The code merely compiled by accident,
    because `struct vm_page' also has a field called `next' for the hash
    table chain.

    * vm/vm_pageout.c (vm_pageout_scan): Fix traversing the list of
      inactive pages.
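    The shape of the bug, in a simplified sketch: both the hash chain and
    the inactive queue link through the page structure, so following the
    wrong member still compiles. The member names here are simplified
    stand-ins, not the real Mach queue types:

        struct page_sketch {
            struct page_sketch *next;      /* hash-table bucket chain */
            struct page_sketch *inactive;  /* inactive-queue linkage */
        };

        static void vm_pageout_scan_sketch(struct page_sketch *first_inactive)
        {
            struct page_sketch *m;

            /* Buggy version: "m = m->next" walked the hash chain and
             * compiled by accident. The fix follows the inactive-queue
             * linkage instead. */
            for (m = first_inactive; m != NULL; m = m->inactive) {
                /* examine and possibly evict the page */
            }
        }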
* vm: drop debugging remnants (Justus Winter, 2015-07-10, 1 file, -10/+0)
    * vm/vm_object.c (vm_object_terminate): Drop debugging remnants.
* vm: fix panic message (Justus Winter, 2015-07-09, 1 file, -2/+1)
    * vm/vm_kern.c (kmem_init): Fix panic message.
* Allow non-privileged tasks to wire 64KiB of task memory (Samuel Thibault, 2015-07-09, 3 files, -3/+40)
    * doc/mach.texi (vm_wire): Document that the host port does not have
      to be privileged.
    * include/mach/mach_host.defs (vm_wire): Use mach_port_t instead of
      host_priv_t.
    * vm/vm_map.h (vm_map): Add user_wired field.
    * vm/vm_map.c (vm_map_setup): Initialize user_wired field to 0.
      (vm_map_pageable_common, vm_map_entry_delete, vm_map_copy_overwrite,
      vm_map_copyout_page_list, vm_map_copyin_page_list): When switching
      the user_wired_count field of an entry between 0 and non-0,
      accumulate the corresponding size into the user_wired field of the
      map.
    * vm/vm_user.c (vm_wire): Turn the host parameter into a port
      parameter, and inline a version of convert_port_to_host_priv which
      records whether the host port is privileged or not. When it is not
      privileged, check whether the additional amount to user_wired would
      exceed 64KiB.
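    A sketch of the new permission check; the 64 KiB quota and the
    user_wired field are from the log, while the function and parameter
    names are illustrative:

        #define USER_WIRE_LIMIT (64UL * 1024)  /* unprivileged quota, bytes */

        struct map_sketch {
            unsigned long user_wired;  /* bytes currently user-wired */
        };

        static int vm_wire_allowed(int host_is_privileged,
                                   const struct map_sketch *map,
                                   unsigned long size)
        {
            if (host_is_privileged)
                return 1;  /* privileged hosts are unrestricted */
            /* deny the request if it would push the map past the quota */
            return map->user_wired + size <= USER_WIRE_LIMIT;
        }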
* Fix typo (Flávio Cruz, 2015-06-05, 1 file, -1/+1)
    * vm/vm_kern.c (kmem_alloc_aligned): Fix typo.
* vm: drop unused `kmem_realloc' (Justus Winter, 2015-05-23, 2 files, -103/+1)
    * vm/vm_kern.c (kmem_realloc): Remove function.
      (kmem_alloc_wired): Adopt comment.
    * vm/vm_kern.h (kmem_realloc): Remove declaration.
* vm: gracefully handle resource shortage (Justus Winter, 2015-05-20, 1 file, -14/+12)
    * vm/vm_object.c (vm_object_copy_call): Gracefully handle resource
      shortage by doing the allocation earlier and aborting the function
      if unsuccessful.
* kern: import `macros.h' from x15 (Justus Winter, 2015-05-19, 4 files, -4/+4)
    Import the macro definitions from the x15 kernel project, and replace
    all similar definitions littered all over the place with them.
    Importing this file will make importing code from the x15 kernel
    easier. We are already using the red-black tree implementation and
    the slab allocator from it, and we will import even more code in the
    near future.

    * kern/list.h: Do not define `structof'; include `macros.h' instead.
    * kern/rbtree.h: Likewise.
    * kern/slab.c: Do not define `ARRAY_SIZE'; include `macros.h'
      instead.
    * i386/grub/misc.h: Likewise.
    * i386/i386/xen.h: Do not define `barrier'; include `macros.h'
      instead.
    * kern/macro_help.h: Delete file. Replaced by `macros.h'.
    * kern/macros.h: New file.
    * Makefrag.am (libkernel_a_SOURCES): Add new file, remove old file.
    * device/dev_master.h: Adapt accordingly.
    * device/io_req.h: Likewise.
    * device/net_io.h: Likewise.
    * i386/intel/read_fault.c: Likewise.
    * ipc/ipc_kmsg.h: Likewise.
    * ipc/ipc_mqueue.h: Likewise.
    * ipc/ipc_object.h: Likewise.
    * ipc/ipc_port.h: Likewise.
    * ipc/ipc_space.h: Likewise.
    * ipc/ipc_splay.c: Likewise.
    * ipc/ipc_splay.h: Likewise.
    * kern/assert.h: Likewise.
    * kern/ast.h: Likewise.
    * kern/pc_sample.h: Likewise.
    * kern/refcount.h: Likewise.
    * kern/sched.h: Likewise.
    * kern/sched_prim.c: Likewise.
    * kern/timer.c: Likewise.
    * kern/timer.h: Likewise.
    * vm/vm_fault.c: Likewise.
    * vm/vm_map.h: Likewise.
    * vm/vm_object.h: Likewise.
    * vm/vm_page.h: Likewise.
* kern: avoid #if 0ing out thread_collect_scan (Justus Winter, 2015-02-18, 1 file, -0/+2)
    Currently, `thread_collect_scan' does nothing because `pcb_collect'
    is a nop. Its body is exempted from compilation by means of the
    preprocessor. This is unfortunate, as it increases the risk of
    bitrot, and we still need to pay the price of rate-limiting
    thread_collect_scan.

    * kern/thread.c (thread_collect_scan): Drop the #if 0 around the
      body.
    * vm/vm_pageout.c (vm_pageout_scan): Do not call
      `consider_thread_collect', and document why.
* vm: fix typo (Justus Winter, 2015-02-18, 1 file, -1/+1)
    * vm/vm_resident.c: Fix typo.
* vm: Fix typo in comment (found by codespell) (Stefan Weil, 2015-01-02, 1 file, -1/+1)
    Signed-off-by: Stefan Weil <sw@weilnetz.de>
* | Revert "Make vm_map really ignore `address' when `anywhere' is true"Samuel Thibault2014-11-101-4/+1
| | | | | | | | This reverts commit 5ae510e35c54009626999a88f0f1cb34d6dfc94f.
* Make vm_map really ignore `address' when `anywhere' is true (Samuel Thibault, 2014-09-06, 1 file, -1/+4)
    As vm_allocate does.

    * vm/vm_user.c (vm_map): When `anywhere' is true, set `address' to
      the minimum address of the `target_map'.
* Tune pageout parameters (Samuel Thibault, 2014-08-30, 1 file, -5/+5)
    This targets always having at least 8% free memory instead of just
    1%, which has been shown to improve buildd stability a lot. Also
    increase the reserved amounts to nowadays' standards.

    * vm/vm_pageout.c (VM_PAGE_FREE_TARGET): Increase to 10%.
      (VM_PAGE_FREE_MIN): Increase to 8%.
      (VM_PAGE_FREE_RESERVED): Increase to 500 pages.
      (VM_PAGEOUT_RESERVED_INTERNAL): Increase to 150 pages.
      (VM_PAGEOUT_RESERVED_REALLY): Increase to 100 pages.
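    The new targets in concrete numbers, as a worked example assuming
    512 MiB of RAM and 4 KiB pages; the percentages are from the log, and
    the machine size is purely illustrative:

        #include <stdio.h>

        int main(void)
        {
            unsigned long pages = (512UL << 20) / 4096;  /* 131072 pages */

            /* Prints 13107: below this, pageout keeps working. */
            printf("free target (10%%): %lu pages\n", pages / 10);
            /* Prints 10485: below this, the pageout daemon is woken. */
            printf("free min     (8%%): %lu pages\n", pages * 8 / 100);
            return 0;
        }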
* Increase the pageout thread priority (Samuel Thibault, 2014-08-30, 1 file, -0/+1)
    * vm/vm_pageout.c (vm_pageout): Set the priority to 0.
* vm: make struct vm_map fit into a cache line (Justus Winter, 2014-04-30, 1 file, -2/+5)
    Currently, the size of struct vm_map is 68 bytes. By using a bit
    field for the boolean flags, it can be made to fit into a cache line.

    * vm/vm_map.h (struct vm_map): Use a bit field for the boolean flags
      wait_for_space and wiring_required.
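    Why the bit field shrinks the struct: boolean_t is a full int, so two
    flags cost eight bytes, while two one-bit fields share a single word.
    A self-contained illustration, not the actual struct vm_map layout:

        #include <stdio.h>

        struct flags_wide {              /* boolean_t members, as before */
            int wait_for_space;
            int wiring_required;
        };

        struct flags_packed {            /* the bit-field version */
            unsigned int wait_for_space : 1;
            unsigned int wiring_required : 1;
        };

        int main(void)
        {
            /* Typically prints "8 vs 4": the flags now share one word. */
            printf("%zu vs %zu\n", sizeof(struct flags_wide),
                   sizeof(struct flags_packed));
            return 0;
        }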
* | Convert from K&R to ANSIMarin Ramesa2014-04-048-494/+454
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Convert from K&R style function definitions to ANSI style function definitions. * ddb/db_access.c: Convert function prototypes from K&R to ANSI. * ddb/db_aout.c: Likewise. * ddb/db_break.c: Likewise. * ddb/db_command.c: Likewise. * ddb/db_cond.c: Likewise. * ddb/db_examine.c: Likewise. * ddb/db_expr.c: Likewise. * ddb/db_ext_symtab.c: Likewise. * ddb/db_input.c: Likewise. * ddb/db_lex.c: Likewise. * ddb/db_macro.c: Likewise. * ddb/db_mp.c: Likewise. * ddb/db_output.c: Likewise. * ddb/db_print.c: Likewise. * ddb/db_run.c: Likewise. * ddb/db_sym.c: Likewise. * ddb/db_task_thread.c: Likewise. * ddb/db_trap.c: Likewise. * ddb/db_variables.c: Likewise. * ddb/db_watch.c: Likewise. * device/blkio.c: Likewise. * device/chario.c: Likewise. * device/dev_lookup.c: Likewise. * device/dev_name.c: Likewise. * device/dev_pager.c: Likewise. * device/ds_routines.c: Likewise. * device/net_io.c: Likewise. * device/subrs.c: Likewise. * i386/i386/db_interface.c: Likewise. * i386/i386/fpu.c: Likewise. * i386/i386/io_map.c: Likewise. * i386/i386/loose_ends.c: Likewise. * i386/i386/mp_desc.c: Likewise. * i386/i386/pcb.c: Likewise. * i386/i386/phys.c: Likewise. * i386/i386/trap.c: Likewise. * i386/i386/user_ldt.c: Likewise. * i386/i386at/com.c: Likewise. * i386/i386at/kd.c: Likewise. * i386/i386at/kd_event.c: Likewise. * i386/i386at/kd_mouse.c: Likewise. * i386/i386at/kd_queue.c: Likewise. * i386/i386at/lpr.c: Likewise. * i386/i386at/model_dep.c: Likewise. * i386/i386at/rtc.c: Likewise. * i386/intel/pmap.c: Likewise. * i386/intel/read_fault.c: Likewise. * ipc/ipc_entry.c: Likewise. * ipc/ipc_hash.c: Likewise. * ipc/ipc_kmsg.c: Likewise. * ipc/ipc_marequest.c: Likewise. * ipc/ipc_mqueue.c: Likewise. * ipc/ipc_notify.c: Likewise. * ipc/ipc_port.c: Likewise. * ipc/ipc_right.c: Likewise. * ipc/mach_debug.c: Likewise. * ipc/mach_msg.c: Likewise. * ipc/mach_port.c: Likewise. * ipc/mach_rpc.c: Likewise. * kern/act.c: Likewise. * kern/exception.c: Likewise. * kern/ipc_mig.c: Likewise. * kern/ipc_tt.c: Likewise. * kern/lock_mon.c: Likewise. * kern/mach_clock.c: Likewise. * kern/machine.c: Likewise. * kern/printf.c: Likewise. * kern/priority.c: Likewise. * kern/startup.c: Likewise. * kern/syscall_emulation.c: Likewise. * kern/syscall_subr.c: Likewise. * kern/thread_swap.c: Likewise. * kern/time_stamp.c: Likewise. * kern/timer.c: Likewise. * kern/xpr.c: Likewise. * vm/memory_object.c: Likewise. * vm/vm_debug.c: Likewise. * vm/vm_external.c: Likewise. * vm/vm_fault.c: Likewise. * vm/vm_kern.c: Likewise. * vm/vm_map.c: Likewise. * vm/vm_pageout.c: Likewise. * vm/vm_user.c: Likewise.
* vm: trigger garbage collection on kernel memory pressure (Richard Braun, 2014-02-06, 1 file, -3/+48)
    In addition to physical pages, the slab allocator also consumes
    kernel virtual memory, so reclaim pages on failure to allocate from a
    kernel map. This method isn't foolproof, but it helps alleviate
    fragmentation.

    * vm/vm_kern.c (kmem_alloc): Call slab_collect and retry the
      allocation once on failure.
      (kmem_realloc): Likewise.
      (kmem_alloc_wired): Likewise.
      (kmem_alloc_aligned): Likewise.
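    The reclaim-and-retry pattern added to each allocator, sketched;
    slab_collect is the reclamation entry point named in the log, while
    the allocation function here is a stand-in:

        extern int kmem_alloc_attempt(void **addrp, unsigned long size);
        extern void slab_collect(void);  /* reclaims unused slabs */

        static int kmem_alloc_with_gc(void **addrp, unsigned long size)
        {
            int kr = kmem_alloc_attempt(addrp, size);

            if (kr != 0) {      /* treat nonzero as allocation failure */
                slab_collect(); /* free cached slabs back to the maps */
                kr = kmem_alloc_attempt(addrp, size);  /* retry once */
            }
            return kr;
        }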
* Fix potential NULL dereference (Samuel Thibault, 2014-02-04, 1 file, -2/+4)
    * vm/vm_kern.c (projected_buffer_deallocate): Check for `map' being
      NULL or kernel_map before locking it.
* vm: remove the declaration of memory_object_create_proxy (Justus Winter, 2014-01-16, 1 file, -11/+0)
    It is not clear to me why the declaration was put there in the first
    place. It is not used anywhere, and it conflicts with the declaration
    generated by mig.

    * vm/memory_object_proxy.h (memory_object_create_proxy): Remove
      declaration.
* vm: reduce the size of struct vm_page (Justus Winter, 2014-01-03, 1 file, -1/+1)
    Previously, the bit field left 31 bits unused. By reducing the size
    of wire_count by one bit, the size of the whole struct is reduced by
    four bytes.

    * vm/vm_page.h (struct vm_page): Reduce the size of wire_count to 15
      bits.
* vm: merge the two bit fields in struct vm_page (Justus Winter, 2014-01-03, 1 file, -7/+3)
    * vm/vm_page.h (struct vm_page): Merge the two bit fields.
* vm: remove NS32000-specific padding from struct vm_page (Justus Winter, 2014-01-03, 1 file, -3/+0)
    Apparently, the NS32000 was a 32-bit CPU from the 1990s. The string
    "ns32000" appears nowhere else in the source.

    * vm/vm_page.h (struct vm_page): Remove NS32000-specific padding.
* Declare void argument lists (part 2) (Marin Ramesa, 2013-12-20, 2 files, -2/+2)
    Declare void argument lists that were not declared in the first part
    of this patch.

    * kern/sched_prim.h (recompute_priorities): Fix prototype.
    * kern/startup.c (setup_main) (recompute_priorities): Fix call.