path: root/kern
Commit message · Author · Age · Files · Lines (-/+)
...
* kern: Fix crash.Justus Winter2017-10-263-2/+32
| | | | | | | | | | Check receiver in task_create. Fixes a crash when sending that message to a non-task port. * kern/bootstrap.c (boot_script_task_create): Use the new function. * kern/task.c (task_create): Rename to task_create_internal, create a new function in its place that checks the receiver first. * kern/task.h (task_create_internal): New prototype.
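For illustration, the wrapper pattern described above boils down to something like the following sketch; the argument list is abbreviated and the exact signature is an assumption, not a quote from the commit:

    /* Sketch only: the real task_create() takes more arguments. */
    kern_return_t
    task_create(task_t parent_task, boolean_t inherit_memory, task_t *child_task)
    {
        if (parent_task == TASK_NULL)
            return KERN_INVALID_ARGUMENT;   /* message was sent to a non-task port */
        return task_create_internal(parent_task, inherit_memory, child_task);
    }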
* Drop the register qualifier.Justus Winter2017-10-233-5/+5
| | | | | | | | | * i386/intel/pmap.c: Drop the register qualifier. * ipc/ipc_kmsg.h: Likewise. * kern/bootstrap.c: Likewise. * kern/profile.c: Likewise. * kern/thread.c: Likewise. * vm/vm_object.c: Likewise.
* kern: Fix reporting the minimum quantum used for scheduling.Justus Winter2017-08-051-2/+3
| | | | | * kern/host.c (host_info): Scale 'min_quantum' by 'tick', then convert to milliseconds.
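In other words, the value is converted from scheduler ticks to milliseconds before being reported; a hedged sketch (field and variable names assumed, with 'tick' being microseconds per clock tick):

    /* ticks -> microseconds -> milliseconds */
    sched_info->min_quantum = (min_quantum * tick) / 1000;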
* Merge branch 'master' of git.savannah.gnu.org:/srv/git/hurd/gnumachSamuel Thibault2017-05-071-0/+6
|\
| * kern: Make kernel task available to bootscript.Justus Winter2017-03-181-0/+6
| | | | | | | | | | | | * kern/bootstrap.c (bootstrap_create): Insert the variable 'kernel-task' into the bootscript environment. Userspace can use this instead of guessing based on the order of the first tasks.
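Presumably the variable is exported with the boot-script helper along these lines; the boot_script_set_variable signature and the VAL_PORT value type are assumptions, not quoted from the commit:

    /* In bootstrap_create(), before the boot script is parsed (sketch). */
    int err = boot_script_set_variable("kernel-task", VAL_PORT,
                                       (long) kernel_task->itk_self);
    if (err)
        panic("cannot set boot-script variable kernel-task: %s",
              boot_script_error_string(err));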
* | Rewrite gsync so that it works with remote tasks v2Agustina Arzille2017-03-041-138/+230
|/
* Implement basic sleeping locks for gnumachAgustina Arzille2017-03-045-2/+187
| | | | | | | | | | * kern/atomic.h: New file. * kern/kmutex.h: New file. * kern/kmutex.c: New file. * Makefrag.am (libkernel_a_SOURCES): Add atomic.h, kmutex.h, kmutex.c. * kern/sched_prim.h (thread_wakeup_prim): Make it return boolean_t. * kern/sched_prim.c (thread_wakeup_prim): Return TRUE if we woke a thread, and FALSE otherwise.
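A minimal usage sketch of the new sleeping lock, assuming an interface of kmutex_init/kmutex_lock/kmutex_unlock with an 'interruptible' flag (inferred from the file names above, not quoted from them):

    struct kmutex m;

    kmutex_init(&m);

    kmutex_lock(&m, FALSE);     /* may put the calling thread to sleep */
    /* ... critical section that is allowed to block ... */
    kmutex_unlock(&m);          /* wakes one waiter, if any */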
* rbtree: minor changeRichard Braun2016-12-091-1/+1
| | | | * kern/rbtree.h (rbtree_for_each_remove): Remove trailing slash.
* gsync: Avoid NULL pointer dereferenceBrent Baccala2016-11-101-9/+12
| | | | | * kern/gsync.c (gsync_wait, gsync_wake, gsync_requeue): Return immediately if task argument is TASK_NULL
* gsync: fix licenceSamuel Thibault2016-10-312-2/+2
| | | | | | | Agustina relicenced her work. * kern/gsync.c: Relicence to GPL 2+. * kern/gsync.h: Relicence to GPL 2+.
* gsync: Fix crash when task is not current taskSamuel Thibault2016-10-311-0/+9
| | | | | * kern/gsync.c (gsync_wait, gsync_wake, gsync_requeue): Return KERN_FAILURE when task != current_task().
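Together with the TASK_NULL check from the later fix above, the guard at the top of these routines presumably reads roughly like this sketch (the return code for the NULL case is an assumption):

    if (task == TASK_NULL)
        return KERN_INVALID_ARGUMENT;   /* avoid the NULL dereference */
    else if (task != current_task())
        return KERN_FAILURE;            /* operating on remote tasks is not supported here */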
* gsync: Fix assertion failure with MACH_LDEBUGSamuel Thibault2016-10-311-5/+5
| | | | | | | | | vm_map_lock_read calls check_simple_locks(), so we need to lock hbp after taking the vm_map read lock. * kern/gsync.c (gsync_wait): Call vm_map_lock_read before locking &hbp->lock. (gsync_wake): Likewise.
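The resulting acquisition order, sketched (call names taken from the entry, surrounding code assumed):

    /* vm_map_lock_read() runs check_simple_locks(), so it must come first. */
    vm_map_lock_read(task->map);
    simple_lock(&hbp->lock);
    /* ... queue or wake the waiter ... */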
* Gracefully handle pmap allocation failures.Justus Winter2016-10-211-3/+14
| | | | | | * kern/task.c (task_create): Gracefully handle pmap allocation failures. * vm/vm_map.c (vm_map_fork): Likewise.
* kern: Improve panic messages from the scheduler.Justus Winter2016-10-011-9/+14
| | | | | * kern/sched_prim.c (state_panic): Turn into macro, print symbolic values of thread state.
* kern: Improve assertions and panics.Justus Winter2016-10-013-12/+17
| | | | | | | | | | * kern/assert.h (Assert): Add function argument. (assert): Supply function argument. * kern/debug.c (Assert): Add function argument. Unify message format. (panic): Rename to 'Panic', add location information. * kern/debug.h (panic): Rename, and add a macro version that supplies the location. * linux/dev/include/linux/kernel.h: Use the new panic macro.
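The described macros presumably reduce to something like this sketch (parameter order assumed):

    #define assert(e) \
        ((e) ? (void) 0 : Assert(#e, __FILE__, __LINE__, __func__))

    #define panic(...) \
        Panic(__FILE__, __LINE__, __func__, __VA_ARGS__)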
* Rework pageout to handle multiple segmentsRichard Braun2016-09-211-1/+1
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | As we're about to use a new HIGHMEM segment, potentially much larger than the existing DMA and DIRECTMAP ones, it's now compulsory to make the pageout daemon aware of those segments. And while we're at it, let's fix some of the defects that have been plaguing pageout forever, such as throttling, and pageout of internal versus external pages (this commit notably introduces a hardcoded policy in which as many external pages are selected before considering internal pages). * kern/slab.c (kmem_pagefree_physmem): Update call to vm_page_release. * vm/vm_page.c: Include <kern/counters.h> and <vm/vm_pageout.h>. (VM_PAGE_SEG_THRESHOLD_MIN_NUM, VM_PAGE_SEG_THRESHOLD_MIN_DENOM, VM_PAGE_SEG_THRESHOLD_MIN, VM_PAGE_SEG_THRESHOLD_LOW_NUM, VM_PAGE_SEG_THRESHOLD_LOW_DENOM, VM_PAGE_SEG_THRESHOLD_LOW, VM_PAGE_SEG_THRESHOLD_HIGH_NUM, VM_PAGE_SEG_THRESHOLD_HIGH_DENOM, VM_PAGE_SEG_THRESHOLD_HIGH, VM_PAGE_SEG_MIN_PAGES, VM_PAGE_HIGH_ACTIVE_PAGE_NUM, VM_PAGE_HIGH_ACTIVE_PAGE_DENOM): New macros. (struct vm_page_queue): New type. (struct vm_page_seg): Add new members `min_free_pages', `low_free_pages', `high_free_pages', `active_pages', `nr_active_pages', `high_active_pages', `inactive_pages', `nr_inactive_pages'. (vm_page_alloc_paused): New variable. (vm_page_pageable, vm_page_can_move, vm_page_remove_mappings): New functions. (vm_page_seg_alloc_from_buddy): Pause allocations and start the pageout daemon as appropriate. (vm_page_queue_init, vm_page_queue_push, vm_page_queue_remove, vm_page_queue_first, vm_page_seg_get, vm_page_seg_index, vm_page_seg_compute_pageout_thresholds): New functions. (vm_page_seg_init): Initialize the new segment members. (vm_page_seg_add_active_page, vm_page_seg_remove_active_page, vm_page_seg_add_inactive_page, vm_page_seg_remove_inactive_page, vm_page_seg_pull_active_page, vm_page_seg_pull_inactive_page, vm_page_seg_pull_cache_page): New functions. (vm_page_seg_min_page_available, vm_page_seg_page_available, vm_page_seg_usable, vm_page_seg_double_lock, vm_page_seg_double_unlock, vm_page_seg_balance_page, vm_page_seg_balance, vm_page_seg_evict, vm_page_seg_compute_high_active_page, vm_page_seg_refill_inactive, vm_page_lookup_seg, vm_page_check): New functions. (vm_page_alloc_pa): Handle allocation failure from VM privileged thread. (vm_page_info_all): Display additional segment properties. (vm_page_wire, vm_page_unwire, vm_page_deactivate, vm_page_activate, vm_page_wait): Move from vm/vm_resident.c and rewrite to use segments. (vm_page_queues_remove, vm_page_check_usable, vm_page_may_balance, vm_page_balance_once, vm_page_balance, vm_page_evict_once): New functions. (VM_PAGE_MAX_LAUNDRY, VM_PAGE_MAX_EVICTIONS): New macros. (vm_page_evict, vm_page_refill_inactive): New functions. * vm/vm_page.h: Include <kern/list.h>. (struct vm_page): Remove member `pageq', reuse the `node' member instead, move the `listq' and `next' members above `vm_page_header'. (VM_PAGE_CHECK): Define as an alias to vm_page_check. (vm_page_check): New function declaration. (vm_page_queue_fictitious, vm_page_queue_active, vm_page_queue_inactive, vm_page_free_target, vm_page_free_min, vm_page_inactive_target, vm_page_free_reserved, vm_page_free_wanted): Remove extern declarations. (vm_page_external_pagedout): New extern declaration. (vm_page_release): Update declaration. 
(VM_PAGE_QUEUES_REMOVE): Define as an alias to vm_page_queues_remove. (VM_PT_PMAP, VM_PT_KMEM, VM_PT_STACK): Remove macros. (VM_PT_KERNEL): Update value. (vm_page_queues_remove, vm_page_balance, vm_page_evict, vm_page_refill_inactive): New function declarations. * vm/vm_pageout.c (VM_PAGEOUT_BURST_MAX, VM_PAGEOUT_BURST_MIN, VM_PAGEOUT_BURST_WAIT, VM_PAGEOUT_EMPTY_WAIT, VM_PAGEOUT_PAUSE_MAX, VM_PAGE_INACTIVE_TARGET, VM_PAGE_FREE_TARGET, VM_PAGE_FREE_MIN, VM_PAGE_FREE_RESERVED, VM_PAGEOUT_RESERVED_INTERNAL, VM_PAGEOUT_RESERVED_REALLY): Remove macros. (vm_pageout_reserved_internal, vm_pageout_reserved_really, vm_pageout_burst_max, vm_pageout_burst_min, vm_pageout_burst_wait, vm_pageout_empty_wait, vm_pageout_pause_count, vm_pageout_pause_max, vm_pageout_active, vm_pageout_inactive, vm_pageout_inactive_nolock, vm_pageout_inactive_busy, vm_pageout_inactive_absent, vm_pageout_inactive_used, vm_pageout_inactive_clean, vm_pageout_inactive_dirty, vm_pageout_inactive_double, vm_pageout_inactive_cleaned_external): Remove variables. (vm_pageout_requested, vm_pageout_continue): New variables. (vm_pageout_setup): Wait for page allocation to succeed instead of falling back to flush, update double paging protocol with caller, add pageout throttling setup. (vm_pageout_scan): Rewrite to use the new vm_page balancing, eviction and inactive queue refill functions. (vm_pageout_scan_continue, vm_pageout_continue): Remove functions. (vm_pageout): Rewrite. (vm_pageout_start, vm_pageout_resume): New functions. * vm/vm_pageout.h (vm_pageout_continue, vm_pageout_scan_continue): Remove function declarations. (vm_pageout_start, vm_pageout_resume): New function declarations. * vm/vm_resident.c: Include <kern/list.h>. (vm_page_queue_fictitious): Define as a struct list. (vm_page_free_wanted, vm_page_external_count, vm_page_free_avail, vm_page_queue_active, vm_page_queue_inactive, vm_page_free_target, vm_page_free_min, vm_page_inactive_target, vm_page_free_reserved): Remove variables. (vm_page_external_pagedout): New variable. (vm_page_bootstrap): Don't initialize removed variable, update initialization of vm_page_queue_fictitious. (vm_page_replace): Call VM_PAGE_QUEUES_REMOVE where appropriate. (vm_page_remove): Likewise. (vm_page_grab_fictitious): Update to use list_xxx functions. (vm_page_release_fictitious): Likewise. (vm_page_grab): Remove pageout related code. (vm_page_release): Add `laundry' and `external' parameters for pageout throttling. (vm_page_grab_contig): Remove pageout related code. (vm_page_free_contig): Likewise. (vm_page_free): Remove pageout related code, update call to vm_page_release. (vm_page_wait, vm_page_wire, vm_page_unwire, vm_page_deactivate, vm_page_activate): Move to vm/vm_page.c.
* Redefine what an external page isRichard Braun2016-09-211-2/+2
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Instead of a "page considered external", which apparently takes into account whether a page is dirty or not, redefine this property to reliably mean "is in an external object". This commit mostly deals with the impact of this change on the page allocation interface. * i386/intel/pmap.c (pmap_page_table_page_alloc): Update call to vm_page_grab. * kern/slab.c (kmem_pagealloc_physmem): Use vm_page_grab instead of vm_page_grab_contig. (kmem_pagefree_physmem): Use vm_page_release instead of vm_page_free_contig. * linux/dev/glue/block.c (alloc_buffer, device_read): Update call to vm_page_grab. * vm/vm_fault.c (vm_fault_page): Update calls to vm_page_grab and vm_page_convert. * vm/vm_map.c (vm_map_copy_steal_pages): Update call to vm_page_grab. * vm/vm_page.h (struct vm_page): Remove `extcounted' member. (vm_page_external_limit, vm_page_external_count): Remove extern declarations. (vm_page_convert, vm_page_grab): Update declarations. (vm_page_release, vm_page_grab_phys_addr): New function declarations. * vm/vm_pageout.c (VM_PAGE_EXTERNAL_LIMIT): Remove macro. (VM_PAGE_EXTERNAL_TARGET): Likewise. (vm_page_external_target): Remove variable. (vm_pageout_scan): Remove specific handling of external pages. (vm_pageout): Don't set vm_page_external_limit and vm_page_external_target. * vm/vm_resident.c (vm_page_external_limit): Remove variable. (vm_page_insert, vm_page_replace, vm_page_remove): Update external page tracking. (vm_page_convert): Remove `external' parameter. (vm_page_grab): Likewise. Remove specific handling of external pages. (vm_page_grab_phys_addr): Update call to vm_page_grab. (vm_page_release): Remove `external' parameter and remove specific handling of external pages. (vm_page_wait): Remove specific handling of external pages. (vm_page_alloc): Update call to vm_page_grab. (vm_page_free): Update call to vm_page_release. * xen/block.c (device_read): Update call to vm_page_grab. * xen/net.c (device_write): Likewise.
* Remove phys_first_addr and phys_last_addr global variablesRichard Braun2016-09-211-1/+0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The old assumption that all physical memory is directly mapped in kernel space is about to go away. Those variables are directly linked to that assumption. * i386/i386/model_dep.h (phys_first_addr): Remove extern declaration. (phys_last_addr): Likewise. * i386/i386/phys.c (pmap_zero_page): Use VM_PAGE_DIRECTMAP_LIMIT instead of phys_last_addr. (pmap_copy_page, copy_to_phys, copy_from_phys): Likewise. * i386/i386/trap.c (user_trap): Remove check against phys_last_addr. * i386/i386at/biosmem.c (biosmem_bootstrap_common): Don't set phys_last_addr. * i386/i386at/mem.c (memmmap): Use vm_page_lookup_pa to determine if a physical address references physical memory. * i386/i386at/model_dep.c (phys_first_addr): Remove variable. (phys_last_addr): Likewise. (pmap_free_pages, pmap_valid_page): Remove functions. * i386/intel/pmap.c: Include i386at/biosmem.h. (pa_index): Turn into an alias for vm_page_table_index. (pmap_bootstrap): Replace uses of phys_first_addr and phys_last_addr as appropriate. (pmap_virtual_space): Use vm_page_table_size instead of phys_first_addr and phys_last_addr to obtain the number of physical pages. (pmap_verify_free): Remove function. (valid_page): Turn this macro into an inline function and rewrite using vm_page_lookup_pa. (pmap_page_table_page_alloc): Build the pmap VM object using vm_page_table_size to determine its size. (pmap_remove_range, pmap_page_protect, phys_attribute_clear, phys_attribute_test): Turn page indexes into unsigned long integers. (pmap_enter): Likewise. In addition, use either vm_page_lookup_pa or biosmem_directmap_end to determine if a physical address references physical memory. * i386/xen/xen.c (hyp_p2m_init): Use vm_page_table_size instead of phys_last_addr to obtain the number of physical pages. * kern/startup.c (phys_first_addr): Remove extern declaration. (phys_last_addr): Likewise. * linux/dev/init/main.c (linux_init): Use vm_page_seg_end with the appropriate segment selector instead of phys_last_addr to determine where high memory starts. * vm/pmap.h: Update requirements description. (pmap_free_pages, pmap_valid_page): Remove declarations. * vm/vm_page.c (vm_page_seg_end, vm_page_boot_table_size, vm_page_table_size, vm_page_table_index): New functions. * vm/vm_page.h (vm_page_seg_end, vm_page_table_size, vm_page_table_index): New function declarations. * vm/vm_resident.c (vm_page_bucket_count, vm_page_hash_mask): Define as unsigned long integers. (vm_page_bootstrap): Compute VP table size based on the page table size instead of the value returned by pmap_free_pages.
* VM: improve pageout deadlock workaroundRichard Braun2016-09-162-6/+5
| | | | | | | | | | | | | | | | | | | | | | | | | | | Commit 5dd4f67522ad0d49a2cecdb9b109251f546d4dd1 makes VM map entry allocation done with VM privilege, so that a VM map isn't held locked while physical allocations are paused, which may block the default pager during page eviction, causing a system-wide deadlock. First, it turns out that map entries aren't the only buffers allocated, and second, their number can't be easily determined, which makes a preallocation strategy very hard to implement. This change generalizes the strategy of VM privilege increase when a VM map is locked. * device/ds_routines.c (io_done_thread): Use integer values instead of booleans when setting VM privilege. * kern/thread.c (thread_init, thread_wire): Likewise. * vm/vm_pageout.c (vm_pageout): Likewise. * kern/thread.h (struct thread): Turn member `vm_privilege' into an unsigned integer. * vm/vm_map.c (vm_map_lock): New function, where VM privilege is temporarily increased. (vm_map_unlock): New function, where VM privilege is decreased. (_vm_map_entry_create): Remove VM privilege workaround from this function. * vm/vm_map.h (vm_map_lock, vm_map_unlock): Turn into functions.
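The generalized strategy keeps a per-thread privilege count that stays raised for as long as a map lock is held; a sketch of the two new functions (the current_thread() guard and the exact lock primitives are assumptions):

    void vm_map_lock(struct vm_map *map)
    {
        lock_write(&map->lock);
        if (current_thread() != THREAD_NULL)
            current_thread()->vm_privilege++;   /* allocations under the lock won't pause */
    }

    void vm_map_unlock(struct vm_map *map)
    {
        if (current_thread() != THREAD_NULL)
            current_thread()->vm_privilege--;
        lock_write_done(&map->lock);
    }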
* Remove map entry pageability property.Richard Braun2016-09-071-1/+1
| | | | | | | | | | | | | | | | | | | | | | | | | | Since the replacement of the zone allocator, kernel objects have been wired in memory. Besides, as of 5e9f6f (Stack the slab allocator directly on top of the physical allocator), there is a single cache used to allocate map entries. Those changes make the pageability attribute of VM maps irrelevant. * device/ds_routines.c (mach_device_init): Update call to kmem_submap. * ipc/ipc_init.c (ipc_init): Likewise. * kern/task.c (task_create): Update call to vm_map_create. * vm/vm_kern.c (kmem_submap): Remove `pageable' argument. Update call to vm_map_setup. (kmem_init): Update call to vm_map_setup. * vm/vm_kern.h (kmem_submap): Update declaration. * vm/vm_map.c (vm_map_setup): Remove `pageable' argument. Don't set `entries_pageable' member. (vm_map_create): Likewise. (vm_map_copyout): Don't bother creating copies of page entries with the right pageability. (vm_map_copyin): Don't set `entries_pageable' member. (vm_map_fork): Update call to vm_map_create. * vm/vm_map.h (struct vm_map_header): Remove `entries_pageable' member. (vm_map_setup, vm_map_create): Remove `pageable' argument.
* Add missing memory barriers in simple lock debuggingSamuel Thibault2016-08-251-0/+3
| | | | | | * kern/lock.c (_simple_lock, _simple_lock_try, simple_unlock): Add compiler memory barrier to separate simple_locks_taken update from information filling.
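A compiler memory barrier here is just an empty asm with a "memory" clobber; a sketch of the intent (the macro name and the debug-info field layout in lock.c are assumptions):

    simple_locks_info[simple_locks_taken].l = l;    /* fill the debug slot first ... */
    __asm__ __volatile__("" : : : "memory");        /* compiler barrier */
    simple_locks_taken++;                           /* ... then publish it */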
* Replace libc string functions with internal implementationsRichard Braun2016-08-161-0/+103
| | | | | | | * Makefile.am (clib_routines): Remove memcmp, memcpy, memmove, strchr, strstr and strsep. * kern/strings.c (memset): Comment out. (strchr, strsep, strstr): New functions.
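As an example of the scale involved, a freestanding strchr can be as small as this (sketch; the committed implementation may differ in detail):

    char *
    strchr(const char *s, int c)
    {
        for (;; s++) {
            if (*s == (char) c)
                return (char *) s;
            if (*s == '\0')
                return NULL;
        }
    }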
* Augment VM maps with task namesRichard Braun2016-08-061-0/+3
| | | | | | | | | | | | This change improves the clarity of "no more room for ..." VM map allocation errors. * kern/task.c (task_init): Call vm_map_set_name for the kernel map. (task_create): Call vm_map_set_name where appropriate. * vm/vm_map.c (vm_map_setup): Set map name to NULL. (vm_map_find_entry_anywhere): Update error message to include map name. * vm/vm_map.h (struct vm_map): New `name' member. (vm_map_set_name): New inline function.
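The helper is presumably a one-line setter plus one call per map, along these lines (sketch; call sites approximated):

    static inline void
    vm_map_set_name(vm_map_t map, const char *name)
    {
        map->name = name;
    }

    /* e.g. for the kernel map in task_init(): */
    vm_map_set_name(kernel_map, "kernel map");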
* Fix page fault in critical section in the slab allocatorRichard Braun2016-06-291-28/+36
| | | | | * kern/slab.c (host_slab_info): Use wired kernel memory to build the cache info.
* Fix locking error in the slab allocatorRichard Braun2016-06-292-19/+21
| | | | | | | * kern/slab.c (kmem_slab_create): Set `slab->cache` member. (kmem_cache_reap): Return dead slabs instead of destroying in place. (slab_collect): Destroy slabs outside of critical section. * kern/slab.h (struct kmem_slab): New `cache` member.
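The pattern described (unlink dead slabs under the cache lock, destroy them only after dropping it) looks roughly like this sketch; the list handling and the kmem_slab_destroy argument order are assumptions:

    struct kmem_slab *slab, *dead_slabs;

    simple_lock(&cache->lock);
    dead_slabs = kmem_cache_reap(cache);        /* detach dead slabs, do not free yet */
    simple_unlock(&cache->lock);

    while ((slab = dead_slabs) != NULL) {
        dead_slabs = slab->next;                /* hypothetical link field */
        kmem_slab_destroy(slab, slab->cache);   /* safe: no spin lock held here */
    }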
* Allow setting x86 debug flags for the current threadSamuel Thibault2016-06-101-0/+12
| | | | | | * kern/thread.c (thread_get_state): Allow call for the current thread, without suspending it. (thread_set_status): Likewise.
* Use int3 on x86_64 build tooSamuel Thibault2016-06-101-1/+1
| | | | | * kern/debug.c (SoftDebugger) [__x86_64__]: Use int3 instruction to trigger debugger.
* Fix some license headers.Richard Braun2016-06-022-24/+43
| | | | | | | | | As the original author of the files imported, I explicitly dual license them to something compatible with GPLv2. kern/macros.h: Switch license from GPLv3 to BSD 2-clause. kern/rdxtree_i.h: Likewise.
* Fix potential division by zeroSamuel Thibault2016-05-261-2/+2
| | | | * kern/debug.c (panic, log): Pass 16 as default radix to _doprnt.
* Fix gcc-6 warningsSamuel Thibault2016-05-189-13/+15
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | * ddb/db_elf.c (elf_db_sym_init): Turn `i' into unsigned. * device/ds_routines.c (ds_device_open, device_writev_trap): Likewise. * i386/i386/user_ldt.c (i386_set_ldt): Likewise for `i', `min_selector', and `first_desc'. (i386_get_ldt): Likewise for `ldt_count'. (user_ldt_free): Likewise for `i'. * i386/i386/xen.h (hyp_set_ldt): Turn `count' into unsigned long. * i386/intel/pmap.c (pmap_bootstrap): Turn `i', `j' and 'n' into unsigned. (pmap_clear_bootstrap_pagetable): Likewise for `i' and `j'. * ipc/ipc_kmsg.c (ipc_msg_print): Turn `i' and `numwords' into unsigned. * kern/boot_script.c (boot_script_parse_line): Likewise for `i'. * kern/bootstrap.c (bootstrap_create): Likewise for `n' and `i'. * kern/host.c (host_processors): Likewise for `i'. * kern/ipc_tt.c (mach_ports_register): Likewise. * kern/mach_clock.c (tickadj, bigadj): turn into unsigned. * kern/processor.c (processor_set_things): Turn `i' into unsigned. * kern/task.c (task_threads): Likewise. * kern/thread.c (consider_thread_collect, stack_init): Likewise. * kern/strings.c (memset): Turn `i' into size_t. * vm/memory_object.c (memory_object_lock_request): Turn `i' into unsigned. * xen/block.c (hyp_block_init): Use %u format for evt. (device_open): Drop unused err variable. (device_write): Turn `copy_npages', `i', `nbpages', and `j' into unsigned. * xen/console.c (hypcnread, hypcnwrite, hypcnclose): Turn dev to dev_t. (hypcnclose): Return void. * xen/console.h (hypcnread, hypcnwrite, hypcnclose): Fix prototypes accordingly. * xen/evt.c (form_int_mask): Turn `i' into int. * xen/net.c (hyp_net_init): Use %u format for evt. (device_open): Remove unused `err' variable.
* Add kernel profiling through samplingSamuel Thibault2016-04-204-10/+23
| | | | | | | | | | | | | | | * NEWS: Advertise feature. * configfrag.ac (--enable-kernsample): Add option. * kern/pc_sample.h (take_pc_sample): Add usermode and pc parameter. (take_pc_sample_macro): Take usermode and pc parameters, pass as such to take_pc_sample. * kern/pc_sample.c (take_pc_sample): Use pc parameter when usermode is 1. * kern/mach_clock.c (clock_interrupt): Add pc parameter. Pass usermode and pc to take_pc_sample_macro call. * i386/i386/hardclock.c (hardclock): Pass regs->eip to clock_interrupt call on normal interrupts, NULL on interrupt interrupt. * vm/vm_fault.c (vm_fault_cleanup): Set usermode to 1 and pc to NULL in take_pc_sample_macro call.
* Avoid using C99 for variable declaration for nowSamuel Thibault2016-04-171-1/+2
| | | | * kern/gsync.c (gsync_setup): Declare `i' variable out of for loop.
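That is, the index declaration moves out of the loop header so the file no longer requires C99 mode; the loop body and bound below are placeholders:

    /* before (C99): */
    for (int i = 0; i < n; i++)
        process(i);

    /* after (C89-compatible): */
    int i;
    for (i = 0; i < n; i++)
        process(i);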
* Lightweight synchronization mechanismAgustina Arzille2016-04-153-0/+456
| | | | | | | | | | | | * Makefrag.am (libkernel_a_SOURCES): Add kern/gsync.c and kern/gsync.h. * include/mach/gnumach.defs (gsync_wait, gsync_wake, gsync_requeue): New routines. * include/mach/kern_return.h (KERN_TIMEDOUT, KERN_INTERRUPTED): New error codes. * kern/gsync.c: New file. * kern/gsync.h: New file. * kern/startup.c: Include <kern/gsync.h> (setup_main): Call gsync_setup.
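From user space the primitives behave much like futex wait/wake on a 32-bit word; a hedged sketch of a wait loop (the gsync_wait argument list is approximated here, not copied from gnumach.defs):

    static unsigned int flag;   /* shared 32-bit word */

    /* Block until another thread stores a non-zero value and calls gsync_wake(). */
    while (__atomic_load_n(&flag, __ATOMIC_ACQUIRE) == 0)
        gsync_wait(mach_task_self(), (vm_offset_t) &flag,
                   0, 0,    /* value the word is expected to still hold */
                   0, 0);   /* no timeout, default flags */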
* Fix bootstraping issues with stdint.h.Flavio Cruz2016-04-051-1/+1
| | | | | * include/mach/std_types.h: Do not include stdint.h. * kern/rdxtree.h: Replace sys/types.h with stdint.h.
* Use uint32_t instead of unsigned32_t.Flavio Cruz2016-04-041-1/+1
| | | | | | | | | | | | | | | | | | | | | | | | | | | | Implement stdint.h and use it in gnumach. Remove old type definitions such as signed* and unsigned*. * Makefile.am: Add -ffreestanding. * i386/i386/xen.h: Use uint64_t. * i386/include/mach/i386/machine_types.defs: Use uint32_t and int32_t. * i386/include/mach/i386/vm_types.h: Remove definitions of int*, uint*, unsigned* and signed* types. * i386/xen/xen.c: Use uint64_t. * include/device/device_types.defs: Use uint32_t. * include/mach/std_types.defs: Use POSIX types. * include/mach/std_types.h: Include stdint.h. * include/stdint.h: New file with POSIX types. * include/sys/types.h: Include stdint.h. * ipc/ipc_kmsg.c: Use uint64_t. * kern/exception.c: Use uint32_t. * linux/dev/include/linux/types.h: Remove POSIX types. * xen/block.c: Use uint64_t. * xen/net.c: Do not use removed unsigned*_t types. * xen/ring.h: Use uint32_t instead. * xen/store.c: Use uint32_t. * xen/store.h: Use uint32_t. * xen/time.c: Use POSIX types only. * xen/time.h: Use uint64_t.
* Fix stack allocation on XenRichard Braun2016-03-091-21/+12
| | | | | | | | | | Stack allocation on Xen can fail because of fragmentation. This change makes stack allocation use the slab allocator. * kern/thread.c (thread_stack_cache): New global variable. (stack_alloc): Use kmem_cache_alloc instead of vm_page_grab_contig. (stack_collect): Use kmem_cache_free instead of vm_page_free_contig. (kmem_cache_init): Initialize thread_stack_cache.
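The stack paths then reduce to cache alloc/free pairs along these lines (sketch; error handling and exact call sites omitted):

    /* in stack_alloc() (sketch): */
    vm_offset_t stack = kmem_cache_alloc(&thread_stack_cache);

    /* in stack_collect() (sketch): */
    kmem_cache_free(&thread_stack_cache, stack);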
* Relax slab allocation alignment constraintRichard Braun2016-03-091-8/+12
| | | | | | | | | | | * kern/slab.c (kmem_pagealloc_virtual): Pass alignment to function, call kmem_alloc_aligned when greater than a page. (kmem_pagealloc): Pass alignment to function. (kmem_slab_create): Update call to kmem_pagealloc. (kalloc): Likewise. (kmem_cache_compute_properties): Fix handling of color with large slab sizes. (kmem_cache_init): Allow alignment greater than the page size.
* Inherit fpu control word from parent to childSamuel Thibault2016-03-061-1/+1
| | | | | | | | | | | | * i386/i386/thread.h (struct pcb): Add init_control field. * i386/i386/fpu.h (fpinherit): New prototype. * i386/i386/fpu.c (fpinit): Add thread parameter. When init_control field is set, use that value instead of a hardcoded one. (fpinherit): New function. (fp_load): Pass thread parameter to fpinit(). * kern/thread.c (thread_create): Pass parent task to pcb_init(). * i386/i386/pcb.c (pcb_init): Add parent_task parameter, call fpinherit when it is equal to current_task().
* Document thread_sleep about events woken from interrupt handlersSamuel Thibault2016-02-261-0/+3
| | | | | * kern/sched_prim.c (thread_sleep): Document case of events woken from interrupt handlers.
* Include the exception protocol in 'gnumach.msgids'Justus Winter2016-02-231-0/+22
| | | | | * Makefrag.am: Include the exception protocol in 'gnumach.msgids'. * kern/exc.defs: New file.
* Remove kmem cache flags from the debugging interfaceRichard Braun2016-02-221-6/+1
| | | | | | | * include/mach_debug/slab_info.h (CACHE_FLAGS_NO_CPU_POOL, CACHE_FLAGS_SLAB_EXTERNAL, CACHE_FLAGS_NO_RECLAIM, CACHE_FLAGS_VERIFY, CACHE_FLAGS_DIRECT): Remove macros. * kern/slab.c (host_slab_info): Pass raw cache flags to caller.
* Fix slab allocator option handlingRichard Braun2016-02-222-50/+66
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The slab allocator has grown to use multiple ways to allocate slabs as well as track them, which got a little messy. One consequence is the breaking of the KMEM_CF_VERIFY option. In order to make the code less confusing, this change expresses all options as explicit cache flags and clearly defines their relationships. The special kmem_slab and vm_map_entry caches are initialized accordingly. * kern/slab.c (KMEM_CF_DIRECTMAP): Rename to ... (KMEM_CF_PHYSMEM): ... this new macro. (KMEM_CF_DIRECT): Restore macro. (KMEM_CF_USE_TREE, KMEM_CF_USE_PAGE): New macros. (KMEM_CF_VERIFY): Update value. (kmem_pagealloc_directmap): Rename to... (kmem_pagealloc_physmem): ... this new function. (kmem_pagefree_directmap): Rename to ... (kmem_pagefree_physmem): ... this new function. (kmem_pagealloc, kmem_pagefree): Update macro names. (kmem_slab_use_tree): Remove function. (kmem_slab_create, kmem_slab_destroy): Update according to the new cache flags. (kmem_cache_compute_sizes): Rename to ... (kmem_cache_compute_properties): ... this new function, and update to properly set cache flags. (kmem_cache_init): Update call to kmem_cache_compute_properties. (kmem_cache_alloc_from_slab): Check KMEM_CF_USE_TREE instead of calling the defunct kmem_slab_use_tree function. (kmem_cache_free_to_slab): Update according to the new cache flags. kmem_cache_free_verify): Add assertion. (slab_init): Update initialization of kmem_slab_cache. * kern/slab.h (KMEM_CACHE_DIRECTMAP): Rename to ... (KMEM_CACHE_PHYSMEM): ... this new macro. * vm/vm_map.c (vm_map_init): Update initialization of vm_map_entry_cache.
* Optimize slab lookup on the free pathRichard Braun2016-02-221-11/+43
| | | | | | | | | | | | | Caches that use external slab data but allocate slabs from the direct physical mapping can look up slab data in constant time by associating the slab data directly with the underlying page. * kern/slab.c (kmem_slab_use_tree): Take KMEM_CF_DIRECTMAP into account. (kmem_slab_create): Set page private data if relevant. (kmem_slab_destroy): Clear page private data if relevant. (kmem_cache_free_to_slab): Use page private data if relevant. * vm/vm_page.c (vm_page_init_pa): Set `priv' member to NULL. * vm/vm_page.h (vm_page_set_priv, vm_page_get_priv): New functions.
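The constant-time lookup works by stashing the slab pointer on the vm_page backing the slab; roughly as below, where the address-to-page step via kvtophys/vm_page_lookup_pa is an assumption:

    /* in kmem_slab_create(), for a slab allocated from the direct physical mapping: */
    vm_page_set_priv(page, slab);

    /* in kmem_cache_free_to_slab(): buffer address -> physical page -> slab, no tree walk: */
    slab = vm_page_get_priv(vm_page_lookup_pa(kvtophys(addr)));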
* Fix unused variable warningsRichard Braun2016-02-221-2/+0
| | | | * kern/slab.c (slab_init): Remove unused variables.
* Avoid slab allocation failures caused by memory fragmentationRichard Braun2016-02-202-63/+93
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Since the slab allocator has been changed to sit directly on top of the physical allocator, failures caused by fragmentation have been observed, as one could expect. This change makes the slab allocator revert to kernel virtual memory when allocating larger-than-page slabs. This solution is motivated in part to avoid the complexity of other solutions such as page mobility, and also because a microkernel cannot be extended to new arbitrary uncontrolled usage patterns such as a monolithic kernel with loadable modules. As such, large objects are rare, and their use infrequent, which is compatible with the use of kernel virtual memory. * kern/slab.c: Update module description. (KMEM_CF_SLAB_EXTERNAL, KMEM_CF_VERIFY): Update values. (KMEM_CF_DIRECT): Remove macro. (KMEM_CF_DIRECTMAP): New macro. (kmem_pagealloc_directmap, kmem_pagefree_directmap, kmem_pagealloc_virtual, kmem_pagefree_virtual): New functions. (kmem_pagealloc, kmem_pagefree, kmem_slab_create, kmem_slab_destroy, kalloc, kfree): Update to use the new pagealloc functions. (kmem_cache_compute_sizes): Update the algorithm used to determine slab size and other cache properties. (kmem_slab_use_tree, kmem_cache_free_to_slab, host_slab_info): Update to correctly use the cache flags. (slab_init): Add KMEM_CACHE_DIRECTMAP to the kmem_slab_cache init flags. * kern/slab.h (KMEM_CACHE_VERIFY): Change value. (KMEM_CACHE_DIRECTMAP): New macro. * vm/vm_map.c (vm_map_init): Add KMEM_CACHE_DIRECTMAP to the vm_map_entry_cache init flags.
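The resulting backend choice is essentially a size test, using the function names from the entry (the page-size threshold shown is an assumption):

    /* in kmem_pagealloc() (sketch): */
    if (slab_size <= PAGE_SIZE)
        addr = kmem_pagealloc_directmap(slab_size);   /* physical pages, direct mapping */
    else
        addr = kmem_pagealloc_virtual(slab_size);     /* fall back to kernel virtual memory */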
* Remove kmem mapRichard Braun2016-02-072-18/+0
| | | | | | | | | | Now that the slab allocator doesn't use kernel virtual memory any more, this map has become irrelevant. * kern/slab.c (KMEM_MAP_SIZE): Remove macro. (kmem_map_store, kmem_map): Remove variables. (slab_init): Remove call kmem_submap. * kern/slab.h (kmem_map): Remove extern declaration.
* Change computation of slab sizeRichard Braun2016-02-061-56/+26
| | | | | | | | | | | | Allocating directly out of the physical memory allocator makes the slab allocator vulnerable to failures due to fragmentation. This change makes the slab allocator use the lowest possible size for its slabs to reduce the chance of contiguous allocation failures. * kern/slab.c (KMEM_MIN_BUFS_PER_SLAB, KMEM_SLAB_SIZE_THRESHOLD): Remove macros. (kmem_cache_compute_sizes): Update the algorithm used to determine slab size and other cache properties.
* Fix various memory management errorsRichard Braun2016-02-023-24/+44
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | A few errors were introduced in the latest changes. o Add VM_PAGE_WAIT calls around physical allocation attempts in case of memory exhaustion. o Fix stack release. o Fix memory exhaustion report. o Fix free page accounting. * kern/slab.c (kmem_pagealloc, kmem_pagefree): New functions (kmem_slab_create, kmem_slab_destroy, kalloc, kfree): Use kmem_pagealloc and kmem_pagefree instead of the raw page allocation functions. (kmem_cache_compute_sizes): Don't store slab order. * kern/slab.h (struct kmem_cache): Remove `slab_order' member. * kern/thread.c (stack_alloc): Call VM_PAGE_WAIT in case of memory exhaustion. (stack_collect): Call vm_page_free_contig instead of kmem_free to release pages. * vm/vm_page.c (vm_page_seg_alloc): Fix memory exhaustion report. (vm_page_setup): Don't update vm_page_free_count. (vm_page_free_pa): Check page parameter. (vm_page_mem_free): New function. * vm/vm_page.h (vm_page_free_count): Remove extern declaration. (vm_page_mem_free): New prototype. * vm/vm_pageout.c: Update comments not to refer to vm_page_free_count. (vm_pageout_scan, vm_pageout_continue, vm_pageout): Use vm_page_mem_free instead of vm_page_free_count, update types accordingly. * vm/vm_resident.c (vm_page_free_count, vm_page_free_count_minimum): Remove variables. (vm_page_free_avail): New variable. (vm_page_bootstrap, vm_page_grab, vm_page_release, vm_page_grab_contig, vm_page_free_contig, vm_page_wait): Use vm_page_mem_free instead of vm_page_free_count, update types accordingly, don't set vm_page_free_count_minimum. * vm/vm_user.c (vm_statistics): Likewise.
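The added VM_PAGE_WAIT calls wrap the allocation attempts in a sleep-and-retry loop, roughly like this sketch (the particular allocation call and segment selector are assumptions):

    /* inside kmem_pagealloc() (sketch): */
    struct vm_page *page;

    for (;;) {
        page = vm_page_grab_contig(size, VM_PAGE_SEL_DIRECTMAP);
        if (page != NULL)
            break;
        VM_PAGE_WAIT(NULL);     /* sleep until the pageout daemon frees memory */
    }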
* Stack the slab allocator directly on top of the physical allocatorRichard Braun2016-02-027-120/+65
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | In order to increase the amount of memory available for kernel objects, without reducing the amount of memory available for user processes, a new allocation strategy is introduced in this change. Instead of allocating kernel objects out of kernel virtual memory, the slab allocator directly uses the direct mapping of physical memory as its backend. This largely increases the kernel heap, and removes the need for address translation updates. In order to allow this strategy, an assumption made by the interrupt code had to be removed. In addition, kernel stacks are now also allocated directly from the physical allocator. * i386/i386/db_trace.c: Include i386at/model_dep.h (db_i386_reg_value): Update stack check. * i386/i386/locore.S (trap_from_kernel, all_intrs, int_from_intstack): Update interrupt handling. * i386/i386at/model_dep.c: Include kern/macros.h. (int_stack, int_stack_base): New variables. (int_stack_high): Remove variable. (i386at_init): Update interrupt stack initialization. * i386/i386at/model_dep.h: Include i386/vm_param.h. (int_stack_top, int_stack_base): New extern declarations. (ON_INT_STACK): New macro. * kern/slab.c: Include vm/vm_page.h (KMEM_CF_NO_CPU_POOL, KMEM_CF_NO_RECLAIM): Remove macros. (kmem_pagealloc, kmem_pagefree, kalloc_pagealloc, kalloc_pagefree): Remove functions. (kmem_slab_create): Allocate slab pages directly from the physical allocator. (kmem_slab_destroy): Release slab pages directly to the physical allocator. (kmem_cache_compute_sizes): Update the slab size computation algorithm to return a power-of-two suitable for the physical allocator. (kmem_cache_init): Remove custom allocation function pointers. (kmem_cache_reap): Remove check on KMEM_CF_NO_RECLAIM. (slab_init, kalloc_init): Update calls to kmem_cache_init. (kalloc, kfree): Directly fall back on the physical allocator for big allocation sizes. (host_slab_info): Remove checks on defunct flags. * kern/slab.h (kmem_slab_alloc_fn_t, kmem_slab_free_fn_t): Remove types. (struct kmem_cache): Add `slab_order' member, remove `slab_alloc_fn' and `slab_free_fn' members. (KMEM_CACHE_NOCPUPOOL, KMEM_CACHE_NORECLAIM): Remove macros. (kmem_cache_init): Update prototype, remove custom allocation functions. * kern/thread.c (stack_alloc): Allocate stacks from the physical allocator. * vm/vm_map.c (vm_map_kentry_cache, kentry_data, kentry_data_size): Remove variables. (kentry_pagealloc): Remove function. (vm_map_init): Update calls to kmem_cache_init, remove initialization of vm_map_kentry_cache. (vm_map_create, _vm_map_entry_dispose, vm_map_copyout): Unconditionnally use vm_map_entry_cache. * vm/vm_map.h (kentry_data, kentry_data_size, kentry_count): Remove extern declarations. * vm/vm_page.h (VM_PT_STACK): New page type. * device/dev_lookup.c (dev_lookup_init): Update calls to kmem_cache_init. * device/dev_pager.c (dev_pager_hash_init, device_pager_init): Likewise. * device/ds_routines.c (mach_device_init, mach_device_trap_init): Likewise. * device/net_io.c (net_io_init): Likewise. * i386/i386/fpu.c (fpu_module_init): Likewise. * i386/i386/machine_task.c (machine_task_module_init): Likewise. * i386/i386/pcb.c (pcb_module_init): Likewise. * i386/intel/pmap.c (pmap_init): Likewise. * ipc/ipc_init.c (ipc_bootstrap): Likewise. * ipc/ipc_marequest.c (ipc_marequest_init): Likewise. * kern/act.c (global_act_init): Likewise. * kern/processor.c (pset_sys_init): Likewise. 
* kern/rdxtree.c (rdxtree_cache_init): Likewise. * kern/task.c (task_init): Likewise. * vm/memory_object_proxy.c (memory_object_proxy_init): Likewise. * vm/vm_external.c (vm_external_module_initialize): Likewise. * vm/vm_fault.c (vm_fault_init): Likewise. * vm/vm_object.c (vm_object_bootstrap): Likewise. * vm/vm_resident.c (vm_page_module_init): Likewise. (vm_page_bootstrap): Remove initialization of kentry_data.
* Use vm_page as the physical memory allocatorRichard Braun2016-01-234-6/+19
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This change replaces the historical page allocator with a buddy allocator implemented in vm/vm_page.c. This allocator allows easy contiguous allocations and also manages memory inside segments. In a future change, these segments will be used to service requests with special constraints, such as "usable for 16-bits DMA" or "must be part of the direct physical mapping". * Makefrag.am (libkernel_a_SOURCES): Add vm/vm_page.c. * i386/Makefrag.am (libkernel_a_SOURCES): Add i386/i386at/biosmem.{c,h}. * i386/i386/vm_param.h: Include kern/macros.h. (VM_PAGE_DMA_LIMIT, VM_PAGE_MAX_SEGS, VM_PAGE_DMA32_LIMIT, VM_PAGE_DIRECTMAP_LIMIT, VM_PAGE_HIGHMEM_LIMIT, VM_PAGE_SEG_DMA, VM_PAGE_SEG_DMA32, VM_PAGE_SEG_DIRECTMAP, VM_PAGE_SEG_HIGHMEM): New macros. * i386/i386at/model_dep.c: Include i386at/biosmem.h. (avail_next, avail_remaining): Remove variables. (mem_size_init): Remove function. (i386at_init): Initialize and use the biosmem module for early physical memory management. (pmap_free_pages): Return phys_last_addr instead of avail_remaining. (init_alloc_aligned): Turn into a wrapper for biosmem_bootalloc. (pmap_grab_page): Directly call init_alloc_aligned instead of pmap_next_page. * i386/include/mach/i386/vm_types.h (phys_addr_t): New type. * kern/bootstrap.c (free_bootstrap_pages): New function. (bootstrap_create): Call free_bootstrap_pages instead of vm_page_create. * kern/cpu_number.h (CPU_L1_SIZE): New macro. * kern/slab.h: Include kern/cpu_number.h. (CPU_L1_SIZE): Remove macro, moved to kern/cpu_number.h. * kern/startup.c (setup_main): Change the value of machine_info.memory_size. * linux/dev/glue/glue.h (alloc_contig_mem, free_contig_mem): Update prototypes. * linux/dev/glue/kmem.c (linux_kmem_init): Don't use defunct page queue. * linux/dev/init/main.c (linux_init): Don't free unused memory. (alloc_contig_mem, free_contig_mem): Turn into wrappers for the vm_page allocator. * linux/pcmcia-cs/glue/ds.c (PAGE_SHIFT): Don't undefine. * vm/pmap.h (pmap_startup, pmap_next_page): Remove prototypes. * vm/vm_fault.c (vm_fault_page): Update calls to vm_page_convert. * vm/vm_init.c (vm_mem_init): Call vm_page_info_all. * vm/vm_object.c (vm_object_page_map): Update call to vm_page_init. * vm/vm_page.h (vm_page_queue_free): Remove variable declaration. (vm_page_create, vm_page_release_fictitious, vm_page_release): Remove declarations. (vm_page_convert, vm_page_init): Update prototypes. (vm_page_grab_contig, vm_page_free_contig): New prototypes. * vm/vm_resident.c (vm_page_template, vm_page_queue_free, vm_page_big_pagenum): Remove variables. (vm_page_bootstrap): Update and call vm_page_setup. (pmap_steal_memory): Update and call vm_page_bootalloc. (pmap_startup, vm_page_create, vm_page_grab_contiguous_pages): Remove functions. (vm_page_init_template, vm_page_grab_contig, vm_page_free_contig): New functions. (vm_page_init): Update and call vm_page_init_template. (vm_page_release_fictitious): Make static. (vm_page_more_fictitious): Update call to vm_page_init. (vm_page_convert): Rewrite to comply with vm_page. (vm_page_grab): Update and call vm_page_alloc_pa. (vm_page_release): Update and call vm_page_free_pa.