| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
|
|
| |
If a "wire_required" process calls vm_map_protect(0), the
memory gets unwired as expected. But if the process then calls
vm_map_protect(VM_PROT_READ) again, we need to wire that memory.
(This happens to be exactly what glibc does for its heap)
This fixes Hurd hangs on lack of memory, during which mach was swapping
pieces of mach-defpager out.
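A user-level sketch of the sequence described above, using the standard
vm_protect interface (addresses, sizes, and the wiring setup are assumed for
illustration):

    /* The task's map already has all-memory wiring requested
     * ("wire_required"), as glibc arranges for its heap.           */
    vm_protect (mach_task_self (), addr, size, FALSE, VM_PROT_NONE);
    /* The pages get unwired here, as expected.                     */
    vm_protect (mach_task_self (), addr, size, FALSE, VM_PROT_READ);
    /* With this fix, the pages are wired again instead of staying
     * pageable.                                                    */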
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* vm/memory_object_proxy.c: Truncate vm array types as if they were
the rpc_ versions, because MIG can't handle that. This RPC can't
handle more than one element anyway.
Note that the same issue with vm arrays is present at least with
syscall emulation, but that functionality seems unused for now.
A better fix could be to add a vm descriptor type in include/mach/message.h;
then we probably would not need to use the rpc_ types in MIG anymore,
as they would be needed only for the syscall definitions.
Signed-off-by: Luca Dariz <luca@orpolo.org>
Message-Id: <20220628101054.446126-15-luca@orpolo.org>
|
|
|
|
|
|
|
| |
* vm/vm_user.c: sign-extend mask with USER32
Signed-off-by: Luca Dariz <luca@orpolo.org>
Message-Id: <20220628101054.446126-6-luca@orpolo.org>
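A minimal sketch of the widening this refers to (the USER32 guard and the
variable handling are assumptions, not the actual code):

    #ifdef USER32
        /* A 32-bit userland hands over a 32-bit mask; sign-extend it so
         * that a mask with the top bit set keeps covering the high half
         * of a 64-bit address, e.g. 0xfff00000 -> 0xfffffffffff00000.  */
        mask = (vm_offset_t) (int32_t) mask;
    #endif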
|
|
|
|
|
| |
Signed-off-by: Luca Dariz <luca@orpolo.org>
Message-Id: <20220628101054.446126-13-luca@orpolo.org>
|
|
|
|
|
|
|
|
|
|
| |
This allows contiguous allocations aligned to values
smaller than one page (but still a power of 2),
by rounding the alignment up to a full page.
This works because PAGE_SIZE is a power of two.
Message-Id: <20220821065732.269573-1-damien@zamaudio.com>
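A minimal sketch of the rounding this relies on (variable name assumed):

    /* Any power-of-two alignment smaller than PAGE_SIZE is automatically
     * satisfied by page alignment, because PAGE_SIZE is itself a power
     * of two, so simply round the requested alignment up to a page.    */
    if (align < PAGE_SIZE)
        align = PAGE_SIZE;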
|
| |
|
| |
|
|
|
|
| |
vpi_offset is not currently large enough to store it.
|
| |
|
| |
|
|
|
|
| |
The map function is supposed to return physical addresses, thus phys_addr_t.
|
|
|
|
| |
Like Linux just did.
|
|
|
|
|
|
| |
The memmmap method may reject some offsets (because they fall in non-device
ranges), so device_map_page has to notice this and report the error.
device_pager_data_request then has to notice it as well and report it.
|
|
|
|
| |
For coherency with memory_object_create_proxy.
|
|
|
|
|
|
|
|
| |
* vm/vm_map.c (vm_region_get_proxy):
- Return KERN_INVALID_ARGUMENT when the entry is a submap.
- Create a pager for the vm_object when the entry doesn't
have any yet, since it's an anonymous mapping.
Message-Id: <20211106081333.10366-3-jlledom@mailfence.com>
|
|
|
|
|
|
|
|
|
| |
To get a proxy to the region a given address belongs to,
with protection and range limited to those of the region.
* include/mach/mach4.defs: vm_region_get_proxy RPC declaration.
* vm/vm_map.c: vm_region_get_proxy implementation.
Message-Id: <20211106081333.10366-2-jlledom@mailfence.com>
|
|
|
|
|
| |
* vm/memory_object_proxy.c: Include kern/mach4.server.h.
(memory_object_create_proxy): Drop const qualifiers.
|
|
|
|
|
|
|
|
|
|
|
| |
This is a no-op on i386.
* i386/include/mach/i386/vm_types.h (vm_size_array_t): New type.
* include/mach/mach4.defs (vm_size_array_t): New type.
(memory_object_create_proxy): Turn len parameter from vm_offset_array_t
to vm_size_array_t.
* vm/memory_object_proxy.c (memory_object_create_proxy): Turn len
parameter from const vm_offset_t * to const vm_size_t *.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
vm_page_grab was systematically using the VM_PAGE_SEL_DIRECTMAP selector
to play it safe with existing code.
This adds a flags parameter to let callers of vm_page_grab specify their
constraints.
Linux drivers need 32-bit DMA, and Xen drivers use kvtophys to clear some
data. Callers of kmem_pagealloc_physmem and vm_page_grab_phys_addr also use
kvtophys. Otherwise, allocations can go to highmem.
This fixes the allocation jam in the directmap segment.
* vm/vm_page.h (VM_PAGE_DMA, VM_PAGE_DMA32, VM_PAGE_DIRECTMAP,
VM_PAGE_HIGHMEM): New macros.
(vm_page_grab): Add flags parameter.
* vm/vm_resident.c (vm_page_grab): Choose allocation selector according
to flags parameter.
(vm_page_convert, vm_page_alloc): Pass VM_PAGE_HIGHMEM to vm_page_grab.
(vm_page_grab_phys_addr): Pass VM_PAGE_DIRECTMAP to vm_page_grab.
* vm/vm_fault.c (vm_fault_page): Pass VM_PAGE_HIGHMEM to vm_page_grab.
* vm/vm_map.c (vm_map_copy_steal_pages): Pass VM_PAGE_HIGHMEM to vm_page_grab.
* kern/slab.c (kmem_pagealloc_physmem): Pass VM_PAGE_DIRECTMAP to vm_page_grab.
* i386/intel/pmap.c (pmap_page_table_page_alloc): Pass VM_PAGE_DIRECTMAP to
vm_page_grab.
* xen/block.c (device_read): Pass VM_PAGE_DIRECTMAP to vm_page_grab.
* linux/dev/glue/block.c (alloc_buffer): Pass VM_PAGE_DMA32 to vm_page_grab.
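A hedged sketch of callers using the new flags, following the changelog above
(the surrounding allocation logic is made up):

    /* A legacy Linux driver buffer must stay below 4 GiB for DMA.      */
    vm_page_t m = vm_page_grab (VM_PAGE_DMA32);
    if (m == VM_PAGE_NULL)
        return NULL;    /* no suitable page, let the caller retry/fail  */

    /* Code that will run kvtophys on the page must instead request a
     * direct-mapped page.                                              */
    vm_page_t dm = vm_page_grab (VM_PAGE_DIRECTMAP);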
|
| |
|
|
|
|
| |
* vm/vm_page.c (db_show_vmstat): Add printing the segment size.
|
|
|
|
|
| |
* vm/vm_page.c (db_show_vmstat): Drop displaying cache numbers a second
time.
|
|
|
|
| |
* vm/vm_page.c (db_show_vmstat)
|
|
|
|
|
|
|
|
| |
with an output similar to the userland vmstat command
* vm/vm_page.c (db_show_vmstat): New function.
* vm/vm_page.h (db_show_vmstat): New prototype.
* ddb/db_command.c (db_show_cmds): Add vmstat command.
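Assuming the command is registered under the usual "show" prefix of
db_show_cmds, it would be invoked from the kernel debugger as:

    db> show vmstat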
|
|
|
|
| |
Otherwise userland can send spurious notifications.
|
|
|
|
| |
On success we'd have to clean up the port properly. Just consume it.
|
|
|
|
|
| |
We want to prevent subproxies from requesting larger sizes than what a
proxy initially allowed.
|
| |
|
| |
|
|
|
|
|
|
|
| |
glibc's mmap implementation assumes that gnumach will cap the prot for it,
so for now let's revert to capping rather than rejecting.
That fixes mmap(SHARED|READ) for read-only objects.
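A hedged illustration of the user-visible case this restores (the file name is
made up; the point is a shared, read-only mapping of a read-only object):

    #include <fcntl.h>
    #include <sys/mman.h>

    int fd = open ("/some/read-only-file", O_RDONLY);
    void *p = mmap (NULL, 4096, PROT_READ, MAP_SHARED, fd, 0);
    /* With capping, gnumach clamps the maximum protection to what the
     * object allows instead of failing the whole mapping.             */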
|
|
|
|
| |
This reverts commit af9f471b500bcd0c1023259c7577e074fe6d3ee5.
|
|
|
|
|
|
|
|
|
| |
* If not making a copy, don't cap protection to the limit enforced
by the proxy, and only require read access. This fixes mapping
parts of read-only files MAP_ANON + PROT_READ|PROT_WRITE.
* Instead of silently capping protection, return KERN_PROTECTION_FAILURE
to the caller like the other vm_*() routines do.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This does not make sense, and produces incorrect results (since
vme_end is 0, etc.)
* vm/vm_map.h (_vm_map_clip_start, _vm_map_clip_end): Add link_gap
parameter.
* vm/vm_map.c (_vm_map_entry_link): Add link_gap parameter, do not call
vm_map_gap_insert if it is 0.
(vm_map_entry_link): Set link_gap to 1 in _vm_map_entry_link call.
(_vm_map_clip_start): Add link_gap parameter, pass it to
_vm_map_entry_link call.
(vm_map_clip_start): Set link_gap to 1 in _vm_map_clip_start call.
(vm_map_copy_entry_link): Set link_gap to 0 in _vm_map_entry_link call.
(vm_map_copy_clip_start): Set link_gap to 0 in _vm_map_clip_start call.
(_vm_map_entry_unlink): Add unlink_gap parameter, do not call
vm_map_gap_remove if it is 0.
(vm_map_entry_unlink): Set unlink_gap to 1 in _vm_map_entry_unlink call.
(_vm_map_clip_end): Add link_gap parameter, pass it to
_vm_map_entry_link call.
(vm_map_clip_end): Set link_gap to 1 in _vm_map_clip_end call.
(vm_map_copy_entry_unlink): Set unlink_gap to 0 in _vm_map_entry_unlink call.
(vm_map_copy_clip_end): Set link_gap to 0 in _vm_map_clip_end call.
* vm/vm_kern.c (projected_buffer_deallocate): Set link_gap to 1 in
_vm_map_clip_start and _vm_map_clip_end calls.
|
|
|
|
|
| |
* vm/vm_map.c (vm_map_find_entry_anywhere): Print warning when max_size
gets smaller than size.
|
|
|
|
|
|
|
|
|
|
|
| |
glibc's sysdeps/mach/hurd/dl-sysdep.c has been wanting to use this for
decades.
* include/string.h (ffs): New declaration.
* vm/vm_map.c: Include <string.h>.
(vm_map_find_entry_anywhere): Separate out high bits from mask, to
compute the maximum offset instead of map->max_offset.
* doc/mach.texi (vm_map): Update documentation accordingly.
|
|
|
|
|
|
|
|
| |
This function allows mapping a table in a memory page, using its physical
address, and aligning the start of the page with the start of the table.
* vm/vm_kern.c (kmem_alloc_aligned_table): New function. Returns a reference to the virtual address of the table.
* vm/vm_kern.h (kmem_alloc_aligned_table): New prototype.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This allows privileged userland drivers to allocate buffers, e.g. for DMA,
which need to be physically contiguous, and to get their physical
address.
Initial work by Zheng Da, reworked by Richard Braun, Damien Zammit, and
myself.
* doc/mach.texi (vm_allocate_contiguous): New RPC.
* i386/include/mach/i386/machine_types.defs (rpc_phys_addr_t): New type.
* i386/include/mach/i386/vm_types.h [!MACH_KERNEL] (phys_addr_t): Set
type to 64bits.
(rpc_phys_addr_t): New type, always 64bits.
* include/mach/gnumach.defs (vm_allocate_contiguous): New RPC.
* vm/vm_user.c (vm_allocate_contiguous): New function.
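A hedged userland sketch of the new RPC; the parameter order (privileged host
port, target task, out virtual address, out physical address, size) is
inferred from the changelog above, not a verified prototype:

    vm_address_t vaddr;
    rpc_phys_addr_t paddr;

    /* Allocate one physically contiguous page, e.g. for a DMA buffer,
     * and learn both its virtual and physical addresses.             */
    kern_return_t kr = vm_allocate_contiguous (priv_host, mach_task_self (),
                                               &vaddr, &paddr, vm_page_size);
    if (kr != KERN_SUCCESS)
        /* handle the failure */;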
|
|
|
|
|
|
|
| |
as is returned by vm_info.
* vm/vm_user.c (vm_map): Before trying to vm_object_enter, try to simply
look up the name.
|
|
|
|
| |
* vm/vm_kern.c (kmem_alloc_wired): Factorize with kmem_valloc.
|
|
|
|
|
|
|
|
|
|
|
| |
Functions like vremap need to allocate some virtual addressing space
before making their own mapping. kmem_alloc_wired can be used for that
but that wastes memory.
* vm/vm_kern.c (kmem_valloc): New function.
* vm/vm_kern.h (kmem_valloc): New prototype.
* linux/dev/glue/kmem.c (vremap): Call kmem_valloc instead of
kmem_alloc_wired. Also check that `offset' is aligned on a page.
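A minimal sketch of the intended usage, assuming kmem_valloc mirrors
kmem_alloc_wired's (map, &addr, size) signature; the follow-up mapping step
is only indicative:

    vm_offset_t addr;

    /* Reserve kernel virtual space only, without backing it with wired
     * physical pages.                                                  */
    if (kmem_valloc (kernel_map, &addr, round_page (size)) != KERN_SUCCESS)
        return 0;
    /* The caller then installs its own mapping over that range, e.g.
     * with pmap_enter, as vremap does for device memory.              */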
|
|
|
|
|
|
|
|
|
|
| |
It needs to be able to hold > 4G size.
* i386/include/mach/i386/vm_types.h (vm_size_t): Set type to unsigned
long.
* vm/vm_user.c (vm_read, vm_write): Fix type according to RPC.
* i386/i386at/model_dep.c (c_boot_entry): Fix format.
* device/dev_pager.c (device_pager_data_request): Fix format.
|
|
|
|
|
|
|
| |
Suggested by guy fleury iteriteka <gfleury@disroot.org>
* vm/vm_object.c (vm_object_copy_call): Make sure vm_object_enter call
succeeds.
|
|
|
|
| |
* vm/vm_map.c (vm_map_msync): Explicitly group the first condition.
|
|
|
|
|
| |
* vm/vm_map.c (vm_map_fork): Use VM_MAP_NULL instead of PMAP_NULL when comparing
with new_map.
|
|
|
|
| |
* vm/vm_map.c (vm_map_msync): Add missing return keyword.
|
|
|
|
|
| |
* vm/vm_map.c (vm_map_fork): Check for `new_map` being non-NULL, and not
for `new_pmap` a second time.
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* include/mach/vm_sync.h: New file.
* include/mach/mach_types.h: Include <mach/vm_sync.h>
* Makefrag.am (include_mach_HEADERS): Add include/mach/vm_sync.h.
* include/mach/mach_types.defs (vm_sync_t): Add type.
* include/mach/gnumach.defs (vm_object_sync, vm_msync): Add RPCs.
* vm/vm_map.h: Include <mach/vm_sync.h>.
(vm_map_msync): New declaration.
* vm/vm_map.c (vm_map_msync): New function.
* vm/vm_user.c: Include <mach/vm_sync.h> and <kern/mach.server.h>.
(vm_object_sync, vm_msync): New functions.
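A hedged user-level sketch of the new vm_msync RPC; the exact vm_sync_t flag
name is assumed from the usual Mach constants in <mach/vm_sync.h>:

    /* Flush a range of the caller's address space back to its backing
     * memory object and wait for the write-back to complete.          */
    kern_return_t kr = vm_msync (mach_task_self (), addr, len,
                                 VM_SYNC_SYNCHRONOUS);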
|
|
|
|
|
| |
* vm/vm_map.c (vm_map_find_entry_anywhere): Also check that (min + mask) &
~mask remains at least as big as min (i.e. does not overflow).
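A minimal sketch of the check being added (variable names assumed):

    /* Align min up to the requested mask; if the addition wrapped around,
     * the aligned value falls below min and no fitting address exists.  */
    start = (min + mask) & ~mask;
    if (start < min)
        return NULL;    /* overflow */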
|
|
|
|
| |
* vm/vm_map.c (vm_map_copyout): Fix panic format.
|
|
|
|
|
|
|
|
|
| |
* i386/intel/pmap.c: Drop the register qualifier.
* ipc/ipc_kmsg.h: Likewise.
* kern/bootstrap.c: Likewise.
* kern/profile.c: Likewise.
* kern/thread.c: Likewise.
* vm/vm_object.c: Likewise.
|