This is convenient when tracking buffer overflows.
*result_paddr + size is exactly past the allocated memory, so it can
be equal to the requested bound.
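A tiny standalone illustration of the boundary condition at stake (the variable names and values are hypothetical, not the actual allocator code):

    #include <assert.h>

    int main(void)
    {
        unsigned long bound = 0x100000;  /* caller's upper bound        */
        unsigned long paddr = 0x0ff000;  /* start of the allocated area */
        unsigned long size  = 0x001000;  /* size of the allocation      */

        /* paddr + size points one past the last allocated byte, so it may
         * legitimately be equal to the bound; the check must accept equality. */
        assert(paddr + size <= bound);
        return 0;
    }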
In case pmax falls inside a segment, we should avoid using that segment
and stay with the previous one, so as to be sure to respect the caller's
constraints.
Rumpdisk needs to allocate dma32 memory areas, so we always need this
limit.
The non-Xen x86_64 case had a typo, and the 32-bit PAE case didn't have
the DMA32 limit.
Also, we have to cope with VM_PAGE_DMA32_LIMIT being either above or below
VM_PAGE_DIRECTMAP_LIMIT depending on the configuration.
vm_object_coalesce() callers used to rely on the fact that it always
merged the next_object into prev_object, potentially destroying
next_object and leaving prev_object the result of the whole operation.
After ee65849bec5da261be90f565bee096abb4117bdd
"vm: Allow coalescing null object with an internal object", this is no
longer true, since in case of prev_object == VM_OBJECT_NULL and
next_object != VM_OBJECT_NULL, the overall result is next_object, not
prev_object. The current callers are prepared to deal with this since
they handle this case separately anyway, but the following commit will
introduce another caller that handles both cases in the same code path.
So, declare the way vm_object_coalesce() coalesces the two objects its
implementation detail, and make it return the resulting object and the
offset into it explicitly. This simplifies the callers, too.
Message-Id: <20230705141639.85792-2-bugaevc@gmail.com>
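A simplified, hypothetical sketch of the reshaped interface (stand-in types and names, not the actual gnumach declarations):

    #include <stdio.h>
    #include <stddef.h>

    typedef struct vm_object *vm_object_t;
    struct vm_object { const char *name; };
    #define VM_OBJECT_NULL ((vm_object_t) NULL)

    /* Instead of callers assuming the merge always lands in prev_object,
     * the routine reports which object survived and the offset of the
     * coalesced range inside it. */
    static void object_coalesce(vm_object_t prev_object, vm_object_t next_object,
                                unsigned long prev_offset,
                                vm_object_t *result_object,
                                unsigned long *result_offset)
    {
        if (prev_object != VM_OBJECT_NULL) {
            *result_object = prev_object;   /* historical behaviour */
            *result_offset = prev_offset;
        } else {
            *result_object = next_object;   /* null prev coalesced into next */
            *result_offset = 0;             /* placeholder; the real offset
                                               depends on the mapping layout */
        }
    }

    int main(void)
    {
        struct vm_object next = { "next_object" };
        vm_object_t result;
        unsigned long offset;

        object_coalesce(VM_OBJECT_NULL, &next, 0, &result, &offset);
        printf("result: %s, offset: %lu\n", result->name, offset);
        return 0;
    }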
If a deallocated VM map entry refers to an object that only has a single
reference and doesn't have a pager port, we can eagerly release any
physical pages that were contained in the deallocated range.
This is not a 100% solution: it is still possible to "leak" physical
pages that can never appear in virtual memory again by creating several
references to a memory object (perhaps by forking a VM map with
VM_INHERIT_SHARE) and deallocating the pages from all the maps referring
to the object. That being said, it should help to release the pages in
the common case sooner.
Message-Id: <20230626112656.435622-6-bugaevc@gmail.com>
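The condition can be pictured with a minimal stand-in (the struct and helper below are illustrative, not the kernel's vm_object):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    struct object {
        int   ref_count;   /* number of references to the object        */
        void *pager;       /* pager port, NULL if none was ever created */
    };

    /* Pages in a deallocated range can be freed right away only if no other
     * reference and no pager can ever bring them back into view. */
    static bool can_release_pages_eagerly(const struct object *obj)
    {
        return obj->ref_count == 1 && obj->pager == NULL;
    }

    int main(void)
    {
        struct object anon = { .ref_count = 1, .pager = NULL };
        printf("%d\n", can_release_pages_eagerly(&anon));   /* prints 1 */
        return 0;
    }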
When entering an object into a map, try to extend the next entry
backward, in addition to the previously existing attempt to extend the
previous entry forward.
Message-Id: <20230626112656.435622-5-bugaevc@gmail.com>
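A condensed sketch of the backward-extension check (hypothetical types and fields, not the real vm_map_entry; the forward case is sketched under the older change further down):

    #include <stdbool.h>
    #include <stdio.h>

    struct map_entry {
        unsigned long start, end;   /* address range covered by the entry */
        const void  *object;        /* backing VM object                  */
    };

    /* A new mapping ending exactly where the next entry begins and backed by
     * the same object can be folded into that entry by moving its start
     * backward (the real check also compares protection, inheritance, etc.). */
    static bool can_extend_backward(const struct map_entry *next,
                                    unsigned long new_end, const void *object)
    {
        return new_end == next->start && object == next->object;
    }

    int main(void)
    {
        struct map_entry next = { 0x2000, 0x3000, (const void *) 0x1 };
        printf("%d\n", can_extend_backward(&next, 0x2000, (const void *) 0x1));
        return 0;
    }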
Previously, vm_object_coalesce would only succeed with next_object being
VM_OBJECT_NULL (and with the previous patch, with the two object
references pointing to the same object). This patch additionally allows
the inverse: prev_object being VM_OBJECT_NULL and next_object being some
internal VM object that we have not created a pager port for, provided
the offset of the existing mapping in the object allows for placing the
new mapping before it.
This is not used anywhere at the moment (the only caller, vm_map_enter,
ensures that next_object is either VM_OBJECT_NULL or an object that has
a pager port), but it will get used with the next patch.
Message-Id: <20230626112656.435622-4-bugaevc@gmail.com>
If a mapping of an object is made right next to another mapping of the
same object and has the same properties (protection, inheritance, etc.),
Mach will now expand the previous VM map entry to cover the new address
range instead of creating a new entry.
Message-Id: <20230626112656.435622-3-bugaevc@gmail.com>
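A minimal sketch of this forward-extension check, again with hypothetical stand-ins for the real vm_map_entry and its fields:

    #include <stdbool.h>
    #include <stdio.h>

    struct map_entry {
        unsigned long start, end;    /* address range covered by the entry */
        const void  *object;         /* backing VM object                  */
        unsigned     protection;
        unsigned     inheritance;
    };

    /* The previous entry can simply grow to cover the new range when the new
     * mapping starts exactly at its end and shares the object and properties. */
    static bool can_extend_forward(const struct map_entry *prev,
                                   unsigned long new_start, const void *object,
                                   unsigned protection, unsigned inheritance)
    {
        return prev->end == new_start &&
               prev->object == object &&
               prev->protection == protection &&
               prev->inheritance == inheritance;
    }

    int main(void)
    {
        struct map_entry prev = { 0x1000, 0x2000, (const void *) 0x1, 3, 1 };
        printf("%d\n", can_extend_forward(&prev, 0x2000, (const void *) 0x1, 3, 1));
        return 0;
    }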
struct vm_page is supposed to be a "small structure", but it takes up 96
bytes on x86_64 (to represent a 4k page). By utilizing bitfields and
strategically reordering members to avoid excessive padding, it can be
shrunk to 80 bytes.
- page_lock and unlock_request only need to store a bitmask of
VM_PROT_READ, VM_PROT_WRITE, and VM_PROT_EXECUTE. Even though the
special values VM_PROT_NO_CHANGE and VM_PROT_NOTIFY are defined, they
are not used for the two struct vm_page members.
- type and seg_index both need to store one of the four possible values
in the range from 0 to 3. Two bits are sufficient for this.
- order needs to store a number from 0 to VM_PAGE_NR_FREE_LISTS (which
is 11), or a special value VM_PAGE_ORDER_UNLISTED. Four bits are
sufficient for this.
No functional change.
Message-Id: <20230626112656.435622-2-bugaevc@gmail.com>
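The effect of the bitfield packing can be demonstrated with a small self-contained sketch (the field widths mirror the description above, but the layout is illustrative, not the real struct vm_page):

    #include <stdio.h>

    struct page_wide {                 /* naive layout: a full int per field */
        unsigned int page_lock;
        unsigned int unlock_request;
        unsigned int type;
        unsigned int seg_index;
        unsigned int order;
    };

    struct page_packed {               /* packed layout using bitfields */
        unsigned page_lock      : 3;   /* mask of READ, WRITE, EXECUTE  */
        unsigned unlock_request : 3;
        unsigned type           : 2;   /* four possible values: 0..3    */
        unsigned seg_index      : 2;   /* four possible values: 0..3    */
        unsigned order          : 4;   /* 0..11 plus ORDER_UNLISTED     */
    };

    int main(void)
    {
        printf("wide: %zu bytes, packed: %zu bytes\n",
               sizeof(struct page_wide), sizeof(struct page_packed));
        return 0;
    }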
types are correct now
Message-Id: <Y+SfNtIRuwj0Zap1@jupiter.tail36e24.ts.net>
The documentation of vm_page_insert says that the object must be locked.
Moreover, the unlock call is here but no matching lock call was present.
Message-Id: <20230208225436.23365-1-etienne.brateau@gmail.com>
(this is actually a no-op for i386)
When generating stubs, MIG will take the vm_size_array_t and define the
input request struct using rpc_vm_size_t since the size is variable. This in
turn causes a mismatch between types (vm_size_t* vs rpc_vm_size_t*). We could
also ask MIG to produce
a prototype by using rpc_vm_size_t*, however we would need to change the implementation
of the RPC to use rpc_* types anyway since we want to avoid another allocation
of the array.
Message-Id: <Y9iwScHpmsgY3V0N@jupiter.tail36e24.ts.net>
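Roughly the shape of the mismatch; the typedef widths below assume a 32-bit userland on a 64-bit kernel and are for illustration only:

    #include <stdint.h>
    #include <stdio.h>

    typedef uint64_t vm_size_t;        /* kernel-side size type               */
    typedef uint32_t rpc_vm_size_t;    /* size type as encoded in the message */

    /* A MIG-generated stub hands the server an array of rpc_vm_size_t, so an
     * implementation declared with vm_size_t * no longer matches; consuming
     * the rpc_ type directly avoids converting (and reallocating) the array. */
    static vm_size_t sum_sizes(const rpc_vm_size_t *sizes, unsigned count)
    {
        vm_size_t total = 0;
        for (unsigned i = 0; i < count; i++)
            total += sizes[i];
        return total;
    }

    int main(void)
    {
        rpc_vm_size_t sizes[] = { 4096, 8192 };
        printf("%llu\n", (unsigned long long) sum_sizes(sizes, 2));
        return 0;
    }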
* i386/i386/io_map.c: code is unused.
* i386/i386/io_perm.c: include mig prototypes.
* i386/i386/mp_desc.c: Deleted interrupt_stack_alloc since it is not
used.
* i386/i386/seg.h: Moved descriptor structs to i386/include/mach/i386/mach_i386_types.h
as that represents the interface types for RPCs.
Defined aliases for real_descriptor since those are used by the i386 RPCs. Inlined many
functions here too and removed seg.c.
* i386/i386/seg.c: Removed. All the functions are inline now.
* i386/i386/trap.c: Use static.
* i386/i386/trap.h: Define missing prototypes.
* i386/i386/tss.h: Use static inline for ltr.
* i386/i386/user_ldt.c: Include mig prototypes.
* i386/include/mach/i386/mach_i386.defs: Define real_descriptor_t types
since those are used in the RPC definition. Now both prototypes and
definitions will match.
* i386/include/mach/i386/mach_i386_types.h: Move struct descriptor
from seg.h since we need those for the RPC interfaces. Removed include
of io_perm.h since it generates circular includes otherwise.
* i386/intel/pmap.c: pmap_map is unused. Added static qualifier for
several functions.
* i386/intel/pmap.h: pmap_update_interrupt declared for non-SMP and SMP.
Message-Id: <Y89+R2VekOQK4IUo@jupiter.lan>
Message-Id: <Y8mYd/pt/og4Tj5I@mercury.tail36e24.ts.net>
This also reverts 566c227636481b246d928772ebeaacbc7c37145b and
963b1794d7117064cee8ab5638b329db51dad854
Message-Id: <Y8d75KSqNL4FFInm@mercury.tail36e24.ts.net>
stack_statistics, swapin_thread_continue, and memory_object_lock_page are
not used outside their module.
mach4.defs and mach_host.defs.
Also move more mach_debug rpcs to kern/mach_debug.h.
Message-Id: <Y7+LPMLOafUQrNHZ@jupiter.tail36e24.ts.net>
Marked some functions as static (private) as needed and added missing
includes.
This also revealed some dead code which was removed.
Note that -Wmissing-prototypes is not enabled here since there are a
bunch more warnings.
Message-Id: <Y6j72lWRL9rsYy4j@mars>
Most of the changes include defining and using proper function type
declarations (with argument types declared) and avoiding using the
K&R style of function declarations.
Message-Id: <Y6Jazsuis1QA0lXI@mars>
It seems we hit the "unable to recycle any page" case even when there is
no memory pressure, probably just because the pageout thread somehow got
kicked but there's nothing left to page out.
We already use this built-in in other places and this will move us closer to
being able to build the kernel without libc.
Message-Id: <Y5l80/VUFvJYZTjy@jupiter.tail36e24.ts.net>
This allows *printf to use %zd/%zu/%zx to print vm_size_t and
vm_offset_t. Warnings using the incorrect specifiers were fixed.
Note that MACH_PORT_NULL became just 0 because GCC thinks that we were
comparing a pointer to a character (due to it being an unsigned int), so
I removed the explicit cast.
Message-Id: <Y47UNdcUF35Ag4Vw@reue>
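For illustration, the specifiers in question (using the standard C printf and stand-in typedefs; the actual kernel types may differ):

    #include <stdio.h>
    #include <stdint.h>

    typedef uintptr_t vm_size_t;      /* stand-in: pointer-sized, like size_t */
    typedef uintptr_t vm_offset_t;

    int main(void)
    {
        vm_size_t   size   = 4096;
        vm_offset_t offset = 0x1000;

        /* %zu/%zx match size_t-sized integers on both 32-bit and 64-bit. */
        printf("size=%zu offset=0x%zx\n", (size_t) size, (size_t) offset);
        return 0;
    }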
If a "wire_required" process calls vm_map_protect(0), the
memory gets unwired as expected. But if the process then calls
vm_map_protect(VM_PROT_READ) again, we need to wire that memory.
(This happens to be exactly what glibc does for its heap.)
This fixes Hurd hangs on lack of memory, during which mach was swapping
pieces of mach-defpager out.
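The rule can be summarized with a small stand-in predicate (names and the simplified protection values are hypothetical):

    #include <stdbool.h>
    #include <stdio.h>

    #define VM_PROT_NONE 0u
    #define VM_PROT_READ 1u

    /* In a task whose memory must stay wired, granting access back to a range
     * that had been made inaccessible (and hence unwired) must wire it again. */
    static bool must_wire(bool wire_required,
                          unsigned old_prot, unsigned new_prot)
    {
        return wire_required
               && old_prot == VM_PROT_NONE
               && new_prot != VM_PROT_NONE;
    }

    int main(void)
    {
        printf("%d\n", must_wire(true, VM_PROT_NONE, VM_PROT_READ));  /* 1 */
        return 0;
    }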
* vm/memory_object_proxy.c: truncate vm array types as if they were
the rpc_ version because MIG can't handle that. This rpc can't
handle more than one element anyway.
Note that the same issue with vm arrays is present at least with
syscall emulation, but that functionality seems unused for now.
A better fix could be to add a vm descriptor type in include/mach/message.h,
but then we would probably not need to use the rpc_ types in MIG anymore;
they would be needed only for the syscall definitions.
Signed-off-by: Luca Dariz <luca@orpolo.org>
Message-Id: <20220628101054.446126-15-luca@orpolo.org>
* vm/vm_user.c: sign-extend mask with USER32
Signed-off-by: Luca Dariz <luca@orpolo.org>
Message-Id: <20220628101054.446126-6-luca@orpolo.org>
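What the sign extension amounts to, sketched with stand-in typedefs (widths assumed for a 32-bit userland on a 64-bit kernel):

    #include <stdio.h>
    #include <stdint.h>

    typedef uint32_t rpc_vm_offset_t;   /* mask as passed by a 32-bit task   */
    typedef uint64_t vm_offset_t;       /* mask as used by the 64-bit kernel */

    static vm_offset_t extend_mask(rpc_vm_offset_t mask)
    {
        /* Treat the user mask as signed so its high bit is replicated:
         * 0xffff0000 becomes 0xffffffffffff0000. */
        return (vm_offset_t) (int64_t) (int32_t) mask;
    }

    int main(void)
    {
        printf("%#llx\n", (unsigned long long) extend_mask(0xffff0000u));
        return 0;
    }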
Signed-off-by: Luca Dariz <luca@orpolo.org>
Message-Id: <20220628101054.446126-13-luca@orpolo.org>
This allows contiguous allocations aligned to values smaller than one
page, but still a power of 2, by rounding the requested alignment up to
a full page.
This works because PAGE_SIZE is a power of two.
Message-Id: <20220821065732.269573-1-damien@zamaudio.com>
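The rounding can be sketched as follows (PAGE_SIZE value and helper name are placeholders):

    #include <stdio.h>

    #define PAGE_SIZE 4096u

    /* Any power of two smaller than PAGE_SIZE divides PAGE_SIZE, so rounding
     * the requested alignment up to a full page still satisfies it. */
    static unsigned int fix_alignment(unsigned int align)
    {
        return align < PAGE_SIZE ? PAGE_SIZE : align;
    }

    int main(void)
    {
        printf("%u\n", fix_alignment(64));   /* prints 4096 */
        return 0;
    }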
vpi_offset is not currently large enough to store it.
The map function is supposed to return physical addresses, thus phys_addr_t.
Like Linux just did.
The memmmap method may reject some offsets (because they fall in non-device
ranges), so device_map_page has to notice this and report the error.
device_pager_data_request then has to notice as well and report.
For coherency with memory_object_create_proxy.
* vm/vm_map.c (vm_region_get_proxy):
- Return KERN_INVALID_ARGUMENT when the entry is a submap.
- Create a pager for the vm_object when the entry doesn't
have any yet, since it's an anonymous mapping.
Message-Id: <20211106081333.10366-3-jlledom@mailfence.com>
To get a proxy to the region a given address belongs to, with
protection and range limited to those of the region.
* include/mach/mach4.defs: vm_region_get_proxy RPC declaration
* vm/vm_map.c: vm_region_get_proxy implementation
Message-Id: <20211106081333.10366-2-jlledom@mailfence.com>
* vm/memory_object_proxy.c: Include kern/mach4.server.h.
(memory_object_create_proxy): Drop const qualifiers.
This is a no-op on i386.
* i386/include/mach/i386/vm_types.h (vm_size_array_t): New type.
* include/mach/mach4.defs (vm_size_array_t): New type.
(memory_object_create_proxy): Turn len parameter from vm_offset_array_t
to vm_size_array_t.
* vm/memory_object_proxy.c (memory_object_create_proxy): Turn len
parameter from const vm_offset_t * to const vm_size_t *.
vm_page_grab was systematically using the VM_PAGE_SEL_DIRECTMAP selector
to play it safe with existing code.
This adds a flags parameter to let callers of vm_page_grab specify their
constraints.
Linux drivers need 32-bit DMA, Xen drivers use kvtophys to clear some
data. Callers of kmem_pagealloc_physmem and vm_page_grab_phys_addr also use
kvtophys. Otherwise allocations can go to highmem.
This fixes the allocation jam in the directmap segment.
* vm/vm_page.h (VM_PAGE_DMA, VM_PAGE_DMA32, VM_PAGE_DIRECTMAP,
VM_PAGE_HIGHMEM): New macros.
(vm_page_grab): Add flags parameter.
* vm/vm_resident.c (vm_page_grab): Choose allocation selector according
to flags parameter.
(vm_page_convert, vm_page_alloc): Pass VM_PAGE_HIGHMEM to vm_page_grab.
(vm_page_grab_phys_addr): Pass VM_PAGE_DIRECTMAP to vm_page_grab.
* vm/vm_fault.c (vm_fault_page): Pass VM_PAGE_HIGHMEM to vm_page_grab.
* vm/vm_map.c (vm_map_copy_steal_pages): Pass VM_PAGE_HIGHMEM to vm_page_grab.
* kern/slab.c (kmem_pagealloc_physmem): Pass VM_PAGE_DIRECTMAP to vm_page_grab.
* i386/intel/pmap.c (pmap_page_table_page_alloc): Pass VM_PAGE_DIRECTMAP to
vm_page_grab.
* xen/block.c (device_read): Pass VM_PAGE_DIRECTMAP to vm_page_grab.
* linux/dev/glue/block.c (alloc_buffer): Pass VM_PAGE_DMA32 to vm_page_grab.
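A sketch of how such a flags argument can map to an allocation selector; the flag values and the selection order are assumptions for illustration, not the actual gnumach definitions:

    #include <stdio.h>

    #define VM_PAGE_DMA        0x01
    #define VM_PAGE_DMA32      0x02
    #define VM_PAGE_DIRECTMAP  0x04
    #define VM_PAGE_HIGHMEM    0x08

    enum selector { SEL_DMA, SEL_DMA32, SEL_DIRECTMAP, SEL_HIGHMEM };

    /* Pick the widest segment the caller's constraints allow, falling back
     * to the most restrictive one (DMA) by default. */
    static enum selector selector_from_flags(unsigned flags)
    {
        if (flags & VM_PAGE_HIGHMEM)
            return SEL_HIGHMEM;
        if (flags & VM_PAGE_DIRECTMAP)
            return SEL_DIRECTMAP;
        if (flags & VM_PAGE_DMA32)
            return SEL_DMA32;
        return SEL_DMA;
    }

    int main(void)
    {
        printf("%d\n", selector_from_flags(VM_PAGE_DMA32));   /* prints 1 */
        return 0;
    }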