author     Richard Braun <rbraun@sceen.net>    2016-12-24 01:30:22 +0100
committer  Richard Braun <rbraun@sceen.net>    2016-12-24 01:30:22 +0100
commit     023401c5b97023670a44059a60eb2a3a11c8a929 (patch)
tree       9e4fc6286d8545275dab9ba2140bb98bf7ab19b5 /ipc
parent     eb07428ffb0009085fcd01dd1b79d9953af8e0ad (diff)
VM: rework map entry wiring
First, user wiring is removed, simply because it has never been used.
Second, the VM system now tracks wiring requests in order to handle
protection changes better. This makes it possible to wire entries with
VM_PROT_NONE protection without actually reserving any pages for them
until the protection changes, and even to make those pages pageable
if the protection is later downgraded to VM_PROT_NONE.
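As a rough illustration (not part of the commit), here is a minimal sketch of
what this behaviour looks like from a caller's point of view. It uses the new
vm_map_pageable() calling convention visible in the ipc/mach_port.c hunks
below, passing the two trailing boolean arguments as TRUE, TRUE as that diff
does; the names of those parameters, the headers included, and the exact
interplay with vm_map_protect() are assumptions made for illustration only.

#include <kern/assert.h>
#include <mach/kern_return.h>
#include <vm/vm_map.h>

/*
 * Sketch only: wire a range, then change its protection.  The two
 * trailing TRUE arguments mirror the calls in the diff below; their
 * meaning is not documented on this page.
 */
static void
wiring_example(vm_map_t map, vm_offset_t start, vm_offset_t end)
{
	kern_return_t kr;

	/* Record a wiring request for read/write access on [start, end).  */
	kr = vm_map_pageable(map, start, end,
			     VM_PROT_READ|VM_PROT_WRITE, TRUE, TRUE);
	assert(kr == KERN_SUCCESS);

	/*
	 * Downgrading the protection to VM_PROT_NONE keeps the wiring
	 * request tracked on the entries, but their pages become
	 * pageable: nothing stays resident while the memory cannot be
	 * accessed.
	 */
	kr = vm_map_protect(map, start, end, VM_PROT_NONE, FALSE);
	assert(kr == KERN_SUCCESS);

	/*
	 * Restoring an accessible protection (assuming the maximum
	 * protection allows it) lets the tracked wiring request take
	 * effect again, so pages are reserved for the entries once more.
	 */
	kr = vm_map_protect(map, start, end,
			    VM_PROT_READ|VM_PROT_WRITE, FALSE);
	assert(kr == KERN_SUCCESS);
}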
* ddb/db_ext_symtab.c: Update call to vm_map_pageable.
* i386/i386/user_ldt.c: Likewise.
* ipc/mach_port.c: Likewise.
* vm/vm_debug.c (mach_vm_region_info): Update values returned
as appropriate.
* vm/vm_map.c (vm_map_entry_copy): Update operation as appropriate.
(vm_map_setup): Update member names as appropriate.
(vm_map_find_entry): Update to account for map member variable changes.
(vm_map_enter): Likewise.
(vm_map_entry_inc_wired): New function.
(vm_map_entry_reset_wired): Likewise.
(vm_map_pageable_scan): Likewise.
(vm_map_protect): Update wired access, call vm_map_pageable_scan.
(vm_map_pageable_common): Rename to ...
(vm_map_pageable): ... and rewrite to use vm_map_pageable_scan.
(vm_map_entry_delete): Fix unwiring.
(vm_map_copy_overwrite): Replace inline code with a call to
vm_map_entry_reset_wired.
(vm_map_copyin_page_list): Likewise.
(vm_map_print): Likewise. Also print map size and wired size.
(vm_map_copyout_page_list): Update to account for map member variable
changes.
* vm/vm_map.h (struct vm_map_entry): Remove `user_wired_count' member,
add `wired_access' member.
(struct vm_map): Rename `user_wired' member to `size_wired'.
(vm_map_pageable_common): Remove function.
(vm_map_pageable_user): Remove macro.
(vm_map_pageable): Replace macro with function declaration; a sketch of
the resulting interface follows this list.
* vm/vm_user.c (vm_wire): Update call to vm_map_pageable.
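For readability, here is a hedged sketch of what the vm/vm_map.h interface
described above might look like after this change. Only the member and
function names are taken from the ChangeLog entries; the types, the parameter
names of vm_map_pageable(), and everything shown as elided are assumptions
for illustration rather than the actual header.

/* Sketch of struct vm_map_entry after this change (assumed types).  */
struct vm_map_entry {
	/* ... unrelated members elided ... */
	vm_prot_t	protection;	/* current protection */
	vm_prot_t	wired_access;	/* new: access requested by the
					   wiring, tracked so that wiring
					   can follow protection changes */
	/* `user_wired_count' is removed by this commit.  */
};

/* Sketch of struct vm_map after this change (assumed type).  */
struct vm_map {
	/* ... unrelated members elided ... */
	vm_size_t	size_wired;	/* renamed from `user_wired' */
};

/* vm_map_pageable() was a macro wrapping vm_map_pageable_common(); it
   is now declared as a function, and vm_map_pageable_user() is gone.
   Parameter names below are assumptions; the argument order matches
   the calls in the diff.  */
extern kern_return_t vm_map_pageable(
	vm_map_t	map,
	vm_offset_t	start,
	vm_offset_t	end,
	vm_prot_t	access_type,
	boolean_t	lock_map,
	boolean_t	check_range);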
Diffstat (limited to 'ipc')
-rw-r--r--    ipc/mach_port.c    12
1 file changed, 6 insertions, 6 deletions
diff --git a/ipc/mach_port.c b/ipc/mach_port.c
index 93a1248f..5cc39984 100644
--- a/ipc/mach_port.c
+++ b/ipc/mach_port.c
@@ -216,11 +216,11 @@ mach_port_names(
 		/* can't fault while we hold locks */
 		kr = vm_map_pageable(ipc_kernel_map, addr1, addr1 + size,
-				     VM_PROT_READ|VM_PROT_WRITE);
+				     VM_PROT_READ|VM_PROT_WRITE, TRUE, TRUE);
 		assert(kr == KERN_SUCCESS);
 		kr = vm_map_pageable(ipc_kernel_map, addr2, addr2 + size,
-				     VM_PROT_READ|VM_PROT_WRITE);
+				     VM_PROT_READ|VM_PROT_WRITE, TRUE, TRUE);
 		assert(kr == KERN_SUCCESS);
 	}
 	/* space is read-locked and active */
@@ -263,12 +263,12 @@ mach_port_names(
 		kr = vm_map_pageable(ipc_kernel_map,
 				     addr1, addr1 + size_used,
-				     VM_PROT_NONE);
+				     VM_PROT_NONE, TRUE, TRUE);
 		assert(kr == KERN_SUCCESS);
 		kr = vm_map_pageable(ipc_kernel_map,
 				     addr2, addr2 + size_used,
-				     VM_PROT_NONE);
+				     VM_PROT_NONE, TRUE, TRUE);
 		assert(kr == KERN_SUCCESS);
 		kr = vm_map_copyin(ipc_kernel_map, addr1, size_used,
@@ -938,7 +938,7 @@ mach_port_get_set_status(
 		/* can't fault while we hold locks */
 		kr = vm_map_pageable(ipc_kernel_map, addr, addr + size,
-				     VM_PROT_READ|VM_PROT_WRITE);
+				     VM_PROT_READ|VM_PROT_WRITE, TRUE, TRUE);
 		assert(kr == KERN_SUCCESS);
 		kr = ipc_right_lookup_read(space, name, &entry);
@@ -1003,7 +1003,7 @@ mach_port_get_set_status(
 		kr = vm_map_pageable(ipc_kernel_map,
 				     addr, addr + size_used,
-				     VM_PROT_NONE);
+				     VM_PROT_NONE, TRUE, TRUE);
 		assert(kr == KERN_SUCCESS);
 		kr = vm_map_copyin(ipc_kernel_map, addr, size_used,