* vm/vm_page.c (vm_page_setup): %lu -> %zu
vm/vm_page.c: In function 'vm_page_setup':
vm/vm_page.c:1425:41: warning: format '%lu' expects argument of type 'long unsigned int', but argument 2 has type 'size_t' {aka 'unsigned int'} [-Wformat=]
1425 | printf("vm_page: page table size: %lu entries (%luk)\n", nr_pages,
| ~~^ ~~~~~~~~
| | |
| long unsigned int size_t {aka unsigned int}
| %u
vm/vm_page.c:1425:54: warning: format '%lu' expects argument of type 'long unsigned int', but argument 3 has type 'size_t' {aka 'unsigned int'} [-Wformat=]
1425 | printf("vm_page: page table size: %lu entries (%luk)\n", nr_pages,
| ~~^
| |
| long unsigned int
| %u
1426 | table_size >> 10);
| ~~~~~~~~~~~~~~~~
| |
| size_t {aka unsigned int}
Message-ID: <20241020190744.2522-1-jbranso@dismail.de>
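
For reference, C99 defines %zu as the conversion for size_t, so the
warning-free call looks like this; a minimal, self-contained sketch
(the values are made up):

    #include <stdio.h>

    int main(void)
    {
        size_t nr_pages = 4096;
        size_t table_size = nr_pages * 96;

        /* %zu matches size_t on every target, whereas %lu assumes
         * unsigned long and warns where size_t is unsigned int. */
        printf("vm_page: page table size: %zu entries (%zuk)\n",
               nr_pages, table_size >> 10);
        return 0;
    }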
struct vm_page is supposed to be a "small structure", but it takes up 96
bytes on x86_64 (to represent a 4k page). By utilizing bitfields and
strategically reordering members to avoid excessive padding, it can be
shrunk to 80 bytes.
- page_lock and unlock_request only need to store a bitmask of
VM_PROT_READ, VM_PROT_WRITE, and VM_PROT_EXECUTE. Even though the
special values VM_PROT_NO_CHANGE and VM_PROT_NOTIFY are defined, they
are not used for the two struct vm_page members.
- type and seg_index both need to store one of the four possible values
in the range from 0 to 3. Two bits are sufficient for this.
- order needs to store a number from 0 to VM_PAGE_NR_FREE_LISTS (which
is 11), or a special value VM_PAGE_ORDER_UNLISTED. Four bits are
sufficient for this.
No functional change.
Message-Id: <20230626112656.435622-2-bugaevc@gmail.com>
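
As an illustration of the packing described above (the field widths are
the minima stated in the message; this is not the exact upstream
layout):

    struct vm_page_bits_example {
        unsigned int page_lock : 3;      /* VM_PROT_{READ,WRITE,EXECUTE} mask */
        unsigned int unlock_request : 3; /* same mask */
        unsigned int type : 2;           /* one of four values, 0..3 */
        unsigned int seg_index : 2;      /* one of four segments, 0..3 */
        unsigned int order : 4;          /* 0..11, or VM_PAGE_ORDER_UNLISTED */
    };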
The documentation of vm_page_insert says that the object must be locked.
Moreover, the unlock call was present, but no matching lock call was.
Message-Id: <20230208225436.23365-1-etienne.brateau@gmail.com>
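
A sketch of the discipline the fix restores, assuming the usual Mach
vm_object_lock()/vm_object_unlock() interface and illustrative
argument names:

    /* The object must be locked across the insertion; the missing
     * lock call is added to match the unlock that was already there. */
    vm_object_lock(object);
    vm_page_insert(mem, object, offset);
    vm_object_unlock(object);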
It seems we hit the "unable to recycle any page" panic even when there
is no memory pressure, probably just because the pageout thread somehow
got kicked while there was nothing left to page out.
* vm/vm_page.c (db_show_vmstat): Add printing the segment size.
* vm/vm_page.c (db_show_vmstat): Drop displaying cache numbers a second
time.
* vm/vm_page.c (db_show_vmstat)
Add a `show vmstat' command, with an output similar to the userland
vmstat command.
* vm/vm_page.c (db_show_vmstat): New function.
* vm/vm_page.h (db_show_vmstat): New prototype.
* ddb/db_command.c (db_show_cmds): Add vmstat command.
Commit eb07428ffb0009085fcd01dd1b79d9953af8e0ad does fix pageout of
external objects backed by the default pager, but the way it's done
has a vicious side effect: because they're considered external, the
pageout daemon can keep evicting them even though the external pagers
haven't released them, unlike internal pages which must all be
released before the pageout daemon can make progress. This can lead
to a situation where too many pages become wired, the default pager
cannot allocate memory to process new requests, and the pageout
daemon cannot recycle any more pages, causing a panic.
This change makes the pageout daemon use the same strategy for both
internal pages and external pages sent to the default pager: use
the laundry bit and wait for all laundry pages to be released,
thereby completely synchronizing the pageout daemon and the default
pager.
* vm/vm_page.c (vm_page_can_move): Allow external laundry pages to
be moved.
(vm_page_seg_evict): Don't alter the `external_laundry' bit, merely
disable double paging for external pages sent to the default pager.
* vm/vm_pageout.c: Include vm/memory_object.h.
(vm_pageout_setup): Don't check whether the `external_laundry' bit
is set, but handle external pages sent to the default pager the same
as internal pages.
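
An illustrative sketch (not the actual kernel code) of the laundry
accounting that provides this synchronization; the wakeup convention is
an assumption based on the classic Mach scheme:

    /* Pageout side: mark and count the page before sending it to
     * the default pager. */
    m->laundry = TRUE;
    vm_page_laundry_count++;

    /* Release side, once the default pager frees the page: */
    vm_page_laundry_count--;

    if (vm_page_laundry_count == 0)
        thread_wakeup((event_t) &vm_page_laundry_count);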
Double paging on such objects causes deadlocks.
* vm/vm_page.c: Include <vm/memory_object.h>.
(vm_page_seg_evict): Rename laundry to double_paging to increase
clarity. Set the `external_laundry' bit when evicting a page
from an external object backed by the default pager.
* vm/vm_pageout.c (vm_pageout_setup): Wire page if the
`external_laundry' bit is set.
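
A minimal sketch of the vm_pageout_setup() change, with `m' as the page
being set up:

    /* Instead of double paging (which deadlocks), keep the page
     * wired until the external pager acknowledges it. */
    if (m->external_laundry)
        vm_page_wire(m);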
Unlike laundry pages sent to the default pager, pages marked with the
`external_laundry' bit remain in the page queues and must be filtered
out by the pageability check.
* vm/vm_page.c (vm_page_can_move): Check the `external_laundry' bit.
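
A hedged sketch of the filter: `external_laundry' comes from this
change, the other members are the usual vm_page flags, and the real
check may reject more cases:

    static boolean_t
    vm_page_can_move(const struct vm_page *m)
    {
        /* Laundry, wired and busy pages must stay put; external
         * laundry pages remain in the queues, so reject them too. */
        if (m->busy || (m->wire_count != 0)
            || m->laundry || m->external_laundry)
            return FALSE;

        return TRUE;
    }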
Since the VM system has been tracking whether pages belong to internal
or external objects, pageout throttling to external pagers has simply
not been working. The reason is that, on pageout, requests for external
pages are correctly tracked, but on page release (which is used to
acknowledge the request), external pages are not marked external
any more. This is because the external bit tracks whether a page
belongs to an external object, and all pages, including external
ones, are moved to an internal object during pageout.
To solve this issue, a new "external_laundry" bit is added. It has
the same purpose as the laundry bit, but for external pagers.
* vm/vm_page.c (vm_page_seg_min_page_available): Function unused, remove.
(vm_page_seg_evict): Use vm_page_external_laundry_count instead of
vm_page_external_pagedout. Add an assertion about double paging.
(vm_page_check_usable): Use vm_page_external_laundry_count instead of
vm_page_external_pagedout.
(vm_page_evict): Likewise.
* vm/vm_page.h (struct vm_page): New `external_laundry' member.
(vm_page_external_pagedout): Rename to ...
(vm_page_external_laundry_count): ... this.
* vm/vm_pageout.c: Include kern/printf.h.
(DEBUG): New macro.
(VM_PAGEOUT_TIMEOUT): Likewise.
(vm_pageout_setup): Use vm_page_external_laundry_count instead of
vm_page_external_pagedout. Set `external_laundry' where appropriate.
(vm_pageout): Use VM_PAGEOUT_TIMEOUT with thread_set_timeout.
Add debugging code, commented out by default.
* vm/vm_resident.c (vm_page_external_pagedout): Rename to ...
(vm_page_external_laundry_count): ... this.
(vm_page_init_template): Set `external_laundry' member to FALSE.
(vm_page_release): Rename external parameter to external_laundry.
Slightly change pageout resuming.
(vm_page_free): Rename external variable to external_laundry.
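
A sketch of the release-side accounting after the rename (the names
come from the entries above; the body is illustrative):

    /* vm_page_release() acknowledges pageout requests; the renamed
     * parameter tells external laundry apart from internal laundry
     * even though the page now sits in an internal object. */
    if (external_laundry)
        vm_page_external_laundry_count--;
    else if (laundry)
        vm_page_laundry_count--;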
Instead of determining if memory is low, directly use the
vm_page_alloc_paused variable, which is true from the time memory
reaches a minimum threshold until it gets back above the high
thresholds.
This makes sure double paging is used when external pagers are unable
to allocate memory.
* vm/vm_page.c (vm_page_seg_evict): Rename low_memory to alloc_paused.
(vm_page_evict_once): Remove low_memory and its computation. Blindly
pass the new alloc_paused argument instead.
(vm_page_evict): Pass the value of vm_page_alloc_paused to
vm_page_evict_once.
* vm/vm_page.c (vm_page_evict): Test both vm_page_external_pagedout
and vm_page_laundry_count in order to determine whether there was
"no pageout".
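
The test reads roughly as follows (a sketch built from the entry above;
the surrounding logic is assumed):

    /* "No pageout" holds only when neither external pageout requests
     * nor internal laundry pages are outstanding. */
    if ((vm_page_external_pagedout == 0) && (vm_page_laundry_count == 0)) {
        /* nothing was queued for pageout during this pass */
    }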
When checking whether to continue paging out or not, the pageout daemon
only considers the high free page threshold of a segment. But if e.g.
the default pager had to allocate reserved pages during a previous
pageout cycle, it could have exhausted a segment (this is currently
only seen with the DMA segment). In that case, the high threshold
cannot be reached because the segment currently has no pageable page.
This change makes the pageout daemon identify this condition and
consider the segment as usable in order to make progress. The segment
will simply be ignored on the allocation path for unprivileged threads,
and if this happens with too many segments, the system will fail at
allocation time.
* vm/vm_page.c (vm_page_seg_usable): Report usable if the segment has
no pageable page.
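
A hedged sketch of the adjusted check; the member names follow this log
(`nr_free_pages' is an assumption):

    static boolean_t
    vm_page_seg_usable(const struct vm_page_seg *seg)
    {
        if ((seg->nr_active_pages + seg->nr_inactive_pages) == 0) {
            /* No pageable page at all: the high threshold can never
             * be reached, so report the segment usable to let the
             * pageout daemon make progress. */
            return TRUE;
        }

        return seg->nr_free_pages >= seg->high_free_pages;
    }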
* i386/i386at/biosmem.c (biosmem_setup): Load the HIGHMEM segment if
present.
(biosmem_free_usable): Report high memory as usable.
* vm/vm_page.c (vm_page_boot_table_size, vm_page_table_size,
vm_page_mem_size, vm_page_mem_free): Scan all segments.
* vm/vm_resident.c (vm_page_grab): Describe allocation strategy
with regard to the HIGHMEM segment.
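
The "scan all segments" pattern these functions adopt looks roughly
like this (`vm_page_segs', `vm_page_segs_size' and `nr_free_pages' are
assumed names for the segment table):

    unsigned long
    vm_page_mem_free(void)
    {
        unsigned long total = 0;
        unsigned int i;

        /* Account every segment, including HIGHMEM, rather than
         * only the directly mapped ones. */
        for (i = 0; i < vm_page_segs_size; i++)
            total += vm_page_segs[i].nr_free_pages;

        return total;
    }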
As we're about to use a new HIGHMEM segment, potentially much larger
than the existing DMA and DIRECTMAP ones, it's now compulsory to make
the pageout daemon aware of those segments.
And while we're at it, let's fix some of the defects that have been
plaguing pageout forever, such as throttling, and pageout of internal
versus external pages (this commit notably introduces a hardcoded
policy in which as many external pages as possible are selected before
considering internal pages; see the sketch after the ChangeLog below).
* kern/slab.c (kmem_pagefree_physmem): Update call to vm_page_release.
* vm/vm_page.c: Include <kern/counters.h> and <vm/vm_pageout.h>.
(VM_PAGE_SEG_THRESHOLD_MIN_NUM, VM_PAGE_SEG_THRESHOLD_MIN_DENOM,
VM_PAGE_SEG_THRESHOLD_MIN, VM_PAGE_SEG_THRESHOLD_LOW_NUM,
VM_PAGE_SEG_THRESHOLD_LOW_DENOM, VM_PAGE_SEG_THRESHOLD_LOW,
VM_PAGE_SEG_THRESHOLD_HIGH_NUM, VM_PAGE_SEG_THRESHOLD_HIGH_DENOM,
VM_PAGE_SEG_THRESHOLD_HIGH, VM_PAGE_SEG_MIN_PAGES,
VM_PAGE_HIGH_ACTIVE_PAGE_NUM, VM_PAGE_HIGH_ACTIVE_PAGE_DENOM): New macros.
(struct vm_page_queue): New type.
(struct vm_page_seg): Add new members `min_free_pages', `low_free_pages',
`high_free_pages', `active_pages', `nr_active_pages', `high_active_pages',
`inactive_pages', `nr_inactive_pages'.
(vm_page_alloc_paused): New variable.
(vm_page_pageable, vm_page_can_move, vm_page_remove_mappings): New functions.
(vm_page_seg_alloc_from_buddy): Pause allocations and start the pageout
daemon as appropriate.
(vm_page_queue_init, vm_page_queue_push, vm_page_queue_remove,
vm_page_queue_first, vm_page_seg_get, vm_page_seg_index,
vm_page_seg_compute_pageout_thresholds): New functions.
(vm_page_seg_init): Initialize the new segment members.
(vm_page_seg_add_active_page, vm_page_seg_remove_active_page,
vm_page_seg_add_inactive_page, vm_page_seg_remove_inactive_page,
vm_page_seg_pull_active_page, vm_page_seg_pull_inactive_page,
vm_page_seg_pull_cache_page): New functions.
(vm_page_seg_min_page_available, vm_page_seg_page_available,
vm_page_seg_usable, vm_page_seg_double_lock, vm_page_seg_double_unlock,
vm_page_seg_balance_page, vm_page_seg_balance, vm_page_seg_evict,
vm_page_seg_compute_high_active_page, vm_page_seg_refill_inactive,
vm_page_lookup_seg, vm_page_check): New functions.
(vm_page_alloc_pa): Handle allocation failure from VM privileged thread.
(vm_page_info_all): Display additional segment properties.
(vm_page_wire, vm_page_unwire, vm_page_deactivate, vm_page_activate,
vm_page_wait): Move from vm/vm_resident.c and rewrite to use segments.
(vm_page_queues_remove, vm_page_check_usable, vm_page_may_balance,
vm_page_balance_once, vm_page_balance, vm_page_evict_once): New functions.
(VM_PAGE_MAX_LAUNDRY, VM_PAGE_MAX_EVICTIONS): New macros.
(vm_page_evict, vm_page_refill_inactive): New functions.
* vm/vm_page.h: Include <kern/list.h>.
(struct vm_page): Remove member `pageq', reuse the `node' member instead,
move the `listq' and `next' members above `vm_page_header'.
(VM_PAGE_CHECK): Define as an alias to vm_page_check.
(vm_page_check): New function declaration.
(vm_page_queue_fictitious, vm_page_queue_active, vm_page_queue_inactive,
vm_page_free_target, vm_page_free_min, vm_page_inactive_target,
vm_page_free_reserved, vm_page_free_wanted): Remove extern declarations.
(vm_page_external_pagedout): New extern declaration.
(vm_page_release): Update declaration.
(VM_PAGE_QUEUES_REMOVE): Define as an alias to vm_page_queues_remove.
(VM_PT_PMAP, VM_PT_KMEM, VM_PT_STACK): Remove macros.
(VM_PT_KERNEL): Update value.
(vm_page_queues_remove, vm_page_balance, vm_page_evict,
vm_page_refill_inactive): New function declarations.
* vm/vm_pageout.c (VM_PAGEOUT_BURST_MAX, VM_PAGEOUT_BURST_MIN,
VM_PAGEOUT_BURST_WAIT, VM_PAGEOUT_EMPTY_WAIT, VM_PAGEOUT_PAUSE_MAX,
VM_PAGE_INACTIVE_TARGET, VM_PAGE_FREE_TARGET, VM_PAGE_FREE_MIN,
VM_PAGE_FREE_RESERVED, VM_PAGEOUT_RESERVED_INTERNAL,
VM_PAGEOUT_RESERVED_REALLY): Remove macros.
(vm_pageout_reserved_internal, vm_pageout_reserved_really,
vm_pageout_burst_max, vm_pageout_burst_min, vm_pageout_burst_wait,
vm_pageout_empty_wait, vm_pageout_pause_count, vm_pageout_pause_max,
vm_pageout_active, vm_pageout_inactive, vm_pageout_inactive_nolock,
vm_pageout_inactive_busy, vm_pageout_inactive_absent,
vm_pageout_inactive_used, vm_pageout_inactive_clean,
vm_pageout_inactive_dirty, vm_pageout_inactive_double,
vm_pageout_inactive_cleaned_external): Remove variables.
(vm_pageout_requested, vm_pageout_continue): New variables.
(vm_pageout_setup): Wait for page allocation to succeed instead of
falling back to flush, update double paging protocol with caller,
add pageout throttling setup.
(vm_pageout_scan): Rewrite to use the new vm_page balancing,
eviction and inactive queue refill functions.
(vm_pageout_scan_continue, vm_pageout_continue): Remove functions.
(vm_pageout): Rewrite.
(vm_pageout_start, vm_pageout_resume): New functions.
* vm/vm_pageout.h (vm_pageout_continue, vm_pageout_scan_continue): Remove
function declarations.
(vm_pageout_start, vm_pageout_resume): New function declarations.
* vm/vm_resident.c: Include <kern/list.h>.
(vm_page_queue_fictitious): Define as a struct list.
(vm_page_free_wanted, vm_page_external_count, vm_page_free_avail,
vm_page_queue_active, vm_page_queue_inactive, vm_page_free_target,
vm_page_free_min, vm_page_inactive_target, vm_page_free_reserved):
Remove variables.
(vm_page_external_pagedout): New variable.
(vm_page_bootstrap): Don't initialize removed variable, update
initialization of vm_page_queue_fictitious.
(vm_page_replace): Call VM_PAGE_QUEUES_REMOVE where appropriate.
(vm_page_remove): Likewise.
(vm_page_grab_fictitious): Update to use list_xxx functions.
(vm_page_release_fictitious): Likewise.
(vm_page_grab): Remove pageout related code.
(vm_page_release): Add `laundry' and `external' parameters for
pageout throttling.
(vm_page_grab_contig): Remove pageout related code.
(vm_page_free_contig): Likewise.
(vm_page_free): Remove pageout related code, update call to
vm_page_release.
(vm_page_wait, vm_page_wire, vm_page_unwire, vm_page_deactivate,
vm_page_activate): Move to vm/vm_page.c.
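
As flagged above, an illustrative sketch (not upstream code) of the
hardcoded selection policy; both helpers are hypothetical:

    static struct vm_page *
    vm_page_seg_pull_page_example(struct vm_page_seg *seg)
    {
        struct vm_page *m;

        /* Prefer external pages: their pagers can write them back
         * without involving the default pager. */
        m = pull_inactive_external_page(seg);    /* hypothetical */

        if (m != NULL)
            return m;

        /* Fall back to internal pages only afterwards. */
        return pull_inactive_internal_page(seg); /* hypothetical */
    }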
The old assumption that all physical memory is directly mapped in
kernel space is about to go away. Those variables are directly linked
to that assumption.
* i386/i386/model_dep.h (phys_first_addr): Remove extern declaration.
(phys_last_addr): Likewise.
* i386/i386/phys.c (pmap_zero_page): Use VM_PAGE_DIRECTMAP_LIMIT
instead of phys_last_addr.
(pmap_copy_page, copy_to_phys, copy_from_phys): Likewise.
* i386/i386/trap.c (user_trap): Remove check against phys_last_addr.
* i386/i386at/biosmem.c (biosmem_bootstrap_common): Don't set
phys_last_addr.
* i386/i386at/mem.c (memmmap): Use vm_page_lookup_pa to determine if
a physical address references physical memory.
* i386/i386at/model_dep.c (phys_first_addr): Remove variable.
(phys_last_addr): Likewise.
(pmap_free_pages, pmap_valid_page): Remove functions.
* i386/intel/pmap.c: Include i386at/biosmem.h.
(pa_index): Turn into an alias for vm_page_table_index.
(pmap_bootstrap): Replace uses of phys_first_addr and phys_last_addr
as appropriate.
(pmap_virtual_space): Use vm_page_table_size instead of phys_first_addr
and phys_last_addr to obtain the number of physical pages.
(pmap_verify_free): Remove function.
(valid_page): Turn this macro into an inline function and rewrite
using vm_page_lookup_pa.
(pmap_page_table_page_alloc): Build the pmap VM object using
vm_page_table_size to determine its size.
(pmap_remove_range, pmap_page_protect, phys_attribute_clear,
phys_attribute_test): Turn page indexes into unsigned long integers.
(pmap_enter): Likewise. In addition, use either vm_page_lookup_pa or
biosmem_directmap_end to determine if a physical address references
physical memory.
* i386/xen/xen.c (hyp_p2m_init): Use vm_page_table_size instead of
phys_last_addr to obtain the number of physical pages.
* kern/startup.c (phys_first_addr): Remove extern declaration.
(phys_last_addr): Likewise.
* linux/dev/init/main.c (linux_init): Use vm_page_seg_end with the
appropriate segment selector instead of phys_last_addr to determine
where high memory starts.
* vm/pmap.h: Update requirements description.
(pmap_free_pages, pmap_valid_page): Remove declarations.
* vm/vm_page.c (vm_page_seg_end, vm_page_boot_table_size,
vm_page_table_size, vm_page_table_index): New functions.
* vm/vm_page.h (vm_page_seg_end, vm_page_table_size,
vm_page_table_index): New function declarations.
* vm/vm_resident.c (vm_page_bucket_count, vm_page_hash_mask): Define
as unsigned long integers.
(vm_page_bootstrap): Compute VP table size based on the page table
size instead of the value returned by pmap_free_pages.
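
The valid_page() rewrite described above reduces to a table lookup; a
sketch, assuming vm_page_lookup_pa() returns NULL for addresses outside
physical memory:

    static inline boolean_t
    valid_page(phys_addr_t addr)
    {
        /* An address references physical memory iff the physical
         * page table has an entry for it. */
        return vm_page_lookup_pa(addr) != NULL;
    }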
Import upstream biosmem and vm_page changes, and adjust for local
modifications.
Specifically, the biosmem module was mistakenly loading physical
segments that did not clip with the heap as completely available.
This change makes it load them as completely unavailable during
startup, and once the VM system is ready, additional pages are
loaded.
* i386/i386at/biosmem.c (DEBUG): New macro.
(struct biosmem_segment): Remove members `avail_start' and `avail_end'.
(biosmem_heap_cur): Remove variable.
(biosmem_heap_bottom, biosmem_heap_top, biosmem_heap_topdown): New variables.
(biosmem_find_boot_data_update, biosmem_find_boot_data): Remove functions.
(biosmem_find_heap_clip, biosmem_find_heap): New functions.
(biosmem_setup_allocator): Rewritten to use the new biosmem_find_heap
function.
(biosmem_bootalloc): Support both bottom-up and top-down allocations.
(biosmem_directmap_size): Renamed to ...
(biosmem_directmap_end): ... this function.
(biosmem_load_segment): Fix segment loading.
(biosmem_setup): Restrict usable memory to the directmap segment.
(biosmem_free_usable_range): Add checks on input parameters.
(biosmem_free_usable_update_start, biosmem_free_usable_start,
biosmem_free_usable_reserved, biosmem_free_usable_end): Remove functions.
(biosmem_free_usable_entry): Rewritten to use the new biosmem_find_heap
function.
(biosmem_free_usable): Restrict usable memory to the directmap segment.
* i386/i386at/biosmem.h (biosmem_bootalloc): Update description.
(biosmem_directmap_size): Renamed to ...
(biosmem_directmap_end): ... this function.
(biosmem_free_usable): Update declaration.
* i386/i386at/model_dep.c (machine_init): Call biosmem_free_usable.
* vm/vm_page.c (DEBUG): New macro.
(struct vm_page_seg): New member `heap_present'.
(vm_page_load): Remove heap related parameters.
(vm_page_load_heap): New function.
* vm/vm_page.h (vm_page_load): Remove heap related parameters. Update
description.
(vm_page_load_heap): New function.
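
From the caller's side, the two-phase protocol then looks like this
(the segment selector and ranges are illustrative):

    /* At startup: register the segment with no page available yet. */
    vm_page_load(VM_PAGE_SEG_DIRECTMAP, seg_start, seg_end);

    /* Once the VM system is ready: hand over the heap pages. */
    vm_page_load_heap(VM_PAGE_SEG_DIRECTMAP, heap_start, heap_end);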
Caches that use external slab data but allocate slabs from the direct
physical mapping can look up slab data in constant time by associating
the slab data directly with the underlying page.
* kern/slab.c (kmem_slab_use_tree): Take KMEM_CF_DIRECTMAP into account.
(kmem_slab_create): Set page private data if relevant.
(kmem_slab_destroy): Clear page private data if relevant.
(kmem_cache_free_to_slab): Use page private data if relevant.
* vm/vm_page.c (vm_page_init_pa): Set `priv' member to NULL.
* vm/vm_page.h (vm_page_set_priv, vm_page_get_priv): New functions.
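
The constant-time lookup then works as sketched below; kvtophys() and
vm_page_lookup_pa() are assumed available to the slab code:

    /* At slab creation: stash the slab descriptor in the page. */
    page = vm_page_lookup_pa(kvtophys((vm_offset_t)slab_buf));
    vm_page_set_priv(page, slab);

    /* On free: recover the slab in constant time instead of a
     * tree lookup. */
    slab = vm_page_get_priv(page);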
A few errors were introduced in the latest changes.
o Add VM_PAGE_WAIT calls around physical allocation attempts in case of
memory exhaustion.
o Fix stack release.
o Fix memory exhaustion report.
o Fix free page accounting.
* kern/slab.c (kmem_pagealloc, kmem_pagefree): New functions.
(kmem_slab_create, kmem_slab_destroy, kalloc, kfree): Use kmem_pagealloc
and kmem_pagefree instead of the raw page allocation functions.
(kmem_cache_compute_sizes): Don't store slab order.
* kern/slab.h (struct kmem_cache): Remove `slab_order' member.
* kern/thread.c (stack_alloc): Call VM_PAGE_WAIT in case of memory
exhaustion.
(stack_collect): Call vm_page_free_contig instead of kmem_free to
release pages.
* vm/vm_page.c (vm_page_seg_alloc): Fix memory exhaustion report.
(vm_page_setup): Don't update vm_page_free_count.
(vm_page_free_pa): Check page parameter.
(vm_page_mem_free): New function.
* vm/vm_page.h (vm_page_free_count): Remove extern declaration.
(vm_page_mem_free): New prototype.
* vm/vm_pageout.c: Update comments not to refer to vm_page_free_count.
(vm_pageout_scan, vm_pageout_continue, vm_pageout): Use vm_page_mem_free
instead of vm_page_free_count, update types accordingly.
* vm/vm_resident.c (vm_page_free_count, vm_page_free_count_minimum):
Remove variables.
(vm_page_free_avail): New variable.
(vm_page_bootstrap, vm_page_grab, vm_page_release, vm_page_grab_contig,
vm_page_free_contig, vm_page_wait): Use vm_page_mem_free instead of vm_page_free_count,
update types accordingly, don't set vm_page_free_count_minimum.
* vm/vm_user.c (vm_statistics): Likewise.
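
The kmem_pagealloc() retry described above follows the usual
VM_PAGE_WAIT pattern (a sketch; the allocation arguments are
illustrative):

    static struct vm_page *
    kmem_pagealloc_example(unsigned int order)
    {
        struct vm_page *page;

        for (;;) {
            page = vm_page_alloc_pa(order, VM_PAGE_SEL_DIRECTMAP,
                                    VM_PT_KMEM);

            if (page != NULL)
                return page;

            /* Memory exhausted: sleep until the pageout daemon
             * frees pages, then retry. */
            VM_PAGE_WAIT(0);
        }
    }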
* vm/vm_page.c: New file.
* vm/vm_page.h: Merge vm_page.h from X15.
(struct vm_page): New members: node, type, seg_index, order,
vm_page_header. Turn phys_addr into a phys_addr_t.