Commit log: i386/intel/pmap.c
 
The pmap module is a bit limited with 64-bit paging, so this should be
refined once we are able to use addresses above 4G.
Signed-off-by: Luca Dariz <luca@orpolo.org>
Message-Id: <20220205175129.309469-6-luca@orpolo.org>
 
* configure: compile for native x86_64 by default instead of xen
* x86_64/Makefrag.am: introduce KERNEL_MAP_BASE to reuse the constant
in both code and linker script
* x86_64/ldscript: use a .boot section for the very first operations,
until we reach long mode. This section is not really allocated, so
it doesn't need to be freed later. The vm system is later
initialized starting from .text and not including .boot
* link the kernel at 0x4000000, as in the xen version; higher values cause
linker errors
* we can't use full segmentation in long mode, so we need to create a
temporary mapping during early boot to be able to jump to high
addresses
* build a direct map for the first 4G in boothdr, as it seems to be required
by Linux drivers
* add INTEL_PTE_PS bit definition to enable 2MB pages during bootstrap
* ensure write bit is set in PDP entry access rights. This only
applies to PAE-enabled kernels, mandatory for x86_64. On xen
platform it seems to be handled differently
Signed-off-by: Luca Dariz <luca@orpolo.org>
Message-Id: <20220205175129.309469-2-luca@orpolo.org>
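A minimal sketch of the "direct map for the first 4G" step above, assuming
2 MiB pages selected via INTEL_PTE_PS; the symbol names (boot_pd,
boot_build_direct_map) and flag values are illustrative, not the actual
boothdr code:

    #include <stdint.h>

    #define PTE_P   0x001ULL   /* present */
    #define PTE_W   0x002ULL   /* writable */
    #define PTE_PS  0x080ULL   /* large (2 MiB) page, i.e. INTEL_PTE_PS */

    /* Four page directories, one per GiB, 512 x 2 MiB entries each. */
    static uint64_t boot_pd[4][512] __attribute__((aligned(4096)));

    static void boot_build_direct_map(void)
    {
        for (unsigned pd = 0; pd < 4; pd++)
            for (unsigned i = 0; i < 512; i++)
                boot_pd[pd][i] = (((uint64_t)pd << 30) | ((uint64_t)i << 21))
                                 | PTE_P | PTE_W | PTE_PS;
        /* Each boot_pd[pd] is then installed in PDPT slot pd, so the first
         * 4 GiB are mapped 1:1 before jumping to high addresses. */
    }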
 
Message-Id: <Yo+lzS7RtW5ZjQHN@debian>
 
vm_page_insert would not be able to store the offset.
 
The map function is supposed to return physical addresses, thus phys_addr_t.
 
User processes loaded in high memory are not visible to ddb through the
direct mapping. We can however read/write data through copy_from/to_phys.
 
vm_page_grab was systematically using the VM_PAGE_SEL_DIRECTMAP selector
to play safe with existing code.
This adds a flags parameter to let callers of vm_page_grab specify their
constraints.
Linux drivers need 32-bit DMA, and Xen drivers use kvtophys to clear some
data. Callers of kmem_pagealloc_physmem and vm_page_grab_phys_addr also use
kvtophys. All other allocations can go to highmem.
This fixes the allocation jam in the directmap segment.
* vm/vm_page.h (VM_PAGE_DMA, VM_PAGE_DMA32, VM_PAGE_DIRECTMAP,
VM_PAGE_HIGHMEM): New macros.
(vm_page_grab): Add flags parameter.
* vm/vm_resident.c (vm_page_grab): Choose allocation selector according
to flags parameter.
(vm_page_convert, vm_page_alloc): Pass VM_PAGE_HIGHMEM to vm_page_grab.
(vm_page_grab_phys_addr): Pass VM_PAGE_DIRECTMAP to vm_page_grab.
* vm/vm_fault.c (vm_fault_page): Pass VM_PAGE_HIGHMEM to vm_page_grab.
* vm/vm_map.c (vm_map_copy_steal_pages): Pass VM_PAGE_HIGHMEM to vm_page_grab.
* kern/slab.c (kmem_pagealloc_physmem): Pass VM_PAGE_DIRECTMAP to vm_page_grab.
* i386/intel/pmap.c (pmap_page_table_page_alloc): Pass VM_PAGE_DIRECTMAP to
vm_page_grab.
* xen/block.c (device_read): Pass VM_PAGE_DIRECTMAP to vm_page_grab.
* linux/dev/glue/block.c (alloc_buffer): Pass VM_PAGE_DMA32 to vm_page_grab.
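A sketch of how the new flags can translate into a segment selector inside
vm_page_grab, assuming a matching VM_PAGE_SEL_* selector exists for each
flag (VM_PAGE_SEL_DIRECTMAP is the one named above); the helper and its
exact logic are assumptions, not the committed implementation:

    /* Pick the least constrained segment the caller allows. */
    static unsigned int vm_page_flags_to_selector(unsigned int flags)
    {
        if (flags & VM_PAGE_HIGHMEM)
            return VM_PAGE_SEL_HIGHMEM;    /* anywhere */
        if (flags & VM_PAGE_DIRECTMAP)
            return VM_PAGE_SEL_DIRECTMAP;  /* must stay kvtophys-addressable */
        if (flags & VM_PAGE_DMA32)
            return VM_PAGE_SEL_DMA32;      /* below 4 GiB, for 32-bit DMA */
        return VM_PAGE_SEL_DMA;            /* most constrained default */
    }

A caller such as the Linux block glue then simply passes VM_PAGE_DMA32 to
vm_page_grab and never receives a highmem page.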
 
Linux drivers are not the only ones that need the BIOS area mapped.
* i386/i386at/model_dep.c (i386at_init) [!LINUX_DEV]: Map the low
memory.
* i386/intel/pmap.c (pmap_create) [!LINUX_DEV]: Likewise.
 
As described in the XXX comments, the memory-mapping workaround is only
needed on the 80386 and is unnecessary on the i486 or later. Thus, it is safe
to omit it when the kernel is built for these more recent (1989 onwards)
processors.
Fuhito Inagawa pointed out the problem to me.
* i386/i386/trap.c (kernel_trap): Disable the workaround when the kernel
is built for i[456]86.
* i386/intel/pmap.c (pmap_protect, pmap_enter): Ditto.
* i386/intel/read_fault.c: Define intel_read_fault if and only if it is
necessary.
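A sketch of the compile-time guard this implies; the exact condition used in
the tree may be spelled differently, and the helper macro name is hypothetical:

    /* Only a real 80386 ignores the WP bit in supervisor mode, so the
     * read-fault workaround is only needed when targeting it. */
    #if !defined(__i486__) && !defined(__i586__) && !defined(__i686__)
    #define NEED_386_WP_WORKAROUND 1   /* hypothetical helper macro */
    #else
    #define NEED_386_WP_WORKAROUND 0   /* intel_read_fault not compiled in */
    #endif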
 
* i386/intel/pmap.c (pmap_enter): Fix debugging print format.
 
* configfrag.ac (--enable-ncpus): Add option to set $mach_ncpus.
* i386/i386/cpu_number.h (CPU_NUMBER, cpu_number): New macros, set to 0 for
now.
* i386/i386/db_interface.c (cpu_interrupt_to_db): New function.
* i386/i386/db_interface.h (cpu_interrupt_to_db): New declaration.
* i386/i386/mp_desc.c (int_stack_base): New array.
(intel_startCPU): New function.
* i386/i386at/model_dep.c: Include <i386/smp.h>.
(int_stack_top, int_stack_base): Turn into arrays.
(i386at_init): Update accesses accordingly.
* i386/i386at/model_dep.h (int_stack_top, int_stack_base, ON_INT_STACK):
Likewise.
* i386/intel/pmap.c (cpus_active, cpus_idle, cpu_update_needed): Add
variables.
* i386/intel/pmap.h (cpus_active, cpus_idle, cpu_update_needed): Mark
extern.
* kern/cpu_number.h: Include <machine/cpu_number.h>
* linux/dev/arch/i386/kernel/irq.c (local_bh_count, local_irq_count):
Hardcode to the address of intr_count. We will not use the Linux code in
SMP mode anyway.
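For reference, a sketch of the placeholder definitions this introduces (per
the entry above they just hard-code CPU 0 until real SMP support exists); the
header guard and layout are assumed:

    /* i386/i386/cpu_number.h -- single-CPU placeholders (sketch). */
    #ifndef _I386_CPU_NUMBER_H_
    #define _I386_CPU_NUMBER_H_

    /* Assembly variant: load the current CPU number into a register. */
    #define CPU_NUMBER(reg)  movl $0, reg

    /* C variant. */
    #define cpu_number()     0

    #endif /* _I386_CPU_NUMBER_H_ */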
 
* i386/intel/pmap.c (pmap_bootstrap): Reload base from boot_info at each
loop.
 
It seems some systems refuse the W bit in the pdp. Only enable it for
Xen PV tables which do require it.
* i386/intel/pmap.c (pmap_bootstrap, pmap_create) [!MACH_PV_PAGETABLES]:
Do not set INTEL_PTE_WRITE in pdpbase entries.
 
This was erroneously set to 0x1ff in 0b3504b6 ('pmap.h: Add 64bit
variant')
* i386/intel/pmap.h (PDPMASK) [PAE && !__x86_64__]: Set to 3.
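To see why 3 is correct: with 32-bit PAE the page directory pointer table has
only four entries, indexed by linear-address bits 31:30. A sketch of the
intended macros, with the shift value assumed:

    /* PAE on a 32-bit kernel: 4-entry PDPT indexed by bits 31:30. */
    #define PDPSHIFT        30
    #define PDPNUM          4               /* entries in the PDPT           */
    #define PDPMASK         3               /* was mistakenly 0x1ff (64-bit) */
    #define lin2pdpnum(la)  (((la) >> PDPSHIFT) & PDPMASK)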
 
* i386/intel/pmap.c (pmap_enter): Fix print format.
 
* i386/intel/pmap.c (pmap_bootstrap) [!MACH_PV_PAGETABLES]: Do not call
pmap_set_page_readonly_init.
 
* i386/intel/pmap.h (L4SHIFT, L4MASK, lin2l4num): New macros
(PDPNUM, PDPMASK, set_pmap): Add 64bit variant. Make PAE use the 64bit mask
too.
(pmap): Add l4base, user_l4base, user_pdpbase fields.
* i386/intel/pmap.c (pmap_bootstrap): Clear the whole PDP. Enable write
bit in PDP. Set user pagetable to NULL. Initialize l4base.
(pmap_clear_bootstrap_pagetable): Add 4th-level support.
(pmap_create): Clear the whole PDP. Enable write bit in PDP. Initialize
l4base, user_pdpbase, user_l4base.
(pmap_destroy): Clear l4base, user_pdpbase, user_l4base.
* i386/i386at/model_dep.c (i386at_init): Load l4base on 64bits.
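A sketch of the new fourth-level index macros; nine index bits per level and
a 39-bit shift follow from the x86_64 paging layout, though the committed
definitions may differ in form:

    /* x86_64: the L4 (PML4) index lives in linear-address bits 47:39. */
    #define L4SHIFT         39
    #define L4MASK          0x1ff                        /* 512 entries */
    #define lin2l4num(la)   (((la) >> L4SHIFT) & L4MASK)

    /* struct pmap gains the extra top level plus the user copies:
     *     pt_entry_t *l4base, *user_l4base, *user_pdpbase;  */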
 
* i386/intel/pmap.c (pmap_bootstrap, pmap_set_page_readwrite,
pmap_set_page_readonly, pmap_set_page_readonly_init,
pmap_clear_bootstrap_pagetable, pmap_map_mfn, pmap_destroy, pmap_enter,
pmap_collect, phys_attribute_clear): Fix format warnings.
 
Xen seems to want a whole page for the PAE pdp.
* i386/intel/pmap.c (pmap_init): Make pdpt cache use page-size
allocation.
 
* i386/intel/pmap.c (pmap_enter): Fix panic format.
 
* i386/i386/mp_desc.h [MULTIPROCESSOR] (interrupt_processor): Add
prototype.
* i386/intel/pmap.c [NCPUS > 1]: Include <i386/mp_desc.h>
 
* i386/intel/pmap.c: Include <i386/spl.h>.
 
* i386/intel/pmap.c (pmap_map_bd): Lock kernel_pmap, not just the
nonexistent pmap.
 
* i386/intel/pmap.c: Drop the register qualifier.
* ipc/ipc_kmsg.h: Likewise.
* kern/bootstrap.c: Likewise.
* kern/profile.c: Likewise.
* kern/thread.c: Likewise.
* vm/vm_object.c: Likewise.
 
* i386/intel/pmap.c (pmap_remove): Fix iteration over page directory.
(pmap_enter): Explain why it is ok here.
 
* i386/intel/pmap.c (pmap_remove_range): Make function static.
* i386/intel/pmap.h (pmap_remove_range): Remove declaration.
 
Memory wiring is about to be reworked, at which point the VM system
will properly track wired mappings. Removing them when changing
protection makes sense, and is fine as long as the VM system
rewires them when access is restored.
* i386/intel/pmap.c (pmap_page_protect): Decrease wiring count instead
of causing a panic when removing a wired mapping.
 
Previously, we used contiguous page directories four pages in length
when using PAE. To prevent physical memory fragmentation, we need to
use virtual memory for objects spanning multiple pages. Virtual
kernel memory, however, is a scarce commodity.
* i386/intel/pmap.h (lin2pdenum): Never include the page directory pointer table index.
(lin2pdenum_cont): New macro which does include said index.
(struct pmap): Remove the directory base pointer when using PAE.
* i386/intel/pmap.c (pmap_pde): Fix lookup.
(pmap_pte): Fix check for uninitialized pmap.
(pmap_bootstrap): Do not store the page directory base if PAE.
(pmap_init): Reduce size of page directories to one page, use
direct-mapped memory.
(pmap_create): Allocate four page directories per pmap.
(pmap_destroy): Adapt to the discontinuous directories.
(pmap_collect): Likewise.
* i386/i386/xen.h (hyp_mmu_update_la): Adapt code manipulating the
kernel's page directory.
* i386/i386at/model_dep.c (i386at_init): Likewise.
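The key point is the split between the two lookup macros: lin2pdenum now
indexes within a single page-sized (512-entry) directory, while
lin2pdenum_cont keeps indexing the four PAE directories as if they were one
contiguous array. A sketch, with the PAE shift and masks assumed:

    #define PDESHIFT            21                           /* PAE: 2 MiB per PDE */
    #define lin2pdenum(la)      (((la) >> PDESHIFT) & 0x1ff) /* within one PD      */
    #define lin2pdenum_cont(la) (((la) >> PDESHIFT) & 0x7ff) /* across all four    */

    /* pmap_pde() therefore first picks the directory via lin2pdpnum() and
     * only then applies lin2pdenum() inside it. */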
 
* i386/intel/pmap.c (pd_cache): New variable.
(pdp_cache): Likewise.
(pmap_init): Initialize new caches.
(pmap_create): Use the caches.
(pmap_destroy): Free to the caches.
 
Instead of a "page considered external", which apparently takes into
account whether a page is dirty or not, redefine this property to
reliably mean "is in an external object".
This commit mostly deals with the impact of this change on the page
allocation interface.
* i386/intel/pmap.c (pmap_page_table_page_alloc): Update call to
vm_page_grab.
* kern/slab.c (kmem_pagealloc_physmem): Use vm_page_grab instead of
vm_page_grab_contig.
(kmem_pagefree_physmem): Use vm_page_release instead of
vm_page_free_contig.
* linux/dev/glue/block.c (alloc_buffer, device_read): Update call
to vm_page_grab.
* vm/vm_fault.c (vm_fault_page): Update calls to vm_page_grab and
vm_page_convert.
* vm/vm_map.c (vm_map_copy_steal_pages): Update call to vm_page_grab.
* vm/vm_page.h (struct vm_page): Remove `extcounted' member.
(vm_page_external_limit, vm_page_external_count): Remove extern
declarations.
(vm_page_convert, vm_page_grab): Update declarations.
(vm_page_release, vm_page_grab_phys_addr): New function declarations.
* vm/vm_pageout.c (VM_PAGE_EXTERNAL_LIMIT): Remove macro.
(VM_PAGE_EXTERNAL_TARGET): Likewise.
(vm_page_external_target): Remove variable.
(vm_pageout_scan): Remove specific handling of external pages.
(vm_pageout): Don't set vm_page_external_limit and
vm_page_external_target.
* vm/vm_resident.c (vm_page_external_limit): Remove variable.
(vm_page_insert, vm_page_replace, vm_page_remove): Update external
page tracking.
(vm_page_convert): Remove `external' parameter.
(vm_page_grab): Likewise. Remove specific handling of external pages.
(vm_page_grab_phys_addr): Update call to vm_page_grab.
(vm_page_release): Remove `external' parameter and remove specific
handling of external pages.
(vm_page_wait): Remove specific handling of external pages.
(vm_page_alloc): Update call to vm_page_grab.
(vm_page_free): Update call to vm_page_release.
* xen/block.c (device_read): Update call to vm_page_grab.
* xen/net.c (device_write): Likewise.
 
* i386/i386/phys.c (pmap_zero_page, pmap_copy_page, copy_to_phys,
copy_from_phys, kvtophys): Use the phys_addr_t type for physical
addresses.
* i386/intel/pmap.c (pmap_map, pmap_map_bd, pmap_destroy,
pmap_remove_range, pmap_page_protect, pmap_enter, pmap_extract,
pmap_collect, phys_attribute_clear, phys_attribute_test,
pmap_clear_modify, pmap_is_modified, pmap_clear_reference,
pmap_is_referenced): Likewise.
* i386/intel/pmap.h (pt_entry_t): Unconditionally define as a
phys_addr_t.
(pmap_zero_page, pmap_copy_page, kvtophys): Use the phys_addr_t
type for physical addresses.
* vm/pmap.h (pmap_enter, pmap_page_protect, pmap_clear_reference,
pmap_is_referenced, pmap_clear_modify, pmap_is_modified,
pmap_extract, pmap_map_bd): Likewise.
* vm/vm_page.h (vm_page_fictitious_addr): Declare as a phys_addr_t.
* vm/vm_resident.c (vm_page_fictitious_addr): Likewise.
(vm_page_grab_phys_addr): Change return type to phys_addr_t.
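The motivation is that a physical address can be wider than a vm_offset_t
once PAE (and later x86_64) is in use. A sketch of the intended typing; the
exact location and form of the typedef are assumptions:

    /* Physical addresses may exceed 32 bits even on a 32-bit (PAE) kernel. */
    #if PAE
    typedef unsigned long long phys_addr_t;
    #else
    typedef unsigned long phys_addr_t;
    #endif

    /* Page table entries store physical addresses plus flag bits ...       */
    typedef phys_addr_t pt_entry_t;

    /* ... and the low-level helpers traffic in physical addresses as well. */
    extern phys_addr_t kvtophys(vm_offset_t addr);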
 
The old assumption that all physical memory is directly mapped in
kernel space is about to go away. Those variables are directly linked
to that assumption.
* i386/i386/model_dep.h (phys_first_addr): Remove extern declaration.
(phys_last_addr): Likewise.
* i386/i386/phys.c (pmap_zero_page): Use VM_PAGE_DIRECTMAP_LIMIT
instead of phys_last_addr.
(pmap_copy_page, copy_to_phys, copy_from_phys): Likewise.
* i386/i386/trap.c (user_trap): Remove check against phys_last_addr.
* i386/i386at/biosmem.c (biosmem_bootstrap_common): Don't set
phys_last_addr.
* i386/i386at/mem.c (memmmap): Use vm_page_lookup_pa to determine if
a physical address references physical memory.
* i386/i386at/model_dep.c (phys_first_addr): Remove variable.
(phys_last_addr): Likewise.
(pmap_free_pages, pmap_valid_page): Remove functions.
* i386/intel/pmap.c: Include i386at/biosmem.h.
(pa_index): Turn into an alias for vm_page_table_index.
(pmap_bootstrap): Replace uses of phys_first_addr and phys_last_addr
as appropriate.
(pmap_virtual_space): Use vm_page_table_size instead of phys_first_addr
and phys_last_addr to obtain the number of physical pages.
(pmap_verify_free): Remove function.
(valid_page): Turn this macro into an inline function and rewrite
using vm_page_lookup_pa.
(pmap_page_table_page_alloc): Build the pmap VM object using
vm_page_table_size to determine its size.
(pmap_remove_range, pmap_page_protect, phys_attribute_clear,
phys_attribute_test): Turn page indexes into unsigned long integers.
(pmap_enter): Likewise. In addition, use either vm_page_lookup_pa or
biosmem_directmap_end to determine if a physical address references
physical memory.
* i386/xen/xen.c (hyp_p2m_init): Use vm_page_table_size instead of
phys_last_addr to obtain the number of physical pages.
* kern/startup.c (phys_first_addr): Remove extern declaration.
(phys_last_addr): Likewise.
* linux/dev/init/main.c (linux_init): Use vm_page_seg_end with the
appropriate segment selector instead of phys_last_addr to determine
where high memory starts.
* vm/pmap.h: Update requirements description.
(pmap_free_pages, pmap_valid_page): Remove declarations.
* vm/vm_page.c (vm_page_seg_end, vm_page_boot_table_size,
vm_page_table_size, vm_page_table_index): New functions.
* vm/vm_page.h (vm_page_seg_end, vm_page_table_size,
vm_page_table_index): New function declarations.
* vm/vm_resident.c (vm_page_bucket_count, vm_page_hash_mask): Define
as unsigned long integers.
(vm_page_bootstrap): Compute VP table size based on the page table
size instead of the value returned by pmap_free_pages.
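A sketch of the rewritten valid_page() mentioned above, assuming
vm_page_lookup_pa() returns NULL for addresses the VM page table does not
cover:

    /* A physical address is valid iff the VM page table knows about it. */
    static inline boolean_t
    valid_page(phys_addr_t addr)
    {
        struct vm_page *p;

        p = vm_page_lookup_pa(addr);
        return p != NULL;
    }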
 
* i386/intel/pmap.c (pmap_get_mapwindow, pmap_put_mapwindow): Use
the appropriate xen hypercall if building for paravirtualized page
table management.
 
* i386/intel/pmap.c (INVALIDATE_TLB): When e-s is constant and equal to
PAGE_SIZE, use just one invlpg instruction to flush the TLB.
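A sketch of that fast path; the full-flush fallback and the exact macro body
are assumptions:

    /* INVALIDATE_TLB(pmap, s, e), illustrative expansion: */
    if (__builtin_constant_p(e - s) && (e - s) == PAGE_SIZE)
        asm volatile ("invlpg (%0)" : : "r" (s) : "memory");
    else
        set_cr3(get_cr3());     /* reload %cr3: flush everything */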
 
* i386/intel/pmap.c (MAX_TBIS_SIZE): Drop unused macro.
 
* ddb/db_elf.c (elf_db_sym_init): Turn `i' into unsigned.
* device/ds_routines.c (ds_device_open, device_writev_trap): Likewise.
* i386/i386/user_ldt.c (i386_set_ldt): Likewise for `i', `min_selector', and
`first_desc'.
(i386_get_ldt): Likewise for `ldt_count'.
(user_ldt_free): Likewise for `i'.
* i386/i386/xen.h (hyp_set_ldt): Turn `count' into unsigned long.
* i386/intel/pmap.c (pmap_bootstrap): Turn `i', `j' and `n' into unsigned.
(pmap_clear_bootstrap_pagetable): Likewise for `i' and `j'.
* ipc/ipc_kmsg.c (ipc_msg_print): Turn `i' and `numwords' into unsigned.
* kern/boot_script.c (boot_script_parse_line): Likewise for `i'.
* kern/bootstrap.c (bootstrap_create): Likewise for `n' and `i'.
* kern/host.c (host_processors): Likewise for `i'.
* kern/ipc_tt.c (mach_ports_register): Likewise.
* kern/mach_clock.c (tickadj, bigadj): Turn into unsigned.
* kern/processor.c (processor_set_things): Turn `i' into unsigned.
* kern/task.c (task_threads): Likewise.
* kern/thread.c (consider_thread_collect, stack_init): Likewise.
* kern/strings.c (memset): Turn `i' into size_t.
* vm/memory_object.c (memory_object_lock_request): Turn `i' into unsigned.
* xen/block.c (hyp_block_init): Use %u format for evt.
(device_open): Drop unused err variable.
(device_write): Turn `copy_npages', `i', `nbpages', and `j' into unsigned.
* xen/console.c (hypcnread, hypcnwrite, hypcnclose): Turn dev to dev_t.
(hypcnclose): Return void.
* xen/console.h (hypcnread, hypcnwrite, hypcnclose): Fix prototypes
accordingly.
* xen/evt.c (form_int_mask): Turn `i' into int.
* xen/net.c (hyp_net_init): Use %u format for evt.
(device_open): Remove unused `err' variable.
 
In order to increase the amount of memory available for kernel objects,
without reducing the amount of memory available for user processes,
a new allocation strategy is introduced in this change.
Instead of allocating kernel objects out of kernel virtual memory,
the slab allocator directly uses the direct mapping of physical
memory as its backend. This largely increases the kernel heap, and
removes the need for address translation updates.
In order to allow this strategy, an assumption made by the interrupt
code had to be removed. In addition, kernel stacks are now also
allocated directly from the physical allocator.
* i386/i386/db_trace.c: Include i386at/model_dep.h
(db_i386_reg_value): Update stack check.
* i386/i386/locore.S (trap_from_kernel, all_intrs,
int_from_intstack): Update interrupt handling.
* i386/i386at/model_dep.c: Include kern/macros.h.
(int_stack, int_stack_base): New variables.
(int_stack_high): Remove variable.
(i386at_init): Update interrupt stack initialization.
* i386/i386at/model_dep.h: Include i386/vm_param.h.
(int_stack_top, int_stack_base): New extern declarations.
(ON_INT_STACK): New macro.
* kern/slab.c: Include vm/vm_page.h
(KMEM_CF_NO_CPU_POOL, KMEM_CF_NO_RECLAIM): Remove macros.
(kmem_pagealloc, kmem_pagefree, kalloc_pagealloc, kalloc_pagefree): Remove
functions.
(kmem_slab_create): Allocate slab pages directly from the physical allocator.
(kmem_slab_destroy): Release slab pages directly to the physical allocator.
(kmem_cache_compute_sizes): Update the slab size computation algorithm to
return a power-of-two suitable for the physical allocator.
(kmem_cache_init): Remove custom allocation function pointers.
(kmem_cache_reap): Remove check on KMEM_CF_NO_RECLAIM.
(slab_init, kalloc_init): Update calls to kmem_cache_init.
(kalloc, kfree): Directly fall back on the physical allocator for big
allocation sizes.
(host_slab_info): Remove checks on defunct flags.
* kern/slab.h (kmem_slab_alloc_fn_t, kmem_slab_free_fn_t): Remove types.
(struct kmem_cache): Add `slab_order' member, remove `slab_alloc_fn' and
`slab_free_fn' members.
(KMEM_CACHE_NOCPUPOOL, KMEM_CACHE_NORECLAIM): Remove macros.
(kmem_cache_init): Update prototype, remove custom allocation functions.
* kern/thread.c (stack_alloc): Allocate stacks from the physical allocator.
* vm/vm_map.c (vm_map_kentry_cache, kentry_data, kentry_data_size): Remove
variables.
(kentry_pagealloc): Remove function.
(vm_map_init): Update calls to kmem_cache_init, remove initialization of
vm_map_kentry_cache.
(vm_map_create, _vm_map_entry_dispose, vm_map_copyout): Unconditionnally
use vm_map_entry_cache.
* vm/vm_map.h (kentry_data, kentry_data_size, kentry_count): Remove extern
declarations.
* vm/vm_page.h (VM_PT_STACK): New page type.
* device/dev_lookup.c (dev_lookup_init): Update calls to kmem_cache_init.
* device/dev_pager.c (dev_pager_hash_init, device_pager_init): Likewise.
* device/ds_routines.c (mach_device_init, mach_device_trap_init): Likewise.
* device/net_io.c (net_io_init): Likewise.
* i386/i386/fpu.c (fpu_module_init): Likewise.
* i386/i386/machine_task.c (machine_task_module_init): Likewise.
* i386/i386/pcb.c (pcb_module_init): Likewise.
* i386/intel/pmap.c (pmap_init): Likewise.
* ipc/ipc_init.c (ipc_bootstrap): Likewise.
* ipc/ipc_marequest.c (ipc_marequest_init): Likewise.
* kern/act.c (global_act_init): Likewise.
* kern/processor.c (pset_sys_init): Likewise.
* kern/rdxtree.c (rdxtree_cache_init): Likewise.
* kern/task.c (task_init): Likewise.
* vm/memory_object_proxy.c (memory_object_proxy_init): Likewise.
* vm/vm_external.c (vm_external_module_initialize): Likewise.
* vm/vm_fault.c (vm_fault_init): Likewise.
* vm/vm_object.c (vm_object_bootstrap): Likewise.
* vm/vm_resident.c (vm_page_module_init): Likewise.
(vm_page_bootstrap): Remove initialization of kentry_data.
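The heart of the change is that a slab is now just a run of physical pages
reached through the direct mapping, so no kernel virtual mapping has to be
created or destroyed for it. A heavily simplified sketch of the allocation
path; the allocator call names and the page type are assumptions, only
slab_order comes from the entry above:

    /* Inside kmem_slab_create() (sketch): */
    struct vm_page *page;
    void *slab_addr;

    /* Grab 2^slab_order contiguous pages from the direct-mapped segment. */
    page = vm_page_alloc_pa(cache->slab_order, VM_PAGE_SEL_DIRECTMAP, VM_PT_KMEM);
    if (page == NULL)
        return NULL;

    /* The direct mapping turns the physical address into a kernel pointer
     * without touching any page table. */
    slab_addr = (void *) phystokv(vm_page_to_pa(page));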
 
* linux/dev/glue/block.c (out): Cast to device_t.
* linux/dev/init/main.c (alloc_contig_mem): Initialize addr and cast return value to void *.
* i386/i386/phys.c (pmap_copy_page): Initialize src_map.
* i386/intel/pmap.c: Include i386at/model_dep.h.
* kern/mach_clock.c (mapable_time_init): Cast to void *.
 
* i386/intel/pmap.c (pmap_page_protect): Enable assertions.
(phys_attribute_clear, phys_attribute_test): Likewise.
 
* i386/intel/pmap.c (pmap_page_protect): Fix function name in panic
message.
 
* i386/intel/pmap.c: Fix typo.
 
Import the macro definitions from the x15 kernel project, and replace
all similar definitions littered all over the place with it.
Importing this file will make importing code from the x15 kernel
easier. We are already using the red-black tree implementation and
the slab allocator from it, and we will import even more code in the
near future.
* kern/list.h: Do not define `structof', include `macros.h' instead.
* kern/rbtree.h: Likewise.
* kern/slab.c: Do not define `ARRAY_SIZE', include `macros.h' instead.
* i386/grub/misc.h: Likewise.
* i386/i386/xen.h: Do not define `barrier', include `macros.h' instead.
* kern/macro_help.h: Delete file. Replaced by `macros.h'.
* kern/macros.h: New file.
* Makefrag.am (libkernel_a_SOURCES): Add new file, remove old file.
* device/dev_master.h: Adapt accordingly.
* device/io_req.h: Likewise.
* device/net_io.h: Likewise.
* i386/intel/read_fault.c: Likewise.
* ipc/ipc_kmsg.h: Likewise.
* ipc/ipc_mqueue.h: Likewise.
* ipc/ipc_object.h: Likewise.
* ipc/ipc_port.h: Likewise.
* ipc/ipc_space.h: Likewise.
* ipc/ipc_splay.c: Likewise.
* ipc/ipc_splay.h: Likewise.
* kern/assert.h: Likewise.
* kern/ast.h: Likewise.
* kern/pc_sample.h: Likewise.
* kern/refcount.h: Likewise.
* kern/sched.h: Likewise.
* kern/sched_prim.c: Likewise.
* kern/timer.c: Likewise.
* kern/timer.h: Likewise.
* vm/vm_fault.c: Likewise.
* vm/vm_map.h: Likewise.
* vm/vm_object.h: Likewise.
* vm/vm_page.h: Likewise.
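For context, the two helpers named above are the conventional container-of
and array-length macros; a sketch of their usual definitions (the imported
x15 file may format them differently):

    /* kern/macros.h (sketch) */
    #include <stddef.h>     /* offsetof, for a standalone build */

    #define structof(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

    #define ARRAY_SIZE(x)   (sizeof(x) / sizeof((x)[0]))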
 
Since we need it to access some BIOS information, e.g. at ACPI shutdown. When
the kernel VM does not start at 0, there is already nothing mapped there in
user tasks anyway.
* i386/i386at/model_dep.c (machine_init) [VM_MIN_KERNEL_ADDRESS != 0]:
Do not call pmap_unmap_page_zero.
* i386/intel/pmap.c (pmap_unmap_page_zero): Warn that unmapping page
zero may break some BIOS functions.
 
Convert from K&R style function definitions to ANSI style
function definitions.
* ddb/db_access.c: Convert function prototypes from K&R to ANSI.
* ddb/db_aout.c: Likewise.
* ddb/db_break.c: Likewise.
* ddb/db_command.c: Likewise.
* ddb/db_cond.c: Likewise.
* ddb/db_examine.c: Likewise.
* ddb/db_expr.c: Likewise.
* ddb/db_ext_symtab.c: Likewise.
* ddb/db_input.c: Likewise.
* ddb/db_lex.c: Likewise.
* ddb/db_macro.c: Likewise.
* ddb/db_mp.c: Likewise.
* ddb/db_output.c: Likewise.
* ddb/db_print.c: Likewise.
* ddb/db_run.c: Likewise.
* ddb/db_sym.c: Likewise.
* ddb/db_task_thread.c: Likewise.
* ddb/db_trap.c: Likewise.
* ddb/db_variables.c: Likewise.
* ddb/db_watch.c: Likewise.
* device/blkio.c: Likewise.
* device/chario.c: Likewise.
* device/dev_lookup.c: Likewise.
* device/dev_name.c: Likewise.
* device/dev_pager.c: Likewise.
* device/ds_routines.c: Likewise.
* device/net_io.c: Likewise.
* device/subrs.c: Likewise.
* i386/i386/db_interface.c: Likewise.
* i386/i386/fpu.c: Likewise.
* i386/i386/io_map.c: Likewise.
* i386/i386/loose_ends.c: Likewise.
* i386/i386/mp_desc.c: Likewise.
* i386/i386/pcb.c: Likewise.
* i386/i386/phys.c: Likewise.
* i386/i386/trap.c: Likewise.
* i386/i386/user_ldt.c: Likewise.
* i386/i386at/com.c: Likewise.
* i386/i386at/kd.c: Likewise.
* i386/i386at/kd_event.c: Likewise.
* i386/i386at/kd_mouse.c: Likewise.
* i386/i386at/kd_queue.c: Likewise.
* i386/i386at/lpr.c: Likewise.
* i386/i386at/model_dep.c: Likewise.
* i386/i386at/rtc.c: Likewise.
* i386/intel/pmap.c: Likewise.
* i386/intel/read_fault.c: Likewise.
* ipc/ipc_entry.c: Likewise.
* ipc/ipc_hash.c: Likewise.
* ipc/ipc_kmsg.c: Likewise.
* ipc/ipc_marequest.c: Likewise.
* ipc/ipc_mqueue.c: Likewise.
* ipc/ipc_notify.c: Likewise.
* ipc/ipc_port.c: Likewise.
* ipc/ipc_right.c: Likewise.
* ipc/mach_debug.c: Likewise.
* ipc/mach_msg.c: Likewise.
* ipc/mach_port.c: Likewise.
* ipc/mach_rpc.c: Likewise.
* kern/act.c: Likewise.
* kern/exception.c: Likewise.
* kern/ipc_mig.c: Likewise.
* kern/ipc_tt.c: Likewise.
* kern/lock_mon.c: Likewise.
* kern/mach_clock.c: Likewise.
* kern/machine.c: Likewise.
* kern/printf.c: Likewise.
* kern/priority.c: Likewise.
* kern/startup.c: Likewise.
* kern/syscall_emulation.c: Likewise.
* kern/syscall_subr.c: Likewise.
* kern/thread_swap.c: Likewise.
* kern/time_stamp.c: Likewise.
* kern/timer.c: Likewise.
* kern/xpr.c: Likewise.
* vm/memory_object.c: Likewise.
* vm/vm_debug.c: Likewise.
* vm/vm_external.c: Likewise.
* vm/vm_fault.c: Likewise.
* vm/vm_kern.c: Likewise.
* vm/vm_map.c: Likewise.
* vm/vm_pageout.c: Likewise.
* vm/vm_user.c: Likewise.
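For readers unfamiliar with the old style, the conversion is purely
syntactic; a representative before/after pair (the function shown is
hypothetical, not taken from the tree):

    /* K&R definition (before): parameter types declared after the list. */
    kern_return_t
    pmap_do_thing(pmap, addr)
        pmap_t      pmap;
        vm_offset_t addr;
    {
        return KERN_SUCCESS;
    }

    /* ANSI definition (after): types appear in the parameter list itself. */
    kern_return_t
    pmap_do_thing(pmap_t pmap, vm_offset_t addr)
    {
        return KERN_SUCCESS;
    }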
 
We were filling much more than the mapwindows array, thus overwriting at
least some debugger variables.
* i386/intel/pmap.c (pmap_bootstrap): Make sure to limit mapwindows
initialization within PMAP_NMAPWINDOWS.
 
PCI devices expose their memory, registers, etc. well beyond last_phys_addr.
Userland drivers opening /dev/mem need to access those too, even if phystokv()
will not work for them.
* i386/intel/pmap.h (pmap_mapwindow_t): New type.
(pmap_get_mapwindow, pmap_put_mapwindow): New prototypes.
(PMAP_NMAPWINDOWS): New macro.
* i386/intel/pmap.c (mapwindows): New array.
(pmap_get_mapwindow, pmap_put_mapwindow): New functions.
(pmap_bootstrap, pmap_virtual_space): Reserve virtual pages for the mapping
windows.
* i386/i386/phys.c: Include <i386/model_dep.h>
(INTEL_PTE_W, INTEL_PTE_R): New macros
(pmap_zero_page, pmap_copy_page, copy_to_phys, copy_from_phys): Use
`pmap_get_mapwindow' to temporarily map physical pages beyond last_phys_addr.
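A sketch of how the new windows are meant to be used from the physical copy
helpers: grab a free window whose PTE points at the target frame, access the
data through the window's virtual address, then release it. The function
names come from the entry above; the PTE bits shown and the vaddr field of
pmap_mapwindow_t are assumptions:

    /* Read one word from an arbitrary physical address (sketch). */
    pmap_mapwindow_t *map;
    unsigned long value;

    map = pmap_get_mapwindow(pa_to_pte(addr) | INTEL_PTE_VALID);
    value = *(unsigned long *) (map->vaddr + (addr & (PAGE_SIZE - 1)));
    pmap_put_mapwindow(map);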