Commit log (subject, author, date, files changed, lines removed/added):
* x86_64: Do not disassemble instructions  (Samuel Thibault, 2023-08-13, 1 file, -0/+6)
    The instruction set decoding needs an update; avoid showing bogus output.

* lock: Fix SMP build  (Samuel Thibault, 2023-08-13, 1 file, -1/+1)

* IPI: Do not include support when NCPUS=1  (Samuel Thibault, 2023-08-13, 5 files, -0/+14)

* IPI: Rework irq names and fix x86_64 build  (Samuel Thibault, 2023-08-13, 8 files, -19/+23)

* i386/x86_64: Add remote AST via IPI mechanism  (Damien Zammit, 2023-08-13, 9 files, -3/+38)

* kern/sched_prim: Cause ast on cpu coming out of idle  (Damien Zammit, 2023-08-13, 1 file, -0/+6)
    Message-Id: <20230811083424.2154350-3-damien@zamaudio.com>

* simple lock: check that the non-_irq variants are not called from IRQ  (Samuel Thibault, 2023-08-12, 4 files, -8/+38)
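The intent of that check can be sketched in plain C. The `in_irq_handler` flag and the lock layout below are illustrative stand-ins, not gnumach's actual implementation (which derives this from the spl level):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical sketch: a flag tracking whether we are inside an IRQ
 * handler, and a lock-acquire path asserting the non-_irq variant is
 * never used there.  Names are illustrative, not gnumach's. */
static bool in_irq_handler = false;

struct simple_lock { int locked; };

static void simple_lock(struct simple_lock *l)
{
    /* The non-_irq variant must never run from IRQ context: if the
     * interrupted code holds the same lock, we would spin forever. */
    assert(!in_irq_handler);
    l->locked = 1;
}

static void simple_unlock(struct simple_lock *l)
{
    assert(!in_irq_handler);
    l->locked = 0;
}
```

Code that can legitimately run under IRQs must use the `_irq` variants instead, which mask interrupts first.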
* assert: fix concurrency against irqs  (Samuel Thibault, 2023-08-12, 1 file, -6/+7)
    by using simple_lock_irq.

* clock: Convert timer_lock to using simple_lock_irq  (Samuel Thibault, 2023-08-12, 1 file, -33/+17)

* sched: Add waitq_lock helpers which check they are called at spl7  (Samuel Thibault, 2023-08-12, 1 file, -6/+21)

* sched: Add runq_lock helpers which check they are called at spl7  (Samuel Thibault, 2023-08-12, 4 files, -11/+26)
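The shape of such an spl-checking helper might look as follows; the spl state and lock are modeled here with plain ints, and the names mirror but do not reproduce gnumach's scheduler code:

```c
#include <assert.h>

#define SPL7 7              /* highest spl: all interrupts masked */
static int current_spl = 0; /* stand-in for the per-cpu spl state */

struct run_queue { int lock; };

static void runq_lock(struct run_queue *rq)
{
    /* Scheduler queues can be touched from interrupt context, so the
     * caller must already have raised the spl to 7 before taking the
     * lock; the helper asserts this instead of trusting callers. */
    assert(current_spl == SPL7);
    rq->lock = 1;
}

static void runq_unlock(struct run_queue *rq)
{
    assert(current_spl == SPL7);
    rq->lock = 0;
}
```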
* kern: Check that locking thread is done at spl7  (Samuel Thibault, 2023-08-12, 1 file, -2/+13)

* xen: Convert console to using simple_lock_irq  (Samuel Thibault, 2023-08-12, 1 file, -10/+9)

* kmsg: Fix concurrency against irqs  (Samuel Thibault, 2023-08-12, 1 file, -17/+18)
    by using simple_lock_irq.

* device: Convert io_done_list_lock to simple_lock_irq  (Samuel Thibault, 2023-08-12, 1 file, -12/+8)

* tty: Convert t_lock to using simple_lock_irq  (Samuel Thibault, 2023-08-12, 4 files, -83/+51)

* lock: Add _irq variants  (Samuel Thibault, 2023-08-12, 1 file, -9/+75)
    And pave the way for making the non-_irq variants check that they
    are never used within interrupts. We do have a few places which were
    missing it, as the following commits will show.
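The general pattern of an `_irq` lock variant is to mask interrupts before taking the lock and hand the previous level back so unlock can restore it. A minimal sketch, with `splhigh`/`splx` modeled by an int rather than real interrupt masking:

```c
#include <assert.h>

typedef int spl_t;
static spl_t current_spl = 0;   /* stand-in for the cpu's spl state */

static spl_t splhigh(void) { spl_t s = current_spl; current_spl = 7; return s; }
static void  splx(spl_t s) { current_spl = s; }

struct simple_lock { int locked; };

static spl_t simple_lock_irq(struct simple_lock *l)
{
    spl_t s = splhigh();    /* mask interrupts first...              */
    l->locked = 1;          /* ...so no IRQ can spin on this lock    */
    return s;               /* caller keeps the old level for unlock */
}

static void simple_unlock_irq(spl_t s, struct simple_lock *l)
{
    l->locked = 0;
    splx(s);                /* restore the previous level last */
}
```

Returning the old spl (rather than hiding it) is what lets the variants nest correctly when interrupts were already masked.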
* pmap: Make pmap_protect sparse-pde aware  (Samuel Thibault, 2023-08-12, 1 file, -4/+3)
    222020cff440 ("pmap: dynamically allocate the whole user page tree
    map") made the pde array sparse, but missed updating pmap_protect
    accordingly: we have to re-look up the pde on each PDE_MAPPED_SIZE
    section.
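With a sparse page tree, consecutive pdes are no longer adjacent in one array, so the protect loop must look the pde up afresh for each PDE_MAPPED_SIZE-aligned chunk. A sketch of the loop shape, counting lookups instead of performing them (the 2 MiB value is an assumption matching x86_64's 512 entries x 4 KiB pages):

```c
#include <assert.h>
#include <stdint.h>

#define PDE_MAPPED_SIZE (1UL << 21)   /* assumed: 2 MiB mapped per pde */

/* Count how many pde lookups a [start, end) virtual range needs when
 * we must re-look up at every PDE_MAPPED_SIZE boundary, as the sparse
 * tree requires. */
static unsigned pde_lookups(uint64_t start, uint64_t end)
{
    unsigned n = 0;
    for (uint64_t va = start & ~(PDE_MAPPED_SIZE - 1); va < end;
         va += PDE_MAPPED_SIZE)
        n++;    /* each iteration would call pmap_pde(pmap, va) afresh */
    return n;
}
```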
* x86_64: don't bother printing function arguments  (Samuel Thibault, 2023-08-12, 1 file, -0/+5)
    They are in registers, and most probably very quickly overwritten.

* kdb: Show page fault details in traces  (Samuel Thibault, 2023-08-12, 1 file, -1/+8)

* x86_64: Fix catching kernel NULL dereferences  (Samuel Thibault, 2023-08-12, 1 file, -1/+2)
    On x86_64 we have no segmentation, and thus the kernel's NULL is at
    linear address zero, while LINEAR_MIN_KERNEL_ADDRESS is not zero. We
    thus have to special-case it in order to catch NULL dereferences.
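In other words, a fault address near zero sits below LINEAR_MIN_KERNEL_ADDRESS yet is still a kernel bug, not a user access. A sketch of the predicate; both constants here are illustrative values, not gnumach's:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Assumed example values: the kernel's linear base on x86_64 and a
 * one-page guard region around NULL. */
#define LINEAR_MIN_KERNEL_ADDRESS 0xffffffff80000000UL
#define NULL_GUARD_SIZE           0x1000UL

static bool is_kernel_null_deref(uint64_t fault_addr)
{
    /* With no segmentation, kernel NULL really is linear address 0,
     * so a near-zero fault address means a NULL dereference even
     * though it is far below LINEAR_MIN_KERNEL_ADDRESS. */
    return fault_addr < NULL_GUARD_SIZE;
}
```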
* x86_64: Fix loading ELF symbols  (Samuel Thibault, 2023-08-12, 2 files, -16/+46)

* elf64: Update names  (Samuel Thibault, 2023-08-12, 3 files, -37/+35)
    Apparently the ELF world changed their mind on the naming of
    integers; let's get coherent with it. Elf64_Quarter (16-bit)
    disappeared, replaced by Elf64_Half (now 16-bit instead of 32-bit),
    and the previous 32-bit Elf64_Half thus now needs to be Elf64_Word
    (32-bit).
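The resulting names match the System V / ELF-64 specification's data representation, which can be written out as typedefs:

```c
#include <assert.h>
#include <stdint.h>

/* ELF-64 integer types per the ELF-64 object file format spec:
 * Elf64_Half is the 16-bit integer and Elf64_Word the 32-bit one.
 * The old BSD-style names (Elf64_Quarter = 16-bit, Elf64_Half =
 * 32-bit) are gone. */
typedef uint16_t Elf64_Half;    /* 16-bit: e.g. e_type, e_machine */
typedef uint32_t Elf64_Word;    /* 32-bit: e.g. sh_name, p_flags  */
typedef uint64_t Elf64_Xword;   /* 64-bit: e.g. sh_size           */
typedef uint64_t Elf64_Addr;    /* 64-bit virtual addresses       */
```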
* lock: Add more sanity checks  (Samuel Thibault, 2023-08-12, 1 file, -0/+3)

* model_dep: drop duplicate declaration  (Samuel Thibault, 2023-08-12, 1 file, -4/+0)

* lock: Reset l->writer also for read-write upgrades which are done  (Samuel Thibault, 2023-08-12, 1 file, -2/+5)

* lock: Fix building with MACH_LDEBUG but NCPUS==1  (Samuel Thibault, 2023-08-12, 1 file, -1/+1)

* x86_64: fix NCPUS > 1 build of CX() macro  (Samuel Thibault, 2023-08-12, 3 files, -58/+58)
    With the kernel gone to -2GB, the base+index addressing needs to use
    a 64-bit register index.

* lock: Rename simple_unlock version with information to _simple_unlock  (Samuel Thibault, 2023-08-12, 2 files, -2/+3)
    For coherency with the rest of the implementations.

* x86_64: Fix printing kernel trap number and error  (Samuel Thibault, 2023-08-12, 1 file, -1/+1)

* Acknowledge IRQ *before* calling the handler  (Samuel Thibault, 2023-08-10, 2 files, -48/+51)
    5da1aea7ab3c ("Acknoledge interrupt after handler call") moved the
    IRQ ack to after calling the handler because of overflows. But that
    was because the interrupts were getting enabled at some point. Now
    that all spl levels above 0 just disable interrupts, once we have
    called spl7 we are safe until splx_cli is called (and even that
    doesn't release interrupts, only the eventual iret will). And if the
    handler triggers another IRQ, it would otherwise be lost, so we do
    want to ack the IRQ before handling it.
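The ordering this restores can be modeled with a trace of calls; `ack_irq`/`handle_irq` below are stand-ins logging into an array, not the real PIC/APIC code:

```c
#include <assert.h>

/* Model: ack the IRQ at the interrupt controller *before* running the
 * handler, so an IRQ the handler itself re-triggers stays pending at
 * the controller instead of being lost. */
enum { ACK = 1, HANDLE = 2 };
static int trace[2];
static int ntrace;

static void ack_irq(int irq)    { (void)irq; trace[ntrace++] = ACK; }
static void handle_irq(int irq) { (void)irq; trace[ntrace++] = HANDLE; }

static void interrupt(int irq)
{
    ack_irq(irq);      /* ack first: a re-raised IRQ is not lost      */
    handle_irq(irq);   /* interrupts stay masked at spl7 the whole time */
}
```

This is safe precisely because, as the message explains, spl7 keeps interrupts masked until the final iret, so the early ack can no longer cause the old reentrancy overflows.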
* x86_64: homogenize with i386 about _call_single  (Samuel Thibault, 2023-08-10, 2 files, -6/+10)

* x86_64: fix recursive disabling of interrupts  (Samuel Thibault, 2023-08-10, 1 file, -2/+4)
    In case interrupts were already disabled before TIME_TRAP_[US]ENTRY
    are called, we don't want to execute sti.
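The fix follows the usual save-and-conditionally-restore pattern: remember whether interrupts were enabled on entry, and only execute sti on exit if they were. An illustrative model with the EFLAGS.IF bit as a bool (the function names are made up; the real code is the TIME_TRAP_[US]ENTRY assembly macros):

```c
#include <assert.h>
#include <stdbool.h>

static bool irqs_enabled = true;   /* models EFLAGS.IF */

static bool enter_timed_section(void)
{
    bool was_enabled = irqs_enabled;   /* remember the previous state */
    irqs_enabled = false;              /* cli */
    return was_enabled;
}

static void leave_timed_section(bool was_enabled)
{
    if (was_enabled)
        irqs_enabled = true;           /* sti only if it was on before */
}
```

Unconditionally executing sti on exit would wrongly re-enable interrupts inside an outer section that had them disabled.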
* Fix missing DMA32 limit  (Samuel Thibault, 2023-08-09, 3 files, -3/+32)
    Rumpdisk needs to allocate dma32 memory areas, so we do always need
    this limit. The non-Xen x86_64 case had a typo, and the 32-bit PAE
    case didn't have the DMA32 limit. Also, we have to cope with
    VM_PAGE_DMA32_LIMIT being either above or below
    VM_PAGE_DIRECTMAP_LIMIT depending on the case.

* net_io: Fix long / int confusion  (Samuel Thibault, 2023-08-08, 1 file, -5/+5)
    In network terms, a long is 32-bit, i.e. an int for us.
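On the LP64 x86_64 ABI, C's `long` is 64-bit, while the "long" of network parlance (32-bit words on the wire) matches `int`. A compile-time check in the spirit of the fix, with `net_word_t` as an illustrative name:

```c
#include <assert.h>
#include <stdint.h>

/* A network "long" is a 32-bit wire quantity; on LP64, C's long is
 * 64-bit, so storing it in a long invites width bugs.  int (or
 * uint32_t) is the right width. */
typedef uint32_t net_word_t;

static int widths_ok(void)
{
    return sizeof(net_word_t) == sizeof(int)   /* int is 32-bit here */
        && sizeof(net_word_t) == 4;
}
```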
* pmap: Add missing PMAP_READ_LOCK, fixes uninitialized spl  (Damien Zammit, 2023-08-06, 1 file, -0/+1)
    Message-Id: <20230805154913.2003121-1-damien@zamaudio.com>

* kern/thread: Only loop over cpus that exist  (Damien Zammit, 2023-08-06, 1 file, -1/+2)
    Message-Id: <20230805154859.2003109-1-damien@zamaudio.com>

* interrupt.S: No nested interrupts during IPIs && more x86_64 smp support  (Damien Zammit, 2023-08-06, 2 files, -0/+11)
    Message-Id: <20230805154843.2003098-1-damien@zamaudio.com>

* cpu_number: Look up cpu kernel_id via lookup table  (Damien Zammit, 2023-08-05, 5 files, -29/+28)
    This speeds up smp slightly by reducing the cpu_number() complexity
    to have no branching, just a lookup table. It also addresses the
    problem that CPU_NUMBER was only using the raw apic_id as an
    approximation of the kernel_id. Another improvement was to remove
    unnecessary checks, now that the lookup table always resolves to a
    valid value.
    Message-Id: <20230805074945.1983707-1-damien@zamaudio.com>
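The branch-free lookup described above reduces to a single indexed load: a boot-time table mapping raw APIC id to the kernel's cpu id. A sketch with the APIC-id read stubbed out (reading it really requires APIC or CPUID access); names and the table size are illustrative:

```c
#include <assert.h>

#define MAX_APIC_ID 256   /* assumed table size */

/* Filled at boot so that every possible apic_id resolves to a valid
 * kernel cpu id; cpu_number() then needs no branches or checks. */
static unsigned char apic_id_to_cpu[MAX_APIC_ID];

static int fake_current_apic_id;   /* stand-in for the hardware read */

static int cpu_number(void)
{
    /* no branching: one memory load indexed by the APIC id */
    return apic_id_to_cpu[fake_current_apic_id];
}
```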
* Add timing info to MACH_LOCK_MON lock monitoring  (Damien Zammit, 2023-08-05, 1 file, -6/+4)
    Booting to the beginning of bootstrap with different numbers of cpus
    and checking the lock statistics, where TIME is in milliseconds:

    Set MACH_LOCK_MON to 1 in configfrag.ac, then configure with
    --enable-ncpus=8 --enable-kdb --enable-apic --disable-linux-groups

    -smp 1
    db{0}> show all slocks
    SUCCESS  FAIL   MASKED     STACK       TIME       LOCK/CALLER
    4208     0/0    4208/100   2/0         7890/1     0xc1098f54(c11847c8)
    1        0/0    1/100      0/0         7890/7890  0x315(c11966e0)
    30742    0/0    0/0        2106/0      160/0      0xf52a9e2c(f5a07958)
    30742    0/0    0/0        0/0         140/0      0xf52a5e2c(f5a07b10)
    149649   0/0    3372/2     1/0         120/0      0xc118a590(c118a9d4)
    16428    0/0    0/0        1/0         90/0       0xf52a5dd0(f5a07ab8)
    14345    0/0    0/0        18/0        80/0       0xf64afe2c(f64aa488)
    1791     0/0    0/0        1/0         80/0       0xf52a3e70(f5e57f70)
    17331 total locks, 0 empty buckets
    2320150  0/0    455490/19  11570533/4  17860/0    0xc10a4580(c10a4580)

    -smp 2 (could not wait until booted)
    db{0}> show all slocks
    SUCCESS  FAIL   MASKED     STACK       TIME            LOCK/CALLER
    47082    0/0    47082/100  0/0         413940/8        0xc1098f54(c11847c8)
    2        0/0    2/100      0/0         413940/206970   0x6ede(c11966e0)
    47139    0/0    0/0        2106/0      4670/0          0xc119edec(f5e409b0)
    132895   3/0    3372/2     1/0         4580/0          0xc118a590(c118a9d4)
    118313   0/0    2/0        0/0         3660/0          0xc1098ec4(c1189f80)
    183233   1/0    1714/0     2/0         2290/0          0xc1098e54(c118aa8c)
    14357    0/0    0/0        1878/0      1200/0          0xf52a4de0(f5e40a60)
    14345    0/0    0/0        18/0        1200/0          0xf52a4dec(f528f488)
    16910 total locks, 0 empty buckets
    2220850  455/0  485391/21  11549793/5  879030/0        0xc10a4580(c10a4580)

    Message-Id: <20230722045043.1579134-1-damien@zamaudio.com>
* db_interface: Don't call db_on if MACH_KDB is off  (Damien Zammit, 2023-08-05, 1 file, -1/+1)
    Allows building gnumach with --disable-kdb and --enable-ncpus > 1.
    Message-Id: <20230722045019.1579102-1-damien@zamaudio.com>

* x86_64: remove unneeded segment selectors handling on full 64 bit  (Luca Dariz, 2023-08-04, 6 files, -18/+28)
    * i386/i386/db_interface.c: don't set unused segment selectors on
      full 64-bit
    * i386/i386/db_trace.c: likewise.
    * i386/i386/i386asm.sym: likewise.
    * i386/i386/pcb.c: likewise.
    * i386/i386/thread.h: remove ES/DS/FS/GS from thread state on
      !USER32, as they are unused in this configuration. Only SS and CS
      are kept.
    * x86_64/locore.S: convert segment handling macros to no-ops on
      full 64-bit
    Message-Id: <20230729174753.1145878-5-luca@orpolo.org>

* x86_64: refactor segment register handling  (Luca Dariz, 2023-08-04, 6 files, -160/+108)
    The actual values are not saved together with the rest of the thread
    state, both because it would be quite expensive (reading an MSR,
    unless the rdfsbase instructions are supported, but that's optional)
    and not really needed. The only way the user has to change their
    value is with a specific RPC, so we can intercept the change easily.
    Furthermore, leaving the values there exposes them to being
    corrupted in case of a double interruption, e.g. an irq handled just
    before iretq but after starting to restore the thread state. This
    solution was suggested by Sergey Bugaev.
    * i386/i386/db_trace.c: remove fsbase/gsbase from the available
      registers
    * i386/i386/debug_i386.c: remove fsbase/gsbase from the printed
      thread state
    * i386/i386/i386asm.sym: remove fsbase/gsbase as they are not
      needed in asm anymore
    * i386/i386/pcb.c: point fsbase/gsbase to the new location
    * i386/i386/thread.h: move fsbase/gsbase to the machine state
    * x86_64/locore.S: generalize segment handling to include
      es/ds/gs/fs and remove fsbase/gsbase handling. Also, factor out
      kernel segment selector setting to a macro.
    Message-Id: <20230729174753.1145878-4-luca@orpolo.org>

* x86_64: format pusha/popa macros for readability  (Luca Dariz, 2023-08-04, 1 file, -2/+35)
    Message-Id: <20230729174753.1145878-3-luca@orpolo.org>

* x86_64: disable V86 mode on full 64-bit configuration  (Luca Dariz, 2023-08-04, 3 files, -9/+40)
    * i386/i386/pcb.c: simplify exception stack location and adapt
      thread getters/setters
    * i386/i386/thread.h: don't include V86 fields on full 64-bit
    * x86_64/locore.S: don't include checks for V86 mode on full 64-bit
    Message-Id: <20230729174753.1145878-2-luca@orpolo.org>

* x86_64: fix stack handling on recursive interrupts for USER32  (Luca Dariz, 2023-08-04, 1 file, -5/+11)
    * x86_64/locore.S: ensure the thread state is filled completely
      even on recursive interrupts. The value of the segment selectors
      is not very important in this case, but we still need to align
      the stack to the bottom of i386_interrupt_state.
    Message-Id: <20230729174753.1145878-1-luca@orpolo.org>

* x86_64: install emergency handler for double fault  (Luca Dariz, 2023-08-04, 5 files, -14/+48)
    * i386/i386/idt.c: add selector for the interrupt-specific stack
    * i386/i386/ktss.c: configure ist1 to use a dedicated stack
    * i386/i386/trap.c: add double fault handler, which just prints the
      state and panics. There is not much else to do in this case, but
      it's useful for troubleshooting.
    * x86_64/idt_inittab.S: allow specifying an interrupt stack for
      custom handlers
    * x86_64/locore.S: add double fault handler
    Message-Id: <20230729174514.1145656-1-luca@orpolo.org>

* vm: Make vm_object_coalesce return new object and offset  (Sergey Bugaev, 2023-07-05, 3 files, -19/+49)
    vm_object_coalesce() callers used to rely on the fact that it always
    merged the next_object into prev_object, potentially destroying
    next_object and leaving prev_object the result of the whole
    operation. After ee65849bec5da261be90f565bee096abb4117bdd ("vm:
    Allow coalescing null object with an internal object"), this is no
    longer true, since in case of prev_object == VM_OBJECT_NULL and
    next_object != VM_OBJECT_NULL, the overall result is next_object,
    not prev_object. The current callers are prepared to deal with this
    since they handle this case separately anyway, but the following
    commit will introduce another caller that handles both cases in the
    same code path.
    So, declare the way vm_object_coalesce() coalesces the two objects
    an implementation detail, and make it return the resulting object
    and the offset into it explicitly. This simplifies the callers, too.
    Message-Id: <20230705141639.85792-2-bugaevc@gmail.com>
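A toy model of the new contract: the caller receives the resulting object plus the offset of its range within it, instead of assuming prev_object always survives. The types and the merge rule below are simplified stand-ins, not vm_object_coalesce()'s real logic:

```c
#include <assert.h>
#include <stddef.h>

typedef struct { int id; } vm_object;

/* Returns the object that survives the coalesce and writes the
 * caller's offset into it.  Rebasing the offset to 0 in the
 * prev == NULL case is purely illustrative. */
static vm_object *coalesce(vm_object *prev, vm_object *next,
                           size_t prev_offset, size_t *offset_out)
{
    if (prev != NULL) {
        /* classic case: next merges into prev, offset unchanged */
        *offset_out = prev_offset;
        return prev;
    }
    /* prev == NULL: the result is next, and the offset must be
     * re-expressed relative to next */
    *offset_out = 0;
    return next;
}
```

Making both the object and the offset explicit outputs is what lets a single caller code path handle either outcome.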
* vm: Eagerly release deallocated pages  (Sergey Bugaev, 2023-07-03, 1 file, -5/+21)
    If a deallocated VM map entry refers to an object that only has a
    single reference and doesn't have a pager port, we can eagerly
    release any physical pages that were contained in the deallocated
    range.
    This is not a 100% solution: it is still possible to "leak" physical
    pages that can never appear in virtual memory again by creating
    several references to a memory object (perhaps by forking a VM map
    with VM_INHERIT_SHARE) and deallocating the pages from all the maps
    referring to the object. That being said, it should help release
    the pages sooner in the common case.
    Message-Id: <20230626112656.435622-6-bugaevc@gmail.com>
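The eligibility test described above boils down to two conditions on the object. A sketch with simplified stand-in fields for vm_object's:

```c
#include <assert.h>
#include <stdbool.h>

struct vm_object {
    int  ref_count;     /* stand-in for the object's reference count */
    bool has_pager;     /* stand-in for "has a pager port"           */
};

/* Pages in a deallocated range can be freed eagerly only if nothing
 * else can ever map them again: a single reference and no pager
 * backing the object. */
static bool can_release_eagerly(const struct vm_object *obj)
{
    return obj->ref_count == 1 && !obj->has_pager;
}
```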
* vm: Allow coalescing entries forward  (Sergey Bugaev, 2023-07-03, 1 file, -4/+35)
    When entering an object into a map, try to extend the next entry
    backward, in addition to the previously existing attempt to extend
    the previous entry forward.
    Message-Id: <20230626112656.435622-5-bugaevc@gmail.com>