path: root/absl/synchronization/mutex.cc
Commit history (most recent first); each entry lists the commit message, author, date, files changed, and lines changed.
* PR #1589: Use compare_exchange_weak in the loop in Mutex::ReaderLock (AtariDreams, 2024-01-02, 1 file, -1/+1)
  Imported from GitHub PR https://github.com/abseil/abseil-cpp/pull/1589

  This makes sense because even if the exchange fails spuriously, we can just try again, since we have to check for other readers anyway.

  Merge 0b1780299b9e43205202d6b25f6e57759722d063 into 6a19ff47352a2112e953f4ab813d820e0ecfe1e3
  Merging this change closes #1589
  COPYBARA_INTEGRATE_REVIEW=https://github.com/abseil/abseil-cpp/pull/1589 from AtariDreams:atomics 0b1780299b9e43205202d6b25f6e57759722d063
  PiperOrigin-RevId: 595149382
  Change-Id: I24f678f6bf95c6a37b2ed541a2b6668a58a67702
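  A minimal sketch of the pattern (illustrative only, not the Abseil implementation; the lock-word layout below is invented): inside a retry loop, compare_exchange_weak is sufficient because a spurious failure simply reloads the word and goes around the loop again, exactly as a failure caused by another reader would.

    #include <atomic>
    #include <cstdint>

    // Hypothetical reader-count lock word; the real Mutex word layout differs.
    void ReaderLockSketch(std::atomic<uint64_t>& word) {
      uint64_t v = word.load(std::memory_order_relaxed);
      // Weak CAS in a loop: spurious failures and "another reader got in first"
      // failures are handled identically -- reload and retry.
      while (!word.compare_exchange_weak(v, v + 1,
                                         std::memory_order_acquire,
                                         std::memory_order_relaxed)) {
      }
    }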
* Mutex: Prevent false race in EnableInvariantDebugging. (Dmitry Vyukov, 2023-12-19, 1 file, -0/+10)
  The added test exposes a false TSan race report in EnableInvariantDebugging/EnableDebugLog related to SynchEvent reuse. We ignore most of what happens inside the Mutex code, but not the code inside EnableInvariantDebugging/EnableDebugLog, so these can cause occasional false reports on SynchEvent bankruptcy. Also ignore accesses in EnableInvariantDebugging/EnableDebugLog.
  PiperOrigin-RevId: 592226791
  Change-Id: I066edb1ef5661ba6cf86a195f91a9d5328b93d10
* Mutex: Remove destructor in release build (Dmitry Vyukov, 2023-10-31, 1 file, -74/+76)
  The Mutex destructor is needed only to clean up debug logging and invariant checking synch events. These are not supposed to be used in production, but the non-empty destructor has costs for production builds. Instead of removing synch events in the destructor, drop all of them if we have accumulated too many. For tests it should not matter (we may only consume a bit more memory). Production builds should be either unaffected (if they don't use debug logging), or use periodic reset of all synch events.
  PiperOrigin-RevId: 578123259
  Change-Id: I0ec59183a5f63ea0a6b7fc50f0a77974e7f677be
* absl: speed up Mutex::Lock (Dmitry Vyukov, 2023-10-30, 1 file, -26/+44)
  Currently Mutex::Lock contains a non-inlined, non-tail call: TryAcquireWithSpinning -> GetMutexGlobals -> LowLevelCallOnce -> init closure. This turns the function into a non-leaf with stack frame allocation and additional register use. Remove this non-tail call to make the function a leaf. Move spin iterations initialization to LockSlow.

  Current Lock happy path:

    00000000001edc20 <absl::Mutex::Lock()>:
    1edc20: 55                push %rbp
    1edc21: 48 89 e5          mov %rsp,%rbp
    1edc24: 53                push %rbx
    1edc25: 50                push %rax
    1edc26: 48 89 fb          mov %rdi,%rbx
    1edc29: 48 8b 07          mov (%rdi),%rax
    1edc2c: a8 19             test $0x19,%al
    1edc2e: 75 0e             jne 1edc3e <absl::Mutex::Lock()+0x1e>
    1edc30: 48 89 c1          mov %rax,%rcx
    1edc33: 48 83 c9 08       or $0x8,%rcx
    1edc37: f0 48 0f b1 0b    lock cmpxchg %rcx,(%rbx)
    1edc3c: 74 42             je 1edc80 <absl::Mutex::Lock()+0x60>
    ... unhappy path ...
    1edc80: 48 83 c4 08       add $0x8,%rsp
    1edc84: 5b                pop %rbx
    1edc85: 5d                pop %rbp
    1edc86: c3                ret

  New Lock happy path:

    00000000001eea80 <absl::Mutex::Lock()>:
    1eea80: 48 8b 07          mov (%rdi),%rax
    1eea83: a8 19             test $0x19,%al
    1eea85: 75 0f             jne 1eea96 <absl::Mutex::Lock()+0x16>
    1eea87: 48 89 c1          mov %rax,%rcx
    1eea8a: 48 83 c9 08       or $0x8,%rcx
    1eea8e: f0 48 0f b1 0f    lock cmpxchg %rcx,(%rdi)
    1eea93: 75 01             jne 1eea96 <absl::Mutex::Lock()+0x16>
    1eea95: c3                ret
    ... unhappy path ...

  PiperOrigin-RevId: 577790105
  Change-Id: I20793534050302ff9f7a20aed93791c088d98562
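  A rough C++ sketch of the resulting shape (illustrative only; the 0x19/0x8 masks mirror the constants visible in the disassembly, and the slow-path function is a hypothetical stand-in whose body is elided): keeping the only call on the unhappy path lets the compiler emit the happy path as a leaf with no stack frame.

    #include <atomic>
    #include <cstdint>

    void LockSlowSketch(std::atomic<uint64_t>* w);  // out of line: spinning, queueing, spin-count init

    inline void LockSketch(std::atomic<uint64_t>* w) {
      uint64_t v = w->load(std::memory_order_relaxed);
      // Happy path: one load, one test, one lock cmpxchg, then return.
      if ((v & 0x19) == 0 &&
          w->compare_exchange_strong(v, v | 0x8, std::memory_order_acquire,
                                     std::memory_order_relaxed)) {
        return;
      }
      LockSlowSketch(w);  // the only call, on the unhappy path
    }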
* Rollback "Mutex: Remove destructor in release build"Dmitry Vyukov2023-10-271-76/+74
| | | | | PiperOrigin-RevId: 577180526 Change-Id: Iec53709456805ca8dc5327669cc0f6c95825d0e9
* Mutex: Remove destructor in release build (Dmitry Vyukov, 2023-10-27, 1 file, -74/+76)
  The Mutex destructor is needed only to clean up debug logging and invariant checking synch events. These are not supposed to be used in production, but the non-empty destructor has costs for production builds. Instead of removing synch events in the destructor, drop all of them if we have accumulated too many. For tests it should not matter (we may only consume a bit more memory). Production builds should be either unaffected (if they don't use debug logging), or use periodic reset of all synch events.
  PiperOrigin-RevId: 577106805
  Change-Id: Icaaf7166b99afcd5dce92b4acd1be661fb72f10b
* absl: requeue waiters as LIFO (Dmitry Vyukov, 2023-10-24, 1 file, -0/+18)
  Currently, if a thread has already blocked on a Mutex but then failed to acquire it, we queue it in FIFO order again. As a result, unlucky threads can suffer bad latency if they are requeued several times. The least we can do for them is to queue them in LIFO order after blocking.
  PiperOrigin-RevId: 576174725
  Change-Id: I9e2a329d34279a26bd1075b42e3217a5dc065f0a
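  An illustrative sketch of the queueing policy (not Abseil's PerThreadSynch list; the structure and names below are invented): a waiter that has already blocked once goes to the front of the wait list instead of the back, so it cannot keep losing its place indefinitely.

    // Hypothetical intrusive wait list.
    struct Waiter {
      Waiter* next = nullptr;
      bool blocked_before = false;  // set after the waiter's first sleep
    };

    void Enqueue(Waiter*& head, Waiter*& tail, Waiter* w) {
      if (w->blocked_before) {
        // Requeue as LIFO: previously blocked waiters jump to the front.
        w->next = head;
        head = w;
        if (tail == nullptr) tail = w;
      } else {
        // First enqueue: plain FIFO order at the back.
        w->next = nullptr;
        if (tail == nullptr) {
          head = tail = w;
        } else {
          tail->next = w;
          tail = w;
        }
      }
    }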
* Mutex: Rollback requeuing waiters as LIFO (Abseil Team, 2023-09-21, 1 file, -18/+0)
  PiperOrigin-RevId: 567415671
  Change-Id: I59bfcb5ac9fbde227a4cdb3b497b0bd5969b0770
* Rollback "absl: speed up Mutex::Lock"Dmitry Vyukov2023-09-201-23/+13
| | | | | | | There are some regressions reported. PiperOrigin-RevId: 567181925 Change-Id: I4ee8a61afd336de7ecb22ec307adb2068932bc8b
* absl: speed up Mutex::[Reader]TryLock (Dmitry Vyukov, 2023-09-20, 1 file, -41/+67)
  Tidy up Mutex::[Reader]TryLock codegen by outlining the slow path and the non-tail function call, and un-unrolling the loop.

  Current codegen: https://gist.githubusercontent.com/dvyukov/a4d353fd71ac873af9332c1340675b60/raw/226537ffa305b25a79ef3a85277fa870fee5191d/gistfile1.txt
  New codegen: https://gist.githubusercontent.com/dvyukov/686a094c5aa357025689764f155e5a29/raw/e3125c1cdb5669fac60faf336e2f60395e29d888/gistfile1.txt

    name                                   old cpu/op   new cpu/op   delta
    BM_TryLock                             18.0ns ± 0%  17.7ns ± 0%  -1.64%   (p=0.016 n=4+5)
    BM_ReaderTryLock/real_time/threads:1   17.9ns ± 0%  17.9ns ± 0%  -0.10%   (p=0.016 n=5+5)
    BM_ReaderTryLock/real_time/threads:72  9.61µs ± 8%  8.42µs ± 7%  -12.37%  (p=0.008 n=5+5)

  PiperOrigin-RevId: 567006472
  Change-Id: Iea0747e71bbf2dc1f00c70a4235203071d795b99
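  A sketch of the outlining pattern (names and bit masks are invented; ABSL_ATTRIBUTE_NOINLINE is the real Abseil attribute macro, everything else is a stand-in): the rare path lives in a separate non-inlined function, so the common TryLock path stays small and call-free.

    #include <atomic>
    #include <cstdint>
    #include "absl/base/attributes.h"

    // Cold path, kept out of line; the real slow path also updates
    // contention bookkeeping and tracing.
    ABSL_ATTRIBUTE_NOINLINE bool TryLockSlowSketch(std::atomic<uint64_t>* w) {
      uint64_t v = w->load(std::memory_order_relaxed);
      return (v & 0x19) == 0 &&
             w->compare_exchange_strong(v, v | 0x8, std::memory_order_acquire,
                                        std::memory_order_relaxed);
    }

    bool TryLockSketch(std::atomic<uint64_t>* w) {
      uint64_t v = w->load(std::memory_order_relaxed);
      if ((v & 0x19) == 0 &&
          w->compare_exchange_strong(v, v | 0x8, std::memory_order_acquire,
                                     std::memory_order_relaxed)) {
        return true;  // uncontended: acquired on the first CAS
      }
      return TryLockSlowSketch(w);  // outlined, non-inlined slow path
    }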
* absl: speed up Mutex::ReaderLock/Unlock (Dmitry Vyukov, 2023-09-20, 1 file, -12/+23)
  Currently ReaderLock/Unlock try the CAS only once. Even if there is moderate contention from other readers only, ReaderLock/Unlock go onto the slow path, which does lots of additional work before retrying the CAS (since there are only readers, the slow-path logic is not really needed for anything). Retry the CAS while there are only readers.

    name                                old cpu/op   new cpu/op   delta
    BM_ReaderLock/real_time/threads:1   17.9ns ± 0%  17.9ns ± 0%  ~        (p=0.071 n=5+5)
    BM_ReaderLock/real_time/threads:72  11.4µs ± 3%  8.4µs ± 4%   -26.24%  (p=0.008 n=5+5)

  PiperOrigin-RevId: 566981511
  Change-Id: I432a3c1d85b84943d0ad4776a34fa5bfcf5b3b8e
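  A hypothetical sketch of "retry while there are only readers" (the bit masks and increment are invented, not the real lock-word layout): instead of falling to the slow path after one failed CAS, keep retrying as long as the word shows reader-only contention, and give up only when a writer or waiter bit appears.

    #include <atomic>
    #include <cstdint>

    constexpr uint64_t kWriterOrWaiterBits = 0x19;  // illustrative
    constexpr uint64_t kReaderIncrement = 0x100;    // illustrative

    // Returns true if the read lock was acquired on the fast path.
    bool ReaderLockFastPath(std::atomic<uint64_t>& word) {
      uint64_t v = word.load(std::memory_order_relaxed);
      while ((v & kWriterOrWaiterBits) == 0) {
        if (word.compare_exchange_weak(v, v + kReaderIncrement,
                                       std::memory_order_acquire,
                                       std::memory_order_relaxed)) {
          return true;
        }
        // A failed CAS reloaded v; the loop re-checks that only readers remain.
      }
      return false;  // writer/waiter present: caller takes the slow path
    }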
* absl: speed up Mutex::Lock (Dmitry Vyukov, 2023-09-18, 1 file, -13/+23)
  Currently Mutex::Lock contains a non-inlined, non-tail call: TryAcquireWithSpinning -> GetMutexGlobals -> LowLevelCallOnce -> init closure. This turns the function into a non-leaf with stack frame allocation and additional register use. Remove this non-tail call to make the function a leaf. Move spin iterations initialization to LockSlow.

  Current Lock happy path:

    00000000001edc20 <absl::Mutex::Lock()>:
    1edc20: 55                push %rbp
    1edc21: 48 89 e5          mov %rsp,%rbp
    1edc24: 53                push %rbx
    1edc25: 50                push %rax
    1edc26: 48 89 fb          mov %rdi,%rbx
    1edc29: 48 8b 07          mov (%rdi),%rax
    1edc2c: a8 19             test $0x19,%al
    1edc2e: 75 0e             jne 1edc3e <absl::Mutex::Lock()+0x1e>
    1edc30: 48 89 c1          mov %rax,%rcx
    1edc33: 48 83 c9 08       or $0x8,%rcx
    1edc37: f0 48 0f b1 0b    lock cmpxchg %rcx,(%rbx)
    1edc3c: 74 42             je 1edc80 <absl::Mutex::Lock()+0x60>
    ... unhappy path ...
    1edc80: 48 83 c4 08       add $0x8,%rsp
    1edc84: 5b                pop %rbx
    1edc85: 5d                pop %rbp
    1edc86: c3                ret

  New Lock happy path:

    00000000001eea80 <absl::Mutex::Lock()>:
    1eea80: 48 8b 07          mov (%rdi),%rax
    1eea83: a8 19             test $0x19,%al
    1eea85: 75 0f             jne 1eea96 <absl::Mutex::Lock()+0x16>
    1eea87: 48 89 c1          mov %rax,%rcx
    1eea8a: 48 83 c9 08       or $0x8,%rcx
    1eea8e: f0 48 0f b1 0f    lock cmpxchg %rcx,(%rdi)
    1eea93: 75 01             jne 1eea96 <absl::Mutex::Lock()+0x16>
    1eea95: c3                ret
    ... unhappy path ...

  PiperOrigin-RevId: 566488042
  Change-Id: I62f854b82a322cfb1d42c34f8ed01b4677693fca
* absl: requeue waiters as LIFO (Dmitry Vyukov, 2023-09-18, 1 file, -0/+18)
  Currently, if a thread has already blocked on a Mutex but then failed to acquire it, we queue it in FIFO order again. As a result, unlucky threads can suffer bad latency if they are requeued several times. The least we can do for them is to queue them in LIFO order after blocking.
  PiperOrigin-RevId: 566478783
  Change-Id: I8bac08325f20ff6ccc2658e04e1847fd4614c653
* absl: remove special case for timed CondVar waits (Dmitry Vyukov, 2023-09-15, 1 file, -21/+4)
  CondVar wait morphing has a special case for timed waits. The code goes back to 2006; there might have been reasons to do this back then, but now it does not seem to be necessary. Wait morphing should work just fine after timed CondVar waits. Remove the special case and simplify the code.
  PiperOrigin-RevId: 565798838
  Change-Id: I4e4d61ae7ebd521f5c32dfc673e57a0c245e7cfb
* absl: optimize Condition checks in Mutex code (Dmitry Vyukov, 2023-09-15, 1 file, -25/+9)
  1. Remove special handling of Condition::kTrue. Condition::kTrue is used very rarely (frequently its uses even indicate confusion and bugs), but we pay a few additional branches for kTrue on all Condition operations. Remove that special handling and simplify the logic.

  2. Remove the known_false condition in Mutex code. Checking the known_false condition only causes slowdown because:
     1. We already built a skip list with equivalent conditions (and keep improving it on every Skip call). And when we built the skip list, we used the more capable GuaranteedEqual function (it does not just check for equality of pointers, but also for equality of function/arg).
     2. Condition pointers are rarely equal even for equivalent conditions because temporary Condition objects are usually created on the stack.
     We could call GuaranteedEqual(cond, known_false) instead of cond == known_false, but that slows things down even more (see point 1). So remove the known_false optimization.

  Benchmark results for this and the previous change:

    name                        old cpu/op   new cpu/op   delta
    BM_ConditionWaiters/0/1     36.0ns ± 0%  34.9ns ± 0%  -3.02%   (p=0.008 n=5+5)
    BM_ConditionWaiters/1/1     36.0ns ± 0%  34.9ns ± 0%  -2.98%   (p=0.008 n=5+5)
    BM_ConditionWaiters/2/1     35.9ns ± 0%  34.9ns ± 0%  -3.03%   (p=0.016 n=5+4)
    BM_ConditionWaiters/0/8     55.5ns ± 5%  49.8ns ± 3%  -10.33%  (p=0.008 n=5+5)
    BM_ConditionWaiters/1/8     36.2ns ± 0%  35.2ns ± 0%  -2.90%   (p=0.016 n=5+4)
    BM_ConditionWaiters/2/8     53.2ns ± 7%  48.3ns ± 7%  ~        (p=0.056 n=5+5)
    BM_ConditionWaiters/0/64    295ns ± 1%   254ns ± 2%   -13.73%  (p=0.008 n=5+5)
    BM_ConditionWaiters/1/64    36.2ns ± 0%  35.2ns ± 0%  -2.85%   (p=0.008 n=5+5)
    BM_ConditionWaiters/2/64    290ns ± 6%   250ns ± 4%   -13.68%  (p=0.008 n=5+5)
    BM_ConditionWaiters/0/512   5.50µs ±12%  4.99µs ± 8%  ~        (p=0.056 n=5+5)
    BM_ConditionWaiters/1/512   36.7ns ± 3%  35.2ns ± 0%  -4.10%   (p=0.008 n=5+5)
    BM_ConditionWaiters/2/512   4.44µs ±13%  4.01µs ± 3%  -9.74%   (p=0.008 n=5+5)
    BM_ConditionWaiters/0/4096  104µs ± 6%   101µs ± 3%   ~        (p=0.548 n=5+5)
    BM_ConditionWaiters/1/4096  36.2ns ± 0%  35.1ns ± 0%  -3.03%   (p=0.008 n=5+5)
    BM_ConditionWaiters/2/4096  90.4µs ± 5%  85.3µs ± 7%  ~        (p=0.222 n=5+5)
    BM_ConditionWaiters/0/8192  384µs ± 5%   367µs ± 7%   ~        (p=0.222 n=5+5)
    BM_ConditionWaiters/1/8192  36.2ns ± 0%  35.2ns ± 0%  -2.84%   (p=0.008 n=5+5)
    BM_ConditionWaiters/2/8192  363µs ± 3%   316µs ± 7%   -12.84%  (p=0.008 n=5+5)

  PiperOrigin-RevId: 565669535
  Change-Id: I5180c4a787933d2ce477b004a111853753304684
* Rollback (Abseil Team, 2023-09-08, 1 file, -9/+25)
  Rolls back:
    absl: remove special handling of Condition::kTrue
    absl: remove known_false condition in Mutex code
  There are some test breakages.
  PiperOrigin-RevId: 563751370
  Change-Id: Ie14dc799e0a0d286a7e1b47f0a9bbe59dfb23f70
* absl: remove leftovers of CondVar support for other mutexes (Abseil Team, 2023-09-08, 1 file, -19/+13)
  When CondVar accepted generic non-Mutex mutexes, the Mutex pointer could be nullptr. That support has since been removed, but we still have some lingering checks for Mutex* == nullptr. Remove them.
  PiperOrigin-RevId: 563740239
  Change-Id: Ib744e0b991f411dd8dba1b0da6477c13832e0f65
* absl: inline and de-dup Mutex::Await/LockWhen/CondVar::Wait (Abseil Team, 2023-09-08, 1 file, -99/+12)
  Mutex::Await/LockWhen/CondVar::Wait duplicate code and cause additional calls at runtime and code bloat. Inline the thin wrappers that just convert argument types, and add a single de-duped implementation for these methods. This reduces code size, shaves 55K off mutex_test in a release build, and should make things marginally faster.

    $ nm -nS mutex_test | egrep "(_ZN4absl5Mutex.*(Await|LockWhen))|(_ZN4absl7CondVar.*Wait)"

  before:

    00000000000912c0 00000000000001a8 T _ZN4absl7CondVar4WaitEPNS_5MutexE
    00000000000988c0 0000000000000c36 T _ZN4absl7CondVar16WaitWithDeadlineEPNS_5MutexENS_4TimeE
    000000000009a6e0 0000000000000041 T _ZN4absl5Mutex19LockWhenWithTimeoutERKNS_9ConditionENS_8DurationE
    00000000000a28c0 0000000000000779 T _ZN4absl5Mutex17AwaitWithDeadlineERKNS_9ConditionENS_4TimeE
    00000000000cf4e0 0000000000000011 T _ZN4absl5Mutex8LockWhenERKNS_9ConditionE
    00000000000cf500 0000000000000041 T _ZN4absl5Mutex20LockWhenWithDeadlineERKNS_9ConditionENS_4TimeE
    00000000000cf560 0000000000000011 T _ZN4absl5Mutex14ReaderLockWhenERKNS_9ConditionE
    00000000000cf580 0000000000000041 T _ZN4absl5Mutex26ReaderLockWhenWithDeadlineERKNS_9ConditionENS_4TimeE
    00000000000cf5e0 0000000000000766 T _ZN4absl5Mutex5AwaitERKNS_9ConditionE
    00000000000cfd60 00000000000007b5 T _ZN4absl5Mutex16AwaitWithTimeoutERKNS_9ConditionENS_8DurationE
    00000000000d0700 00000000000003cf T _ZN4absl7CondVar15WaitWithTimeoutEPNS_5MutexENS_8DurationE
    000000000011c280 0000000000000041 T _ZN4absl5Mutex25ReaderLockWhenWithTimeoutERKNS_9ConditionENS_8DurationE

  after:

    000000000009c300 00000000000007ed T _ZN4absl7CondVar10WaitCommonEPNS_5MutexENS_24synchronization_internal13KernelTimeoutE
    00000000000a03c0 00000000000006fe T _ZN4absl5Mutex11AwaitCommonERKNS_9ConditionENS_24synchronization_internal13KernelTimeoutE
    000000000011ae00 0000000000000025 T _ZN4absl5Mutex14LockWhenCommonERKNS_9ConditionENS_24synchronization_internal13KernelTimeoutEb

  PiperOrigin-RevId: 563729364
  Change-Id: Ic6b43761f76719c01e03d43cc0e0c419e41a85c1
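  A hypothetical sketch of the de-dup shape (the class and helper type below are invented stand-ins, not the Abseil API; the real code forwards to AwaitCommon/LockWhenCommon/WaitCommon taking a synchronization_internal::KernelTimeout, as the symbol dump above shows): the public overloads become thin inline wrappers that only normalize their timeout argument and forward to one shared out-of-line implementation.

    // Stand-in for the internal timeout type (declarations only; this is a sketch).
    struct TimeoutSketch {
      static TimeoutSketch Never();
      static TimeoutSketch FromDurationSeconds(double s);
      static TimeoutSketch FromDeadlineSeconds(double unix_time);
    };

    class MutexSketch {
     public:
      bool Await(const bool& cond) {
        return AwaitCommon(cond, TimeoutSketch::Never());
      }
      bool AwaitWithTimeout(const bool& cond, double timeout_s) {
        return AwaitCommon(cond, TimeoutSketch::FromDurationSeconds(timeout_s));
      }
      bool AwaitWithDeadline(const bool& cond, double deadline_s) {
        return AwaitCommon(cond, TimeoutSketch::FromDeadlineSeconds(deadline_s));
      }

     private:
      // Single out-of-line implementation shared by all of the wrappers above.
      bool AwaitCommon(const bool& cond, TimeoutSketch t);
    };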
* absl: remove known_false condition in Mutex code (Abseil Team, 2023-09-08, 1 file, -8/+3)
  Checking the known_false condition only causes slowdown because:
  1. We already built a skip list with equivalent conditions (and keep improving it on every Skip call). And when we built the skip list, we used the more capable GuaranteedEqual function (it does not just check for equality of pointers, but also for equality of function/arg).
  2. Condition pointers are rarely equal even for equivalent conditions because temporary Condition objects are usually created on the stack.
  We could call GuaranteedEqual(cond, known_false) instead of cond == known_false, but that slows things down even more (see point 1). So remove the known_false optimization.

  Benchmark results for this and the previous change:

    name                        old cpu/op   new cpu/op   delta
    BM_ConditionWaiters/0/1     36.0ns ± 0%  34.9ns ± 0%  -3.02%   (p=0.008 n=5+5)
    BM_ConditionWaiters/1/1     36.0ns ± 0%  34.9ns ± 0%  -2.98%   (p=0.008 n=5+5)
    BM_ConditionWaiters/2/1     35.9ns ± 0%  34.9ns ± 0%  -3.03%   (p=0.016 n=5+4)
    BM_ConditionWaiters/0/8     55.5ns ± 5%  49.8ns ± 3%  -10.33%  (p=0.008 n=5+5)
    BM_ConditionWaiters/1/8     36.2ns ± 0%  35.2ns ± 0%  -2.90%   (p=0.016 n=5+4)
    BM_ConditionWaiters/2/8     53.2ns ± 7%  48.3ns ± 7%  ~        (p=0.056 n=5+5)
    BM_ConditionWaiters/0/64    295ns ± 1%   254ns ± 2%   -13.73%  (p=0.008 n=5+5)
    BM_ConditionWaiters/1/64    36.2ns ± 0%  35.2ns ± 0%  -2.85%   (p=0.008 n=5+5)
    BM_ConditionWaiters/2/64    290ns ± 6%   250ns ± 4%   -13.68%  (p=0.008 n=5+5)
    BM_ConditionWaiters/0/512   5.50µs ±12%  4.99µs ± 8%  ~        (p=0.056 n=5+5)
    BM_ConditionWaiters/1/512   36.7ns ± 3%  35.2ns ± 0%  -4.10%   (p=0.008 n=5+5)
    BM_ConditionWaiters/2/512   4.44µs ±13%  4.01µs ± 3%  -9.74%   (p=0.008 n=5+5)
    BM_ConditionWaiters/0/4096  104µs ± 6%   101µs ± 3%   ~        (p=0.548 n=5+5)
    BM_ConditionWaiters/1/4096  36.2ns ± 0%  35.1ns ± 0%  -3.03%   (p=0.008 n=5+5)
    BM_ConditionWaiters/2/4096  90.4µs ± 5%  85.3µs ± 7%  ~        (p=0.222 n=5+5)
    BM_ConditionWaiters/0/8192  384µs ± 5%   367µs ± 7%   ~        (p=0.222 n=5+5)
    BM_ConditionWaiters/1/8192  36.2ns ± 0%  35.2ns ± 0%  -2.84%   (p=0.008 n=5+5)
    BM_ConditionWaiters/2/8192  363µs ± 3%   316µs ± 7%   -12.84%  (p=0.008 n=5+5)

  PiperOrigin-RevId: 563717887
  Change-Id: I9a62670628510d764a4f2f88a047abb8f85009e2
* absl: remove special handling of Condition::kTrue (Abseil Team, 2023-09-08, 1 file, -18/+7)
  Condition::kTrue is used very rarely (frequently its uses even indicate confusion and bugs), but we pay a few additional branches for kTrue on all Condition operations. Remove that special handling and simplify the logic.
  PiperOrigin-RevId: 563691160
  Change-Id: I76125adde4872489da069dd9c894ed73a65d1d83
* absl: fix a priority bug in CondVar wait morphing (Abseil Team, 2023-08-29, 1 file, -16/+21)
  Enqueue updates the priority of the queued thread. It was assumed that the queued thread is the current thread, but that's not the case in CondVar wait morphing, where we requeue an existing CondVar waiter on the Mutex. As a result, one thread can falsely get the priority of another thread. Fix this by not updating the priority in this case, and make the assumption explicit and checked.
  PiperOrigin-RevId: 561249402
  Change-Id: I9476c047757090b893a88a2839b795b85fe220ad
* absl: fix lint errors in Mutex (Abseil Team, 2023-06-20, 1 file, -0/+1)
  Currently the linter warns on all changes: missing #include <cstdlib> for 'std::atexit', and single-argument constructors must be marked explicit to avoid unintentional implicit conversions. Fix that.
  PiperOrigin-RevId: 542135136
  Change-Id: Ic86649de6baef7f2de71f45875bb66bd730bf6e1
* absl: cosmetic changes for Mutex (Abseil Team, 2023-06-20, 1 file, -21/+17)
  A few purely cosmetic changes:
  - remove unused headers
  - add a using for CycleClock since it's used multiple times
  - restructure GetMutexGlobals to be more consistent
  PiperOrigin-RevId: 542002120
  Change-Id: I117faae05cb8224041f7e3771999f3a35bdf4aef
* absl: reformat Mutex-related files (Abseil Team, 2023-06-20, 1 file, -343/+332)
  Reformat Mutex-related files so that incremental formatting changes don't distract during review of logical changes. These files are subtle and any unnecessary diffs make reviews harder. No changes besides running clang-format.
  PiperOrigin-RevId: 541981737
  Change-Id: I41cccb7a97158c78d17adaff6fe553c2c9c2b9ed
* absl: fix Mutex writer starvation related to uninit priority (Abseil Team, 2023-06-16, 1 file, -22/+29)
  Currently, when we queue the first thread, we don't initialize its priority. Subsequent queued threads initialize their priority, but they compare it against the first thread's priority, which is uninitialized. Thus the order can be wrong, which can lead to complete false starvation in some corner cases. On Linux the default priority is 0, which matches the uninitialized value, so the problem is harder to spot there (it is only possible if explicit thread priorities are used). But on Darwin the default priority is 31, so the first thread falsely looks like it has lower priority than subsequently queued threads. The added test exposes the problem on Darwin. Always initialize the priority before queuing threads.
  PiperOrigin-RevId: 540814133
  Change-Id: I513ce1493a67afe77d3e92fb49000b046b42a9f2
* absl: move comment in mutex.cc to where it belongs (Abseil Team, 2023-06-15, 1 file, -6/+6)
  Move the comment that relates to kMuDesig close to the kMuDesig definition. Currently it's placed in between unrelated flags. NFC.
  PiperOrigin-RevId: 540792401
  Change-Id: I5f6a928cd9e01664812b2a7c3d9eb087c0723d7f
* Mutex: Remove the deprecated absl::RegisterSymbolizer() hook (Derek Mauro, 2023-05-15, 1 file, -9/+2)
  absl::RegisterSymbolizer() has been deprecated for 5 years. It is being removed following our compatibility policy. <https://abseil.io/about/compatibility>
  PiperOrigin-RevId: 532174866
  Change-Id: Id5c3b86698e389099d3d707c4e57f30f1f155d2e
* Synchronization: Add support for true relative timeouts using monotonic clocks on Linux when the implementation uses futexes (Derek Mauro, 2023-03-14, 1 file, -19/+26)
  After this change, when synchronization methods that wait are passed an absl::Duration to limit the wait time, these methods will wait for that interval, even if the system clock is changed (subject to any limitations with how CLOCK_MONOTONIC keeps track of time). In other words, an observer measuring the time with a stop watch will now see the correct interval, even if the system clock is changed. Previously, the duration was added to the current time, and methods would wait until that time was reached on the possibly changed realtime system clock.

  The behavior of the synchronization methods that take an absl::Time is unchanged. These methods always wait until the absolute point in time is reached and respect changes to the system clock. In other words, an observer will always see the timeout occur when a wall clock reaches that time, even if the clock is manipulated externally.

  Note: ABSL_PREDICT_FALSE was removed from the error case in Futex as timeouts are handled by this case, and timeouts are part of normal operation.
  PiperOrigin-RevId: 516534869
  Change-Id: Ib70b83e4be3f9e3f1727646975a21a1d30acb242
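  A minimal Linux-only sketch of the distinction described above (illustrative, not the Abseil implementation): a plain FUTEX_WAIT takes a relative timespec that the kernel measures against CLOCK_MONOTONIC, so the wait lasts the requested interval even if the realtime clock is stepped, whereas an absolute deadline expressed against CLOCK_REALTIME would move with the system clock.

    #include <linux/futex.h>
    #include <sys/syscall.h>
    #include <unistd.h>
    #include <atomic>
    #include <cerrno>
    #include <cstdint>
    #include <ctime>

    // Waits on *word while it equals `expected`, for at most `timeout_ns`.
    // Returns 0 if woken, or the errno (e.g. ETIMEDOUT) otherwise.
    int FutexWaitRelative(std::atomic<int32_t>* word, int32_t expected,
                          long timeout_ns) {
      timespec ts;
      ts.tv_sec = timeout_ns / 1000000000;
      ts.tv_nsec = timeout_ns % 1000000000;
      // FUTEX_WAIT interprets `ts` as a relative interval on CLOCK_MONOTONIC.
      long rc = syscall(SYS_futex, reinterpret_cast<int32_t*>(word),
                        FUTEX_WAIT_PRIVATE, expected, &ts, nullptr, 0);
      return rc == 0 ? 0 : errno;
    }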
* Rollback Mutex relative timeout support because of internal incompatibility (Abseil Team, 2023-03-09, 1 file, -26/+19)
  PiperOrigin-RevId: 515427893
  Change-Id: I89e8756fcf400459b0226d14785c6511ad3e380b
* Synchronization: Add support for true relative timeouts using monotonic clocks on Linux when the implementation uses futexes (Derek Mauro, 2023-03-08, 1 file, -19/+26)
  After this change, when synchronization methods that wait are passed an absl::Duration to limit the wait time, these methods will wait for that interval, even if the system clock is changed (subject to any limitations with how CLOCK_MONOTONIC keeps track of time). In other words, an observer measuring the time with a stop watch will now see the correct interval, even if the system clock is changed. Previously, the duration was added to the current time, and methods would wait until that time was reached on the possibly changed realtime system clock.

  The behavior of the synchronization methods that take an absl::Time is unchanged. These methods always wait until the absolute point in time is reached and respect changes to the system clock. In other words, an observer will always see the timeout occur when a wall clock reaches that time, even if the clock is manipulated externally.

  Note: ABSL_PREDICT_FALSE was removed from the error case in Futex as timeouts are handled by this case, and timeouts are part of normal operation.
  PiperOrigin-RevId: 515043788
  Change-Id: I151127b588065bd1316273f36d7c946545c2c892
* Rollback because of internal incompatibility. (Abseil Team, 2023-02-28, 1 file, -26/+19)
  PiperOrigin-RevId: 512979517
  Change-Id: I7fe38ed246e42e6f8eb322e15c3b299215163168
* Fix out of bounds array access when deadlock detector finds exceptionally large cycles. (Abseil Team, 2023-02-22, 1 file, -1/+4)
  PiperOrigin-RevId: 511536497
  Change-Id: If70a1c72ef5f7cbb4a80100c4edff459373a5d55
* Synchronization: Add support for true relative timeouts using monotonic clocks on Linux when the implementation uses futexes (Derek Mauro, 2023-02-17, 1 file, -19/+26)
  After this change, when synchronization methods that wait are passed an absl::Duration to limit the wait time, these methods will wait for that interval, even if the system clock is changed (subject to any limitations with how CLOCK_MONOTONIC keeps track of time). In other words, an observer measuring the time with a stop watch will now see the correct interval, even if the system clock is changed. Previously, the duration was added to the current time, and methods would wait until that time was reached on the possibly changed realtime system clock.

  The behavior of the synchronization methods that take an absl::Time is unchanged. These methods always wait until the absolute point in time is reached and respect changes to the system clock. In other words, an observer will always see the timeout occur when a wall clock reaches that time, even if the clock is manipulated externally.

  Note: ABSL_PREDICT_FALSE was removed from the error case in Futex as timeouts are handled by this case, and timeouts are part of normal operation.
  PiperOrigin-RevId: 510405347
  Change-Id: I0b3ea390de97014cfa353079ae2e0c1c637aca69
* Minor formatting: Fix misplaced space. (Abseil Team, 2023-01-19, 1 file, -1/+1)
  PiperOrigin-RevId: 503110285
  Change-Id: I59c48b1486386e2db8fb62cf8bfa1a691865f704
* Clean up the XRay annotation leftover on mutex. (Abseil Team, 2022-12-27, 1 file, -7/+7)
  PiperOrigin-RevId: 497998566
  Change-Id: I8d43311e280a5ea46c42abed55be62cd70d4d54a
* Replace ABSL_INTERNAL_UNREACHABLE with ABSL_UNREACHABLE() (Derek Mauro, 2022-12-22, 1 file, -2/+3)
  PiperOrigin-RevId: 497197704
  Change-Id: I3865a874e04f6f55a1ab374b03451535a86bc5a3
* Remove static initializer from mutex.h. (Abseil Team, 2022-11-30, 1 file, -2/+1)
  PiperOrigin-RevId: 491915718
  Change-Id: I7469601857b5a3506163518d29f49792f3053b34
* absl: fix Mutex TSan annotations (Abseil Team, 2022-11-28, 1 file, -3/+8)
  TSan misses synchronization around passing PerThreadSynch between threads since it happens inside of the Mutex code (which we mostly ignore), so we need to ignore all accesses to the object.
  PiperOrigin-RevId: 491297912
  Change-Id: I13ea2015dee5c1a3fc4315c85112902ccffccc45
* Update Condition to allocate 24 bytes for MSVC platform pointers to methods. (Abseil Team, 2022-11-16, 1 file, -1/+1)
  PiperOrigin-RevId: 488986942
  Change-Id: I2babb7ea30d60c544f55ca9ed02d9aed23051a12
* Force a conservative allocation for pointers to methods in Condition objects. (Abseil Team, 2022-11-07, 1 file, -14/+24)
  In order for Condition to work on Microsoft platforms, it has to store pointers to methods that are larger than we usually expect. MSVC pointers to methods from class hierarchies that employ multiple inheritance or virtual inheritance are strictly larger than pointers to methods in class hierarchies that only employ single inheritance.

  This change introduces an opaque declaration of a class that is never defined. This declaration is used to calculate the size of the Condition method-pointer allocation. Because the declaration leaves the inheritance model unspecified, the compiler is forced to use a conservatively large allocation, which thereby accommodates all method-pointer sizes.

  Because the `method_` and `function_` callbacks are only populated under mutually exclusive conditions, they can be allowed to take up the same space in the Condition object. This change combines the `method_` and `function_` fields and renames the new field to `callback_`. The constructor logic is updated to reflect the new field.
  PiperOrigin-RevId: 486701312
  Change-Id: If06832cc26f27d91e295183e44dc29440af5f9db
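  A sketch of the trick described above (names are illustrative, not the Abseil code): taking a pointer-to-member of a class that has only been forward-declared forces MSVC to assume the most general inheritance model, so the member-pointer representation is as large as it can ever be; sizing the callback storage from it accommodates methods from single, multiple, and virtual inheritance hierarchies alike. On ABIs where all member pointers share one size (e.g. the Itanium ABI), the computed size simply matches the usual one.

    #include <cstddef>

    // Never defined, so its inheritance model is unspecified at this point.
    class UnknownInheritance;

    using GenericMethodPtr = bool (UnknownInheritance::*)();

    // Storage large enough for any pointer-to-member-function on this platform.
    inline constexpr std::size_t kMethodPtrStorage = sizeof(GenericMethodPtr);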
* Mutex: Fix stall on single-core systems (Abseil Team, 2022-10-24, 1 file, -8/+26)
  On single-core systems, a thread could be preempted while holding an absl::Mutex, or even worse, the spin lock. If a FIFO thread wakes up and tries to acquire this lock, it might not be able to yield() to the sleeping thread. Within MutexDelay(), a yield() and a sleep(10us) are used to yield the CPU. The yield() does nothing if the calling thread holds the highest priority in the system, and the 10us sleep() may not be able to reach the scheduler either, if the system is slow enough.

  This code path is known to be reachable in the following scenarios:
  - a FIFO thread calls LockSlowLoop() with the spin lock held by a normal thread
  - a FIFO thread calls LockWhen*() with the Mutex held by a normal thread for a long time
  - a FIFO thread calls Await*() and releases the Mutex to be held by a normal thread for a long time

  This CL adds a mutex global for the sleep time and sets it using the return time of a yield() call. Yield() must reach the scheduler even when it fails to yield to anyone, and would allow sleep() to do the same. A small constant multiplier (5) is also applied to overcome uncontrollable factors in the runtime and help sleep() consistently yield to another thread. Upper and lower bounds for the sleep time are also enforced to block any unreasonable values.
  PiperOrigin-RevId: 483459711
  Change-Id: I14efadbadaf9244a2462f377b515147bda651c89
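  A hypothetical sketch of that idea (the function name, multiplier, and bounds below are illustrative, not the Abseil globals): time how long a yield() actually takes -- even a "failed" yield enters the scheduler -- and derive the subsequent sleep interval from it, scaled and clamped, so that on a slow single-core system the sleep is long enough to let the lock-holding thread run.

    #include <algorithm>
    #include <thread>
    #include "absl/time/clock.h"
    #include "absl/time/time.h"

    absl::Duration NextMutexDelay() {
      absl::Time start = absl::Now();
      std::this_thread::yield();  // may not switch threads, but does reach the scheduler
      absl::Duration yield_cost = absl::Now() - start;
      // Scale up for slack, then clamp to keep the value reasonable.
      return std::clamp(5 * yield_cost,
                        absl::Microseconds(10),   // lower bound
                        absl::Milliseconds(1));   // upper bound
    }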
* Changes mutex unlock profiling (Abseil Team, 2022-10-07, 1 file, -6/+10)
  PiperOrigin-RevId: 479667897
  Change-Id: I6085df8bfcfb009806230f8d71b576a1371a4d1f
* Fix "unsafe narrowing" warnings in absl, 9/n.Abseil Team2022-09-081-2/+2
| | | | | | | | | | | | | | | Addresses failures with the following, in some files: -Wshorten-64-to-32 -Wimplicit-int-conversion -Wsign-compare -Wsign-conversion -Wtautological-unsigned-zero-compare (This specific CL focuses on miscellaneous non-test source files.) Bug: chromium:1292951 PiperOrigin-RevId: 473054605 Change-Id: Ifd7b24966613ca915511a3a607095508068200b8
* Changes mutex profiling (Abseil Team, 2022-09-01, 1 file, -1/+4)
  PiperOrigin-RevId: 471545981
  Change-Id: I4d2c8b6d4f1e58976915bda78a77178b8bf80da8
* Fix "unsafe narrowing" warnings in absl, 3/n.Abseil Team2022-08-041-12/+16
| | | | | | | | | | | | | | | Addresses failures with the following, in some files: -Wshorten-64-to-32 -Wimplicit-int-conversion -Wsign-compare -Wsign-conversion -Wtautological-unsigned-zero-compare (This specific CL focuses on .cc files in dirs n-t, except string.) Bug: chromium:1292951 PiperOrigin-RevId: 465287204 Change-Id: I0fe98ff78bf3c08d86992019eb626755f8b6803e
* Merge pull request #1223 from ElijahPepe:fix/implement-snprintf-safely (Copybara-Service, 2022-07-27, 1 file, -1/+6)
  PiperOrigin-RevId: 463581990
  Change-Id: I47359d4d2d2fcd2365b5ff9a5c3b61b5751e4ed2

  Commits merged from the pull request:

  * fix: properly create the b integer (Elijah Conners, 2022-07-21, 1 file, -1/+1)
    Signed-off-by: Elijah Conners <business@elijahpepe.com>

  * fix(mutex): safely call snprintf (Elijah Conners, 2022-07-19, 1 file, -1/+5)
    In the PostSynchEvent() function, the pos integer uses snprintf in a fundamentally unsafe way: since the return value of snprintf is the number of characters that would have been written to the buffer, an operation that reaches the end of the buffer and discards characters returns a value greater than the remaining buffer size, so the return value must be checked against the buffer's size.
    Signed-off-by: Elijah Conners <business@elijahpepe.com>
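  A minimal sketch of the hazard and the fix (the helper below is invented, not the Abseil code): snprintf reports how many characters it would have written, so on truncation the return value exceeds the remaining space; clamping before advancing the write position keeps later pointer arithmetic inside the buffer.

    #include <algorithm>
    #include <cstddef>
    #include <cstdio>

    // Appends " name=value" at buf+pos, never writing past `size` bytes.
    void AppendSafely(char* buf, std::size_t size, std::size_t& pos,
                      const char* name, int value) {
      if (pos >= size) return;
      int n = std::snprintf(buf + pos, size - pos, " %s=%d", name, value);
      if (n < 0) return;  // encoding error: nothing was written
      // On truncation, n >= size - pos even though only size - pos - 1
      // characters (plus the NUL) were stored; clamp before advancing.
      pos += std::min(static_cast<std::size_t>(n), size - pos - 1);
    }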
* absl: fix live-lock in CondVar (Abseil Team, 2022-05-17, 1 file, -0/+17)
  CondVar::WaitWithTimeout can live-lock when the timeout races with Signal/SignalAll and the signaling thread is not scheduled due to priorities, affinity, or other scheduler artifacts. This could lead to stalls of up to tens of seconds in some cases.
  PiperOrigin-RevId: 449159670
  Change-Id: I64bbd277c1f91964cfba3306ba8a80eeadf85f64
* Fix typo: "a the condition" -> "a condition". (Abseil Team, 2022-04-22, 1 file, -3/+3)
  PiperOrigin-RevId: 443723710
  Change-Id: Ic39b0cf2b289efa9cd9434616949dd08a1a35117