MUTEX(9)                  Kernel Developer's Manual                   MUTEX(9)

NAME
     mutex, mutex_init, mutex_destroy, mutex_enter, mutex_exit,
     mutex_ownable, mutex_owned, mutex_spin_enter, mutex_spin_exit,
     mutex_tryenter - mutual exclusion primitives

SYNOPSIS
     #include <sys/mutex.h>

     void
     mutex_init(kmutex_t *mtx, kmutex_type_t type, int ipl);

     void
     mutex_destroy(kmutex_t *mtx);

     void
     mutex_enter(kmutex_t *mtx);

     void
     mutex_exit(kmutex_t *mtx);

     int
     mutex_ownable(kmutex_t *mtx);

     int
     mutex_owned(kmutex_t *mtx);

     void
     mutex_spin_enter(kmutex_t *mtx);

     void
     mutex_spin_exit(kmutex_t *mtx);

     int
     mutex_tryenter(kmutex_t *mtx);

     options DIAGNOSTIC
     options LOCKDEBUG

DESCRIPTION
     Mutexes are used in the kernel to implement mutual exclusion among LWPs
     (lightweight processes) and interrupt handlers.

     The kmutex_t type provides storage for the mutex object.  This should be
     treated as an opaque object and not examined directly by consumers.

     Mutexes replace the spl(9) system traditionally used to provide
     synchronization between interrupt handlers and LWPs.

OPTIONS
     The following kernel options have an effect on mutex operations:

     options DIAGNOSTIC
            Kernels compiled with the DIAGNOSTIC option perform basic sanity
            checks on mutex operations.

     options LOCKDEBUG
            Kernels compiled with the LOCKDEBUG option perform potentially
            CPU intensive sanity checks on mutex operations.

FUNCTIONS
     mutex_init(mtx, type, ipl)
            Dynamically initialize a mutex for use.  No other operations can
            be performed on a mutex until it has been initialized.  Once
            initialized, all types of mutex are manipulated using the same
            interface.  Note that mutex_init() may block in order to allocate
            memory.

            The type argument must be given as MUTEX_DEFAULT.  Other
            constants are defined but are for low-level system use and are
            not an endorsed, stable part of the interface.

            The type of mutex returned depends on the ipl argument:

            IPL_NONE, or one of the IPL_SOFT* constants
                   An adaptive mutex will be returned.  Adaptive mutexes
                   provide mutual exclusion between LWPs, and between LWPs
                   and soft interrupt handlers.  Adaptive mutexes cannot be
                   acquired from a hardware interrupt handler.  An LWP may
                   either sleep or busy-wait when attempting to acquire an
                   adaptive mutex that is already held.

            IPL_VM, IPL_SCHED, IPL_HIGH
                   A spin mutex will be returned.  Spin mutexes provide
                   mutual exclusion between LWPs, and between LWPs and
                   interrupt handlers.  The ipl argument is used to pass a
                   system interrupt priority level (IPL) that will block all
                   interrupt handlers that may try to acquire the mutex.
                   LWPs that own spin mutexes may not sleep, and therefore
                   must not try to acquire adaptive mutexes or other sleep
                   locks.  A processor will always busy-wait when attempting
                   to acquire a spin mutex that is already held.

                   Note: Releasing a spin mutex may not lower the IPL to what
                   it was when the mutex was entered.  If other spin mutexes
                   are held, the IPL will not be lowered until the last one
                   is released.  This is usually not a problem because spin
                   mutexes should be held only for very short durations
                   anyway, so blocking higher-priority interrupts a little
                   longer doesn't hurt much.  But it interferes with writing
                   assertions that the IPL is no higher than a specified
                   level.

            See spl(9) for further information on interrupt priority levels
            (IPLs).

     mutex_destroy(mtx)
            Release resources used by a mutex.  The mutex may not be used
            after it has been destroyed.  mutex_destroy() may block in order
            to free memory.
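     As an illustration, the following sketch shows the typical lifecycle of
     an adaptive mutex protecting per-driver state, using mutex_init(),
     mutex_destroy(), and the mutex_enter()/mutex_exit() pair described
     below.  The foo_softc structure and the foo_*() functions are
     hypothetical names used only for this example, which omits the other
     kernel headers a real driver would include.

            #include <sys/mutex.h>

            struct foo_softc {
                    kmutex_t        sc_lock;        /* protects sc_count */
                    unsigned int    sc_count;
            };

            static void
            foo_attach(struct foo_softc *sc)
            {
                    /* Adaptive mutex: shared by LWPs and soft interrupts. */
                    mutex_init(&sc->sc_lock, MUTEX_DEFAULT, IPL_NONE);
                    sc->sc_count = 0;
            }

            static void
            foo_bump(struct foo_softc *sc)
            {
                    mutex_enter(&sc->sc_lock);
                    sc->sc_count++;         /* touched only under sc_lock */
                    mutex_exit(&sc->sc_lock);
            }

            static void
            foo_detach(struct foo_softc *sc)
            {
                    /* The mutex may not be used once it has been destroyed. */
                    mutex_destroy(&sc->sc_lock);
            }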
     mutex_enter(mtx)
            Acquire a mutex.  If the mutex is already held, the caller will
            block and not return until the mutex is acquired.

            All loads and stores after mutex_enter() will not be reordered
            before it or served from a prior cache, and hence will happen
            after any prior mutex_exit() to release the mutex, even on
            another CPU or in an interrupt.  Thus, there is a global total
            ordering on all loads and stores under the same mutex.

            Mutexes and other types of locks must always be acquired in a
            consistent order with respect to each other.  Otherwise, the
            potential for system deadlock exists.  Adaptive mutexes and other
            types of lock that can sleep may not be acquired while a spin
            mutex is held by the caller.

            When acquiring a spin mutex, the IPL of the current CPU will be
            raised to the level set in mutex_init() if it is not already
            equal or higher.

     mutex_exit(mtx)
            Release a mutex.  The mutex must have been previously acquired by
            the caller.  Mutexes may be released out of order as needed.

            All loads and stores before mutex_exit() will not be reordered
            after it or delayed in a write buffer, and hence will happen
            before any subsequent mutex_enter() to acquire the mutex, even on
            another CPU or in an interrupt.  Thus, there is a global total
            ordering on all loads and stores under the same mutex.

     mutex_ownable(mtx)
            When compiled with LOCKDEBUG, ensure that the current process can
            successfully acquire mtx.  If mtx is already owned by the current
            process, the system will panic with a "locking against myself"
            error.  This function is needed because mutex_owned() does not
            differentiate if a spin mutex is owned by the current process or
            by another process.  mutex_ownable() is reasonably heavyweight,
            and should only be used with KDASSERT(9).

     mutex_owned(mtx)
            For adaptive mutexes, return non-zero if the current LWP holds
            the mutex.  For spin mutexes, return non-zero if the mutex is
            held, potentially by the current processor.  Otherwise, return
            zero.

            mutex_owned() is provided for making diagnostic checks to verify
            that a lock is held.  For example:

                   KASSERT(mutex_owned(&driver_lock));

            It should not be used to make locking decisions at run time.  For
            spin mutexes, it must not be used to verify that a lock is not
            held.

     mutex_spin_enter(mtx)
            Equivalent to mutex_enter(), but may only be used when it is
            known that mtx is a spin mutex.  Implies the same memory ordering
            as mutex_enter().  On some architectures, this can substantially
            reduce the cost of acquiring a spin mutex.

     mutex_spin_exit(mtx)
            Equivalent to mutex_exit(), but may only be used when it is known
            that mtx is a spin mutex.  Implies the same memory ordering as
            mutex_exit().  On some architectures, this can substantially
            reduce the cost of releasing a spin mutex.

     mutex_tryenter(mtx)
            Try to acquire a mutex, but do not block if the mutex is already
            held.  Returns non-zero if the mutex was acquired, or zero if the
            mutex was already held.

            mutex_tryenter() can be used as an optimization when acquiring
            locks in the wrong order.  For example, in a setting where the
            convention is that first_lock must be acquired before
            second_lock, the following can be used to optimistically lock in
            reverse order:

                   /* We hold second_lock, but not first_lock. */
                   KASSERT(mutex_owned(&second_lock));

                   if (!mutex_tryenter(&first_lock)) {
                           /* Failed to get it - lock in the correct order. */
                           mutex_exit(&second_lock);
                           mutex_enter(&first_lock);
                           mutex_enter(&second_lock);

                           /*
                            * We may need to recheck any conditions the code
                            * path depends on, as we released second_lock
                            * briefly.
                            */
                   }
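     As a further illustration, the following sketch shows a spin mutex
     shared between thread context and a hardware interrupt handler
     established at IPL_VM.  The bar_softc structure and the bar_*()
     functions are hypothetical names used only for this example; as above,
     the additional kernel headers a real driver would include are omitted.

            struct bar_softc {
                    kmutex_t        sc_intr_lock;   /* protects sc_pending */
                    int             sc_pending;
            };

            static void
            bar_attach(struct bar_softc *sc)
            {
                    /* Spin mutex: blocks interrupt handlers up to IPL_VM. */
                    mutex_init(&sc->sc_intr_lock, MUTEX_DEFAULT, IPL_VM);
                    sc->sc_pending = 0;
            }

            static int
            bar_intr(void *arg)             /* hardware interrupt handler */
            {
                    struct bar_softc *sc = arg;

                    /* Known to be a spin mutex, so the spin variants apply. */
                    mutex_spin_enter(&sc->sc_intr_lock);
                    sc->sc_pending++;
                    mutex_spin_exit(&sc->sc_intr_lock);
                    return 1;               /* claim the interrupt */
            }

            static int
            bar_poll(struct bar_softc *sc)  /* thread (LWP) context */
            {
                    int pending;

                    mutex_enter(&sc->sc_intr_lock); /* raises IPL to IPL_VM */
                    KASSERT(mutex_owned(&sc->sc_intr_lock));
                    pending = sc->sc_pending;
                    sc->sc_pending = 0;
                    mutex_exit(&sc->sc_intr_lock);  /* may lower the IPL again */
                    return pending;
            }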
CODE REFERENCES
     The core of the mutex implementation is in sys/kern/kern_mutex.c.

     The header file sys/sys/mutex.h describes the public interface, and
     interfaces that machine-dependent code must provide to support mutexes.

SEE ALSO
     atomic_ops(3), membar_ops(3), options(4), lockstat(8), condvar(9),
     kpreempt(9), rwlock(9), spl(9)

     Jim Mauro and Richard McDougall, Solaris Internals: Core Kernel
     Architecture, Prentice Hall, 2001, ISBN 0-13-022496-0.

HISTORY
     The mutex primitives first appeared in NetBSD 5.0.  mutex_ownable()
     first appeared in NetBSD 8.0.

NetBSD 10.99                   December 8, 2017                   NetBSD 10.99