KERNEL_LOCK(9)             Kernel Developer's Manual            KERNEL_LOCK(9)

NAME
     KERNEL_LOCK - compatibility with legacy uniprocessor code

SYNOPSIS
     #include <sys/systm.h>

     void
     KERNEL_LOCK(int nlocks, struct lwp *l);

     void
     KERNEL_UNLOCK_ONE(struct lwp *l);

     void
     KERNEL_UNLOCK_ALL(struct lwp *l, int *nlocksp);

     void
     KERNEL_UNLOCK_LAST(struct lwp *l);

     bool
     KERNEL_LOCKED_P();

DESCRIPTION
     The KERNEL_LOCK facility serves to gradually transition software from the
     kernel's legacy uniprocessor execution model, where the kernel runs on
     only a single CPU and never in parallel on multiple CPUs, to a
     multiprocessor system.

     New code should not use KERNEL_LOCK.  KERNEL_LOCK is meant only for
     gradual transition of NetBSD to natively MP-safe code, which uses
     mutex(9) or other locking(9) facilities to synchronize between threads
     and interrupt handlers.  Use of KERNEL_LOCK hurts system performance and
     responsiveness.  This man page exists only to document the legacy API in
     order to make it easier to transition away from.

     The kernel lock, sometimes also known as `giant lock' or `big lock', is a
     recursive exclusive spin-lock that can be held by a CPU at any interrupt
     priority level and is dropped while sleeping.  This means:

     recursive      If a CPU already holds the kernel lock, it can be acquired
                    again and again, as long as it is released an equal number
                    of times.
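
                    For example, a minimal sketch of nested acquisition and
                    release (the ellipsis stands for arbitrary kernel-locked
                    code):

                            KERNEL_LOCK(1, NULL);      /* depth 1 */
                            KERNEL_LOCK(1, NULL);      /* depth 2 */
                            ...
                            KERNEL_UNLOCK_ONE(NULL);   /* depth 1 */
                            KERNEL_UNLOCK_ONE(NULL);   /* released */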

     exclusive      Only one CPU at a time can hold the kernel lock.

     spin-lock      When one CPU holds the kernel lock and another CPU wants
                    to hold it, the second CPU `spins', i.e., repeatedly
                    executes instructions to see if the kernel lock is
                    available yet, until the first CPU releases it.  During
                    this time, no other threads can run on the spinning CPU.

                    This means holding the kernel lock for long periods,
                    such as during nontrivial computation, must be avoided.
                    Under LOCKDEBUG kernels, holding the kernel lock for too
                    long can lead to `spinout' crashes.

     held by a CPU  The kernel lock is held by a CPU, not by a process,
                    kthread, LWP, or interrupt handler.  It may be shared by
                    a kthread LWP and several softint LWPs at the same time,
                    for example, if the softints interrupted the thread on
                    that CPU.

     any interrupt priority level
                    The kernel lock does not block interrupts; subsystems
                    running with the kernel lock use spl(9) to synchronize
                    with interrupt handlers.
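
                    For example, a kernel-locked subsystem might protect
                    data shared with an interrupt handler at IPL_VM like
                    this (a sketch using splvm() from spl(9); the shared
                    data is left abstract):

                            int s;

                            KERNEL_LOCK(1, NULL);
                            s = splvm();    /* block IPL_VM interrupts */
                            ... access data shared with the handler ...
                            splx(s);        /* restore previous level */
                            KERNEL_UNLOCK_ONE(NULL);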

                    Interrupt handlers that are not marked MP-safe are always
                    run with the kernel lock.  If the interrupt arrives on a
                    CPU where the kernel lock is already held, it is simply
                    taken again recursively on interrupt entry and released to
                    its original recursion depth on interrupt exit.

     dropped while sleeping
                    Any time the kernel sleeps to let other threads run, for
                    any reason, including tsleep(9), condvar(9), or even
                    adaptive mutex(9) locks, it releases the kernel lock
                    before going to sleep and then reacquires it afterward.

                    This means, for instance, that although data structures
                    accessed only under the kernel lock won't be changed
                    before the sleep, they may be changed by another thread
                    during the sleep.  For example, the following program
                    may crash on an assertion failure because a sleep in
                    mutex_enter(9) drops the kernel lock, allowing another
                    kernel-locked thread to run and change the global
                    variable x:

                            KERNEL_LOCK(1, NULL);
                            x = 42;
                            mutex_enter(...);
                            ...
                            mutex_exit(...);
                            KASSERT(x == 42);
                            KERNEL_UNLOCK_ONE(NULL);

                    This means simply introducing calls to mutex_enter(9) and
                    mutex_exit(9) can break kernel-locked assumptions.
                    Subsystems need to be consistently converted from
                    KERNEL_LOCK and spl(9) to mutex(9), condvar(9), etc.;
                    mixing mutex(9) and KERNEL_LOCK usually doesn't work.
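
                    A consistently converted version protects x with the
                    same mutex(9) lock throughout, for example (a sketch;
                    x_lock is a hypothetical lock covering x):

                            /* x_lock: hypothetical mutex(9) lock for x */
                            mutex_enter(&x_lock);
                            x = 42;
                            ...
                            KASSERT(x == 42);
                            mutex_exit(&x_lock);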

     Holding the kernel lock does not prevent other code from running on other
     CPUs at the same time.  It only prevents other kernel-locked code from
     running on other CPUs at the same time.

FUNCTIONS
     KERNEL_LOCK(nlocks, l)
           Acquire nlocks recursive levels of the kernel lock.

           If the kernel lock is already held by another CPU, spins until it
           can be acquired by this one.  If the kernel lock is already held
           by this CPU, increments the recursion depth by nlocks and returns
           immediately.

           Most of the time nlocks is 1, but code that deliberately releases
           all recursive levels of the kernel lock held by the current CPU
           in order to sleep, and later reacquires the same number of
           levels, will pass a value of nlocks obtained from
           KERNEL_UNLOCK_ALL().

     KERNEL_UNLOCK_ONE(l)
           Release one level of the kernel lock.  Equivalent to
           KERNEL_UNLOCK(1, l, NULL).

     KERNEL_UNLOCK_ALL(l, nlocksp)
           Store the kernel lock recursion depth at nlocksp and release all
           recursive levels of the kernel lock.

           This is often used inside logic implementing sleep, around a call
           to mi_switch(9), so that the same number of recursive kernel locks
           can be reacquired afterward once the thread is reawoken:

                   int nlocks;

                   KERNEL_UNLOCK_ALL(l, &nlocks);
                   ... mi_switch(l) ...
                   KERNEL_LOCK(nlocks, l);

     KERNEL_UNLOCK_LAST(l)
           Release the kernel lock, which must be held at exactly one level.

           This is normally used at the end of a non-MP-safe thread, which was
           known to have started with exactly one level of the kernel lock,
           and is now about to exit.

     KERNEL_LOCKED_P()
           True if the kernel lock is held.

           To be used only in diagnostic assertions with KASSERT(9).
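
           For example, a subsystem that still expects kernel-locked
           callers might assert:

                   KASSERT(KERNEL_LOCKED_P());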

     The legacy argument l must be NULL or curlwp, which mean the same thing.

NOTES
     Some NetBSD kernel abstractions execute caller-specified functions with
     the kernel lock held by default, for compatibility with legacy code, but
     can be explicitly instructed not to hold the kernel lock by passing an
     MP-safe flag:

        callout(9), CALLOUT_MPSAFE

        kfilter_register(9) and knote(9), FILTEROP_MPSAFE

        kthread(9), KTHREAD_MPSAFE

        pci_intr(9), PCI_INTR_MPSAFE

        scsipi(9), SCSIPI_ADAPT_MPSAFE

        softint(9), SOFTINT_MPSAFE

        usbdi(9) pipes, USBD_MPSAFE

        usbdi(9) tasks, USB_TASKQ_MPSAFE

        vnode(9), VV_MPSAFE

        workqueue(9), WQ_MPSAFE
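
     For example, a kthread(9) created with the KTHREAD_MPSAFE flag runs
     without the kernel lock (a sketch; mydriver_worker and sc are
     hypothetical names):

           int error;

           /* mydriver_worker and sc are hypothetical driver names. */
           error = kthread_create(PRI_NONE, KTHREAD_MPSAFE, NULL,
               mydriver_worker, sc, NULL, "mydriverwk");
           if (error != 0)
                   ...   /* creation failed; handle the error */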

     The following NetBSD subsystems are still kernel-locked and need re-
     engineering to take advantage of parallelism on multiprocessor systems:

        ata(4), atapi(4), wd(4)

        video(4)

        autoconf(9)

        most of the network stack by default, unless the option NET_MPSAFE
        is enabled

        ...

     All interrupt handlers at IPL_VM or lower (see spl(9)) run with the
     kernel lock on most ports.

SEE ALSO
     locking(9), mutex(9), spl(9)

NetBSD 10.99                   February 13, 2022                  NetBSD 10.99