HP-UX MultiProcessing: White Paper > Chapter 1 MultiProcessing

Performance Considerations and Locking


Consider the following when designing your code to run on a multiprocessing system:

  • Spinlocks execute faster than semaphores when they do get the lock.

  • Spinlocks waste CPU time by spinning if they cannot get the lock.

  • There is an efficiency trade-off when using semaphores, depending on how long the lock is held before it becomes available:

    • Semaphores might waste CPU time by switching to another process when they cannot get the lock: if the lock is released shortly afterward, the cost of switching out and back in exceeds the cost of simply spinning.

    • Semaphores might save CPU time by switching to another process when they cannot get the lock, because another process can do useful work on the CPU while the first is waiting for the lock.

    If the lock will be held for a long time (compared to a context switch), switching is preferable; but if held briefly, spinning might be better.

  • Because spinlocks busy-wait, they can acquire the lock immediately when it becomes free.

  • With semaphores, the waiting process sleeps and must be woken and context-switched back in before it can take the lock. This represents a high latency in acquiring the lock (a brief code sketch follows this list).
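This paper does not show the kernel primitives themselves. As an illustration only, the following user-space C sketch (using POSIX threads) contrasts a busy-waiting lock with a sleeping lock; a mutex stands in for the semaphore, and the function names and critical sections are invented for this sketch.

    /* Illustration only: user-space analogue of the spinlock-versus-semaphore
       trade-off, using POSIX threads rather than the kernel primitives. */
    #include <pthread.h>

    static pthread_spinlock_t spin;                              /* busy-waits */
    static pthread_mutex_t    mutex = PTHREAD_MUTEX_INITIALIZER; /* sleeps     */

    void init_locks(void)
    {
        /* A spinlock must be initialized before first use. */
        pthread_spin_init(&spin, PTHREAD_PROCESS_PRIVATE);
    }

    void short_critical_section(void)
    {
        pthread_spin_lock(&spin);     /* spins (burns CPU) until the lock frees,
                                         then acquires it immediately           */
        /* ... a few instructions of work ... */
        pthread_spin_unlock(&spin);
    }

    void long_critical_section(void)
    {
        pthread_mutex_lock(&mutex);   /* sleeps if the lock is held; the CPU can
                                         run another process, but waking up costs
                                         a context switch                        */
        /* ... work that is long compared to a context switch ... */
        pthread_mutex_unlock(&mutex);
    }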

Deadlocks

Consider the following example:

Table 1-17 Sample deadlock situation

Processor 0              Processor 1
spinlock(lockA);         spinlock(lockB);
spinlock(lockB);         spinlock(lockA);
[do work]                [do work]
spinunlock(lockB);       spinunlock(lockA);
spinunlock(lockA);       spinunlock(lockB);

Deadlocks occur when two processors (or processes or threads of control) have locked resources in different orders, and each has something needed by the other. As a result, they wait for each other to relinquish what they need. There can be complex chains of these dependencies among multiple processors and processes.

The sample code works most of the time, but a problem occurs when both processors fall through their respective code at the same time: Processor 0 holds lockA and spins waiting for lockB, while Processor 1 holds lockB and spins waiting for lockA. On machines that execute 100 million instructions per second or more, such coincidences happen all too frequently.
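As a rough C analogue of Table 1-17 (illustrative only; the thread functions and the use of POSIX spinlocks are invented for this sketch), two threads that acquire the same pair of locks in opposite order can deadlock exactly as described:

    #include <pthread.h>

    static pthread_spinlock_t lockA, lockB;   /* both initialized with
                                                 pthread_spin_init() before use */

    static void *processor0(void *arg)        /* left-hand column of Table 1-17 */
    {
        pthread_spin_lock(&lockA);
        pthread_spin_lock(&lockB);            /* spins forever if processor1
                                                 already holds lockB            */
        /* [do work] */
        pthread_spin_unlock(&lockB);
        pthread_spin_unlock(&lockA);
        return NULL;
    }

    static void *processor1(void *arg)        /* right-hand column */
    {
        pthread_spin_lock(&lockB);
        pthread_spin_lock(&lockA);            /* spins forever if processor0
                                                 already holds lockA            */
        /* [do work] */
        pthread_spin_unlock(&lockA);
        pthread_spin_unlock(&lockB);
        return NULL;
    }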

Ordering Strategy for Deadlock Avoidance

  • Locks are always locked in the same order.

  • Each lock is given its own order (a positive integer).

  • Instrumented kernels are run to ensure that locks are always taken in the correct order.

Maintaining an ordering strategy guarantees that each locking sequence is done in just one order, no matter where the code is executing.
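A minimal sketch of the ordering idea, assuming each lock is tagged with its place in the global order and a small helper always takes the lower-numbered lock first (the ranked_lock_t type and acquire_pair helper are invented for this illustration; they are not HP-UX interfaces):

    #include <assert.h>
    #include <pthread.h>

    /* Each lock carries a fixed rank: its position in the global locking order. */
    typedef struct {
        pthread_spinlock_t lock;
        int                rank;      /* unique positive integer per lock */
    } ranked_lock_t;

    /* Acquire two locks in rank order regardless of argument order, so every
       code path takes them in the same sequence and the cycle in Table 1-17
       cannot form.  The assert plays the role of the instrumented-kernel check. */
    void acquire_pair(ranked_lock_t *a, ranked_lock_t *b)
    {
        assert(a->rank != b->rank);
        if (a->rank > b->rank) {
            ranked_lock_t *tmp = a;
            a = b;
            b = tmp;
        }
        pthread_spin_lock(&a->lock);
        pthread_spin_lock(&b->lock);
    }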