Formal Definition: Lamport’s Distributed Mutual Exclusion Algorithm
Lamport’s algorithm is a non-token based distributed mutual exclusion algorithm that ensures mutual exclusion, fairness, and deadlock-freedom by using logical clocks and message passing.
Each site maintains a request queue of critical section requests ordered by their logical timestamps. The algorithm works as follows:
- Requesting the Critical Section:
When a site S_i wants to enter the critical section (CS), it broadcasts a REQUEST(ts, i) message to all other sites, where ts is the site's logical timestamp, and places the request in its local queue. Upon receiving the REQUEST, each site places the request in its own queue and sends back a timestamped REPLY message.
- Entering the Critical Section:
A site S_i can enter the CS only if:
- (L1) It has received a message with a timestamp larger than (ts, i) from every other site.
- (L2) Its own request is at the top of its request queue.
- Releasing the Critical Section:
When a site exits the CS, it removes its request from the top of its queue and broadcasts a timestamped RELEASE message. Upon receiving the RELEASE, the other sites remove that request from their queues.
The algorithm guarantees that requests are executed in timestamp order and no two sites can be in the CS simultaneously. The message complexity per CS execution is 3(N − 1) (optimized to between 2(N − 1) and 3(N − 1)).
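Because ties in logical timestamps are broken by the (unique) site identifier, the pairs (ts, i) are totally ordered, which is what makes "timestamp order" well defined. A minimal Python sketch of this ordering (names are illustrative, not from any standard API):

```python
def request_precedes(a, b):
    """Return True if request a = (ts, site_id) precedes request b.

    Ties on the logical timestamp are broken by the unique site id,
    so the order over all requests is total.
    """
    return a < b  # Python tuple comparison: timestamp first, then site id

# Two requests share timestamp 1; the lower site id wins the tie.
queue = sorted([(1, 2), (1, 1), (3, 2)])
print(queue)  # [(1, 1), (1, 2), (3, 2)]
```

Every site sorts its local request queue by exactly this comparison, so all sites agree on the head of the queue.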
Lamport’s Algorithm (Non-Token Based Distributed Mutual Exclusion)
Lamport developed one of the earliest non-token based distributed mutual exclusion algorithms as an illustration of his logical clock synchronization scheme. In this algorithm, requests for the critical section (CS) are executed in the increasing order of timestamps, ensuring fairness. Communication channels are assumed to be FIFO-ordered and reliable. Each site maintains a request queue ordered by timestamps of pending mutual exclusion requests.
Working of the Algorithm
- Requesting the Critical Section
- When a site S_i wants to enter the CS, it broadcasts a REQUEST(ts, i) message to all other sites and places the request in its own request queue.
- Here, ts denotes the logical timestamp of the request and i the identifier of the requesting site.
- When another site S_j receives the REQUEST message, it places the request in its own queue and replies with a timestamped REPLY message to S_i.
- Executing the Critical Section
- A site S_i can enter the CS only when both conditions hold:
- L1: It has received a message with a timestamp larger than (ts, i) from every other site.
- L2: Its own request is at the top of its request queue.
- Releasing the Critical Section
- Upon exiting the CS, site S_i removes its request from the top of its queue and broadcasts a timestamped RELEASE message to all other sites.
- When another site receives the RELEASE, it removes S_i's request from its queue. This may bring its own request to the top, enabling it to enter the CS.
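The three steps above can be sketched as a single-site state machine in Python. This is an illustrative model, not a production implementation: the network is reduced to a `send(dest, msg_type, payload)` callback supplied by the caller, channels are assumed FIFO and reliable, and all names are made up for the sketch.

```python
import heapq

class LamportSite:
    """One site in Lamport's mutual exclusion algorithm (sketch)."""

    def __init__(self, site_id, all_ids, send):
        self.id = site_id
        self.others = [j for j in all_ids if j != site_id]
        self.send = send                      # send(dest, msg_type, payload)
        self.clock = 0                        # Lamport logical clock
        self.queue = []                       # min-heap of (ts, site_id)
        self.last_reply = {j: 0 for j in self.others}

    def _tick(self, seen_ts=0):
        # Lamport clock rule: advance past any timestamp just observed.
        self.clock = max(self.clock, seen_ts) + 1

    def request_cs(self):
        # Step 1: broadcast REQUEST(ts, i) and enqueue it locally.
        self._tick()
        req = (self.clock, self.id)
        heapq.heappush(self.queue, req)
        for j in self.others:
            self.send(j, "REQUEST", req)
        return req

    def on_request(self, ts, i):
        # Enqueue the remote request and answer with a timestamped REPLY.
        self._tick(ts)
        heapq.heappush(self.queue, (ts, i))
        self.send(i, "REPLY", self.clock)

    def on_reply(self, sender, ts):
        self._tick(ts)
        self.last_reply[sender] = max(self.last_reply[sender], ts)

    def can_enter(self, req):
        # L1: a later-timestamped message received from every other site.
        l1 = all(self.last_reply[j] > req[0] for j in self.others)
        # L2: own request at the head of the local queue.
        l2 = bool(self.queue) and self.queue[0] == req
        return l1 and l2

    def release_cs(self):
        # Step 3: drop own request from the queue head, broadcast RELEASE.
        req = heapq.heappop(self.queue)
        self._tick()
        for j in self.others:
            self.send(j, "RELEASE", req)

    def on_release(self, ts, i):
        # Remove the released request; a new head may now enter the CS.
        self._tick(ts)
        self.queue.remove((ts, i))
        heapq.heapify(self.queue)
```

Driving two such sites through request, reply, and release with an in-memory message list reproduces the behavior described above; `can_enter` becomes true exactly when L1 and L2 both hold.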
Correctness
- Mutual Exclusion: Suppose two sites S_i and S_j are in the CS simultaneously. This implies both satisfied L1 and L2 at the same time. Assume S_i's request has the smaller timestamp. By L1 and the FIFO channel assumption, S_j must have received S_i's REQUEST before entering, so S_i's request appears ahead of S_j's in S_j's queue, contradicting S_j's request being at the top (L2). Hence, mutual exclusion is guaranteed.
- Fairness: Requests are executed in increasing order of timestamps. If a site with a smaller-timestamped request were bypassed by a later one, the later request would have to sit ahead of the earlier one in some queue, contradicting the timestamp ordering of the queues. Thus, the algorithm is fair.
Example (Illustration)
- Sites S_1 and S_2 send REQUEST messages with timestamps (1,1) and (1,2).
- Both receive REPLY messages, but only S_1's request is at the top of the queues, so S_1 enters the CS first.
- On exiting, S_1 sends RELEASE messages, allowing S_2's request to move to the top.
- S_2 now enters the CS.
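The example above can be replayed as explicit queue operations. A self-contained Python sketch (requests are (timestamp, site_id) pairs, smaller compares first; names are illustrative):

```python
# Request queues at sites S1 and S2.
q1, q2 = [], []

# Both sites broadcast REQUESTs; every queue ends up holding both requests,
# kept sorted by (timestamp, site_id).
for q in (q1, q2):
    q.extend([(1, 1), (1, 2)])
    q.sort()

# After the REPLY exchange, S1's request heads each queue: S1 enters the CS.
assert q1[0] == (1, 1) and q2[0] == (1, 1)

# S1 exits: it broadcasts RELEASE and every site drops (1, 1).
for q in (q1, q2):
    q.remove((1, 1))

# S2's request is now at the top of each queue: S2 enters the CS.
assert q1[0] == (1, 2) and q2[0] == (1, 2)
```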
Performance
- Each CS execution requires:
- (N − 1) REQUEST messages,
- (N − 1) REPLY messages,
- (N − 1) RELEASE messages.
- Thus, a total of 3(N − 1) messages per CS execution.
- Synchronization delay = one message transmission time (T).
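The count is easy to sanity-check in code (a trivial helper, named for this sketch only):

```python
def messages_per_cs(n):
    """Messages for one CS execution with n sites (basic algorithm):
    (n-1) REQUEST + (n-1) REPLY + (n-1) RELEASE."""
    return 3 * (n - 1)

print(messages_per_cs(5))  # 12
```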
Optimization
- REPLY messages can be omitted in some cases.
- If site S_j receives a REQUEST from S_i after having broadcast its own REQUEST with a timestamp higher than that of S_i's request, S_j need not send a REPLY to S_i: S_j's own REQUEST already carries a larger timestamp, so it satisfies condition L1 at S_i.
- With this optimization, the message complexity reduces to between 2(N − 1) and 3(N − 1) messages per CS execution.
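The suppression rule reduces to a one-line predicate. A hedged sketch (the function name and the `None` convention for "no pending request" are assumptions of this illustration):

```python
def reply_needed(own_pending_request, incoming_request):
    """Whether S_j must REPLY to S_i's incoming REQUEST (optimized rule).

    If S_j has already sent its own REQUEST with a larger (timestamp, id)
    pair, that REQUEST itself gives S_i a later-timestamped message from
    S_j, so condition L1 holds at S_i without a separate REPLY.
    Requests are (timestamp, site_id) pairs; None means no pending request.
    """
    if own_pending_request is None:
        return True
    return not (own_pending_request > incoming_request)

assert reply_needed(None, (3, 1))        # no pending request: must reply
assert not reply_needed((5, 2), (3, 1))  # own later REQUEST doubles as reply
assert reply_needed((2, 2), (3, 1))      # own request is earlier: reply
```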
Evaluation
- Deadlock-free: Because requests are served in a single total order, no site waits indefinitely.
- Fairness: Requests are granted in increasing timestamp order.
- Fault tolerance:
- If a process fails before sending requests → no issue.
- If it fails after requesting → may need timeout or failure detection to remove it from queues.
- If it fails inside CS → external recovery mechanisms required.