Introduction
- Type of algorithm: Non-token based / Permission-based
- Based on Lamport’s logical clock

System Model
- N sites (S₁, S₂, …, Sₙ)
- Reliable FIFO channels

Messages Used
- REQUEST(ts, i)
- REPLY

Data Structures
- Logical Clock (Cᵢ)
- Request Deferred Array (RD[i][j])

Algorithm Steps
- Requesting the Critical Section
- Receiving a REQUEST Message
- Executing the Critical Section
- Releasing the Critical Section

Conditions for Sending REPLY
- Not requesting nor executing
- Request timestamp comparison

Correctness Properties
- Safety (Mutual Exclusion)
- Liveness (No deadlock/starvation)
- Fairness (Timestamp ordering)
Performance Analysis
- Message complexity = 2(N−1)
- Synchronization delay = one message delay (T)
Advantages
- Simplicity
- Fair ordering
- Optimal message count

Disadvantages
- High message overhead for large N
- Assumes no failures

Diagram Reference
- Figure 9.7 – 9.10 (Distributed Mutual Exclusion — Ricart–Agrawala Algorithm)
Message complexity: the algorithm requires 2 × (N − 1) messages per CS execution.
In a distributed system of N sites, each site maintains a logical clock and follows the rule that a process can enter its Critical Section (CS) only after it has received a REPLY message from every other site in response to its REQUEST message. Each REQUEST message carries a timestamp pair (tsᵢ, i).
For any two requests with timestamps tsᵢ and tsⱼ:
- If tsᵢ < tsⱼ, the request with timestamp tsᵢ is granted permission first.
- If tsᵢ = tsⱼ, the tie is broken by comparing process identifiers (lower ID wins).
A site defers replying to a request if it is currently requesting or executing the CS and its own request has a lower (earlier) timestamp; otherwise, it sends an immediate REPLY.
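The deferral rule above maps directly onto lexicographic comparison of (timestamp, site id) pairs. As a minimal sketch (the function name and state encoding are illustrative assumptions, not part of the original algorithm description):

```python
def should_defer(my_state, my_request, incoming_request):
    """Return True if the incoming REQUEST must be deferred.

    my_state:         'idle', 'requesting', or 'executing'
    my_request:       (timestamp, site_id) tuple of our own pending request
    incoming_request: (timestamp, site_id) tuple from the REQUEST message
    """
    if my_state == 'executing':
        return True                 # always defer while inside the CS
    if my_state == 'requesting':
        # Python tuple comparison gives exactly the required priority order:
        # lower timestamp wins, and ties are broken by the lower site id.
        return my_request < incoming_request
    return False                    # idle: reply immediately
```

For example, `should_defer('requesting', (1, 1), (1, 2))` is True: both requests carry timestamp 1, and the tie goes to the site with the lower id.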
Mutual Exclusion Condition:
∀ i ≠ j : CSᵢ ∩ CSⱼ = ∅
That is, no two sites execute their critical sections simultaneously.
Message Complexity: 2 × (N − 1) messages per CS execution.
Synchronization Delay: one message delay (T).
System Model
- Consists of N sites (S₁, S₂, …, Sₙ).
- Each site runs one process (P₁, P₂, …, Pₙ).
- Channels are reliable and FIFO-ordered.
- Each process may be in one of three states: Requesting, Executing, or Idle.
Messages Used
- REQUEST(ts, i): request to enter the CS, carrying a timestamp.
- REPLY: permission to enter the CS.
Data Structures (Per Process Pᵢ)
- Logical clock (Cᵢ): maintains a Lamport timestamp to order events.
- RD[i][j]: set to 1 if Pᵢ has deferred the REPLY to Pⱼ; otherwise 0.
Algorithm Steps
1. Requesting the Critical Section
- Increment clock Cᵢ.
- Broadcast REQUEST(tsᵢ, i) to all other (N − 1) sites.
- Wait until a REPLY has been received from all N − 1 sites.
2. Upon Receiving REQUEST(tsⱼ, j)
- Update clock: Cᵢ = max(Cᵢ, tsⱼ) + 1.
- Send REPLY immediately if:
  - Pᵢ is neither requesting nor executing the CS, or
  - Pᵢ is requesting and tsⱼ < tsᵢ (the earlier timestamp has priority).
- Else defer the REPLY and set RD[i][j] = 1.
3. Executing the Critical Section
- Enter the CS after receiving all (N − 1) REPLYs.
4. Releasing the Critical Section
- Send REPLY to every site j for which RD[i][j] = 1.
- Reset RD[i][j] = 0.
Pseudocode (Exam-Ready)
(Use tuple ordering (timestamp, processID) to break ties.)
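The four steps above can be sketched as runnable Python. This is a minimal single-threaded simulation: message delivery is modeled by direct method calls rather than a real network, and every name (`Site`, `on_request`, `deferred`, etc.) is an illustrative assumption; the `deferred` set stands in for the RD[i][j] row.

```python
class Site:
    """One site in a simulated Ricart-Agrawala system of n sites."""

    def __init__(self, site_id, n):
        self.id = site_id
        self.n = n
        self.clock = 0          # Lamport logical clock C_i
        self.requesting = False
        self.executing = False
        self.req_ts = None      # our pending request as a (timestamp, id) tuple
        self.deferred = set()   # site ids owed a REPLY (the RD[i][j] row)
        self.replies = 0
        self.peers = {}         # site_id -> Site, wired up by the caller

    def request_cs(self):
        # Step 1: increment the clock and broadcast REQUEST(ts_i, i).
        self.clock += 1
        self.requesting = True
        self.req_ts = (self.clock, self.id)
        self.replies = 0
        for peer in self.peers.values():
            peer.on_request(self.req_ts)

    def on_request(self, ts):
        # Step 2: update the clock, then reply immediately or defer.
        timestamp, sender = ts
        self.clock = max(self.clock, timestamp) + 1
        if self.executing or (self.requesting and self.req_ts < ts):
            self.deferred.add(sender)        # RD[i][j] = 1
        else:
            self.peers[sender].on_reply()

    def on_reply(self):
        # Step 3: enter the CS once all N - 1 REPLYs have arrived.
        self.replies += 1
        if self.requesting and self.replies == self.n - 1:
            self.requesting = False
            self.executing = True

    def release_cs(self):
        # Step 4: leave the CS and answer every deferred request.
        self.executing = False
        for site_id in sorted(self.deferred):
            self.peers[site_id].on_reply()
        self.deferred.clear()                # RD[i][j] = 0
```

With three or more sites, the tuple comparison `self.req_ts < ts` breaks equal-timestamp ties by the lower site id, exactly the (timestamp, processID) ordering noted above.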
Example (Figure 9.7–9.10)
- S₁ and S₂ issue requests with timestamps (1,1) and (1,2).
- S₁’s timestamp tuple is smaller → S₁ enters the CS first.
- S₁ defers its REPLY to S₂; after S₁ exits the CS, it sends the REPLY to S₂.
- Then S₂ enters the CS.
(Ref: Kshemkalyani p. 314–315, Figs. 9.7–9.10)
Correctness Proof (Summary)
- Mutual Exclusion: two sites cannot be in the CS simultaneously, because one of any pair of concurrent requests is always deferred based on timestamp comparison.
- Fairness: CS access follows Lamport’s timestamp ordering.
Performance Analysis
| Metric | Value |
| --- | --- |
| Messages per CS entry | 2 × (N − 1) |
| Synchronization delay | one message delay ≈ T |
| Message complexity | O(N) |
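To make the message-count row concrete, a tiny helper (hypothetical name) evaluates the formula for a few system sizes:

```python
def messages_per_cs(n):
    # (n - 1) REQUEST messages broadcast + (n - 1) REPLY messages collected
    return 2 * (n - 1)

# e.g. n = 5 sites -> 8 messages per critical-section execution
print([messages_per_cs(n) for n in (2, 5, 100)])  # [2, 8, 198]
```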
Advantages
- Simple and optimal in message count.
- No central coordinator required.
- Fair: executes CS requests in increasing timestamp order.

Disadvantages
- Poor scalability: each site communicates with all others.
- Requires reliable FIFO channels and no process failures.