DC U3: Ricart–Agrawala Algorithm

  1. Introduction

    • Type of algorithm: Non-token based / Permission-based

    • Based on Lamport’s logical clock

  2. System Model

    • N sites (S₁, S₂, …, Sₙ)

    • Reliable FIFO channels

  3. Messages Used

    • REQUEST(ts, i)

    • REPLY

  4. Data Structures

    • Logical Clock (Cᵢ)

    • Request Deferred Array (RD[i][j])

  5. Algorithm Steps

    1. Requesting the Critical Section

    2. Receiving a REQUEST Message

    3. Executing the Critical Section

    4. Releasing the Critical Section

  6. Conditions for Sending REPLY

    • Not requesting nor executing

    • Request timestamp comparison

  7. Correctness Properties

    • Safety (Mutual Exclusion)

    • Liveness (No deadlock/starvation)

    • Fairness (Timestamp ordering)

  8. Performance Analysis

    • Message complexity = 2(N−1)

    • Synchronization delay = 1 message round trip

  9. Advantages

    • Simplicity

    • Fair ordering

    • Optimal message count

  10. Disadvantages

  • High message overhead for large N

  • Assumes no failures

  11. Diagram Reference

  • Figure 9.7 – 9.10 (Distributed Mutual Exclusion — Ricart–Agrawala Algorithm)






The Ricart–Agrawala Algorithm (1981) is a non-token-based / permission-based distributed mutual exclusion algorithm that uses REQUEST and REPLY messages with Lamport’s logical timestamps to ensure that only one process enters the Critical Section (CS) at a time.

Message complexity = 2 × (N − 1) messages per CS execution.

In a distributed system of N sites, each site Sᵢ maintains a logical clock Cᵢ and follows the rule that a process may enter its Critical Section (CS) only after it has received a REPLY message from every other site in response to its REQUEST message. Each REQUEST message carries a timestamp tᵢ = (Cᵢ, i).

For any two requests (tᵢ, i) and (tⱼ, j):

  • If tᵢ < tⱼ, then Sᵢ is granted permission before Sⱼ.

  • If tᵢ = tⱼ, the tie is broken by comparing process identifiers (lower ID wins).

A site defers replying to a request if it is currently requesting or executing the CS and its own request has a lower (earlier) timestamp; otherwise, it sends an immediate REPLY.
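The reply/defer rule above can be sketched as a small decision function. This is an illustrative sketch: the state names and the tuple encoding of timestamps are assumptions for the example, not part of the algorithm's original presentation.

```python
# Decide whether a site replies immediately to an incoming REQUEST.
# Timestamps are (logical_clock, site_id) tuples; Python compares tuples
# lexicographically, which matches Lamport's total order with ID tie-break.

def should_reply_immediately(state, own_request, incoming_request):
    """state: 'idle', 'requesting', or 'executing'.
    own_request: this site's pending (clock, id) timestamp, or None.
    incoming_request: the requester's (clock, id) timestamp."""
    if state == "idle":
        return True                      # not competing: grant at once
    if state == "executing":
        return False                     # in CS: defer until release
    # Requesting: grant only if the incoming request is earlier.
    return incoming_request < own_request

# Two concurrent requests with equal clocks: the lower site ID wins the tie.
print(should_reply_immediately("requesting", (1, 1), (1, 2)))  # False: S1 defers
print(should_reply_immediately("requesting", (1, 2), (1, 1)))  # True: S2 replies
```

Note that tuple comparison handles both bullet cases at once: clocks are compared first, and site IDs break equal-clock ties.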

Mutual Exclusion Condition:

∀ i ≠ j: CSᵢ ∩ CSⱼ = ∅

That is, no two sites execute their critical sections simultaneously.

Message Complexity: 2(N − 1)
Synchronization Delay: 1 message round-trip time.



System Model

  • Consists of N sites (S₁, S₂, …, Sₙ).

  • Each site runs one process (P₁, P₂, …, Pₙ).

  • Channels are reliable and FIFO-ordered.

  • Each process may be in one of three states: Requesting, Executing, or Idle.


Messages Used

  1. REQUEST(ts, i): Request to enter the CS with timestamp.

  2. REPLY: Permission to enter CS.


Data Structures (Per Process Pi)

  • RD[i][j]: = 1 if Pi has deferred REPLY to Pj; otherwise 0.

  • Logical clock (Cᵢ): Maintains Lamport timestamp to order events.
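The per-process state can be sketched as follows. The class and field names are assumptions for illustration; they mirror Cᵢ (clock) and one row of RD (deferred flags) from the notes.

```python
from dataclasses import dataclass, field

# Per-process state for one site in an N-site system (a sketch; field
# names follow the notes: Cᵢ -> clock, RD[i][j] -> deferred[j]).
@dataclass
class RicartAgrawalaState:
    site_id: int
    n_sites: int
    clock: int = 0                                 # Lamport logical clock Cᵢ
    deferred: list = field(default_factory=list)   # deferred[j] == 1 iff REPLY to Pj deferred

    def __post_init__(self):
        self.deferred = [0] * self.n_sites

p1 = RicartAgrawalaState(site_id=1, n_sites=4)
print(p1.deferred)  # [0, 0, 0, 0] -- no REPLYs deferred initially
```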


Algorithm Steps

1. Requesting the Critical Section

  • Increment clock Cᵢ.

  • Broadcast REQUEST(tsᵢ, i) to all other sites (N − 1).

  • Wait until REPLY received from all N − 1 sites.

2. Upon Receiving REQUEST(tsⱼ, j)

  • Update clock: Cᵢ = max(Cᵢ, tsⱼ) + 1.

  • Send REPLY immediately if:

    • Pi is neither requesting nor executing CS, or

    • Pi is requesting and tsⱼ < tsᵢ (earlier timestamp has priority).

  • Else defer REPLY and set RD[i][j] = 1.

3. Executing the Critical Section

  • Enter CS after receiving all (N − 1) REPLYs.

4. Releasing the Critical Section

  • Send REPLY to all sites j for which RD[i][j] = 1.

  • Reset RD[i][j] = 0.



Pseudocode (Exam-Ready)

RequestCS():
    C_i := C_i + 1
    t_req := (C_i, i)
    send REQUEST(t_req) to all j ≠ i
    wait until REPLY received from all j ≠ i
    enter CS

OnReceive REQUEST(t_j, j):
    C_i := max(C_i, t_j.time) + 1
    if (not requesting and not in CS) or (requesting and (t_j, j) < (t_req, i)):
        send REPLY to j
    else:
        RD[j] := 1

ReleaseCS():
    for each j with RD[j] == 1:
        send REPLY to j
        RD[j] := 0

(Use tuple ordering (timestamp, processID) to break ties.)
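The pseudocode can be exercised with a small single-threaded simulation. This is a sketch under stated assumptions: the shared message deque, the Site class, and 0-indexed site IDs are scaffolding for the example, not part of the notes; a site also enters and leaves the CS in one step, so the "executing" state collapses into "t_req is set".

```python
from collections import deque

class Site:
    def __init__(self, sid, n, network):
        self.sid, self.n, self.network = sid, n, network
        self.clock = 0
        self.t_req = None        # pending (clock, id) request; None if idle
        self.replies = 0
        self.deferred = set()    # RD: sites whose REPLY we postponed

    def request_cs(self):
        self.clock += 1
        self.t_req = (self.clock, self.sid)
        for j in range(self.n):
            if j != self.sid:                        # broadcast REQUEST
                self.network.append((j, "REQUEST", self.t_req, self.sid))

    def on_message(self, kind, ts, sender, log):
        if kind == "REQUEST":
            self.clock = max(self.clock, ts[0]) + 1  # Lamport clock update
            if self.t_req is None or ts < self.t_req:
                self.network.append((sender, "REPLY", None, self.sid))
            else:
                self.deferred.add(sender)            # defer: our request is earlier
        else:  # REPLY
            self.replies += 1
            if self.replies == self.n - 1:           # all permissions granted
                log.append(self.sid)                 # "execute" the CS
                self.release_cs()

    def release_cs(self):
        self.t_req, self.replies = None, 0
        for j in self.deferred:                      # answer deferred requests
            self.network.append((j, "REPLY", None, self.sid))
        self.deferred.clear()

network = deque()
sites = [Site(i, 2, network) for i in range(2)]
log = []
sites[0].request_cs()   # S0 requests with timestamp (1, 0)
sites[1].request_cs()   # S1 requests with (1, 1): equal clocks, lower ID wins
while network:          # deliver messages in FIFO order
    dest, kind, ts, sender = network.popleft()
    sites[dest].on_message(kind, ts, sender, log)
print(log)  # [0, 1] -- the lower-ID request enters the CS first
```

This reproduces the two-site scenario of the example below: the later request is deferred, and the deferred REPLY is sent on release.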


Example (Figure 9.7–9.10)

  • S₁ and S₂ issue requests with timestamps (1,1) and (1,2).

  • S₁’s timestamp is smaller → enters CS first.

  • S₁ defers its REPLY to S₂'s request; after exiting the CS, S₁ sends the deferred REPLY to S₂.

  • Then S₂ enters CS.
    (Ref: Kshemkalyani p. 314–315, Figs. 9.7–9.10)


Correctness Proof (Summary)

  • Mutual Exclusion:
    Two sites cannot be in CS simultaneously because one request is always deferred based on timestamp comparison.

  • Fairness:
    CS access follows Lamport’s timestamp ordering.



Performance Analysis



Messages per CS entry: 2 × (N − 1)
Synchronization Delay: 1 message round trip ≈ T
Message Complexity: O(N)
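As a quick sanity check on the 2(N − 1) figure (a toy calculation, not from the notes): each CS entry costs N − 1 REQUESTs plus N − 1 REPLYs.

```python
# Messages per CS execution in Ricart-Agrawala: (N-1) REQUESTs + (N-1) REPLYs.
def messages_per_cs(n):
    return 2 * (n - 1)

for n in (2, 5, 100):
    print(n, messages_per_cs(n))
# 2 -> 2, 5 -> 8, 100 -> 198: linear growth, hence the scalability concern below
```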



Advantages

  • Simple and optimal in message count.

  • No central coordinator required.

  • Fair – executes CS in increasing timestamp order.

Disadvantages

  • Poor scalability – each site communicates with all others.

  • Requires reliable FIFO channels and no process failure.