Quorum in Distributed Systems Explained

Intermediate · 11 min read · Updated 2026-02-11

After this topic, you will be able to:

  • Calculate quorum values (W, R, N) for different consistency requirements
  • Apply quorum consensus to design fault-tolerant read/write operations
  • Compare strict quorum versus sloppy quorum trade-offs

TL;DR

Quorum consensus ensures data consistency in distributed systems by requiring a minimum number of nodes (a quorum) to agree on read and write operations. The relationship between W (write quorum), R (read quorum), and N (total replicas) determines consistency guarantees: W + R > N provides strong consistency, while W + R ≤ N allows eventual consistency. Sloppy quorum with hinted handoff trades strict consistency for higher availability during node failures.

Cheat Sheet: Strong consistency = W + R > N | High availability = W + R ≤ N | Typical config: N=3, W=2, R=2 | Sloppy quorum = write to any N healthy nodes

The Analogy

Think of a board meeting where 5 directors must approve a budget change. You could require all 5 signatures (strong consistency but slow), or just 3 out of 5 (quorum-based). If you need 3 signatures to approve (W=3) and always check with 3 directors before reading the decision (R=3), you’re guaranteed that at least one director in your read group participated in the write—ensuring you see the latest decision. If a director is sick, sloppy quorum lets you get a temporary signature from an alternate, with a note to get the real director’s signature later (hinted handoff).

Why This Matters in Interviews

Quorum consensus is fundamental to understanding how systems like Cassandra, DynamoDB, and Riak achieve tunable consistency. Interviewers expect you to calculate quorum values for different scenarios, explain the W + R > N formula, and discuss trade-offs between consistency and availability. This topic bridges CAP theorem theory with practical implementation—showing you understand not just what consistency means, but how to engineer it. Senior candidates should connect quorum to real outage scenarios and explain when sloppy quorum prevents cascading failures.


Core Concept

Quorum consensus is a voting mechanism that determines how many replicas must participate in read and write operations to guarantee consistency in a distributed system. Instead of requiring all replicas to agree (which fails if any node is down) or allowing any single replica to serve requests (which risks stale reads), quorum-based systems require a configurable majority. This approach emerged from distributed systems research in the 1970s and became practical with Amazon’s Dynamo paper in 2007, which showed how tunable quorums enable different consistency-availability trade-offs within the same system. The core insight is mathematical: if your write quorum and read quorum overlap, you’re guaranteed to read the latest write.

Quorum Overlap Guarantees Strong Consistency

graph TB
    subgraph "N=5 Replicas"
        R1["Replica 1"]
        R2["Replica 2"]
        R3["Replica 3"]
        R4["Replica 4"]
        R5["Replica 5"]
    end

    subgraph "Write Quorum (W=3)"
        W1["Replica 1<br/><i>Has v2</i>"]
        W2["Replica 2<br/><i>Has v2</i>"]
        W3["Replica 3<br/><i>Has v2</i>"]
    end

    subgraph "Read Quorum (R=3)"
        RD1["Replica 3<br/><i>Returns v2</i>"]
        RD2["Replica 4<br/><i>Returns v1</i>"]
        RD3["Replica 5<br/><i>Returns v1</i>"]
    end

    subgraph "Overlap Zone"
        Overlap["Replica 3<br/><b>Guaranteed Overlap</b><br/><i>W + R - N = 1 node</i>"]
    end

    W1 -.->|"Write"| Overlap
    Overlap -.->|"Read"| RD1

    Note1["Formula: W + R > N<br/>3 + 3 > 5 ✓<br/><br/>At least 1 node must be<br/>in both quorums"]

With N=5, W=3, R=3, the mathematical guarantee W + R - N = 1 ensures at least one replica participates in both write and read quorums. This overlap node has the latest write (v2), so the coordinator can identify and return the most recent value even if other replicas are stale.

Multi-Region Quorum Architecture (DynamoDB-style)

graph TB
    subgraph "Region: us-east-1"
        subgraph "AZ-1a"
            R1["Replica 1<br/><i>Primary</i>"]
        end
        subgraph "AZ-1b"
            R2["Replica 2"]
        end
        subgraph "AZ-1c"
            R3["Replica 3"]
        end
        LB1["Load Balancer<br/><i>Coordinator</i>"]
    end

    subgraph "Region: eu-west-1 (Async Replication)"
        subgraph "AZ-2a"
            R4["Replica 4"]
        end
        subgraph "AZ-2b"
            R5["Replica 5"]
        end
        subgraph "AZ-2c"
            R6["Replica 6"]
        end
        LB2["Load Balancer<br/><i>Coordinator</i>"]
    end

    Client1["Client<br/><i>US User</i>"] -->|"1. Write request"| LB1
    LB1 -->|"2. Write (W=2)"| R1
    LB1 -->|"2. Write (W=2)"| R2
    LB1 -->|"2. Write (W=2)"| R3
    R1 & R2 -.->|"3. ACK (quorum met)"| LB1
    LB1 -.->|"4. Success"| Client1

    R1 -.->|"5. Async replication<br/>(cross-region)"| R4
    R1 -.->|"5. Async replication"| R5
    R1 -.->|"5. Async replication"| R6

    Client2["Client<br/><i>EU User</i>"] -->|"6. Read request"| LB2
    LB2 -->|"7. Read (R=2)"| R4
    LB2 -->|"7. Read (R=2)"| R5
    R4 & R5 -.->|"8. Return data"| LB2
    LB2 -.->|"9. Response"| Client2

    Note1["Local Quorum: N=3, W=2, R=2<br/>within each region<br/><br/>Cross-region: Async replication<br/>for global availability"]

DynamoDB uses local quorum (N=3, W=2, R=2) within a region across three availability zones, providing strong consistency and tolerating one AZ failure. Cross-region replication is asynchronous for global availability, accepting eventual consistency between regions. This design ensures low-latency local operations while maintaining high availability during regional outages.

How It Works

A quorum system operates with three parameters: N (total number of replicas), W (write quorum, the minimum replicas that must acknowledge a write), and R (read quorum, the minimum replicas that must respond to a read). When a client writes data, the coordinator node sends the write to all N replicas but waits for only W acknowledgments before confirming success to the client. The remaining replicas receive the write asynchronously. For reads, the coordinator queries R replicas and returns the value with the highest timestamp or version number. The magic happens in the overlap: if W + R > N, then any read quorum must include at least one node from the most recent write quorum, guaranteeing you'll see the latest data. For example, with N=3, W=2, R=2, any two nodes you read from must include at least one of the two nodes that acknowledged the last write.
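The write/read flow above can be sketched as a small in-memory simulation. The `Replica` and `Coordinator` classes here are illustrative, not from any real database client; the read deliberately queries the replicas least likely to have seen the write, to show that the W + R > N overlap still delivers the latest value.

```python
class Replica:
    """One storage node holding versioned values (illustrative, in-memory)."""
    def __init__(self):
        self.store = {}  # key -> (value, version)

    def write(self, key, value, version):
        self.store[key] = (value, version)

    def read(self, key):
        return self.store.get(key)


class Coordinator:
    """Confirms writes after W acks; reads from R replicas, resolving by version."""
    def __init__(self, n=3, w=2, r=2):
        assert w + r > n, "strong consistency requires W + R > N"
        self.replicas = [Replica() for _ in range(n)]
        self.w, self.r = w, r
        self.version = 0

    def write(self, key, value):
        self.version += 1
        # The coordinator sends to all N replicas; here we model the worst
        # case where only the first W have applied the write so far.
        for replica in self.replicas[:self.w]:
            replica.write(key, value, self.version)

    def read(self, key):
        # Deliberately query the R replicas least likely to have the write:
        # with W + R > N, at least one of them overlaps the write quorum.
        responses = [rep.read(key) for rep in self.replicas[-self.r:]]
        responses = [vv for vv in responses if vv is not None]
        return max(responses, key=lambda vv: vv[1])[0] if responses else None
```

With N=3, W=2, R=2, writes land on replicas 0 and 1 and the read queries replicas 1 and 2; replica 1 is the guaranteed overlap node, so the read still returns the latest version.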

Quorum Write and Read Flow with N=3, W=2, R=2

sequenceDiagram
    participant Client
    participant Coordinator
    participant Replica1
    participant Replica2
    participant Replica3

    Note over Client,Replica3: Write Operation (W=2)
    Client->>Coordinator: 1. Write request (key=user123, value=v2)
    Coordinator->>Replica1: 2. Write v2 (timestamp=t2)
    Coordinator->>Replica2: 2. Write v2 (timestamp=t2)
    Coordinator->>Replica3: 2. Write v2 (timestamp=t2)
    Replica1-->>Coordinator: 3. ACK
    Replica2-->>Coordinator: 3. ACK
    Note over Coordinator: W=2 achieved, return success
    Coordinator-->>Client: 4. Write confirmed
    Replica3-->>Coordinator: 5. ACK (async, after client response)

    Note over Client,Replica3: Read Operation (R=2)
    Client->>Coordinator: 6. Read request (key=user123)
    Coordinator->>Replica1: 7. Read key=user123
    Coordinator->>Replica2: 7. Read key=user123
    Coordinator->>Replica3: 7. Read key=user123
    Replica1-->>Coordinator: 8. Return v2 (t2)
    Replica3-->>Coordinator: 8. Return v2 (t2)
    Note over Coordinator: R=2 achieved, return latest value
    Coordinator-->>Client: 9. Return v2
    Replica2-->>Coordinator: 10. Return v2 (t2, async)

Write operations wait for W=2 acknowledgments before confirming to the client, while the third replica updates asynchronously. Read operations query all replicas but return after R=2 responses, selecting the value with the highest timestamp. This ensures at least one replica in the read set participated in the most recent write.

Key Principles

Quorum Intersection Guarantees Consistency

When W + R > N, read and write quorums must overlap by at least one node. This mathematical guarantee means you cannot read stale data, because at least one node in your read set has the latest write; the coordinator resolves conflicts using timestamps or vector clocks. Example: with N=5, W=3, R=3, the overlap is W + R - N = 1 node. Even if 2 nodes fail, you can still read from 3 healthy nodes, at least one of which participated in the last write. Netflix uses this configuration for critical metadata stores where consistency matters more than latency.

Tunable Consistency Through Quorum Configuration

Different W and R values create different consistency-latency-availability profiles. W=1, R=1 maximizes availability and performance but provides only eventual consistency. W=N, R=1 ensures all replicas have data before write confirmation but makes writes slow and fragile. W=R=⌈(N+1)/2⌉ balances read and write costs. Example: Cassandra lets you set consistency per query: LOCAL_QUORUM for fast reads in one datacenter, QUORUM for cross-datacenter consistency, or ONE for high-throughput logging where eventual consistency suffices. Twitter's timeline service uses ONE for writes (fast ingestion) and LOCAL_QUORUM for reads (reasonable consistency).

Availability Through Redundancy

Quorum systems tolerate node failures as long as enough replicas remain to form a quorum: the system survives N - W write failures and N - R read failures. This is why N is typically odd (3, 5, 7); an odd N maximizes fault tolerance for a given quorum size. Example: with N=3, W=2, R=2, you tolerate 1 node failure for both reads and writes. With N=5, W=3, R=3, you tolerate 2 failures. Amazon DynamoDB uses N=3 across availability zones, allowing one entire AZ to fail while maintaining quorum. Increasing to N=5 would tolerate two AZ failures but raises storage costs by two-thirds.
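The three principles above reduce to a few lines of arithmetic. The sketch below wraps them in a hypothetical helper (`quorum_profile` is not an API from any real database client):

```python
def quorum_profile(n, w, r):
    """Summarize the consistency and fault-tolerance profile of (N, W, R)."""
    if not (1 <= w <= n and 1 <= r <= n):
        raise ValueError("W and R must each be between 1 and N")
    return {
        "strong_consistency": w + r > n,      # read and write quorums overlap
        "overlap_nodes": max(w + r - n, 0),   # replicas guaranteed in both
        "failures_tolerated": min(n - w, n - r),
        "majority_quorum": n // 2 + 1,        # floor(N/2) + 1
    }
```

For example, `quorum_profile(5, 3, 3)` reports strong consistency with a one-node overlap and two failures tolerated, while `quorum_profile(3, 1, 1)` reports no consistency guarantee at all.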


Deep Dive

Types / Variants

Strict Quorum requires at least W of the designated replicas to acknowledge writes and R to respond to reads, with W + R > N for strong consistency. The coordinator blocks until quorum is met or times out. This is the default in systems prioritizing correctness.

Sloppy Quorum relaxes the requirement that quorum nodes must be from the designated replica set. If a designated replica is unavailable, the coordinator writes to any healthy node in the cluster, storing a hint about the intended recipient. Once the original node recovers, the temporary node hands off the data (hinted handoff). This trades strict consistency for availability: writes never fail due to temporary node failures, but reads might miss recent writes until handoff completes. Cassandra and Riak use sloppy quorum by default.

Last-Write-Wins (LWW) quorum uses timestamps to resolve conflicts when multiple versions exist. The coordinator returns the value with the highest timestamp. This is simple but loses concurrent updates: if two clients write simultaneously, one write silently disappears.

Vector Clock Quorum tracks causality using vector clocks, allowing the application to resolve conflicts when concurrent writes create siblings. Riak exposes siblings to clients for application-level merge logic, while Cassandra uses LWW for simplicity.

Sloppy Quorum with Hinted Handoff

sequenceDiagram
    participant Client
    participant Coordinator
    participant R1 as Replica 1<br/>(Designated)
    participant R2 as Replica 2<br/>(Designated)
    participant R3 as Replica 3<br/>(Designated, DOWN)
    participant R4 as Replica 4<br/>(Temporary)

    Note over Client,R4: Normal Write (All designated replicas healthy)
    Client->>Coordinator: Write request
    Coordinator->>R1: Write data
    Coordinator->>R2: Write data
    Coordinator->>R3: Write data
    R1-->>Coordinator: ACK
    R2-->>Coordinator: ACK
    R3-->>Coordinator: ACK
    Coordinator-->>Client: Success (W=3)

    Note over R3: Replica 3 fails
    R3->>R3: ❌ Node down

    Note over Client,R4: Sloppy Quorum Write (R3 unavailable)
    Client->>Coordinator: Write request
    Coordinator->>R1: Write data
    Coordinator->>R2: Write data
    Coordinator-xR3: Timeout (node down)
    Coordinator->>R4: Write data + hint<br/>(intended for R3)
    R1-->>Coordinator: ACK
    R2-->>Coordinator: ACK
    R4-->>Coordinator: ACK (stored hint)
    Coordinator-->>Client: Success (W=3, sloppy)

    Note over R3: Replica 3 recovers
    R3->>R3: ✓ Node back online

    Note over R4,R3: Hinted Handoff
    R4->>R3: Transfer hinted data
    R3-->>R4: ACK
    R4->>R4: Delete hint

Strict quorum would fail the write when Replica 3 is down (cannot achieve W=3 from designated replicas). Sloppy quorum allows the coordinator to write to any healthy node (Replica 4) with a hint indicating the data belongs to Replica 3. Once Replica 3 recovers, Replica 4 hands off the data and deletes the hint, eventually achieving consistency.
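The sequence above can be modeled with a small simulation. The `Node` class and function names here are hypothetical, and hints are stored as plain tuples rather than a real hint log:

```python
class Node:
    """A cluster node; `up` toggles simulated failures (illustrative only)."""
    def __init__(self, name):
        self.name, self.up = name, True
        self.data = {}    # key -> value
        self.hints = []   # (intended_node, key, value) awaiting handoff


def sloppy_write(designated, spares, key, value, w):
    """Write to designated replicas; divert to spares (with hints) when down."""
    spares = iter(spares)
    acks = 0
    for node in designated:
        if node.up:
            node.data[key] = value
        else:
            spare = next(spares)          # any healthy non-designated node
            spare.data[key] = value
            spare.hints.append((node.name, key, value))
        acks += 1                         # counts toward the sloppy quorum
    return acks >= w


def hinted_handoff(spare, recovered):
    """Replay hints destined for a recovered node, then delete them."""
    for hint in [h for h in spare.hints if h[0] == recovered.name]:
        _, key, value = hint
        recovered.data[key] = value
        spare.hints.remove(hint)
```

A strict-quorum version of `sloppy_write` would simply skip the `else` branch and return False when fewer than W designated replicas acknowledge.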

Trade-offs

Consistency vs Latency: Higher W and R values increase consistency but add latency. W=3, R=3 with N=5 means waiting for 3 replicas to respond on every operation. W=1, R=1 is fast but risks stale reads. The decision depends on whether your system values correctness (financial transactions use W=QUORUM, R=QUORUM) or speed (social media feeds use W=1, R=1).

Availability vs Consistency: Strict quorum fails writes when fewer than W nodes are reachable, prioritizing consistency over availability (CP in CAP theorem). Sloppy quorum accepts writes to any N healthy nodes, prioritizing availability over consistency (AP in CAP theorem). During network partitions, strict quorum may reject writes to maintain correctness, while sloppy quorum accepts writes that might conflict later.

Storage Cost vs Fault Tolerance: Higher N increases fault tolerance and read throughput (more replicas to query) but multiplies storage costs linearly. N=3 is common for cost efficiency, N=5 for critical data, N=7+ for systems requiring extreme durability like Uber's Schemaless storage. Each additional replica also increases write amplification: every write hits N nodes.

Consistency vs Availability Trade-offs in Quorum Configurations

graph LR
    subgraph "Strong Consistency (W + R > N)"
        SC1["W=3, R=3, N=5<br/><i>Overlap: 1 node</i>"]
        SC2["W=2, R=2, N=3<br/><i>Overlap: 1 node</i>"]
        SC3["W=ALL, R=1, N=3<br/><i>Overlap: 3 nodes</i>"]
    end

    subgraph "Eventual Consistency (W + R ≤ N)"
        EC1["W=1, R=1, N=3<br/><i>No overlap guarantee</i>"]
        EC2["W=2, R=1, N=3<br/><i>Possible stale reads</i>"]
    end

    subgraph "Trade-off Spectrum"
        High["High Consistency<br/>Low Availability<br/>High Latency"]
        Balanced["Balanced<br/>W=QUORUM<br/>R=QUORUM"]
        Fast["High Availability<br/>Low Latency<br/>Eventual Consistency"]
    end

    SC3 --> High
    SC1 --> Balanced
    SC2 --> Balanced
    EC2 --> Fast
    EC1 --> Fast

    High -.->|"Use case:<br/>Financial transactions"| SC3
    Balanced -.->|"Use case:<br/>User profiles"| SC2
    Fast -.->|"Use case:<br/>Social media feeds"| EC1

Different quorum configurations create different consistency-availability profiles. W=ALL, R=1 provides strongest consistency but fails writes if any node is down. W=QUORUM, R=QUORUM balances consistency and availability. W=1, R=1 maximizes availability and speed but only guarantees eventual consistency. The choice depends on whether your system prioritizes correctness or availability.

Common Pitfalls

Pitfall: Ignoring Network Partition Behavior

Why it happens: Developers assume quorum always provides consistency, forgetting that sloppy quorum sacrifices consistency during partitions. If a network split divides a cluster, each side may still reach a sloppy write quorum using whatever healthy nodes it can see, creating divergent data that requires reconciliation after the partition heals.

How to avoid: Understand your system's quorum mode. Use strict quorum for financial data where conflicts are unacceptable. Use sloppy quorum for high-availability systems with conflict resolution strategies (CRDTs, vector clocks, or application-level merge). Monitor hinted handoff queues; if they grow unbounded, you have a partition that needs attention.

Pitfall: Misconfiguring W and R for Consistency Requirements

Why it happens: Teams set W=1, R=1 for performance without realizing this provides zero consistency guarantees. Or they set W=ALL, R=1 thinking it ensures consistency, but W=ALL makes writes fail if any replica is down, sacrificing availability.

How to avoid: Use the formula W + R > N for strong consistency. For most systems, W=QUORUM, R=QUORUM (where QUORUM = ⌊N/2⌋ + 1) balances consistency and availability. Test failure scenarios: kill N - W + 1 nodes and verify writes fail gracefully, kill N - R nodes and verify reads still work. Document your consistency model clearly; eventual consistency is fine if the application handles it.

Pitfall: Forgetting About Read Repair and Anti-Entropy

Why it happens: Quorum reads return as soon as R replicas respond, but the other N - R replicas might have stale data. Without background repair, replicas diverge over time, and future reads with different R values might return stale data.

How to avoid: Enable read repair (the coordinator updates stale replicas after returning to the client) and anti-entropy (background Merkle tree comparison to sync replicas). Cassandra does read repair on a percentage of reads and runs anti-entropy via nodetool repair. Schedule regular repairs: weekly for active data, monthly for cold data. Monitor repair lag as a health metric.
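Read repair can be sketched as follows, assuming replicas are plain dicts mapping key to (value, timestamp); a real coordinator would issue the repair writes asynchronously, after replying to the client:

```python
def read_with_repair(replicas, key, r):
    """Quorum read that repairs stale replicas in the read set (a sketch)."""
    read_set = replicas[:r]
    present = [rep[key] for rep in read_set if key in rep]
    latest_value, latest_ts = max(present, key=lambda vt: vt[1])
    # Read repair: push the latest version to any stale or missing replica
    for rep in read_set:
        if rep.get(key, (None, -1))[1] < latest_ts:
            rep[key] = (latest_value, latest_ts)
    return latest_value
```

After one such read, every replica in the read set holds the latest version, so subsequent reads converge even with a smaller R.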


Math & Calculations

Formula

Quorum Formulas:

  • Write Quorum: W = number of replicas that must acknowledge write
  • Read Quorum: R = number of replicas that must respond to read
  • Total Replicas: N = replication factor
  • Strong Consistency: W + R > N
  • Majority Quorum: QUORUM = ⌊N/2⌋ + 1
  • Fault Tolerance: Tolerate min(N - W, N - R) failures

Latency: Write latency = latency of the W-th fastest replica (the coordinator waits for the W fastest acknowledgments); Read latency = latency of the R-th fastest replica.
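The latency formula is an order statistic, which makes it one line of code (a sketch; the function name is illustrative):

```python
def quorum_latency_ms(replica_latencies_ms, k):
    """Latency of a quorum operation with quorum size k (W for writes, R for
    reads): the coordinator waits for the k fastest responses, so the
    operation completes at the k-th fastest replica's latency."""
    return sorted(replica_latencies_ms)[k - 1]
```

With replica latencies of 5ms, 10ms, and 50ms, a W=2 write completes in 10ms; the 50ms straggler only matters if k equals N.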

Variables

N: Total number of replicas, typically 3-7. Higher N increases storage cost and write amplification but improves fault tolerance and read throughput.

W: Write quorum size. W=1 is fast but risky, W=QUORUM balances speed and consistency, W=ALL ensures all replicas have data but fails if any node is down.

R: Read quorum size. R=1 is fast but may return stale data, R=QUORUM provides consistency when W=QUORUM, R=ALL reads from all replicas (rarely used).

Consistency Level: W + R > N guarantees strong consistency (read sees latest write), W + R ≤ N allows eventual consistency (read might miss recent writes).

Worked Example

Scenario: Design a user profile service for 100M users requiring strong consistency for profile updates but tolerating 1 datacenter failure.

Solution: Use N=3 replicas across 3 availability zones. For strong consistency, we need W + R > 3. Choose W=2, R=2 (QUORUM for both).

Verification: W + R = 2 + 2 = 4 > 3 ✓ Strong consistency guaranteed. Fault tolerance: min(3-2, 3-2) = 1 node failure tolerated ✓

Latency Calculation: Assume replica latencies are 5ms, 10ms, 50ms (one replica in distant AZ). Write latency = max(5ms, 10ms) = 10ms (wait for 2 fastest). Read latency = max(5ms, 10ms) = 10ms. The slow replica doesn’t block operations.

Capacity: 100M users × 10KB/profile × 3 replicas = 3TB total storage. Each write hits 3 nodes, so write throughput = (node write capacity) / 3.

Alternative for High Availability: If availability matters more than consistency, use W=1, R=1 with N=3. This provides eventual consistency but never blocks on node failures. Profile updates might briefly show stale data, but the system stays available during outages. Add read repair to converge replicas within seconds.
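The worked example's arithmetic can be checked directly (storage uses decimal units, 1 TB = 10⁹ KB):

```python
n, w, r = 3, 2, 2
assert w + r > n                          # 4 > 3: strong consistency
assert min(n - w, n - r) == 1             # tolerates one node/AZ failure

latencies_ms = [5, 10, 50]                # one replica in a distant AZ
assert sorted(latencies_ms)[w - 1] == 10  # write waits for the 2 fastest
assert sorted(latencies_ms)[r - 1] == 10  # read does too; 50ms node ignored

users, profile_kb, replicas = 100_000_000, 10, 3
storage_tb = users * profile_kb * replicas / 1e9   # KB to TB (decimal)
assert storage_tb == 3.0
```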


Real-World Examples

Cassandra (Apache): Wide-Column NoSQL Database

Cassandra implements tunable consistency with configurable quorum levels per query. Netflix uses Cassandra with N=3, LOCAL_QUORUM (W=2, R=2 within one datacenter) for user viewing history, providing strong consistency for recent watches while tolerating datacenter failures. For less critical data like A/B test assignments, they use W=ONE, R=ONE for maximum throughput. Cassandra's sloppy quorum with hinted handoff ensures writes succeed even when nodes are down; hints are stored for 3 hours by default, then dropped if the node doesn't recover. This design prioritizes availability (AP in CAP) while allowing operators to choose consistency when needed.

Amazon DynamoDB: Managed NoSQL Database

DynamoDB uses N=3 replicas across availability zones with W=2, R=2 for strongly consistent reads (optional) and W=2, R=1 for eventually consistent reads (default). The system automatically handles hinted handoff and anti-entropy through background repair. During the 2015 outage, DynamoDB's quorum design allowed it to continue serving requests even when one AZ became unreachable: writes succeeded with 2 out of 3 replicas, and reads returned data from the 2 healthy AZs. DynamoDB's global tables use multi-region replication with last-write-wins conflict resolution, accepting that concurrent writes across regions might lose updates in exchange for low-latency local writes.

Uber: Schemaless (MySQL-based sharded storage)

Uber's Schemaless uses a quorum-like approach over MySQL replicas for critical trip data. Each shard has 3 MySQL replicas, and writes require 2 acknowledgments before confirming to the application. Reads query 2 replicas and use the version with the highest timestamp. This design survived multiple datacenter failures during Uber's growth phase: when one replica lagged due to replication delay, the quorum read from the other 2 replicas ensured drivers and riders saw consistent trip state. Uber later added cross-region replication with asynchronous quorum (writes confirm locally, then replicate globally) to support international expansion while maintaining low write latency.


Interview Expectations

Mid-Level

Calculate quorum values for given scenarios. Explain why W + R > N provides strong consistency using a simple example (N=3, W=2, R=2). Describe the trade-off between W=1, R=1 (fast, eventual consistency) and W=QUORUM, R=QUORUM (slower, strong consistency). Know that quorum helps systems tolerate node failures. Expect questions like: ‘If we have 5 replicas and want to tolerate 2 failures, what should W and R be?’ Answer: W=3, R=3 ensures quorum even with 2 nodes down.

Senior

Design quorum configurations for specific consistency and availability requirements. Explain sloppy quorum and hinted handoff, including when they’re appropriate (high availability systems) versus when strict quorum is required (financial transactions). Discuss read repair and anti-entropy mechanisms. Calculate latency implications of different quorum sizes. Handle questions like: ‘Our system has 100ms p99 latency with W=2, R=2, N=3. One replica is consistently slow. How do you fix it?’ Answer: The slow replica shouldn’t affect p99 since we only wait for 2 fastest replicas. Investigate why it’s slow (network, disk, load) and consider replacing it, but quorum already protects latency.

Staff+

Architect multi-region quorum systems with cross-datacenter consistency trade-offs. Explain how quorum interacts with network partitions and split-brain scenarios. Design conflict resolution strategies for sloppy quorum (vector clocks, CRDTs, application-level merge). Discuss operational challenges like monitoring hinted handoff queues, tuning repair schedules, and handling quorum loss during cascading failures. Propose solutions for edge cases: ‘During a network partition, both sides of a 5-node cluster accept writes with sloppy quorum. How do you reconcile when the partition heals?’ Answer: Use vector clocks to detect conflicts, expose siblings to application for merge, or use LWW with timestamp synchronization (NTP). Consider fencing tokens to prevent split-brain writes.

Common Interview Questions

How does quorum consensus ensure strong consistency? (Answer: W + R > N guarantees overlap)

What’s the difference between strict and sloppy quorum? (Answer: Strict requires designated replicas, sloppy allows any healthy nodes)

How do you choose W and R for a system that needs high availability? (Answer: W=1, R=1 for availability, accept eventual consistency)

What happens during a network partition with quorum? (Answer: Depends on strict vs sloppy—strict may reject writes, sloppy accepts with hinted handoff)

How does read repair work? (Answer: Coordinator detects stale replicas during read, updates them asynchronously)

Red Flags to Avoid

Claiming quorum always provides strong consistency (ignoring W + R ≤ N case)

Not knowing the formula W + R > N or unable to calculate quorum values

Confusing quorum with consensus algorithms like Raft or Paxos (quorum is simpler, no leader election)

Ignoring latency implications (not realizing quorum waits for slowest of W or R replicas)

Assuming all replicas must be updated synchronously (missing the point of quorum)


Key Takeaways

Quorum consensus requires W replicas to acknowledge writes and R replicas to respond to reads. The formula W + R > N guarantees strong consistency by ensuring read and write quorums overlap.

Tunable consistency: W=1, R=1 maximizes availability with eventual consistency; W=QUORUM, R=QUORUM balances consistency and availability; W=ALL, R=1 ensures all replicas have data but sacrifices availability.

Sloppy quorum with hinted handoff trades strict consistency for higher availability—writes succeed to any N healthy nodes during failures, with hints replayed when original nodes recover.

Fault tolerance: A quorum system tolerates min(N - W, N - R) node failures. Typical configuration N=3, W=2, R=2 tolerates 1 failure while providing strong consistency.

Real-world systems like Cassandra and DynamoDB use quorum to achieve tunable consistency, allowing applications to choose the right trade-off between consistency, availability, and latency per operation.

Prerequisites

CAP Theorem - Understanding consistency vs availability trade-offs that quorum addresses

Replication - How data is copied across nodes before applying quorum logic

Eventual Consistency - What happens when W + R ≤ N in quorum systems

Consensus Algorithms - Stronger consistency guarantees than quorum (Raft, Paxos)

Next Steps

Vector Clocks - Conflict resolution mechanism for sloppy quorum

Distributed Transactions - Coordinating writes across multiple quorum groups