Write-Through Cache: Sync Writes to DB & Cache

Intermediate · 9 min read · Updated 2026-02-11

After this topic, you will be able to:

  • Implement the write-through caching pattern for strong consistency requirements
  • Analyze the latency vs consistency trade-offs of write-through caching
  • Compare write-through with write-behind and justify the choice for specific workloads
  • Design error handling and rollback strategies for write-through implementations

TL;DR

Write-through caching synchronously writes data to both cache and database before confirming success to the application. This pattern guarantees strong consistency between cache and database at the cost of higher write latency. Use it when data consistency is more critical than write performance, such as financial transactions or user profile updates.

Cheat Sheet:

  • Pattern: App → Cache → Database → Confirm (synchronous)
  • Consistency: Strong (cache always matches DB)
  • Write Latency: High (sum of cache + DB write times)
  • Use When: Consistency > speed, frequently read after write
  • Avoid When: Write-heavy workloads, latency-sensitive writes

The Problem It Solves

Distributed systems face a fundamental challenge: how do you keep a cache and database synchronized without creating inconsistent states? When you write data to a database and separately update a cache, you create a window where they can diverge. The database write might succeed while the cache update fails, or vice versa. This dual-write problem becomes especially painful in systems where stale cache data causes incorrect business logic—imagine showing a user their old account balance after a deposit, or displaying outdated permissions after a role change.

The problem intensifies with read-after-write scenarios. Users expect to immediately see their updates reflected in the system. If you write to the database but the cache still holds old data, subsequent reads return stale information. You could invalidate the cache on every write, but then you lose the performance benefits of caching for the most common case: users checking data they just modified. Write-through caching solves this by treating the cache as the authoritative write path, ensuring the cache and database move together as a single atomic unit.

Solution Overview

Write-through caching makes the cache responsible for database writes, creating a single coordinated write path. When the application wants to update data, it writes to the cache, which then synchronously writes to the database before acknowledging success. The cache acts as a write-through proxy, guaranteeing that both storage layers are updated together or neither is updated at all.

This approach eliminates the dual-write problem by removing the dual part—there’s only one write operation from the application’s perspective. The cache becomes the single source of truth for write operations, coordinating with the database behind the scenes. If either the cache or database write fails, the entire operation fails, maintaining consistency. The trade-off is straightforward: you pay higher write latency in exchange for guaranteed consistency and simplified application logic. Your application code becomes cleaner because it doesn’t need complex cache invalidation logic or eventual consistency handling.

Write-Through Architecture: Cache as Write Coordinator

graph LR
    App1[Application Service 1] -->|write| Cache
    App2[Application Service 2] -->|write| Cache
    App3[Application Service 3] -->|write| Cache
    
    subgraph WTLayer["Write-Through Cache Layer"]
        Cache[Cache Coordinator]
        Cache -->|1. Validate| Validator[Data Validator]
        Validator -->|2. Write| DBWriter[DB Writer]
        DBWriter -->|3. Update| CacheStore[Cache Storage]
    end
    
    DBWriter -->|synchronous write| DB[(Database<br/>Source of Truth)]
    
    App1 & App2 & App3 -.->|read| Cache
    Cache -.->|cache miss| DB
    
    Note[Single Write Path:<br/>Apps only write to cache<br/>Cache coordinates DB writes<br/>No dual-write problem]

Write-through eliminates the dual-write problem by making the cache the single write coordinator. Applications write to one place, and the cache handles synchronization with the database, simplifying application logic while guaranteeing consistency.

How It Works

Let’s walk through a write-through operation step by step, using a user profile update as an example:

Step 1: Application initiates write The application calls cache.set(user_id, profile_data) to update a user’s profile. Notice the application only talks to the cache—it doesn’t directly touch the database. This is the key architectural decision that enables write-through.

Step 2: Cache validates and prepares The cache layer receives the write request and validates the data. It might check data types, enforce size limits, or apply business rules. This validation happens before touching the database, failing fast if the data is invalid.

Step 3: Synchronous database write The cache executes db.update(user_id, profile_data) and waits for confirmation. This is a blocking operation—the cache doesn’t return to the application until the database acknowledges the write. If you’re using a relational database, this includes waiting for the transaction to commit and replicate to any standby nodes.

Step 4: Cache update Only after the database confirms success does the cache update its own storage with the new data. Some implementations do this before the database write (write-to-cache-first), but the safer pattern is database-first because databases are typically more durable than cache nodes.

Step 5: Acknowledge to application The cache returns success to the application. At this point, both the cache and database contain the new data, and any subsequent read from either source will return consistent results.

Example implementation:

class DatabaseError(Exception):
    pass

class CacheWriteError(Exception):
    pass

def update_user_profile(user_id, profile_data):
    # Application code - single write operation
    cache.set(user_id, profile_data)
    return {"status": "success"}

# Inside the cache layer
class WriteThroughCache:
    def __init__(self, db, cache_store):
        self.db = db
        self.cache_store = cache_store

    def set(self, key, value):
        try:
            # Write to the database first (source of truth, durability)
            self.db.execute(
                "UPDATE users SET profile = ? WHERE id = ?",
                (value, key),
            )
            # Update the cache only after the database confirms success
            self.cache_store.put(key, value)
            return True
        except DatabaseError as e:
            # Database write failed - leave the cache untouched
            raise CacheWriteError(f"Write-through failed: {e}") from e

The beauty of this pattern is its simplicity from the application’s perspective. The application makes one call and gets one response. All the complexity of coordinating two storage systems lives inside the cache layer, where it can be tested, monitored, and optimized independently.

Write-Through Cache Flow with Step-by-Step Execution

sequenceDiagram
    participant App as Application
    participant Cache as Write-Through Cache
    participant DB as Database
    
    App->>Cache: 1. set(user_id, profile_data)
    activate Cache
    Note over Cache: Validate data<br/>(types, size, rules)
    Cache->>DB: 2. UPDATE users SET profile=...
    activate DB
    Note over DB: Execute transaction<br/>Wait for commit<br/>Replicate to standby
    DB-->>Cache: 3. Write confirmed
    deactivate DB
    Note over Cache: Update cache storage<br/>only after DB success
    Cache->>Cache: 4. cache_store.put(key, value)
    Cache-->>App: 5. Success response
    deactivate Cache
    Note over App,DB: Both cache and DB now consistent<br/>Subsequent reads return new data

Write-through executes as a synchronous sequence where the cache coordinates both storage layers. The application waits for the entire chain to complete, ensuring strong consistency but paying the latency cost of both cache and database writes.

Consistency Guarantees

Write-through provides strong consistency between cache and database, meaning reads always return the most recently written value. This guarantee comes from the synchronous write path—the application only receives success after both systems are updated.

However, partial failures require careful handling. If the database write succeeds but the cache update fails, you have a consistency problem: the database has new data but the cache still serves old data. The correct response is to fail the entire operation and return an error to the application, even though the database write succeeded. This might seem wasteful, but it preserves the consistency guarantee: the application can safely retry, because re-executing the same idempotent update leaves the database unchanged and gives the cache another chance to catch up.

The reverse scenario—cache write succeeds but database write fails—is simpler. You can roll back the cache update before returning an error. Some implementations use a two-phase approach: write to cache first (fast), then database (durable), then confirm cache. If the database write fails, evict the key from cache.
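The two-phase approach described above can be sketched in a few lines. This is a minimal, self-contained illustration with toy in-memory stand-ins (`DictCache`, `FlakyDB`) rather than a real cache or database client:

```python
# Cache-first write-through with rollback: write to cache, then database,
# and evict the cache entry if the database write fails.
# DictCache and FlakyDB are toy stand-ins, not a real library API.

class DictCache:
    """In-memory stand-in for a cache node."""
    def __init__(self):
        self.data = {}
    def put(self, key, value):
        self.data[key] = value
    def delete(self, key):
        self.data.pop(key, None)

class FlakyDB:
    """Stand-in database that can be told to fail its next write."""
    def __init__(self):
        self.data = {}
        self.fail_next = False
    def write(self, key, value):
        if self.fail_next:
            self.fail_next = False
            raise IOError("db unavailable")
        self.data[key] = value

class CacheFirstWriteThrough:
    def __init__(self, cache, db):
        self.cache, self.db = cache, db
    def set(self, key, value):
        self.cache.put(key, value)        # 1. fast cache write
        try:
            self.db.write(key, value)     # 2. durable database write
        except IOError as e:
            self.cache.delete(key)        # 3. roll back cache on failure
            raise RuntimeError(f"write-through failed, cache rolled back: {e}")

cache, db = DictCache(), FlakyDB()
wt = CacheFirstWriteThrough(cache, db)
wt.set("user:1", "alice")                 # both stores updated
db.fail_next = True
try:
    wt.set("user:1", "bob")
except RuntimeError:
    pass
print(cache.data)   # → {} (key evicted, reads fall through to the DB)
print(db.data)      # → {'user:1': 'alice'} (old value preserved)
```

After the rollback, reads miss the cache and hit the database, which still holds the last committed value, so correctness is preserved at the cost of one cache miss.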

Transaction coordination becomes critical in distributed deployments. If you’re using Redis as your cache and PostgreSQL as your database, you don’t have distributed transactions. The pattern relies on ordering: database write first (source of truth), then cache write (performance layer). If the cache write fails, you’ve lost performance but not correctness—the next read will hit the database and repopulate the cache. This is why many write-through implementations combine with cache-aside for reads: writes go through the cache to the database, but cache misses on reads fetch from the database and populate the cache.
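The read side of that combination — cache-aside on misses — can be sketched with plain dictionaries standing in for the cache and database (illustrative names, not a specific library API):

```python
# Cache-aside read path paired with write-through writes: a miss fetches
# from the database (source of truth) and repopulates the cache.
cache = {"hot": "cached-value"}
db = {"hot": "cached-value", "cold": "db-only-value"}

def get(key):
    value = cache.get(key)
    if value is None:            # miss (e.g. after a failed cache update)
        value = db.get(key)      # fall back to the source of truth
        if value is not None:
            cache[key] = value   # repopulate so the next read is a hit
    return value

print(get("hot"))     # → cached-value (served from cache)
print(get("cold"))    # → db-only-value (fetched from db, now cached)
print("cold" in cache)  # → True
```

This is why a failed cache write after a successful database write costs only performance: the very next read repairs the cache.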

Write-Through Failure Handling and Rollback Strategies

graph TB
    subgraph S1["Scenario 1: Database Write Fails"]
        A1[App writes to cache] --> B1[Cache validates]
        B1 --> C1[Write to database]
        C1 --> D1{DB write<br/>succeeds?}
        D1 -->|No| E1[Return error to app]
        D1 -->|Yes| F1[Update cache]
        F1 --> G1[Return success]
        E1 -.->|Don't update cache| H1[Cache remains unchanged]
    end
    
    subgraph S2["Scenario 2: Cache Write Fails After DB Success"]
        A2[App writes to cache] --> B2[Write to database]
        B2 --> C2{DB write<br/>succeeds?}
        C2 -->|Yes| D2[Update cache]
        D2 --> E2{Cache write<br/>succeeds?}
        E2 -->|No| F2[Return error to app]
        E2 -->|Yes| G2[Return success]
        F2 -.->|DB has new data<br/>Cache has old data| H2[Inconsistent state]
    end
    
    Note1[Solution: Database-first ordering<br/>If cache update fails, next read<br/>will miss cache and fetch from DB]

Failure scenarios reveal why database-first ordering is critical. If the database write succeeds but cache update fails, the system loses performance (cache miss) but maintains correctness. The alternative—cache-first ordering—can create true inconsistency where cache serves stale data indefinitely.

Trade-offs

Write Latency vs. Consistency

Write-through adds the database write time to the cache write time on your critical path. If your database write takes 10ms and cache write takes 2ms, your total write latency is 12ms plus network overhead. Compare this to write-behind (see Write-Behind), where the application only waits for the cache write (2ms) and the database write happens asynchronously. For write-heavy workloads, this latency difference compounds: a system handling 10,000 writes per second accumulates an extra 100 seconds of latency, summed across requests, for every second of traffic with write-through versus write-behind.
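The arithmetic behind those numbers is worth checking directly, using the example latencies above (10ms database write, 2ms cache write):

```python
# Cumulative write latency accrued across one second of traffic
# at 10,000 writes/sec, for each pattern.
db_write_ms, cache_write_ms = 10, 2
writes_per_sec = 10_000

write_through_ms = db_write_ms + cache_write_ms  # app waits for both
write_behind_ms = cache_write_ms                 # app waits for cache only

wt_cumulative_s = write_through_ms * writes_per_sec / 1000
wb_cumulative_s = write_behind_ms * writes_per_sec / 1000
print(wt_cumulative_s)                    # → 120.0 seconds
print(wb_cumulative_s)                    # → 20.0 seconds
print(wt_cumulative_s - wb_cumulative_s)  # → 100.0 seconds extra
```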

The decision framework: Choose write-through when consistency matters more than write speed. Financial transactions, user authentication, and permission systems cannot tolerate stale reads. Choose write-behind when write throughput is critical and you can tolerate eventual consistency, such as analytics events or activity logs.

Operational Complexity vs. Application Simplicity

Write-through pushes complexity into the cache layer, making the cache implementation more sophisticated. You need error handling, retry logic, monitoring, and potentially transaction coordination. Your cache becomes a critical write path component—if the cache goes down, writes fail even though the database is healthy. This creates an additional failure mode.

However, application code becomes dramatically simpler. Developers write to one place and trust the data is consistent. No cache invalidation logic, no eventual consistency bugs, no race conditions between cache updates and database writes. For teams building multiple services, this simplicity multiplies—every service benefits from the consistent write path without implementing its own cache management.

Cold Start Performance

When you add new cache nodes (scaling up or replacing failed nodes), write-through naturally warms them up. Every write populates the cache, so frequently updated data automatically lives in cache. This contrasts with cache-aside, where new nodes start empty and only populate on cache misses. The trade-off: you’re caching data that might never be read again. A user updates their profile once and never logs in again—you’ve cached useless data. Combine write-through with TTLs to expire rarely-accessed data and reclaim memory.
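Pairing write-through with TTLs can be sketched with expiry timestamps; real caches do this natively (e.g. Redis `SET` with the `EX` option), so this toy `TTLCacheStore` is purely illustrative:

```python
# Write-through cache storage with per-entry TTL: entries written through
# the cache expire after ttl_seconds, reclaiming memory for data that is
# written once but rarely read.
import time

class TTLCacheStore:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.data = {}                    # key -> (value, expires_at)
    def put(self, key, value):
        self.data[key] = (value, time.monotonic() + self.ttl)
    def get(self, key):
        entry = self.data.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self.data[key]            # lazily reclaim expired entries
            return None
        return value

store = TTLCacheStore(ttl_seconds=0.05)
store.put("rarely-read", "profile-blob")  # populated by a write-through set
print(store.get("rarely-read"))           # → profile-blob (still fresh)
time.sleep(0.06)
print(store.get("rarely-read"))           # → None (expired and evicted)
```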

Write-Through vs Write-Behind Latency Comparison

graph LR
    subgraph WTP["Write-Through Pattern"]
        WT_App[Application] -->|1. write| WT_Cache[Cache Layer]
        WT_Cache -->|2. sync write<br/>10ms| WT_DB[(Database)]
        WT_DB -->|3. confirm| WT_Cache
        WT_Cache -->|2ms cache write| WT_Cache_Store[Cache Storage]
        WT_Cache -->|4. success<br/>Total: 12ms| WT_App
    end
    
    subgraph WBP["Write-Behind Pattern"]
        WB_App[Application] -->|1. write| WB_Cache[Cache Layer]
        WB_Cache -->|2ms cache write| WB_Cache_Store[Cache Storage]
        WB_Cache -->|2. success<br/>Total: 2ms| WB_App
        WB_Cache -.->|3. async write<br/>10ms| WB_DB[(Database)]
        WB_DB -.->|eventual| WB_Cache
    end
    
    Comparison["Write-Through: 12ms latency, strong consistency<br/>Write-Behind: 2ms latency, eventual consistency<br/><br/>At 10,000 writes/sec:<br/>Write-Through: 120 seconds cumulative latency<br/>Write-Behind: 20 seconds cumulative latency"]

The latency trade-off is quantifiable: write-through adds cache and database write times together, while write-behind only waits for the cache write. For write-heavy workloads, this 6x latency difference compounds significantly, making write-behind the better choice when eventual consistency is acceptable.

When to Use (and When Not To)

Use write-through when:

  1. Strong consistency is required: Financial systems, user authentication, authorization, and any domain where stale data causes incorrect behavior. If reading old data after a write creates bugs, you need write-through.

  2. Read-after-write is common: Users frequently read data immediately after updating it. Profile updates, settings changes, and shopping cart modifications all exhibit this pattern. Write-through ensures the cache is already populated for the subsequent read.

  3. Write volume is manageable: Your system handles hundreds or low thousands of writes per second, not tens of thousands. The synchronous write latency is acceptable for your SLA.

  4. Data durability matters more than write speed: You’re willing to sacrifice write performance to guarantee data is safely persisted before acknowledging success.

Avoid write-through when:

  1. Write-heavy workloads dominate: Analytics ingestion, logging systems, or IoT data collection where writes far outnumber reads. The synchronous database write becomes a bottleneck.

  2. Eventual consistency is acceptable: Social media likes, view counts, or recommendation signals where being a few seconds stale doesn’t matter. Use write-behind instead.

  3. Write latency is critical: Real-time gaming, high-frequency trading, or any system with sub-10ms write latency requirements. The database write time makes write-through too slow.

  4. Cache is not on the critical path: If your cache is purely a performance optimization and the system works fine with cache misses, write-through adds unnecessary complexity. Stick with cache-aside and invalidation.

Write-Through Decision Framework

flowchart TB
    Start([Write Operation Decision]) --> Q1{Strong consistency<br/>required?}
    Q1 -->|No| Q2{Write volume<br/>>10k/sec?}
    Q1 -->|Yes| Q3{Read-after-write<br/>common?}
    
    Q2 -->|Yes| WB[Use Write-Behind<br/>+ Eventual Consistency]
    Q2 -->|No| CA[Use Cache-Aside<br/>+ Invalidation]
    
    Q3 -->|Yes| Q4{Write latency<br/>acceptable?}
    Q3 -->|No| Q5{Can tolerate<br/>cache misses?}
    
    Q4 -->|Yes| WT[✓ Use Write-Through<br/>Strong consistency<br/>Cache pre-warmed]
    Q4 -->|No| Hybrid[Use Hybrid:<br/>Critical data: Write-Through<br/>High-volume: Write-Behind]
    
    Q5 -->|Yes| CA
    Q5 -->|No| WT
    
    
    Examples1[Examples:<br/>• Financial transactions<br/>• User authentication<br/>• Permission systems<br/>• Profile updates]
    Examples2[Examples:<br/>• Analytics events<br/>• Activity logs<br/>• View counts<br/>• Social media likes]
    Examples3[Examples:<br/>• Product catalog<br/>• Content pages<br/>• Search results<br/>• Recommendations]
    
    WT -.-> Examples1
    WB -.-> Examples2
    CA -.-> Examples3

Choose write-through when strong consistency and read-after-write patterns dominate, and write volume is manageable. The decision tree helps identify when write-through’s latency cost is justified by consistency requirements versus when write-behind or cache-aside patterns are more appropriate.

Real-World Examples

Twitter’s Timeline Consistency

Twitter uses write-through caching for timeline consistency in their home timeline service. When a user tweets, the system writes to both the cache (Redis) and the database (Manhattan, Twitter’s distributed database) synchronously. This ensures that when the tweet appears in the author’s timeline, it’s also durably stored. The write-through pattern prevents the scenario where a user sees their tweet in their timeline but it’s lost if the cache node fails before the asynchronous database write completes. The trade-off is acceptable because tweet creation is relatively infrequent compared to timeline reads, and users expect their tweets to be immediately visible and permanent.

Stripe’s Payment Processing

Stripe uses write-through caching for payment state transitions. When a payment moves from “pending” to “succeeded,” the state change writes through the cache to the database before returning success to the merchant. This guarantees that the merchant’s webhook receives a payment confirmation only after the state is durably stored. If Stripe used eventual consistency here, a cache node failure could cause a payment to appear successful in cache but revert to pending after the node restarts, creating financial discrepancies. The extra 5-10ms of write latency is negligible compared to the network round-trip time for the API call.

GitHub’s Repository Metadata

GitHub uses write-through for repository metadata like star counts and fork counts. When you star a repository, the write goes through cache to the database synchronously. This ensures the star count you see immediately after starring matches the database state. For a social coding platform, showing incorrect counts after user actions would erode trust. The write volume is manageable—millions of stars per day across all repositories, but distributed across time and repositories, resulting in a few hundred writes per second that write-through handles comfortably.


Interview Essentials

Mid-Level

Explain the write-through flow clearly: application writes to cache, cache writes to database synchronously, both must succeed before returning. Understand the consistency guarantee—cache and database are always in sync. Know the primary trade-off: higher write latency for strong consistency. Be able to compare with cache-aside (read pattern) and explain why write-through is better for read-after-write scenarios. Discuss failure handling: what happens if the database write fails? (Return error, don’t update cache.)

Senior

Design a write-through implementation with proper error handling and rollback. Explain transaction coordination challenges when cache and database are separate systems—you don’t have distributed transactions, so you rely on ordering (database first, then cache). Discuss monitoring: what metrics indicate write-through is causing problems? (P99 write latency, write error rate, cache-database divergence after failures.) Compare write-through with write-behind quantitatively: if database writes take 10ms and cache writes take 2ms, write-through adds 12ms total latency while write-behind adds only 2ms. Justify when each trade-off makes sense based on consistency requirements and write volume.

Staff+

Architect a hybrid approach: write-through for critical data (user profiles, permissions) and write-behind for high-volume data (analytics events). Explain how to handle cache node failures without losing writes—do you fail writes when cache is down, or fall back to database-only mode? Discuss capacity planning: write-through doubles your write load on the database (once from cache, once from application if you’re not careful). Design a migration strategy from cache-aside to write-through without downtime. Explain how write-through interacts with database replication—do you wait for replication to complete, or just the primary write? What’s the consistency model if you have read replicas?

Common Interview Questions

Why not just write to the database and invalidate the cache? (Write-through is simpler and handles read-after-write better—the cache is already populated.)

What happens if the cache write succeeds but the database write fails? (Roll back the cache write and return an error. The entire operation must be atomic.)

How does write-through handle high write volume? (It doesn’t—write-through is not suitable for write-heavy workloads. Use write-behind or queue-based approaches.)

Can you combine write-through with cache-aside? (Yes, use write-through for writes and cache-aside for reads. This is a common pattern.)

How do you monitor write-through effectiveness? (Track write latency P50/P99, cache hit rate after writes, write error rate, and cache-database consistency checks.)

Red Flags to Avoid

Suggesting write-through for write-heavy workloads without acknowledging the latency cost

Not understanding the consistency guarantee—thinking write-through is eventually consistent

Ignoring failure scenarios—what if the database write fails after cache write succeeds?

Not comparing with write-behind or explaining when each pattern is appropriate

Proposing write-through without considering the operational complexity of making cache a critical write path


Key Takeaways

Write-through synchronously writes to both cache and database, guaranteeing strong consistency at the cost of higher write latency (sum of cache + database write times).

Use write-through when consistency matters more than write speed: financial transactions, user authentication, and read-after-write scenarios where stale data causes bugs.

Failure handling is critical: if either cache or database write fails, the entire operation must fail to maintain consistency. Database-first ordering is safer than cache-first.

Write-through is not suitable for write-heavy workloads—the synchronous database write becomes a bottleneck. Use write-behind for high write volume with eventual consistency.

Combine write-through with cache-aside for optimal performance: write-through for writes (consistency), cache-aside for reads (populate cache on miss). This hybrid approach is common in production systems.