# Synchronization in Java: Locks & Monitors Guide

## TL;DR
Synchronization mechanisms prevent race conditions when multiple threads access shared resources. Locks and mutexes ensure only one thread executes critical sections at a time, while semaphores control access to limited resources and read-write locks optimize for read-heavy workloads.
## Core Concept

### What is Synchronization?
Synchronization is the coordination of multiple threads to ensure safe access to shared resources. Without synchronization, threads can interfere with each other, causing race conditions where the outcome depends on unpredictable thread timing.
## Core Synchronization Primitives

### Lock (Mutex)
A lock (also called mutex for “mutual exclusion”) is the simplest synchronization primitive. It has two states: locked and unlocked. Only one thread can hold the lock at a time. When a thread acquires a lock, other threads attempting to acquire it will block (wait) until it’s released.
Why it matters: Locks protect critical sections — code segments that access shared data. Without locks, two threads might read-modify-write the same variable simultaneously, losing updates.
### Semaphore
A semaphore is a counter that controls access to a limited number of resources. Unlike a lock (binary semaphore with count 1), a semaphore can allow N threads to access a resource simultaneously.
Why it matters: Use semaphores when you have a pool of resources (like database connections) and need to limit concurrent access to N instances.
### Reentrant Lock
A reentrant lock (recursive lock) allows the same thread to acquire it multiple times without deadlocking itself. The thread must release it the same number of times it acquired it.
Why it matters: Prevents deadlock when a thread calls a synchronized method that calls another synchronized method.
### Read-Write Lock
A read-write lock allows multiple readers OR one writer. Readers don’t block each other, but writers have exclusive access.
Why it matters: Optimizes performance for read-heavy workloads where data is read frequently but modified rarely.
### The Critical Section Problem
The goal of synchronization is to ensure:
- Mutual Exclusion: Only one thread in the critical section at a time
- Progress: If no thread is in the critical section, one waiting thread should be able to enter
- Bounded Waiting: No thread waits forever
## Visual Guide

### Lock Acquisition Flow

```mermaid
sequenceDiagram
    participant T1 as Thread 1
    participant L as Lock
    participant T2 as Thread 2
    participant R as Shared Resource
    T1->>L: acquire()
    activate L
    L-->>T1: acquired
    T1->>R: read/write
    T2->>L: acquire()
    Note over T2,L: Thread 2 blocks
    T1->>R: read/write
    T1->>L: release()
    deactivate L
    L-->>T2: acquired
    activate L
    T2->>R: read/write
    T2->>L: release()
    deactivate L
```
Thread 2 blocks when trying to acquire a lock held by Thread 1. Once Thread 1 releases the lock, Thread 2 can proceed.
### Semaphore with Count 2

```mermaid
graph TD
    A[Semaphore Count: 2] --> B[Thread 1 acquires]
    B --> C[Count: 1]
    C --> D[Thread 2 acquires]
    D --> E[Count: 0]
    E --> F[Thread 3 tries to acquire]
    F --> G[Thread 3 BLOCKS]
    E --> H[Thread 1 releases]
    H --> I[Count: 1]
    I --> J[Thread 3 unblocks and acquires]
    J --> K[Count: 0]
```
A semaphore with count 2 allows two threads concurrent access. The third thread must wait until one releases.
### Read-Write Lock Behavior

```mermaid
stateDiagram-v2
    [*] --> Unlocked
    Unlocked --> ReadLocked: Reader acquires
    ReadLocked --> ReadLocked: More readers acquire
    ReadLocked --> Unlocked: All readers release
    Unlocked --> WriteLocked: Writer acquires
    WriteLocked --> Unlocked: Writer releases
    ReadLocked --> WriteLocked: Last reader releases,\nwriter waiting
    WriteLocked --> ReadLocked: Writer releases,\nreaders waiting
    note right of ReadLocked
        Multiple readers allowed
    end note
    note right of WriteLocked
        Exclusive access
    end note
```
Read-write locks allow concurrent readers but give writers exclusive access.
## Examples

### Example 1: Race Condition Without Lock

```python
import threading

# Shared counter - UNSAFE
counter = 0

def increment():
    global counter
    for _ in range(100000):
        counter += 1  # NOT atomic: read, add, write

# Create two threads
t1 = threading.Thread(target=increment)
t2 = threading.Thread(target=increment)
t1.start()
t2.start()
t1.join()
t2.join()

print(f"Counter: {counter}")
# Expected: 200000
# Actual: varies (e.g., 156789, 198234) - RACE CONDITION!
# (How often the race manifests depends on the Python version and timing,
#  but the code is unsafe either way.)
```
Why this fails: The operation `counter += 1` involves three steps: read `counter`, add 1, write back. Two threads can interleave these steps, causing lost updates.
### Example 2: Using a Lock (Mutex)

```python
import threading

counter = 0
lock = threading.Lock()  # Create a lock

def increment_safe():
    global counter
    for _ in range(100000):
        with lock:  # Acquire lock (context manager auto-releases)
            counter += 1  # Critical section

t1 = threading.Thread(target=increment_safe)
t2 = threading.Thread(target=increment_safe)
t1.start()
t2.start()
t1.join()
t2.join()

print(f"Counter: {counter}")
# Output: 200000 (always correct)
```
Key points:
- `with lock:` acquires the lock and automatically releases it when the block exits
- Only one thread can execute the critical section at a time
- The `with` statement ensures the lock is released even if an exception occurs
Java equivalent:

```java
private static int counter = 0;
private static final Object lock = new Object();

public static void increment() {
    synchronized (lock) {  // Acquire lock
        counter++;
    }  // Auto-release
}
```
### Example 3: Semaphore for Resource Pool

```python
import threading
import time
from threading import Semaphore

# Only 2 threads can access database connections simultaneously
db_semaphore = Semaphore(2)
active_connections = 0
lock = threading.Lock()  # For updating/printing the count safely

def database_query(thread_id):
    global active_connections
    with db_semaphore:  # Acquire semaphore slot
        with lock:
            active_connections += 1
            print(f"Thread {thread_id}: Connected (active: {active_connections})")
        time.sleep(1)  # Simulate database work
        with lock:
            active_connections -= 1
            print(f"Thread {thread_id}: Disconnected (active: {active_connections})")

# Create 5 threads competing for 2 connection slots
threads = [threading.Thread(target=database_query, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Possible output (order may vary):
# Thread 0: Connected (active: 1)
# Thread 1: Connected (active: 2)
# Thread 0: Disconnected (active: 1)
# Thread 2: Connected (active: 2)   # Thread 2 waited for a slot
# Thread 1: Disconnected (active: 1)
# Thread 3: Connected (active: 2)
# Thread 2: Disconnected (active: 1)
# Thread 4: Connected (active: 2)
# Thread 3: Disconnected (active: 1)
# Thread 4: Disconnected (active: 0)
```
Key points:
- Maximum 2 threads hold the semaphore simultaneously
- Threads 2, 3, 4 block until earlier threads release their slots
- Useful for connection pools, thread pools, rate limiting
### Example 4: Reentrant Lock

```python
import threading

lock = threading.RLock()  # Reentrant lock

def outer_function():
    with lock:
        print("Outer acquired lock")
        inner_function()  # Calls another function that needs the same lock
        print("Outer releasing lock")

def inner_function():
    with lock:  # Same thread acquires lock again - OK with RLock
        print("Inner acquired lock")
        print("Inner releasing lock")

outer_function()
# Output:
# Outer acquired lock
# Inner acquired lock
# Inner releasing lock
# Outer releasing lock
```
With regular Lock (would deadlock):

```python
lock = threading.Lock()  # Non-reentrant

def outer_function():
    with lock:
        print("Outer acquired lock")
        inner_function()  # DEADLOCK! Same thread tries to acquire a lock it already holds
```
C++ equivalent:

```cpp
#include <mutex>

std::recursive_mutex rmutex;  // Reentrant

void inner() {
    std::lock_guard<std::recursive_mutex> lock(rmutex);
    // ...
}

void outer() {
    std::lock_guard<std::recursive_mutex> lock(rmutex);
    inner();  // OK - same thread can reacquire
}
```
### Example 5: Read-Write Lock

```python
import threading
import time

class ReadWriteLock:
    """Simple reader-preference lock: a steady stream of readers can starve writers."""
    def __init__(self):
        self.readers = 0
        self.writer = False
        self.read_ready = threading.Condition(threading.Lock())

    def acquire_read(self):
        self.read_ready.acquire()
        while self.writer:  # Wait if a writer is active
            self.read_ready.wait()
        self.readers += 1
        self.read_ready.release()

    def release_read(self):
        self.read_ready.acquire()
        self.readers -= 1
        if self.readers == 0:
            self.read_ready.notify_all()  # Wake up waiting writers
        self.read_ready.release()

    def acquire_write(self):
        self.read_ready.acquire()
        while self.writer or self.readers > 0:  # Wait until no readers or writers
            self.read_ready.wait()
        self.writer = True
        self.read_ready.release()

    def release_write(self):
        self.read_ready.acquire()
        self.writer = False
        self.read_ready.notify_all()  # Wake up all waiting threads
        self.read_ready.release()

rw_lock = ReadWriteLock()
shared_data = {"value": 0}

def reader(thread_id):
    rw_lock.acquire_read()
    print(f"Reader {thread_id}: Reading value = {shared_data['value']}")
    time.sleep(0.1)  # Simulate read operation
    rw_lock.release_read()

def writer(thread_id):
    rw_lock.acquire_write()
    shared_data['value'] += 1
    print(f"Writer {thread_id}: Wrote value = {shared_data['value']}")
    time.sleep(0.1)  # Simulate write operation
    rw_lock.release_write()

# Create multiple readers and one writer
threads = []
for i in range(3):
    threads.append(threading.Thread(target=reader, args=(i,)))
threads.append(threading.Thread(target=writer, args=(0,)))
for i in range(3, 6):
    threads.append(threading.Thread(target=reader, args=(i,)))

for t in threads:
    t.start()
for t in threads:
    t.join()

# Possible output (order may vary):
# Reader 0: Reading value = 0
# Reader 1: Reading value = 0   # Multiple readers concurrent
# Reader 2: Reading value = 0
# Writer 0: Wrote value = 1     # Writer waits for readers, gets exclusive access
# Reader 3: Reading value = 1
# Reader 4: Reading value = 1
# Reader 5: Reading value = 1
```
Try it yourself: Modify Example 2 to use manual `lock.acquire()` and `lock.release()` instead of the `with` statement. Add a try-finally block to ensure the lock is always released. What happens if you forget the finally block and an exception occurs?
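For reference, a sketch of what the manual version might look like (the name `increment_manual` is illustrative, not from Example 2):

```python
import threading

counter = 0
lock = threading.Lock()

def increment_manual():
    global counter
    for _ in range(100000):
        lock.acquire()
        try:
            counter += 1  # Critical section
        finally:
            lock.release()  # Runs even if the body raises, so the lock is never leaked

t1 = threading.Thread(target=increment_manual)
t2 = threading.Thread(target=increment_manual)
t1.start()
t2.start()
t1.join()
t2.join()
print(f"Counter: {counter}")  # Counter: 200000
```

Without the `finally`, an exception raised between `acquire()` and `release()` leaves the lock held forever, and every other thread blocks indefinitely.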
## Common Mistakes

### 1. Forgetting to Release Locks

```python
lock = threading.Lock()

def bad_function():
    lock.acquire()
    if some_condition:
        return  # BUG: Lock never released!
    lock.release()
```
Fix: Always use context managers (`with lock:`) or try-finally blocks:

```python
def good_function():
    with lock:  # Auto-releases even on early return or exception
        if some_condition:
            return
```
### 2. Deadlock from Lock Ordering

```python
lock_a = threading.Lock()
lock_b = threading.Lock()

def thread1():
    with lock_a:
        time.sleep(0.1)
        with lock_b:  # Waits for lock_b
            pass

def thread2():
    with lock_b:
        time.sleep(0.1)
        with lock_a:  # Waits for lock_a - DEADLOCK!
            pass
```
Fix: Always acquire locks in the same order across all threads:

```python
def thread1():
    with lock_a:  # Both threads acquire lock_a first
        with lock_b:
            pass

def thread2():
    with lock_a:  # Same order
        with lock_b:
            pass
```
### 3. Using Regular Lock Instead of Reentrant Lock

```python
lock = threading.Lock()

def recursive_function(n):
    with lock:
        if n > 0:
            recursive_function(n - 1)  # DEADLOCK on second call!
```
Fix: Use RLock for recursive or nested calls:

```python
lock = threading.RLock()  # Reentrant

def recursive_function(n):
    with lock:
        if n > 0:
            recursive_function(n - 1)  # OK
```
### 4. Holding Locks Too Long

```python
def slow_operation():
    with lock:
        data = shared_resource.read()
        result = expensive_computation(data)  # Holds lock during slow work!
        shared_resource.write(result)
```
Fix: Only hold locks for the minimum critical section:

```python
def fast_operation():
    with lock:
        data = shared_resource.read()
    result = expensive_computation(data)  # Lock released during computation
    with lock:
        shared_resource.write(result)
```
### 5. Race Condition with Check-Then-Act

```python
if len(shared_list) > 0:  # Check outside lock
    with lock:
        item = shared_list.pop()  # Another thread might have emptied the list!
```
Fix: Perform the check and the action atomically:

```python
with lock:
    if len(shared_list) > 0:  # Check and act inside the same critical section
        item = shared_list.pop()
```
### 6. Not Protecting All Shared State

```python
class Counter:
    def __init__(self):
        self.count = 0
        self.lock = threading.Lock()

    def increment(self):
        with self.lock:
            self.count += 1

    def get_and_reset(self):
        value = self.count  # BUG: Read without lock!
        with self.lock:
            self.count = 0
        return value
```
Fix: Protect ALL accesses to shared state:

```python
def get_and_reset(self):
    with self.lock:
        value = self.count
        self.count = 0
    return value
```
## Interview Tips

### What Interviewers Look For

#### 1. Identifying Race Conditions
Interviewers often present code and ask “Is this thread-safe?” Practice identifying:
- Read-modify-write operations on shared variables
- Check-then-act patterns without synchronization
- Unprotected access to shared collections
Example question: “Two threads call this method simultaneously. What could go wrong?”
#### 2. Choosing the Right Synchronization Primitive
Be ready to explain when to use:
- Lock/Mutex: Simple mutual exclusion for critical sections
- Semaphore: Limiting access to N resources (connection pools)
- Reentrant Lock: When same thread needs to reacquire (recursive calls)
- Read-Write Lock: Read-heavy workloads with occasional writes
#### 3. Deadlock Prevention
Know the four conditions for deadlock:
- Mutual exclusion
- Hold and wait
- No preemption
- Circular wait
Be able to explain how to prevent deadlock:
- Lock ordering (always acquire in same order)
- Timeouts on lock acquisition
- Lock-free data structures
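For the timeout option, Python's `Lock.acquire()` accepts an optional `timeout` argument; a minimal sketch:

```python
import threading

lock = threading.Lock()
lock.acquire()  # Simulate another thread currently holding the lock

# Wait at most 100 ms instead of blocking forever (a deadlock risk)
got_it = lock.acquire(timeout=0.1)
if got_it:
    lock.release()
else:
    print("Timed out: back off, retry, or report the failure")
```

A thread that times out can release any locks it already holds and retry, breaking the hold-and-wait condition.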
#### 4. Performance Considerations
Discuss trade-offs:
- Coarse-grained locking: One lock for entire data structure (simple, less concurrent)
- Fine-grained locking: Multiple locks for different parts (complex, more concurrent)
- Lock contention: What happens when many threads compete for same lock
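To make the coarse- vs fine-grained trade-off concrete, here is a sketch of lock striping, one lock per bucket of a map; the class name `StripedCounterMap` and the stripe count are invented for illustration:

```python
import threading

class StripedCounterMap:
    """Fine-grained locking ("lock striping"): one lock per bucket.

    Threads updating keys in different buckets don't contend, unlike a
    single coarse lock over the whole map.
    """
    def __init__(self, num_stripes=8):
        self.num_stripes = num_stripes
        self.locks = [threading.Lock() for _ in range(num_stripes)]
        self.buckets = [{} for _ in range(num_stripes)]

    def _stripe(self, key):
        return hash(key) % self.num_stripes  # Pick the bucket for this key

    def increment(self, key):
        i = self._stripe(key)
        with self.locks[i]:  # Only this bucket's lock is held
            self.buckets[i][key] = self.buckets[i].get(key, 0) + 1

    def get(self, key):
        i = self._stripe(key)
        with self.locks[i]:
            return self.buckets[i].get(key, 0)

m = StripedCounterMap()
m.increment("a")
m.increment("a")
print(m.get("a"))  # 2
```

With one coarse lock every update serializes; with striping, threads touching different buckets proceed in parallel, at the cost of more complex code (and no easy way to lock the whole map at once).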
#### 5. Common Interview Patterns

Producer-Consumer: Use semaphores or condition variables. A common prompt: "Implement a bounded buffer with synchronization."
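A sketch of such a bounded buffer, using the classic two-semaphores-plus-mutex pattern (names here are illustrative):

```python
import threading
from collections import deque

class BoundedBuffer:
    """Bounded buffer: semaphores count free/used slots; a mutex guards the deque."""
    def __init__(self, capacity):
        self.buffer = deque()
        self.mutex = threading.Lock()
        self.empty_slots = threading.Semaphore(capacity)  # Free slots remaining
        self.filled_slots = threading.Semaphore(0)        # Items available

    def put(self, item):
        self.empty_slots.acquire()   # Block if the buffer is full
        with self.mutex:
            self.buffer.append(item)
        self.filled_slots.release()  # Signal that an item is available

    def take(self):
        self.filled_slots.acquire()  # Block if the buffer is empty
        with self.mutex:
            item = self.buffer.popleft()
        self.empty_slots.release()   # Signal that a slot freed up
        return item

buf = BoundedBuffer(capacity=2)
results = []
consumer = threading.Thread(target=lambda: results.extend(buf.take() for _ in range(3)))
consumer.start()
for i in range(3):
    buf.put(i)  # The third put blocks until the consumer frees a slot
consumer.join()
print(results)  # [0, 1, 2]
```

The two semaphores handle the full/empty blocking; the mutex is still needed because semaphores alone don't make the deque operations atomic.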
Singleton with thread safety:

```python
class Singleton:
    _instance = None
    _lock = threading.Lock()

    def __new__(cls):
        if cls._instance is None:          # First check (no lock)
            with cls._lock:                # Double-checked locking
                if cls._instance is None:  # Second check (lock held)
                    cls._instance = super().__new__(cls)
        return cls._instance
```
Dining Philosophers: Classic deadlock scenario
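A deadlock-free sketch of Dining Philosophers using the lock-ordering rule from above: every philosopher picks up the lower-numbered fork first, so no circular wait can form.

```python
import threading

NUM = 5
forks = [threading.Lock() for _ in range(NUM)]  # One fork between each pair
meals = [0] * NUM

def philosopher(i):
    left, right = i, (i + 1) % NUM
    # Global lock order: always acquire the lower-numbered fork first
    first, second = min(left, right), max(left, right)
    for _ in range(10):
        with forks[first]:
            with forks[second]:
                meals[i] += 1  # Eating (both forks held)

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(NUM)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(meals)  # [10, 10, 10, 10, 10]
```

If every philosopher instead grabbed the left fork first, all five could each hold one fork and wait forever on the other, the textbook circular wait.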
### Quick Response Framework
When asked about synchronization:
- Identify shared state: “What data is accessed by multiple threads?”
- Find critical sections: “Where do we read/modify shared data?”
- Choose primitive: “We need a [lock/semaphore/RWLock] because…”
- Prevent deadlock: “To avoid deadlock, we’ll…”
- Consider performance: “This approach trades [X] for [Y]…”
### Red Flags to Avoid
- Don’t say “just add synchronized everywhere” — shows lack of understanding of performance
- Don’t ignore deadlock possibilities — always mention lock ordering
- Don’t forget to discuss releasing locks on exceptions
- Don’t claim something is “thread-safe” without explaining why
### Practice Problems
- Implement a thread-safe counter with increment, decrement, and get methods
- Design a rate limiter that allows N requests per second using semaphores
- Fix a deadlock in given code by reordering lock acquisition
- Implement a cache with read-write locks for concurrent reads
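As a reference point for the first problem, one minimal sketch of a thread-safe counter:

```python
import threading

class ThreadSafeCounter:
    """Counter whose every operation is its own critical section."""
    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:
            self._value += 1

    def decrement(self):
        with self._lock:
            self._value -= 1

    def get(self):
        with self._lock:  # Lock the read too, so all state access is protected
            return self._value

c = ThreadSafeCounter()
workers = [threading.Thread(target=lambda: [c.increment() for _ in range(1000)])
           for _ in range(4)]
for t in workers:
    t.start()
for t in workers:
    t.join()
print(c.get())  # 4000
```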
Be prepared to write code on a whiteboard or in a shared editor, explaining your synchronization choices as you go.
## Key Takeaways
- Locks (mutexes) provide mutual exclusion — only one thread can hold a lock at a time, protecting critical sections from race conditions
- Always use context managers (`with lock:`) or try-finally blocks to ensure locks are released, even when exceptions occur
- Choose the right primitive for your use case: locks for simple mutual exclusion, semaphores for resource pools, reentrant locks for recursive calls, read-write locks for read-heavy workloads
- Prevent deadlocks through consistent lock ordering — always acquire multiple locks in the same order across all threads
- Minimize critical sections — hold locks only as long as necessary to reduce contention and improve concurrency