Why are read-write locks faster than mutexes in Java?
Time : 2025-12-26 15:14:07
Edit : Jtti

In Java high-concurrency programming, locks are a fundamental tool for coordinating multi-threaded access and ensuring data consistency. However, in real-world projects, especially in scenarios where read operations far outnumber write operations, such as caching systems or configuration center clients, simply adding a large lock to all data access can immediately become a performance bottleneck. In such cases, understanding the difference between mutexes and read-write locks, and choosing the appropriate tool based on the scenario, becomes crucial for improving concurrency capabilities. This isn't a simple either/or choice, but rather finding a precise balance between security and performance.

Let's start with the most basic mutex. In Java, the `synchronized` keyword and the `ReentrantLock` class are typical examples of mutex implementations. Their core logic is simple: at any given time, only one thread is allowed to hold the lock and access the protected code block or resource. This is like a locked room with only one key; whether you want to observe (read) or rearrange (write), you must wait in line, get the key to enter, and then pass the key to the next thread after exiting.
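For comparison, the same idea expressed with `synchronized` looks like this (a minimal sketch; the class and field names are illustrative):

```java
public class DataWithSynchronized {

    private String data = "Initial Data";

    // Readers and writers both contend for the same monitor lock,
    // so all access is serialized just as with an explicit mutex.
    public synchronized String readData() {
        return data;
    }

    public synchronized void writeData(String newData) {
        this.data = newData;
    }
}
```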

Below is an example of using `ReentrantLock` to protect a simple data object:

```java
import java.util.concurrent.locks.ReentrantLock;

public class DataWithMutex {

    private String data = "Initial Data";
    private final ReentrantLock lock = new ReentrantLock();

    public String readData() {
        lock.lock(); // Read operations also require acquiring the mutex
        try {
            // Simulate read latency
            Thread.sleep(10);
            return data;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return "";
        } finally {
            lock.unlock();
        }
    }

    public void writeData(String newData) {
        lock.lock(); // Write operations acquire the mutex
        try {
            // Simulate write time
            Thread.sleep(50);
            this.data = newData;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } finally {
            lock.unlock();
        }
    }
}
```

This model is completely safe: data can never be modified by another thread while it is being read, so "dirty reads" are impossible. Its cost, however, is obvious: inefficiency. Assume `readData` takes an average of 10 milliseconds and `writeData` takes 50 milliseconds. Under a pure mutex, even if 100 threads only want to read the data simultaneously, they must execute serially one after another, for a total time exceeding 1 second. During this time the CPU sits largely idle, wasting resources. This is the biggest problem with mutexes: they do not distinguish between operation types, forcibly serializing read operations that could run in parallel.

To solve this dilemma, read-write locks were developed. A read-write lock (`ReadWriteLock`) is based on an intuitive idea: since multiple threads can read shared data simultaneously without conflict (the data is not being modified), why not allow them to do so concurrently? Read-write locks divide lock access into two categories: read locks (shared locks) and write locks (exclusive locks). Their core rules can be summarized in three points: first, multiple threads can hold the read lock simultaneously for concurrent reading; second, the write lock is exclusive, meaning only one thread can hold it at a time, and no read locks can be held at that moment; third, implementations often give waiting writers priority over newly arriving readers to prevent "write starvation" (a steady stream of read threads keeping write threads from ever acquiring the lock). Java provides the `ReentrantReadWriteLock` implementation in the `java.util.concurrent.locks` package.

Let's refactor the example above using read-write locks:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class DataWithReadWriteLock {

    private String data = "Initial Data";
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
    private final ReentrantReadWriteLock.ReadLock readLock = rwLock.readLock();
    private final ReentrantReadWriteLock.WriteLock writeLock = rwLock.writeLock();

    public String readData() {
        readLock.lock(); // Acquire the read lock; multiple threads can enter simultaneously
        try {
            // Simulate read latency
            Thread.sleep(10);
            return data;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return "";
        } finally {
            readLock.unlock();
        }
    }

    public void writeData(String newData) {
        writeLock.lock(); // Acquire the write lock, which is exclusive
        try {
            // Simulate write time
            Thread.sleep(50);
            this.data = newData;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } finally {
            writeLock.unlock();
        }
    }
}
```

The performance improvement from this change can be huge. In the 100-read-thread scenario, the threads can now acquire the read lock and execute almost simultaneously, so the total execution time may be only slightly longer than a single thread's read time (a little over 10 milliseconds). The throughput of the system thereby improves by orders of magnitude. The beauty of read-write locks is that, by distinguishing operation types, they remove the restriction on concurrent reads and apply strict mutual exclusion only when a write operation is involved.
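A rough way to see this difference is a micro-benchmark of concurrent reads under each kind of lock (a minimal sketch; the thread count, sleep times, and the `timeReads` helper are all illustrative, and real measurements should use a proper harness such as JMH):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadThroughputDemo {

    // Run `threads` concurrent "reads", each holding the given lock for ~10 ms
    static long timeReads(Lock lock, int threads) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        CountDownLatch done = new CountDownLatch(threads);
        long start = System.nanoTime();
        for (int i = 0; i < threads; i++) {
            pool.execute(() -> {
                lock.lock();
                try {
                    Thread.sleep(10); // simulated read latency
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    lock.unlock();
                    done.countDown();
                }
            });
        }
        done.await();
        pool.shutdown();
        return (System.nanoTime() - start) / 1_000_000; // elapsed ms
    }

    public static void main(String[] args) throws InterruptedException {
        int threads = 20;
        long mutexMs = timeReads(new ReentrantLock(), threads);
        long rwMs = timeReads(new ReentrantReadWriteLock().readLock(), threads);
        // Expect the mutex to take roughly threads * 10 ms, the read lock far less
        System.out.println("mutex: " + mutexMs + " ms, read lock: " + rwMs + " ms");
    }
}
```

Because `ReentrantReadWriteLock.ReadLock` implements the same `Lock` interface as `ReentrantLock`, the same helper can time both.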

Of course, read-write locks are not a silver bullet; they also have their own overhead and limitations. First, the implementation of read-write locks is more complex than that of mutexes. Maintaining a count of read threads and coordinating read-write state transitions both incur costs. Therefore, in scenarios with low contention or fast read/write operations, the performance gains from using read-write locks may not outweigh the additional overhead, and they may even be slower than mutexes. Second, you need to ensure that your scenario is truly "read-heavy and write-light," and that read operations themselves are relatively time-consuming (e.g., involving I/O, network, or complex calculations). If write operations are very frequent, the lock will spend most of its time in a write-lock or waiting-for-write-lock state, degenerating into something similar to a mutex.

In practical application, an effective evaluation method is to conduct performance stress testing. You can try two different lock implementations and observe the system's throughput (QPS) and average response time under simulated real load. If the data clearly shows that read-write locks have a significant advantage, then adopt them. Furthermore, note that the constructor of `ReentrantReadWriteLock` supports creating a fair lock. In fair mode, threads acquire locks in the order they request, preventing starvation but reducing throughput. In unfair mode (the default), "jumping the queue" is allowed, resulting in higher throughput but potentially causing some threads to wait for too long. This again requires a choice based on business characteristics.
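Switching between the two modes is just a constructor argument:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class FairnessDemo {
    public static void main(String[] args) {
        ReentrantReadWriteLock unfair = new ReentrantReadWriteLock();   // default: unfair
        ReentrantReadWriteLock fair = new ReentrantReadWriteLock(true); // fair (FIFO) ordering

        System.out.println(unfair.isFair()); // false
        System.out.println(fair.isFair());   // true
    }
}
```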

Furthermore, in modern Java development under extremely high concurrency, some more advanced concurrency tools are playing an important role. For example, `StampedLock`, introduced in Java 8, is a more powerful lock that provides an optimistic read mode. An optimistic read assumes that writes rarely occur during the read: it first obtains a "stamp", reads the data, and then validates the stamp to check whether a write occurred in the meantime. If validation fails, it falls back to a pessimistic read lock. This gives better performance than `ReentrantReadWriteLock` in scenarios with very many reads and very few writes, but its API is more complex and easier to misuse (for example, `StampedLock` is not reentrant).
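The optimistic-read pattern looks roughly like this (a sketch adapted from the common point-coordinates idiom; the class and method names are illustrative):

```java
import java.util.concurrent.locks.StampedLock;

public class StampedPoint {
    private final StampedLock sl = new StampedLock();
    private double x, y;

    public void move(double dx, double dy) {
        long stamp = sl.writeLock(); // exclusive write
        try {
            x += dx;
            y += dy;
        } finally {
            sl.unlockWrite(stamp);
        }
    }

    public double distanceFromOrigin() {
        long stamp = sl.tryOptimisticRead(); // no blocking, just a stamp
        double curX = x, curY = y;           // read without holding any lock
        if (!sl.validate(stamp)) {           // a write slipped in: fall back
            stamp = sl.readLock();           // pessimistic read lock
            try {
                curX = x;
                curY = y;
            } finally {
                sl.unlockRead(stamp);
            }
        }
        return Math.sqrt(curX * curX + curY * curY);
    }
}
```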
