SQL Locking: Optimistic vs Pessimistic Locking
Key Insights
- Pessimistic locking prevents conflicts by acquiring locks upfront but reduces concurrency and can cause deadlocks, while optimistic locking allows concurrent access but requires retry logic when conflicts occur
- Choose pessimistic locking for high-contention scenarios with frequent writes (inventory systems, financial transactions) and optimistic locking for read-heavy workloads where conflicts are rare (content management, user profiles)
- Most production systems benefit from a hybrid approach: use pessimistic locking for critical financial operations and optimistic locking for user-facing features to balance data consistency with system responsiveness
The Concurrency Problem
When multiple users access the same database records simultaneously, race conditions can corrupt your data. Consider a simple banking scenario: two ATM transactions withdraw from the same account at the exact same moment.
-- Transaction 1 (ATM A)
SELECT balance FROM accounts WHERE id = 123; -- Returns $1000
-- Calculate new balance: $1000 - $100 = $900
-- Transaction 2 (ATM B) - happens simultaneously
SELECT balance FROM accounts WHERE id = 123; -- Also returns $1000
-- Calculate new balance: $1000 - $50 = $950
-- Transaction 1 commits
UPDATE accounts SET balance = 900 WHERE id = 123;
-- Transaction 2 commits
UPDATE accounts SET balance = 950 WHERE id = 123;
The result? The account balance is $950 instead of $850. The first withdrawal was completely lost. This is the classic “lost update” problem, and it’s why locking mechanisms exist.
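The interleaving above is easy to reproduce. Here is a minimal sketch using Python's built-in sqlite3 module, with both "transactions" simulated on one connection so the two reads happen before either write, exactly as in the ATM example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (123, 1000)")

# Both "transactions" read the same starting balance
balance_a = conn.execute("SELECT balance FROM accounts WHERE id = 123").fetchone()[0]
balance_b = conn.execute("SELECT balance FROM accounts WHERE id = 123").fetchone()[0]

# Each writes a new balance computed from its stale read
conn.execute("UPDATE accounts SET balance = ? WHERE id = 123", (balance_a - 100,))
conn.execute("UPDATE accounts SET balance = ? WHERE id = 123", (balance_b - 50,))

final = conn.execute("SELECT balance FROM accounts WHERE id = 123").fetchone()[0]
print(final)  # 950 -- the $100 withdrawal was silently lost
```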
Pessimistic Locking: Lock First, Ask Questions Later
Pessimistic locking takes a defensive approach: acquire locks before reading data, preventing other transactions from modifying it until you’re done. The database enforces this through row-level or table-level locks.
There are two primary lock types:
- Shared locks (S): Multiple transactions can read the same data simultaneously
- Exclusive locks (X): Only one transaction can modify the data; blocks all other access
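These rules form the standard lock compatibility matrix, sketched below (`can_grant` is an illustrative helper, not a database API):

```python
# Lock compatibility matrix: may a requested lock be granted while
# another transaction holds the given lock on the same row?
COMPATIBLE = {
    ("S", "S"): True,   # readers coexist
    ("S", "X"): False,  # a writer waits for readers
    ("X", "S"): False,  # readers wait for a writer
    ("X", "X"): False,  # writers exclude each other
}

def can_grant(held: str, requested: str) -> bool:
    return COMPATIBLE[(held, requested)]

print(can_grant("S", "S"))  # True -- shared locks stack
print(can_grant("S", "X"))  # False -- exclusive access must wait
```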
Here’s how to implement pessimistic locking with SELECT ... FOR UPDATE:
BEGIN TRANSACTION;
-- Acquire exclusive lock on the row
SELECT balance FROM accounts WHERE id = 123 FOR UPDATE;
-- Other transactions trying to read this row FOR UPDATE will block here
-- Safe to modify - no one else can change this data
UPDATE accounts SET balance = balance - 100 WHERE id = 123;
COMMIT;
The FOR UPDATE clause tells the database: “I’m going to modify this data, don’t let anyone else touch it.” Any other transaction attempting to lock the same row will wait until your transaction completes.
Transaction isolation levels also shape locking behavior. The blocking shown below is how lock-based engines behave (e.g., SQL Server at REPEATABLE READ), where reads hold shared locks until commit so writers must wait. MVCC databases such as PostgreSQL provide repeatable reads via snapshots instead, so a plain SELECT there does not block writers.
-- Transaction 1
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT balance FROM accounts WHERE id = 123; -- Returns $1000; holds a shared lock until commit
-- Transaction 2 (blocked when trying to modify)
BEGIN TRANSACTION;
UPDATE accounts SET balance = 500 WHERE id = 123; -- Waits for Transaction 1's shared lock
-- Transaction 1
SELECT balance FROM accounts WHERE id = 123; -- Still returns $1000 (repeatable read)
COMMIT; -- Now Transaction 2 can proceed
The dark side of pessimistic locking? Deadlocks. When two transactions wait for each other’s locks, the database must kill one:
-- Transaction 1
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 123; -- Locks account 123
-- Now tries to lock account 456
UPDATE accounts SET balance = balance + 100 WHERE id = 456; -- Waits...
-- Transaction 2
BEGIN;
UPDATE accounts SET balance = balance - 50 WHERE id = 456; -- Locks account 456
-- Now tries to lock account 123
UPDATE accounts SET balance = balance + 50 WHERE id = 123; -- DEADLOCK!
The database detects this circular wait and aborts one transaction. Your application must handle this with retry logic.
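That retry logic can be sketched as a small Python wrapper. The `DeadlockDetected` class here is a stand-in for whatever exception your driver raises on a deadlock abort (psycopg2, for example, raises `psycopg2.errors.DeadlockDetected`):

```python
import random
import time

class DeadlockDetected(Exception):
    """Stand-in for the driver's deadlock error."""

def run_transaction_with_retry(txn, max_attempts=3):
    # The database rolls back the victim transaction automatically;
    # the application just runs the whole transaction again.
    for attempt in range(max_attempts):
        try:
            return txn()
        except DeadlockDetected:
            if attempt == max_attempts - 1:
                raise
            # Jittered backoff so the competing transactions
            # do not collide again in lockstep
            time.sleep(random.uniform(0, 0.05 * 2 ** attempt))

# Usage sketch: a transaction that is picked as the deadlock victim once
calls = {"n": 0}
def transfer():
    calls["n"] += 1
    if calls["n"] == 1:
        raise DeadlockDetected()
    return "committed"

result = run_transaction_with_retry(transfer)
print(result)  # committed
```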
Optimistic Locking: Trust, Then Verify
Optimistic locking flips the script: assume conflicts are rare, so don’t lock anything upfront. Instead, detect conflicts at commit time and retry if necessary. This is implemented using version numbers or timestamps.
Here’s the version column approach:
-- Add version column to your table
ALTER TABLE accounts ADD COLUMN version INT DEFAULT 0;
-- Transaction 1: Read data with version
SELECT balance, version FROM accounts WHERE id = 123;
-- Returns: balance=$1000, version=5
-- Calculate new balance in application code
-- new_balance = $900
-- Update only if version hasn't changed
UPDATE accounts
SET balance = 900, version = version + 1
WHERE id = 123 AND version = 5;
-- Check affected rows
-- If 0 rows updated, someone else modified the record - conflict detected!
The magic is in the WHERE version = 5 clause. If another transaction modified the record, the version will be 6, and your UPDATE affects zero rows. You detect the conflict and retry.
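The zero-rows check is easy to see end to end. A minimal sqlite3 sketch (any engine that reports affected rows works the same way):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER, version INTEGER)")
conn.execute("INSERT INTO accounts VALUES (123, 1000, 5)")

# A competing transaction commits first, bumping the version to 6
conn.execute(
    "UPDATE accounts SET balance = 950, version = version + 1 "
    "WHERE id = 123 AND version = 5")

# Our update still carries the stale version 5, so it matches no rows
cur = conn.execute(
    "UPDATE accounts SET balance = 900, version = version + 1 "
    "WHERE id = 123 AND version = 5")
print(cur.rowcount)  # 0 -> conflict detected; re-read and retry
```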
Here’s how this looks in application code (Java with JPA):
@Entity
public class Account {
    @Id
    private Long id;

    private BigDecimal balance;

    @Version
    private Long version; // JPA automatically manages this
}
// In your service layer
public void withdraw(Long accountId, BigDecimal amount) throws InterruptedException {
    int maxRetries = 3;
    for (int i = 0; i < maxRetries; i++) {
        try {
            Account account = accountRepository.findById(accountId);
            account.setBalance(account.getBalance().subtract(amount));
            accountRepository.save(account);
            return; // Success!
        } catch (OptimisticLockException e) {
            if (i == maxRetries - 1) throw e;
            // Retry with exponential backoff (Thread.sleep is a checked
            // InterruptedException, hence the throws clause above)
            Thread.sleep((long) Math.pow(2, i) * 100);
        }
    }
}
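The same read-modify-conditionally-write loop, as a minimal Python/sqlite3 sketch (the schema and `withdraw` helper are illustrative, not part of any framework):

```python
import sqlite3
import time

def withdraw(conn, account_id, amount, max_retries=3):
    # Optimistic loop: re-read balance and version on every attempt,
    # then write only if the version is unchanged.
    for attempt in range(max_retries):
        balance, version = conn.execute(
            "SELECT balance, version FROM accounts WHERE id = ?",
            (account_id,)).fetchone()
        cur = conn.execute(
            "UPDATE accounts SET balance = ?, version = ? "
            "WHERE id = ? AND version = ?",
            (balance - amount, version + 1, account_id, version))
        if cur.rowcount == 1:
            return balance - amount  # committed without conflict
        time.sleep(0.1 * 2 ** attempt)  # back off, then re-read
    raise RuntimeError(f"withdraw conflicted {max_retries} times")

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER, version INTEGER)")
conn.execute("INSERT INTO accounts VALUES (123, 1000, 0)")
new_balance = withdraw(conn, 123, 100)
print(new_balance)  # 900
```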
The timestamp approach works similarly:
ALTER TABLE accounts ADD COLUMN updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP;
-- Read with timestamp
SELECT balance, updated_at FROM accounts WHERE id = 123;
-- Returns: balance=$1000, updated_at='2024-01-15 10:30:00'
-- Update only if timestamp matches
UPDATE accounts
SET balance = 900, updated_at = CURRENT_TIMESTAMP
WHERE id = 123 AND updated_at = '2024-01-15 10:30:00';
Timestamps are less reliable than versions because of clock skew and precision issues, but they work for many use cases.
Performance Trade-offs
Pessimistic locking reduces throughput: while transactions hold locks, other transactions wait, and in high-concurrency scenarios that creates queuing delays. An illustrative comparison:
Pessimistic Locking:
- 1000 concurrent requests
- Average lock hold time: 100ms
- Throughput: ~10 requests/second (serialized)
- Latency: 50-5000ms (depending on queue position)
Optimistic Locking:
- 1000 concurrent requests
- Conflict rate: 5%
- Throughput: ~950 successful requests/second
- Latency: 10ms (no waiting), but 50 requests need retry
Pessimistic locking provides predictable latency (you always wait for the lock) but lower throughput. Optimistic locking provides higher throughput but unpredictable latency (retries add variance).
Lock contention is the killer for pessimistic locking. Monitor these metrics:
-- PostgreSQL: Check for lock waits
SELECT relation::regclass, mode, granted
FROM pg_locks
WHERE NOT granted;
-- MySQL 5.7: Check InnoDB lock waits
SELECT * FROM information_schema.innodb_lock_waits;
-- MySQL 8.0+: the view moved to performance_schema
SELECT * FROM performance_schema.data_lock_waits;
Optimistic locking has retry overhead. Each conflict requires re-reading data and re-executing business logic:
import time

def optimistic_retry(func, max_attempts=3):
    for attempt in range(max_attempts):
        try:
            return func()
        except OptimisticLockException:  # raised by your data-access layer
            if attempt == max_attempts - 1:
                raise
            time.sleep(0.1 * (2 ** attempt))  # Exponential backoff
When to Use Each Strategy
Use pessimistic locking when:
- High write contention: Multiple users frequently modify the same records (inventory management, ticket booking)
- Long transactions: Business logic takes time, and you can’t afford to retry
- Critical consistency: Financial transactions where lost updates are unacceptable
- Low concurrency: Few users accessing the system simultaneously
Example - E-commerce inventory:
-- Pessimistic: Prevent overselling
BEGIN;
SELECT stock_quantity FROM products WHERE id = 789 FOR UPDATE;
-- Check if quantity >= order_quantity
UPDATE products SET stock_quantity = stock_quantity - 5 WHERE id = 789;
COMMIT;
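SQLite has no FOR UPDATE, but the same check-then-decrement pattern can be sketched with BEGIN IMMEDIATE, which takes the database's write lock up front:

```python
import sqlite3

# isolation_level=None: autocommit mode, we issue BEGIN/COMMIT ourselves
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, stock_quantity INTEGER)")
conn.execute("INSERT INTO products VALUES (789, 10)")

order_quantity = 5
conn.execute("BEGIN IMMEDIATE")  # write lock now: no other writer can slip in
stock = conn.execute(
    "SELECT stock_quantity FROM products WHERE id = 789").fetchone()[0]
if stock >= order_quantity:
    conn.execute(
        "UPDATE products SET stock_quantity = stock_quantity - ? WHERE id = 789",
        (order_quantity,))
    conn.execute("COMMIT")
else:
    conn.execute("ROLLBACK")  # refuse to oversell

remaining = conn.execute(
    "SELECT stock_quantity FROM products WHERE id = 789").fetchone()[0]
print(remaining)  # 5
```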
Use optimistic locking when:
- Read-heavy workloads: Most operations are reads; writes are rare
- Low conflict probability: Users typically modify different records
- Short transactions: Quick operations where retries are cheap
- High concurrency needs: Maximum throughput is critical
Example - Document editing system:
-- Optimistic: Users rarely edit the same document simultaneously
UPDATE documents
SET content = 'new content', version = version + 1
WHERE id = 456 AND version = 3;
-- If conflict, reload and let user merge changes
Hybrid approach for complex systems:
public class OrderService {

    // Pessimistic for inventory (high contention)
    @Transactional
    public void reserveInventory(Long productId, int quantity) {
        Product product = productRepo.findByIdForUpdate(productId);
        product.decrementStock(quantity);
    }

    // Optimistic for order details (low contention)
    @Transactional
    public void updateShippingAddress(Long orderId, Address address) {
        Order order = orderRepo.findById(orderId); // Uses @Version
        order.setShippingAddress(address);
    }
}
Implementation Best Practices
For pessimistic locking:
Set lock timeouts to prevent indefinite waits:
-- PostgreSQL
SET lock_timeout = '5s';
-- MySQL
SET innodb_lock_wait_timeout = 5;
Always acquire locks in consistent order to prevent deadlocks:
-- Good: Always lock accounts in ascending ID order
SELECT * FROM accounts WHERE id IN (123, 456) ORDER BY id FOR UPDATE;
-- Bad: Random order can cause deadlocks
SELECT * FROM accounts WHERE id IN (456, 123) FOR UPDATE;
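When rows are locked one at a time in application code, the ordering rule is trivial to enforce (a sketch; `lock_order` is an illustrative helper):

```python
def lock_order(account_ids):
    # Lock rows in ascending id order no matter how the business
    # logic names them -- both directions of a transfer then agree.
    return sorted(account_ids)

# Transfer 123 -> 456 and transfer 456 -> 123 both lock 123 first,
# so neither can hold one row while waiting on the other.
print(lock_order([456, 123]))  # [123, 456]
print(lock_order([123, 456]))  # [123, 456]
```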
For optimistic locking:
Implement robust retry logic with exponential backoff:
public <T> T executeWithOptimisticRetry(Supplier<T> operation) {
    int maxAttempts = 5;
    int baseDelay = 100; // milliseconds
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
        try {
            return operation.get();
        } catch (OptimisticLockException e) {
            if (attempt == maxAttempts) {
                throw new MaxRetriesExceededException(
                    "Failed after " + maxAttempts + " attempts", e);
            }
            int delay = baseDelay * (int) Math.pow(2, attempt - 1);
            delay += ThreadLocalRandom.current().nextInt(50); // Add jitter
            try {
                Thread.sleep(delay);
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt(); // Preserve interrupt status
                throw new MaxRetriesExceededException("Interrupted during retry", ie);
            }
        }
    }
    throw new IllegalStateException("Unreachable");
}
Monitor conflict rates and adjust strategy:
-- Track optimistic lock failures
CREATE TABLE lock_metrics (
    timestamp TIMESTAMP,
    entity_type VARCHAR(50),
    conflict_count INT
);
-- Alert when conflict rate exceeds threshold
SELECT entity_type, SUM(conflict_count) as total_conflicts
FROM lock_metrics
WHERE timestamp > NOW() - INTERVAL '1 hour'
GROUP BY entity_type
HAVING SUM(conflict_count) > 100;
The right locking strategy depends on your specific workload. Start with optimistic locking for simplicity, then switch to pessimistic for hotspots identified through monitoring. Most importantly, always test under realistic concurrency to validate your approach.