How to Use Transactions in SQLite
Key Insights
- Wrapping bulk operations in explicit transactions can improve SQLite write performance by 10-100x compared to autocommit mode, as each individual statement outside a transaction forces an expensive disk sync.
- SQLite offers three transaction types (DEFERRED, IMMEDIATE, EXCLUSIVE) with different locking behaviors—use IMMEDIATE for write operations to avoid “database is locked” errors in concurrent scenarios.
- Savepoints enable partial rollbacks within transactions, allowing you to implement sophisticated error handling without losing all work when a single operation fails.
Introduction to SQLite Transactions
Transactions are fundamental to maintaining data integrity in SQLite. A transaction groups multiple database operations into a single atomic unit—either all operations succeed and are committed, or all fail and are rolled back. This ensures your database never ends up in an inconsistent state.
SQLite transactions follow ACID properties: Atomicity (all-or-nothing execution), Consistency (database moves from one valid state to another), Isolation (concurrent transactions don’t interfere), and Durability (committed changes survive crashes). While these sound academic, they solve real problems. Without transactions, a power failure during a multi-step bank transfer could deduct money from one account without crediting the other.
Beyond data integrity, transactions dramatically improve performance for bulk operations. SQLite’s default autocommit mode treats each statement as its own transaction, forcing a disk sync after every write. Wrapping thousands of inserts in a single transaction eliminates this overhead.
Basic Transaction Syntax
SQLite transactions use three fundamental commands: BEGIN, COMMIT, and ROLLBACK. Here’s the basic pattern:
BEGIN;
INSERT INTO users (name, email) VALUES ('Alice', 'alice@example.com');
INSERT INTO accounts (user_id, balance) VALUES (last_insert_rowid(), 1000.00);
COMMIT;
If something goes wrong, use ROLLBACK instead:
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE user_id = 1;
UPDATE accounts SET balance = balance + 100 WHERE user_id = 2;
-- Oops, user 2 doesn't exist
ROLLBACK; -- Neither account is modified
SQLite automatically rolls back transactions when a connection closes without committing. This prevents partial updates from persisting if your application crashes mid-transaction.
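In Python's sqlite3 module, the connection object doubles as a context manager that commits on success and rolls back when an exception escapes the block — a minimal sketch using an in-memory database:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE users (name TEXT, email TEXT)')

# Success: the with-block commits automatically on exit
with conn:
    conn.execute("INSERT INTO users VALUES ('Alice', 'alice@example.com')")

# Failure: an exception inside the block triggers an automatic rollback
try:
    with conn:
        conn.execute("INSERT INTO users VALUES ('Bob', 'bob@example.com')")
        raise RuntimeError("something went wrong")
except RuntimeError:
    pass

names = [row[0] for row in conn.execute('SELECT name FROM users')]
print(names)  # Only Alice survives; Bob's insert was rolled back
conn.close()
```

Note that `with conn:` manages the transaction, not the connection — you still need to call `close()` yourself.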
Transaction Types and Behavior
SQLite offers three transaction types that control when locks are acquired:
DEFERRED (default): Acquires no locks initially. Upgrades to a read lock on first SELECT, then to a write lock on first INSERT/UPDATE/DELETE. This lazy locking can cause “database is locked” errors if another connection grabs a write lock first.
IMMEDIATE: Acquires a write lock immediately, preventing other connections from writing (but allowing reads). Use this for transactions that will modify data.
EXCLUSIVE: Acquires an exclusive lock, blocking all other connections. Rarely needed except for schema changes or when you need complete isolation.
Here’s the practical difference:
import sqlite3
import threading
import time

# One-time setup so both threads have a table to write to
setup = sqlite3.connect('test.db')
setup.execute('CREATE TABLE IF NOT EXISTS logs (id INTEGER PRIMARY KEY, message TEXT)')
setup.close()

def deferred_transaction():
    # isolation_level=None puts the connection in autocommit mode,
    # so the explicit BEGIN below controls the transaction
    conn = sqlite3.connect('test.db', isolation_level=None)
    conn.execute('BEGIN DEFERRED')
    time.sleep(1)  # Simulate work
    try:
        conn.execute('INSERT INTO logs (message) VALUES (?)', ('deferred',))
        conn.commit()
        print("Deferred: Success")
    except sqlite3.OperationalError as e:
        print(f"Deferred: {e}")
        conn.rollback()
    conn.close()

def immediate_transaction():
    conn = sqlite3.connect('test.db', isolation_level=None)
    conn.execute('BEGIN IMMEDIATE')
    time.sleep(1)  # Simulate work
    try:
        conn.execute('INSERT INTO logs (message) VALUES (?)', ('immediate',))
        conn.commit()
        print("Immediate: Success")
    except sqlite3.OperationalError as e:
        print(f"Immediate: {e}")
        conn.rollback()
    conn.close()

# Run two DEFERRED transactions concurrently - one may fail with
# "database is locked" if the other holds the write lock past the busy timeout
t1 = threading.Thread(target=deferred_transaction)
t2 = threading.Thread(target=deferred_transaction)
t1.start()
t2.start()
t1.join()
t2.join()

# Run two IMMEDIATE transactions - the second waits or times out cleanly
t3 = threading.Thread(target=immediate_transaction)
t4 = threading.Thread(target=immediate_transaction)
t3.start()
t4.start()
t3.join()
t4.join()
With DEFERRED, both threads start their transactions, but neither discovers the conflict until it tries to write — and a write that cannot acquire the lock in time fails with "database is locked". With IMMEDIATE, the second thread learns at BEGIN that it must wait, leading to more predictable behavior.
Handling Transaction Errors
Proper error handling ensures transactions are always resolved—either committed or rolled back. Never leave transactions hanging:
import sqlite3

def transfer_money(from_account, to_account, amount):
    # isolation_level=None: autocommit mode, so our explicit BEGIN takes effect
    conn = sqlite3.connect('bank.db', isolation_level=None)
    conn.execute('BEGIN IMMEDIATE')
    try:
        # Withdraw from source
        cursor = conn.execute(
            'UPDATE accounts SET balance = balance - ? WHERE id = ? AND balance >= ?',
            (amount, from_account, amount)
        )
        if cursor.rowcount == 0:
            raise ValueError("Insufficient funds or account not found")
        # Deposit to destination
        cursor = conn.execute(
            'UPDATE accounts SET balance = balance + ? WHERE id = ?',
            (amount, to_account)
        )
        if cursor.rowcount == 0:
            raise ValueError("Destination account not found")
        conn.commit()
        return True
    except (sqlite3.Error, ValueError) as e:
        conn.rollback()
        print(f"Transaction failed: {e}")
        return False
    finally:
        conn.close()

# Usage
if transfer_money(1, 2, 50.00):
    print("Transfer successful")
else:
    print("Transfer failed")
The finally block ensures the connection closes regardless of success or failure. Always wrap transaction code in try/except blocks and explicitly roll back on errors.
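If you find yourself repeating this begin/commit/rollback boilerplate, it can be factored into a reusable helper. The `transaction` context manager below is an illustrative sketch, not part of the sqlite3 API:

```python
import contextlib
import sqlite3

@contextlib.contextmanager
def transaction(conn):
    """Run a block in an IMMEDIATE transaction: commit on success, roll back on error."""
    conn.execute('BEGIN IMMEDIATE')
    try:
        yield conn
        conn.execute('COMMIT')
    except Exception:
        conn.execute('ROLLBACK')
        raise

# Autocommit mode (isolation_level=None) so the helper owns transaction boundaries
conn = sqlite3.connect(':memory:', isolation_level=None)
conn.execute('CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)')
conn.execute('INSERT INTO accounts VALUES (1, 100.0), (2, 0.0)')

with transaction(conn):
    conn.execute('UPDATE accounts SET balance = balance - 50 WHERE id = 1')
    conn.execute('UPDATE accounts SET balance = balance + 50 WHERE id = 2')

balances = conn.execute('SELECT balance FROM accounts ORDER BY id').fetchall()
print(balances)  # [(50.0,), (50.0,)]
conn.close()
```

Re-raising the exception after ROLLBACK lets callers decide how to handle the failure, while still guaranteeing the transaction never dangles.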
Transactions for Bulk Operations
The performance difference between autocommit and explicit transactions is dramatic:
import sqlite3
import time

# isolation_level=None gives true autocommit, so the first loop really
# pays one implicit transaction (and disk sync) per INSERT
conn = sqlite3.connect('benchmark.db', isolation_level=None)
conn.execute('CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, value TEXT)')
conn.execute('DELETE FROM items')

# WITHOUT transaction (autocommit mode)
start = time.time()
for i in range(1000):
    conn.execute('INSERT INTO items (value) VALUES (?)', (f'item_{i}',))
without_transaction = time.time() - start

# WITH transaction
conn.execute('DELETE FROM items')
start = time.time()
conn.execute('BEGIN')
for i in range(1000):
    conn.execute('INSERT INTO items (value) VALUES (?)', (f'item_{i}',))
conn.execute('COMMIT')
with_transaction = time.time() - start

print(f"Without transaction: {without_transaction:.2f}s")
print(f"With transaction: {with_transaction:.2f}s")
print(f"Speedup: {without_transaction/with_transaction:.1f}x")
conn.close()
On typical hardware, you’ll see results like:
Without transaction: 3.45s
With transaction: 0.08s
Speedup: 43.1x
The speedup comes from batching disk syncs. Without a transaction, SQLite must ensure each insert is durably written to disk before proceeding. With a transaction, it can buffer changes in memory and sync once at commit.
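You can combine batching with executemany, which pushes the loop into C and pairs naturally with a single transaction — a small sketch using an in-memory database:

```python
import sqlite3

conn = sqlite3.connect(':memory:', isolation_level=None)  # autocommit; we manage BEGIN/COMMIT
conn.execute('CREATE TABLE items (id INTEGER PRIMARY KEY, value TEXT)')

rows = [(f'item_{i}',) for i in range(1000)]
conn.execute('BEGIN')
conn.executemany('INSERT INTO items (value) VALUES (?)', rows)
conn.execute('COMMIT')

count = conn.execute('SELECT COUNT(*) FROM items').fetchone()[0]
print(count)  # 1000
conn.close()
```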
Savepoints and Nested Transactions
Savepoints let you create checkpoints within a transaction, enabling partial rollbacks:
import sqlite3

conn = sqlite3.connect('orders.db', isolation_level=None)  # we manage transactions explicitly
conn.execute('BEGIN')
try:
    # Create order
    conn.execute('INSERT INTO orders (customer_id, total) VALUES (?, ?)', (123, 0))
    order_id = conn.execute('SELECT last_insert_rowid()').fetchone()[0]
    total = 0
    items = [
        ('Widget', 10.00, 5),
        ('Gadget', 25.00, 2),
        ('Invalid-Item', -5.00, 1),  # This will fail validation
    ]
    for i, (name, price, quantity) in enumerate(items):
        # Create savepoint before each item (index-based names avoid
        # characters like '-' that are invalid in SQL identifiers)
        conn.execute(f'SAVEPOINT item_{i}')
        try:
            if price < 0:
                raise ValueError(f"Invalid price for {name}")
            conn.execute(
                'INSERT INTO order_items (order_id, item_name, price, quantity) VALUES (?, ?, ?, ?)',
                (order_id, name, price, quantity)
            )
            total += price * quantity
            # Release savepoint on success
            conn.execute(f'RELEASE item_{i}')
        except ValueError as e:
            # Roll back just this item
            print(f"Skipping item: {e}")
            conn.execute(f'ROLLBACK TO item_{i}')
            conn.execute(f'RELEASE item_{i}')
    # Update order total
    conn.execute('UPDATE orders SET total = ? WHERE id = ?', (total, order_id))
    conn.commit()
    print(f"Order created with total: ${total}")
except sqlite3.Error as e:
    conn.rollback()
    print(f"Order failed: {e}")
conn.close()
This pattern is invaluable for processing batches where some items may fail validation but you want to save the valid ones.
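The savepoint bookkeeping can also be wrapped in a small helper so each item gets its own "nested transaction" scope. The `savepoint` context manager here is an illustrative sketch, not a sqlite3 feature:

```python
import contextlib
import itertools
import sqlite3

_sp_names = itertools.count()  # generates unique, SQL-safe savepoint names

@contextlib.contextmanager
def savepoint(conn):
    """Nested-transaction scope: release on success, roll back to the savepoint on error."""
    name = f'sp_{next(_sp_names)}'
    conn.execute(f'SAVEPOINT {name}')
    try:
        yield
        conn.execute(f'RELEASE {name}')
    except Exception:
        conn.execute(f'ROLLBACK TO {name}')
        conn.execute(f'RELEASE {name}')
        raise

conn = sqlite3.connect(':memory:', isolation_level=None)
conn.execute('CREATE TABLE nums (n INTEGER)')

conn.execute('BEGIN')
for n in [1, -2, 3]:
    try:
        with savepoint(conn):
            if n < 0:
                raise ValueError('negative values not allowed')
            conn.execute('INSERT INTO nums VALUES (?)', (n,))
    except ValueError:
        pass  # this item rolled back; the outer transaction continues
conn.execute('COMMIT')

kept = [r[0] for r in conn.execute('SELECT n FROM nums')]
print(kept)  # [1, 3]
conn.close()
```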
Best Practices and Common Pitfalls
Keep transactions short. Long-running transactions hold locks and can block other connections. This is an anti-pattern:
# BAD: Transaction held open during slow operations
conn.execute('BEGIN')
for file in large_file_list:
    data = process_large_file(file)  # Takes seconds
    conn.execute('INSERT INTO files (name, data) VALUES (?, ?)', (file, data))
conn.commit()
Better approach:
# GOOD: Prepare data first, then transact
processed_data = []
for file in large_file_list:
    data = process_large_file(file)
    processed_data.append((file, data))

# Quick transaction for database writes only
conn.execute('BEGIN')
conn.executemany('INSERT INTO files (name, data) VALUES (?, ?)', processed_data)
conn.commit()
Set appropriate timeouts. The default timeout (5 seconds in Python’s sqlite3) may be too short for busy databases:
conn = sqlite3.connect('app.db', timeout=30.0)
Use WAL mode for concurrency. Write-Ahead Logging mode allows readers to access the database while a writer has a transaction open:
conn = sqlite3.connect('app.db')
conn.execute('PRAGMA journal_mode=WAL')
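The pragma returns the resulting journal mode, so you can verify it actually took effect. WAL requires a file-backed database (it won't activate on `:memory:`), so this quick check uses a temporary file:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'app.db')
conn = sqlite3.connect(path)
mode = conn.execute('PRAGMA journal_mode=WAL').fetchone()[0]
print(mode)  # 'wal' on filesystems that support it
conn.close()
```

WAL mode is persistent: once set, the database stays in WAL mode for every subsequent connection until it is changed back.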
Always handle transactions explicitly in production code. Don’t rely on autocommit for anything beyond simple scripts. Explicit BEGIN/COMMIT makes your intentions clear and gives you control over transaction boundaries.
Transactions are SQLite’s most powerful feature for ensuring data integrity and performance. Master these patterns, and you’ll write more reliable and efficient database code.