System Design: Event Sourcing Pattern
Key Insights
- Event sourcing stores every state change as an immutable event, giving you a complete audit trail and the ability to reconstruct state at any point in time
- The pattern works best for domains with complex business logic, audit requirements, or temporal query needs—but adds significant complexity for simple CRUD applications
- Projections decouple your read and write models, enabling optimized query patterns at the cost of eventual consistency
Introduction: What is Event Sourcing?
Traditional applications store current state. When a user updates their profile, you overwrite the old values with new ones. When an order ships, you flip a status flag. The previous state disappears forever.
Event sourcing inverts this model. Instead of storing what things are, you store what happened. Every state change becomes an immutable event appended to a log. Current state becomes a derived value—computed by replaying events from the beginning.
This isn’t just an academic exercise. Event sourcing gives you a complete audit trail by design, enables temporal queries (“what did this order look like last Tuesday?”), and naturally supports complex domain logic where understanding how you got somewhere matters as much as where you are.
The trade-off? Significant complexity. Event sourcing isn’t a pattern you adopt lightly.
Core Concepts and Terminology
Before diving into implementation, let’s establish vocabulary.
Events represent facts that happened in your system. They’re immutable and past-tense: OrderCreated, ItemAdded, PaymentReceived. Once written, they never change.
Event Store is your append-only database of events. Unlike traditional databases, you never update or delete records—only append new ones.
Aggregates are your domain objects that events apply to. An Order aggregate might have events like OrderCreated, ItemAdded, and OrderShipped. Each aggregate has a unique ID, and events are grouped by that ID.
Projections (or read models) are derived views built by processing event streams. They transform your event log into queryable structures optimized for specific use cases.
Eventual Consistency is the reality you accept. Since projections are built asynchronously from events, reads may lag slightly behind writes.
Here’s what events look like in practice:
interface DomainEvent {
eventId: string;
aggregateId: string;
aggregateType: string;
eventType: string;
timestamp: Date;
version: number;
payload: Record<string, unknown>;
metadata: {
correlationId: string;
causationId: string;
userId?: string;
};
}
// Concrete event types
interface OrderCreated extends DomainEvent {
eventType: 'OrderCreated';
payload: {
customerId: string;
currency: string;
};
}
interface ItemAdded extends DomainEvent {
eventType: 'ItemAdded';
payload: {
productId: string;
quantity: number;
unitPrice: number;
};
}
interface OrderShipped extends DomainEvent {
eventType: 'OrderShipped';
payload: {
carrier: string;
trackingNumber: string;
shippedAt: Date;
};
}
Notice the metadata fields. correlationId tracks related operations across services. causationId links events to the command or event that caused them. These become invaluable for debugging distributed systems.
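The distinction matters in practice: every event in one workflow shares a correlationId, while each event's causationId points at the specific command that produced it. A minimal sketch (the Command shape and buildMetadata helper are illustrative, not from any library):

```typescript
// Illustrative sketch: deriving event metadata from the triggering command.
// `Command` and `buildMetadata` are assumptions for this example.
interface Command {
  commandId: string;
  correlationId: string;
  userId?: string;
}

interface EventMetadata {
  correlationId: string;
  causationId: string;
  userId?: string;
}

function buildMetadata(trigger: Command): EventMetadata {
  return {
    // The same correlationId flows through every event in the workflow.
    correlationId: trigger.correlationId,
    // causationId identifies the specific command that produced this event.
    causationId: trigger.commandId,
    userId: trigger.userId,
  };
}
```

Following the causationId chain backwards reconstructs exactly why an event exists; filtering by correlationId shows everything a single user action touched.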
Building an Event Store
Your event store needs to guarantee three things: events are immutable, ordering is preserved within an aggregate, and concurrent writes don’t corrupt data.
interface EventStore {
append(
aggregateId: string,
events: DomainEvent[],
expectedVersion: number
): Promise<void>;
getEvents(
aggregateId: string,
fromVersion?: number
): Promise<DomainEvent[]>;
getAllEvents(
fromPosition?: number
): AsyncIterable<DomainEvent>;
}
The expectedVersion parameter enables optimistic concurrency. If another process appended events since you loaded the aggregate, the append fails. This prevents lost updates without heavy locking.
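On conflict, the typical caller-side response is to reload the aggregate and retry the command. A sketch of that retry loop (ConcurrencyError is redeclared here so the snippet stands alone; the attempt callback is expected to re-read the aggregate each time so expectedVersion is fresh):

```typescript
// Sketch of the caller-side retry loop that pairs with optimistic concurrency.
class ConcurrencyError extends Error {}

async function withRetry<T>(
  attempt: () => Promise<T>,
  maxAttempts = 3
): Promise<T> {
  for (let i = 1; ; i++) {
    try {
      // Each attempt should reload the aggregate so the version check is fresh.
      return await attempt();
    } catch (err) {
      // Only version conflicts are retryable; rethrow anything else.
      if (!(err instanceof ConcurrencyError) || i >= maxAttempts) throw err;
    }
  }
}
```

Bounding the attempts matters: a command that keeps losing the race usually indicates a hot aggregate that needs redesigning, not more retries.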
Here’s a PostgreSQL implementation:
import { Pool } from 'pg';
class ConcurrencyError extends Error {}
class PostgresEventStore implements EventStore {
constructor(private pool: Pool) {}
async append(
aggregateId: string,
events: DomainEvent[],
expectedVersion: number
): Promise<void> {
const client = await this.pool.connect();
try {
await client.query('BEGIN');
// Lock the latest event row (if any) and read its version.
// Note: PostgreSQL rejects FOR UPDATE combined with aggregates
// like MAX(), so order by version and take the newest row instead.
const result = await client.query(
`SELECT version
FROM events
WHERE aggregate_id = $1
ORDER BY version DESC
LIMIT 1
FOR UPDATE`,
[aggregateId]
);
// A brand-new aggregate has no row to lock; the unique
// (aggregate_id, version) constraint catches concurrent first writers.
const currentVersion = result.rows[0]?.version ?? 0;
if (currentVersion !== expectedVersion) {
throw new ConcurrencyError(
`Expected version ${expectedVersion}, found ${currentVersion}`
);
}
// Append events
for (let i = 0; i < events.length; i++) {
const event = events[i];
const version = expectedVersion + i + 1;
await client.query(
`INSERT INTO events
(event_id, aggregate_id, aggregate_type, event_type,
version, payload, metadata, timestamp)
VALUES ($1, $2, $3, $4, $5, $6, $7, $8)`,
[
event.eventId,
aggregateId,
event.aggregateType,
event.eventType,
version,
JSON.stringify(event.payload),
JSON.stringify(event.metadata),
event.timestamp
]
);
}
await client.query('COMMIT');
} catch (error) {
await client.query('ROLLBACK');
throw error;
} finally {
client.release();
}
}
async getEvents(
aggregateId: string,
fromVersion: number = 0
): Promise<DomainEvent[]> {
const result = await this.pool.query(
`SELECT * FROM events
WHERE aggregate_id = $1 AND version > $2
ORDER BY version ASC`,
[aggregateId, fromVersion]
);
return result.rows.map(this.rowToEvent);
}
private rowToEvent(row: any): DomainEvent {
return {
eventId: row.event_id,
aggregateId: row.aggregate_id,
aggregateType: row.aggregate_type,
eventType: row.event_type,
timestamp: row.timestamp,
version: row.version,
payload: row.payload,
metadata: row.metadata
};
}
// getAllEvents (streaming by global position) is omitted for brevity
}
For the table schema, ensure you have a unique constraint on (aggregate_id, version) and an index on aggregate_id. Consider partitioning by aggregate type or time range as your event count grows.
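A schema along those lines might look like this (column names inferred from the INSERT above; the types are assumptions):

```sql
-- One possible events table matching the queries above.
CREATE TABLE events (
  event_id       UUID PRIMARY KEY,
  aggregate_id   UUID NOT NULL,
  aggregate_type TEXT NOT NULL,
  event_type     TEXT NOT NULL,
  version        INTEGER NOT NULL,
  payload        JSONB NOT NULL,
  metadata       JSONB NOT NULL,
  timestamp      TIMESTAMPTZ NOT NULL,
  -- Rejects any concurrent writer that slips past the version check.
  UNIQUE (aggregate_id, version)
);

-- Speeds up per-aggregate replay.
CREATE INDEX idx_events_aggregate ON events (aggregate_id);
```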
Reconstructing State from Events
Aggregates rebuild their state by replaying events through handler methods. Each event type has a corresponding apply method that mutates internal state.
type OrderStatus = 'pending' | 'shipped';
interface OrderItem {
productId: string;
quantity: number;
unitPrice: number;
}
class OrderAggregate {
private id: string;
private customerId: string;
private items: Map<string, OrderItem> = new Map();
private status: OrderStatus = 'pending';
private version: number = 0;
private changes: DomainEvent[] = [];
static rehydrate(events: DomainEvent[]): OrderAggregate {
const order = new OrderAggregate();
for (const event of events) {
order.applyEvent(event, false);
}
return order;
}
addItem(productId: string, quantity: number, unitPrice: number): void {
if (this.status !== 'pending') {
throw new Error('Cannot modify shipped order');
}
const event: ItemAdded = {
eventId: generateId(),
aggregateId: this.id,
aggregateType: 'Order',
eventType: 'ItemAdded',
timestamp: new Date(),
version: this.version + 1,
payload: { productId, quantity, unitPrice },
metadata: { correlationId: getCorrelationId(), causationId: getCommandId() }
};
this.applyEvent(event, true);
}
// Public so repositories can replay historical events during rehydration
applyEvent(event: DomainEvent, isNew: boolean): void {
switch (event.eventType) {
case 'OrderCreated':
this.applyOrderCreated(event as OrderCreated);
break;
case 'ItemAdded':
this.applyItemAdded(event as ItemAdded);
break;
case 'OrderShipped':
this.applyOrderShipped(event as OrderShipped);
break;
}
this.version = event.version;
if (isNew) {
this.changes.push(event);
}
}
private applyOrderCreated(event: OrderCreated): void {
this.id = event.aggregateId;
this.customerId = event.payload.customerId;
}
private applyItemAdded(event: ItemAdded): void {
const { productId, quantity, unitPrice } = event.payload;
const existing = this.items.get(productId);
if (existing) {
existing.quantity += quantity;
} else {
this.items.set(productId, { productId, quantity, unitPrice });
}
}
private applyOrderShipped(event: OrderShipped): void {
this.status = 'shipped';
}
getUncommittedChanges(): DomainEvent[] {
return [...this.changes];
}
getVersion(): number {
return this.version;
}
}
The isNew flag distinguishes between replaying historical events and applying new commands. Only new events get tracked in changes for persistence.
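Putting the pieces together, the full write path is: load history, rehydrate, run the command, append the uncommitted events guarded by the pre-command version. A sketch, using pared-down structural types as stand-ins for the fuller EventStore and aggregate definitions above:

```typescript
// Minimal structural stand-ins for the types defined earlier in the article.
interface VersionedEvent {
  version: number;
}

interface AppendOnlyStore {
  getEvents(aggregateId: string): Promise<VersionedEvent[]>;
  append(
    aggregateId: string,
    events: VersionedEvent[],
    expectedVersion: number
  ): Promise<void>;
}

interface WriteModel {
  getVersion(): number;
  getUncommittedChanges(): VersionedEvent[];
  addItem(productId: string, quantity: number, unitPrice: number): void;
}

async function handleAddItem(
  store: AppendOnlyStore,
  rehydrate: (history: VersionedEvent[]) => WriteModel,
  orderId: string,
  productId: string,
  quantity: number,
  unitPrice: number
): Promise<void> {
  // 1. Rehydrate from history; capture the version before the command runs.
  const order = rehydrate(await store.getEvents(orderId));
  const expectedVersion = order.getVersion();

  // 2. Execute the command; the aggregate validates and records new events.
  order.addItem(productId, quantity, unitPrice);

  // 3. Persist only the uncommitted events, guarded by expectedVersion.
  await store.append(orderId, order.getUncommittedChanges(), expectedVersion);
}
```

Capturing expectedVersion before the command runs is the crucial step: it is what lets the event store detect a concurrent writer.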
Projections and Read Models
Event sourcing naturally pairs with CQRS (Command Query Responsibility Segregation). Your aggregates handle writes, while projections build optimized read models.
interface Projection {
name: string;
handle(event: DomainEvent): Promise<void>;
rebuild(): Promise<void>;
}
class OrderSummaryProjection implements Projection {
name = 'order-summary';
constructor(
private db: Database,
private eventStore: EventStore
) {}
async handle(event: DomainEvent): Promise<void> {
if (event.aggregateType !== 'Order') return;
switch (event.eventType) {
case 'OrderCreated':
await this.handleOrderCreated(event as OrderCreated);
break;
case 'ItemAdded':
await this.handleItemAdded(event as ItemAdded);
break;
case 'OrderShipped':
await this.handleOrderShipped(event as OrderShipped);
break;
}
}
private async handleOrderCreated(event: OrderCreated): Promise<void> {
await this.db.query(
`INSERT INTO order_summaries
(order_id, customer_id, status, item_count, total_amount, created_at)
VALUES ($1, $2, 'pending', 0, 0, $3)`,
[event.aggregateId, event.payload.customerId, event.timestamp]
);
}
private async handleItemAdded(event: ItemAdded): Promise<void> {
const { quantity, unitPrice } = event.payload;
const lineTotal = quantity * unitPrice;
await this.db.query(
`UPDATE order_summaries
SET item_count = item_count + $2,
total_amount = total_amount + $3,
updated_at = $4
WHERE order_id = $1`,
[event.aggregateId, quantity, lineTotal, event.timestamp]
);
}
private async handleOrderShipped(event: OrderShipped): Promise<void> {
await this.db.query(
`UPDATE order_summaries
SET status = 'shipped',
shipped_at = $2,
tracking_number = $3
WHERE order_id = $1`,
[event.aggregateId, event.payload.shippedAt, event.payload.trackingNumber]
);
}
async rebuild(): Promise<void> {
await this.db.query('TRUNCATE order_summaries');
for await (const event of this.eventStore.getAllEvents()) {
await this.handle(event);
}
}
}
The rebuild() method is crucial. When you change projection logic or need to fix corrupted data, you can replay the entire event stream. This is one of event sourcing’s superpowers—your projections are disposable and regenerable.
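In steady state, projections are usually kept current by a runner that tracks how far each projection has read. A sketch of a checkpoint-based runner (the checkpoint API and names are assumptions; it assumes each event carries a global position):

```typescript
// Sketch: a runner that feeds a projection only events it has not yet
// seen, recording progress so a restart resumes instead of replaying.
interface PositionedEvent {
  position: number; // global ordering across all aggregates
}

interface CheckpointStore {
  get(projection: string): Promise<number>;
  set(projection: string, position: number): Promise<void>;
}

async function catchUp<E extends PositionedEvent>(
  projectionName: string,
  handle: (event: E) => Promise<void>,
  readFrom: (position: number) => AsyncIterable<E>,
  checkpoints: CheckpointStore
): Promise<void> {
  // Resume from the last recorded position (0 for a fresh projection).
  let position = await checkpoints.get(projectionName);
  for await (const event of readFrom(position)) {
    await handle(event);
    position = event.position;
    // Checkpointing after every event minimizes redelivery after a crash;
    // batching checkpoints trades that safety for throughput.
    await checkpoints.set(projectionName, position);
  }
}
```

Note that rebuilding a projection is just this loop with the checkpoint reset to zero and the target table truncated first.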
Challenges and Trade-offs
Event sourcing introduces real complexity. Don’t adopt it blindly.
Schema Evolution: Events are immutable, but your understanding of the domain evolves. You’ll need upcasting strategies to transform old event formats into current ones during replay.
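One common upcasting shape is a per-event-type chain of transformers applied during replay. A sketch, assuming stored events carry a schemaVersion field (an addition to the DomainEvent shape shown earlier); the v1-to-v2 ItemAdded migration below is hypothetical:

```typescript
// Sketch of an upcaster chain applied to events as they are read.
interface StoredEvent {
  eventType: string;
  schemaVersion: number;
  payload: Record<string, unknown>;
}

type Upcaster = (event: StoredEvent) => StoredEvent;

const upcasters: Record<string, Upcaster[]> = {
  // Hypothetical migration: v1 ItemAdded stored `price`; v2 renamed it
  // to `unitPrice` and introduced an explicit currency defaulting to USD.
  ItemAdded: [
    (e) => {
      if (e.schemaVersion !== 1) return e;
      const { price, ...rest } = e.payload;
      return {
        ...e,
        schemaVersion: 2,
        payload: { ...rest, unitPrice: price, currency: 'USD' },
      };
    },
  ],
};

// Run every registered upcaster for the event type, oldest first, so an
// event written at any historical version arrives at the current shape.
function upcast(event: StoredEvent): StoredEvent {
  return (upcasters[event.eventType] ?? []).reduce((e, fn) => fn(e), event);
}
```

The stored bytes never change; only the in-memory representation handed to aggregates and projections is migrated.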
Storage Growth: Events accumulate forever. A high-volume system can generate terabytes of events. Plan for archival strategies and consider event compaction for non-critical data.
Debugging Complexity: Tracing issues requires understanding event sequences rather than inspecting current state. Invest in tooling for event visualization and replay.
Performance: Replaying thousands of events to load an aggregate is slow. Snapshots mitigate this:
interface Snapshot {
aggregateId: string;
version: number;
state: Record<string, unknown>;
createdAt: Date;
}
class SnapshotRepository {
constructor(private eventStore: EventStore) {}
// getLatestSnapshot and saveSnapshot wrap snapshot persistence (omitted)
async loadAggregate(aggregateId: string): Promise<OrderAggregate> {
const snapshot = await this.getLatestSnapshot(aggregateId);
let order: OrderAggregate;
let fromVersion: number;
if (snapshot) {
order = OrderAggregate.fromSnapshot(snapshot);
fromVersion = snapshot.version;
} else {
order = new OrderAggregate();
fromVersion = 0;
}
const events = await this.eventStore.getEvents(aggregateId, fromVersion);
for (const event of events) {
order.applyEvent(event, false);
}
// Snapshot if replay required more than 100 events
if (order.getVersion() - fromVersion > 100) {
await this.saveSnapshot(order.toSnapshot());
}
return order;
}
}
When to Use (and Avoid) Event Sourcing
Use event sourcing when:
- Audit trails are a business requirement (finance, healthcare, legal)
- You need temporal queries (“show me the state as of last month”)
- Your domain has complex state transitions that benefit from explicit modeling
- You’re building collaborative systems where understanding concurrent changes matters
Avoid event sourcing when:
- You’re building simple CRUD applications
- Your team lacks experience with the pattern
- You have extreme low-latency requirements and can’t tolerate eventual consistency
- Your domain genuinely doesn’t care about history
Event sourcing is a powerful tool, but it’s not a default choice. Start with traditional state storage. Migrate to event sourcing when you feel the pain it solves—not before.