System Design: Microservices vs Monolith Architecture
Key Insights
- Monoliths aren’t legacy—they’re often the right choice for teams under 50 engineers and products still finding market fit
- Microservices trade development simplicity for operational complexity; you need mature DevOps practices before making the switch
- Start with a well-structured monolith, extract services only when you have clear scaling bottlenecks or team coordination problems
The Architecture Decision
Every engineering team eventually faces this question: should we build a monolith or microservices? The answer shapes your deployment pipeline, team structure, hiring needs, and debugging workflows for years to come.
Here’s the uncomfortable truth: most teams choose microservices for the wrong reasons. They’ve heard Netflix and Amazon use them, so they assume it’s the “modern” approach. But Netflix has thousands of engineers, and Amazon handles peak request volumes most companies will never approach. Your B2B SaaS with 20 developers and 10,000 users? Different constraints entirely.
This article cuts through the hype. We’ll examine both architectures honestly, compare their trade-offs, and give you a practical framework for making this decision.
Monolith Architecture Explained
A monolith is a single deployable unit containing all your application’s functionality. One codebase, one build process, one deployment artifact. All your business domains—users, orders, payments, inventory—live together and communicate through in-process function calls.
The structure typically looks like this:
ecommerce-app/
├── src/
│   ├── controllers/
│   │   ├── UserController.ts
│   │   ├── OrderController.ts
│   │   └── InventoryController.ts
│   ├── services/
│   │   ├── UserService.ts
│   │   ├── OrderService.ts
│   │   └── InventoryService.ts
│   ├── repositories/
│   │   ├── UserRepository.ts
│   │   ├── OrderRepository.ts
│   │   └── InventoryRepository.ts
│   ├── models/
│   └── app.ts
├── tests/
├── package.json
└── Dockerfile
Here’s what a typical monolith entry point looks like:
// app.ts
import express from 'express';
import { UserController } from './controllers/UserController';
import { OrderController } from './controllers/OrderController';
import { InventoryController } from './controllers/InventoryController';
import { Database } from './database';

const app = express();
app.use(express.json());
const db = new Database(process.env.DATABASE_URL);

// All domains share the same database connection
const userController = new UserController(db);
const orderController = new OrderController(db);
const inventoryController = new InventoryController(db);

app.use('/api/users', userController.router);
app.use('/api/orders', orderController.router);
app.use('/api/inventory', inventoryController.router);

// Cross-domain operations are simple function calls
app.post('/api/checkout', async (req, res) => {
  const { userId, items } = req.body;
  // A single transaction spans multiple domains easily
  const order = await db.transaction(async (tx) => {
    await userController.validateUser(userId, tx);
    await inventoryController.reserveItems(items, tx);
    return orderController.createOrder(userId, items, tx);
  });
  res.json(order);
});

app.listen(3000);
The benefits are real: simple local development (one npm start), straightforward debugging (stack traces work normally), easy refactoring (your IDE can find all references), and ACID transactions across your entire domain.
Microservices Architecture Explained
Microservices decompose your application into independently deployable services, each owning a specific business capability. Services communicate over the network via HTTP, gRPC, or message queues.
# docker-compose.yml
version: '3.8'
services:
  user-service:
    build: ./services/user-service
    ports:
      - "3001:3000"
    environment:
      - DATABASE_URL=postgres://user-db:5432/users
    depends_on:
      - user-db
  order-service:
    build: ./services/order-service
    ports:
      - "3002:3000"
    environment:
      - DATABASE_URL=postgres://order-db:5432/orders
      - USER_SERVICE_URL=http://user-service:3000
      - INVENTORY_SERVICE_URL=http://inventory-service:3000
  inventory-service:
    build: ./services/inventory-service
    ports:
      - "3003:3000"
    environment:
      - DATABASE_URL=postgres://inventory-db:5432/inventory
  api-gateway:
    build: ./gateway
    ports:
      - "3000:3000"
    environment:
      - USER_SERVICE_URL=http://user-service:3000
      - ORDER_SERVICE_URL=http://order-service:3000
  user-db:
    image: postgres:15
  order-db:
    image: postgres:15
  inventory-db:
    image: postgres:15
Service-to-service communication requires explicit HTTP calls:
// order-service/src/OrderService.ts
import axios from 'axios';
import { OrderRepository } from './OrderRepository';
import { generateOrderId } from './ids';
import { Order, OrderItem } from './types';

export class OrderService {
  private userServiceUrl: string;
  private inventoryServiceUrl: string;

  constructor(private orderRepository: OrderRepository) {
    this.userServiceUrl = process.env.USER_SERVICE_URL!;
    this.inventoryServiceUrl = process.env.INVENTORY_SERVICE_URL!;
  }

  async createOrder(userId: string, items: OrderItem[]): Promise<Order> {
    // Validate user exists (network call)
    const userResponse = await axios.get(
      `${this.userServiceUrl}/users/${userId}`
    );
    if (!userResponse.data) {
      throw new Error('User not found');
    }

    // Reserve inventory (network call)
    const reservationResponse = await axios.post(
      `${this.inventoryServiceUrl}/reservations`,
      { items, orderId: generateOrderId() }
    );

    // Create order in local database
    return this.orderRepository.create({
      userId,
      items,
      reservationId: reservationResponse.data.id,
    });
  }
}
The benefits: independent scaling (scale only the services under load), technology flexibility (use Python for ML, Go for performance-critical paths), team autonomy (teams own their service end-to-end), and isolated failures (one service crashing doesn’t bring down everything).
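Failure isolation doesn’t happen for free, though: each caller has to decide what to do when a dependency is slow or down. A minimal sketch of that idea, assuming nothing beyond the standard library (the names `callWithFallback` and the fallback shape are illustrative, not from the article’s codebase):

```typescript
// Wrap a dependency call with a timeout and a fallback value so one slow or
// crashed service degrades gracefully instead of cascading the failure.
async function callWithFallback<T>(
  call: () => Promise<T>,
  fallback: T,
  timeoutMs: number
): Promise<T> {
  let timer: ReturnType<typeof setTimeout>;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error('timeout')), timeoutMs);
  });
  try {
    return await Promise.race([call(), timeout]);
  } catch {
    // Dependency failed or timed out: degrade instead of returning a 500
    return fallback;
  } finally {
    clearTimeout(timer!);
  }
}

// Usage sketch: if inventory-service is unreachable, the product page can
// show "availability unknown" rather than crash the whole request.
async function getAvailability() {
  return callWithFallback(
    () => Promise.reject(new Error('inventory-service unreachable')), // simulated outage
    { known: false },
    200
  );
}
```

Production systems usually go further with circuit breakers and retries, but the principle is the same: the blast radius of a failure is bounded at the call site.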
Key Trade-offs Comparison
Let’s be honest about what you’re trading.
Deployment Complexity: A monolith needs one CI/CD pipeline. Microservices need one per service, plus coordination for breaking changes. You’ll need service discovery, container orchestration (Kubernetes), and careful versioning.
Data Management: Monoliths use database transactions naturally. Microservices require distributed transactions or eventual consistency:
// Monolith: Simple ACID transaction
await db.transaction(async (tx) => {
  await tx.query('UPDATE inventory SET quantity = quantity - $1', [qty]);
  await tx.query('INSERT INTO orders (user_id, total) VALUES ($1, $2)', [userId, total]);
  await tx.query('UPDATE users SET last_order = NOW() WHERE id = $1', [userId]);
});

// Microservices: Saga pattern with compensation
interface SagaStep {
  compensate: () => Promise<void>;
}

class CreateOrderSaga {
  async execute(orderData: OrderData) {
    const steps: SagaStep[] = [];
    try {
      // Step 1: Reserve inventory
      const reservation = await this.inventoryService.reserve(orderData.items);
      steps.push({
        compensate: () => this.inventoryService.release(reservation.id),
      });
      // Step 2: Charge payment
      const payment = await this.paymentService.charge(orderData.userId, orderData.total);
      steps.push({
        compensate: () => this.paymentService.refund(payment.id),
      });
      // Step 3: Create order
      return await this.orderService.create(orderData);
    } catch (error) {
      // Compensate in reverse order
      for (const step of steps.reverse()) {
        await step.compensate();
      }
      throw error;
    }
  }
}
Debugging: Stack traces in monoliths show the complete call path. In microservices, you need distributed tracing:
// OpenTelemetry setup for distributed tracing
import { NodeSDK } from '@opentelemetry/sdk-node';
import { getNodeAutoInstrumentations } from '@opentelemetry/auto-instrumentations-node';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';

const sdk = new NodeSDK({
  traceExporter: new OTLPTraceExporter({
    url: 'http://jaeger:4318/v1/traces',
  }),
  instrumentations: [getNodeAutoInstrumentations()],
  serviceName: 'order-service',
});
sdk.start();

// Now HTTP calls between services automatically propagate trace context.
// You can view the complete request flow in Jaeger/Zipkin.
Testing: Monolith integration tests run against one application. Microservices require contract testing, service virtualization, and complex test environments.
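The core idea behind contract testing can be shown without any tooling (real projects typically reach for a framework like Pact; the contract shape and service names below are illustrative): the consumer publishes the response fields it depends on, and the provider’s test suite verifies its actual responses satisfy them.

```typescript
// Consumer side (order-service): declare the shape of the user-service
// response that order-service actually relies on.
const userContract = {
  endpoint: 'GET /users/:id',
  requiredFields: ['id', 'email', 'status'] as const,
};

// Provider side (user-service): check a representative response against the
// consumer's published contract before every deploy.
function satisfiesContract(
  response: Record<string, unknown>,
  contract: { requiredFields: readonly string[] }
): boolean {
  return contract.requiredFields.every((field) => field in response);
}

// A sample response the provider's test suite would generate from its handler
const sampleResponse = { id: 'u1', email: 'a@example.com', status: 'active' };
```

If user-service renames `status`, its own pipeline fails the contract check instead of order-service discovering the break in production.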
When to Choose Each Approach
Use this decision framework:
Choose a monolith when:
- Your team has fewer than 50 engineers
- You’re still discovering your domain boundaries
- You need to move fast and validate product-market fit
- You don’t have mature DevOps practices
- Your scale is under 10,000 requests per second
Choose microservices when:
- Multiple teams need to deploy independently
- Different components have vastly different scaling needs
- You have clear, stable domain boundaries
- You have the infrastructure expertise to operate distributed systems
- Regulatory requirements demand service isolation
The “monolith-first” strategy is usually correct. Build a well-structured monolith with clear internal boundaries. Extract services only when you have concrete evidence that you need them—not because you might need them someday.
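One way to keep those internal boundaries sharp is to have each domain expose a narrow interface and hide its storage, so a later extraction swaps the implementation for an HTTP client without touching callers. A sketch under that assumption (the `InventoryApi` interface and class names are illustrative):

```typescript
// Each domain exposes a small interface; callers never see the repository.
interface InventoryApi {
  reserve(items: { sku: string; qty: number }[]): Promise<string>; // returns a reservation id
  release(reservationId: string): Promise<void>;
}

// In-process implementation used inside the monolith today.
class LocalInventory implements InventoryApi {
  private reservations = new Map<string, { sku: string; qty: number }[]>();

  async reserve(items: { sku: string; qty: number }[]): Promise<string> {
    const id = `res-${this.reservations.size + 1}`;
    this.reservations.set(id, items);
    return id;
  }

  async release(id: string): Promise<void> {
    this.reservations.delete(id);
  }
}

// Callers depend only on InventoryApi, so a future HttpInventory backed by
// a separate inventory-service can replace LocalInventory unchanged.
async function checkout(inventory: InventoryApi, items: { sku: string; qty: number }[]) {
  const reservationId = await inventory.reserve(items);
  return { reservationId };
}
```

The discipline of “communicate only through the interface” is cheap inside a monolith and is exactly what makes extraction concrete evidence-driven rather than a rewrite.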
Migration Patterns: Monolith to Microservices
When extraction becomes necessary, use the Strangler Fig pattern: incrementally route traffic from the monolith to new services.
// api-gateway/src/router.ts
import express from 'express';
import { createProxyMiddleware } from 'http-proxy-middleware';
import { FeatureFlags } from './feature-flags';

const app = express();
const flags = new FeatureFlags();

// Build each proxy once, then pick one per request
const toNewUserService = createProxyMiddleware({
  target: process.env.USER_SERVICE_URL,
  changeOrigin: true,
});
const toMonolith = createProxyMiddleware({
  target: process.env.MONOLITH_URL,
  changeOrigin: true,
});

// Gradually migrate user endpoints to the new service
app.use('/api/users', async (req, res, next) => {
  const useNewService = await flags.isEnabled('new-user-service', {
    userId: req.headers['x-user-id'],
    percentage: 25, // Start with 25% of traffic
  });
  if (useNewService) {
    return toNewUserService(req, res, next);
  }
  // Fall back to monolith
  return toMonolith(req, res, next);
});

// Orders still go to the monolith
app.use('/api/orders', toMonolith);

app.listen(3000);
Identify service boundaries by looking for: natural team ownership lines, components with different scaling characteristics, areas with high change frequency that slow down other teams, and clear data ownership boundaries.
Extract incrementally. Run both implementations in parallel. Compare results. Roll back instantly if issues arise. This isn’t exciting, but it’s how you avoid production disasters.
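The “run both, compare” step can be sketched as shadow traffic: serve the monolith’s answer, call the new service in the background, and record mismatches without ever affecting the user. A minimal sketch assuming JSON-comparable responses (the function names are illustrative):

```typescript
type Handler<T> = (input: string) => Promise<T>;

// Return the primary (monolith) result; observe the candidate (new service)
// result asynchronously and report any divergence.
async function shadowCompare<T>(
  primary: Handler<T>,
  candidate: Handler<T>,
  input: string,
  onMismatch: (input: string, primaryResult: T, candidateResult: T) => void
): Promise<T> {
  const result = await primary(input);
  // Fire-and-forget: the candidate must never affect the user-facing response
  candidate(input)
    .then((candidateResult) => {
      if (JSON.stringify(candidateResult) !== JSON.stringify(result)) {
        onMismatch(input, result, candidateResult);
      }
    })
    .catch(() => {
      /* candidate errors are logged elsewhere, never surfaced to the user */
    });
  return result;
}
```

Once the mismatch rate stays at zero under real traffic, flipping the feature flag to the new service is a low-drama event, and flipping it back is just as easy.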
Making the Right Choice
Architecture decisions should be driven by actual constraints, not industry trends. The best architecture is the one that lets your team ship reliable software quickly given your current scale, team size, and organizational structure.
Monoliths aren’t outdated—they’re appropriate for most teams. Microservices aren’t superior—they’re a trade-off that makes sense at scale. The real skill is recognizing when your constraints have changed enough to warrant architectural evolution.
Start simple. Measure actual bottlenecks. Extract services when you have evidence, not speculation. Your future self will thank you for the reduced operational complexity.