Event Loop: Single-Threaded Concurrency Model

Key Insights

  • The event loop enables single-threaded JavaScript to handle thousands of concurrent operations by delegating blocking work to the operating system and processing callbacks when the call stack is empty.
  • Microtasks (Promises, queueMicrotask) always execute before macrotasks (setTimeout, I/O), and a flood of microtasks can starve the event loop, freezing your application.
  • Writing performant async code means understanding that you’re not writing parallel code—you’re scheduling work, and every synchronous operation blocks everything else.

The Concurrency Paradox

JavaScript runs on a single thread. Yet Node.js servers handle tens of thousands of concurrent connections. React applications respond to user input while fetching data and animating UI elements. How does a single-threaded language achieve this level of concurrency?

The answer lies in a fundamental distinction: concurrency is not parallelism. Multi-threaded languages like Java or Go achieve concurrency through parallelism—multiple threads executing simultaneously on different CPU cores. JavaScript achieves concurrency through the event loop—a mechanism that interleaves operations on a single thread, creating the illusion of simultaneous execution.

This model has tradeoffs. You don’t deal with locks, data races, or deadlocks (though logical race conditions between interleaved async operations are still possible). But you also can’t leverage multiple CPU cores without explicit worker threads. Understanding the event loop isn’t optional—it’s the foundation of every performance decision you’ll make in JavaScript.

Anatomy of the Event Loop

The event loop consists of three core components working together: the call stack, the callback queue (also called the task queue or macrotask queue), and the microtask queue.

The Call Stack is where synchronous code executes. Functions push onto the stack when called and pop off when they return. JavaScript can only do one thing at a time because there’s only one call stack.

The Callback Queue holds callbacks from completed asynchronous operations—setTimeout handlers, I/O completions, and event listeners.

The Microtask Queue holds higher-priority callbacks, primarily Promise resolutions and queueMicrotask callbacks.

The event loop follows a simple algorithm:

  1. Execute all synchronous code until the call stack is empty
  2. Process ALL microtasks until the microtask queue is empty
  3. Process ONE macrotask from the callback queue
  4. Repeat

Here’s this process in action:

console.log('1: Script start');

setTimeout(() => {
  console.log('2: setTimeout callback');
}, 0);

Promise.resolve()
  .then(() => {
    console.log('3: Promise 1');
  })
  .then(() => {
    console.log('4: Promise 2');
  });

queueMicrotask(() => {
  console.log('5: queueMicrotask');
});

console.log('6: Script end');

// Output:
// 1: Script start
// 6: Script end
// 3: Promise 1
// 5: queueMicrotask
// 4: Promise 2
// 2: setTimeout callback

Walk through this step by step:

  1. console.log('1: Script start') executes immediately
  2. setTimeout schedules a callback in the macrotask queue
  3. Promise.resolve().then() schedules its first callback in the microtask queue; the second .then callback is queued only when the first one resolves
  4. queueMicrotask schedules a callback in the microtask queue
  5. console.log('6: Script end') executes immediately
  6. Call stack is empty—process all microtasks (Promise 1, queueMicrotask, Promise 2)
  7. Process one macrotask (setTimeout callback)

Notice that the setTimeout with a 0ms delay still runs last. The delay is a minimum, not a guarantee, and macrotasks always wait for microtasks.

Blocking vs Non-Blocking Operations

When you block the main thread, everything stops. The UI freezes. Event handlers don’t fire. Network responses pile up unprocessed.

// BLOCKING: This freezes everything for ~5 seconds
const fs = require('fs');

console.log('Reading file...');
const data = fs.readFileSync('/large-file.txt', 'utf8'); // Blocks here
console.log('File read complete');
processUserClick(); // User has been waiting 5 seconds

// NON-BLOCKING: This returns immediately
console.log('Reading file...');
fs.readFile('/large-file.txt', 'utf8', (err, data) => {
  console.log('File read complete');
});
processUserClick(); // Executes immediately

Non-blocking I/O works because the actual work happens outside JavaScript. When you call fs.readFile, Node.js delegates to libuv. Network I/O uses operating system primitives (epoll on Linux, kqueue on macOS, IOCP on Windows) to monitor sockets; filesystem operations like readFile run on libuv’s worker thread pool, since those OS facilities don’t cover regular files. Either way, the JavaScript thread is free to process other work. When the operation completes, libuv queues a callback for the event loop.

This is why I/O-bound applications thrive in Node.js. The single thread spends most of its time idle, waiting for callbacks. But CPU-bound work is different—there’s no OS to delegate to. A complex calculation runs on the main thread and blocks everything.
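
A tiny demonstration of the blocking effect: a synchronous spin loop delays even an already-expired 0 ms timer, because its callback cannot run until the call stack empties.

```javascript
const start = Date.now();

setTimeout(() => {
  // Scheduled with a 0 ms delay, but it cannot fire until the
  // synchronous loop below finishes and the call stack is empty.
  console.log(`Timer fired ${Date.now() - start} ms after scheduling`);
}, 0);

// Simulated CPU-bound work: spin the main thread for ~200 ms.
while (Date.now() - start < 200) {}
```

The timer reports roughly 200 ms, not 0: CPU-bound work pushes every queued callback out by exactly as long as it runs.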

Task Queues and Prioritization

The distinction between microtasks and macrotasks creates subtle execution order issues. Test your understanding:

console.log('A');

setTimeout(() => console.log('B'), 0);

Promise.resolve().then(() => {
  console.log('C');
  setTimeout(() => console.log('D'), 0);
  return Promise.resolve();
}).then(() => console.log('E'));

setTimeout(() => console.log('F'), 0);

Promise.resolve().then(() => console.log('G'));

console.log('H');

// What's the output?

The answer: A, H, C, G, E, B, F, D

Here’s why:

  • A and H are synchronous
  • C, G, E are microtasks, processed before any macrotask; E trails G because returning a promise from a .then defers the following callback by extra microtask ticks
  • B and F are macrotasks queued during the synchronous pass
  • D runs last because it was scheduled inside a microtask, after B and F were already waiting

Starvation Risk: Because all microtasks run before any macrotask, you can accidentally starve the event loop:

// DON'T DO THIS: Infinite microtask loop
function recursiveMicrotask() {
  queueMicrotask(() => {
    doSomeWork();
    recursiveMicrotask(); // Queues another microtask
  });
}
recursiveMicrotask();
// Macrotasks NEVER run. UI freezes. setTimeout callbacks never fire.
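
A sketch of the safe alternative, with a counter standing in for doSomeWork (which the snippet above leaves undefined): rescheduling each step as a macrotask lets timers and I/O interleave between iterations.

```javascript
let steps = 0;

function recursiveMacrotask() {
  setTimeout(() => {
    steps++;                          // stand-in for doSomeWork()
    if (steps < 100) recursiveMacrotask();
  }, 0);
}
recursiveMacrotask();

// Proof that other macrotasks interleave: this fires after the
// first step, long before all 100 steps have completed.
setTimeout(() => console.log(`interleaved at step ${steps}`), 0);
```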

Real-World Patterns and Pitfalls

CPU-intensive work requires explicit chunking. Here’s a common scenario: processing a large array.

// BLOCKING: Processes 1 million items, freezing everything
function processAllItems(items) {
  for (const item of items) {
    heavyComputation(item);
  }
  console.log('Done');
}

// NON-BLOCKING: Chunks work across event loop iterations
function processItemsChunked(items, chunkSize = 1000) {
  return new Promise((resolve) => {
    let index = 0;
    
    function processChunk() {
      const end = Math.min(index + chunkSize, items.length);
      
      while (index < end) {
        heavyComputation(items[index]);
        index++;
      }
      
      if (index < items.length) {
        // Yield to the event loop with setTimeout (macrotask)
        setTimeout(processChunk, 0);
      } else {
        resolve();
      }
    }
    
    processChunk();
  });
}

// Usage
await processItemsChunked(millionItems);

Using setTimeout instead of queueMicrotask is deliberate. A macrotask allows other macrotasks (I/O, user events) to interleave. If you used microtasks, you’d still block until all chunks complete.
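
The same pattern reads more naturally with async/await. Here yieldToEventLoop and heavyComputation are local helpers defined for this sketch, not built-ins: the first is a promise that resolves on the next macrotask, the second is placeholder per-item work.

```javascript
// Promise that resolves on the next macrotask; awaiting it gives
// timers, I/O, and user events a turn between chunks.
const yieldToEventLoop = () => new Promise((resolve) => setTimeout(resolve, 0));

// Placeholder per-item work so the sketch runs standalone.
let processedCount = 0;
const heavyComputation = (item) => { processedCount++; };

async function processItemsAsync(items, chunkSize = 1000) {
  for (let start = 0; start < items.length; start += chunkSize) {
    for (const item of items.slice(start, start + chunkSize)) {
      heavyComputation(item);
    }
    await yieldToEventLoop(); // yield to the event loop between chunks
  }
}
```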

For truly parallel computation, use Worker Threads (Node.js) or Web Workers (browsers):

// worker.js
const { parentPort, workerData } = require('worker_threads');

const result = heavyComputation(workerData);
parentPort.postMessage(result);

// main.js
const { Worker } = require('worker_threads');

function runInWorker(data) {
  return new Promise((resolve, reject) => {
    const worker = new Worker('./worker.js', { workerData: data });
    worker.on('message', resolve);
    worker.on('error', reject);
  });
}

Event Loop Variations Across Runtimes

Browser and Node.js event loops differ significantly. Node.js uses libuv, which divides the event loop into phases:

  1. Timers: Execute setTimeout/setInterval callbacks
  2. Pending callbacks: Execute I/O callbacks deferred from previous iteration
  3. Idle, prepare: Internal use
  4. Poll: Retrieve new I/O events, execute I/O callbacks
  5. Check: Execute setImmediate callbacks
  6. Close callbacks: Execute close event callbacks

This explains Node-specific behavior:

// Node.js specific: setImmediate vs setTimeout(fn, 0)
setTimeout(() => console.log('timeout'), 0);
setImmediate(() => console.log('immediate'));

// Order is non-deterministic when called from main module
// But inside an I/O callback, setImmediate always runs first

Deno and Bun implement similar models but with different internal architectures. Deno uses Tokio (Rust async runtime) while Bun uses its own implementation in Zig. The JavaScript-facing behavior remains largely consistent—Promises are microtasks, timers are macrotasks.

Designing for the Event Loop

Writing performant async code comes down to three principles:

Never block the main thread. Any synchronous operation over a few milliseconds needs refactoring—chunk it, move it to a worker, or find an async alternative.

Understand queue priorities. Microtasks run to completion before macrotasks. Use this knowledge to predict execution order and avoid starvation.

Measure, don’t assume. Use performance.now(), Chrome DevTools’ Performance tab, or Node.js’s --inspect flag to identify actual bottlenecks.

The single-threaded model works remarkably well for I/O-bound applications—web servers, API gateways, real-time applications. When you need true parallelism for CPU-bound work, worker threads exist precisely for that purpose. The event loop isn’t a limitation to work around; it’s a design choice that eliminates entire categories of concurrency bugs. Understand it, and you’ll write better async code.
