PostgreSQL LISTEN/NOTIFY: Real-Time Event System

Key Insights

  • PostgreSQL’s LISTEN/NOTIFY provides a lightweight pub/sub mechanism built directly into the database, eliminating the need for external message brokers for simple real-time event systems
  • Notifications are delivered at-most-once with an 8KB payload limit, making them ideal for cache invalidation and UI updates but unsuitable for critical message delivery
  • Combining NOTIFY with database triggers creates a powerful pattern for real-time application updates whenever data changes, without polling overhead

Introduction to LISTEN/NOTIFY

PostgreSQL’s LISTEN/NOTIFY is a built-in asynchronous notification system that enables real-time communication between database sessions. Unlike polling-based approaches that repeatedly query for changes, LISTEN/NOTIFY pushes notifications to subscribed clients immediately when events occur.

This mechanism excels at several use cases: invalidating application caches when data changes, pushing real-time updates to web interfaces, coordinating between microservices, and triggering background jobs. The primary advantage is efficiency—clients receive instant notifications without the overhead of continuous polling, reducing database load and network traffic.

The system operates outside normal table structures. Notifications aren’t written to the WAL and don’t survive a server restart, making them extremely lightweight but also ephemeral.

Core Concepts and Syntax

LISTEN/NOTIFY operates on named channels. Any session can send notifications to a channel using NOTIFY, and any session listening on that channel receives the message. Channels are simple string identifiers—no pre-creation or schema definition required.

The basic syntax is straightforward:

-- Session 1: Subscribe to a channel
LISTEN order_updates;

-- Session 2: Send a notification
NOTIFY order_updates;

-- With a payload (PostgreSQL 9.0+)
NOTIFY order_updates, 'Order #12345 shipped';
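
One wrinkle: the payload in a NOTIFY statement must be a string literal. To build a payload dynamically, call the pg_notify() function instead, which accepts arbitrary expressions:

-- Equivalent to NOTIFY, but the payload can be computed
SELECT pg_notify('order_updates', 'Order #' || 12345::text || ' shipped');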

To demonstrate this in practice, open two terminal sessions with psql:

Terminal 1:

-- Subscribe to the channel
LISTEN inventory_changes;

-- Wait for notifications (this blocks)
-- Notifications appear automatically when sent

Terminal 2:

-- Send notifications
NOTIFY inventory_changes, '{"product_id": 42, "action": "restock"}';
NOTIFY inventory_changes, '{"product_id": 17, "action": "sold_out"}';

Terminal 1 will immediately display:

Asynchronous notification "inventory_changes" with payload "{"product_id": 42, "action": "restock"}" received from server process with PID 12345.

Notifications are delivered asynchronously. The sending session doesn’t wait for acknowledgment, and receiving sessions get notifications between statements or during explicit waits.

Implementing LISTEN/NOTIFY with Triggers

The real power emerges when combining NOTIFY with database triggers. This creates an event-driven architecture where data changes automatically generate notifications without application code involvement.

Here’s a complete example for notifying on product inventory changes:

-- Create the trigger function
CREATE OR REPLACE FUNCTION notify_inventory_change()
RETURNS trigger AS $$
DECLARE
  payload JSON;
BEGIN
  -- Build different payloads based on operation
  IF (TG_OP = 'DELETE') THEN
    payload = json_build_object(
      'operation', TG_OP,
      'product_id', OLD.id,
      'timestamp', NOW()
    );
  ELSE
    payload = json_build_object(
      'operation', TG_OP,
      'product_id', NEW.id,
      'quantity', NEW.quantity,
      'timestamp', NOW()
    );
  END IF;

  -- Send the notification
  PERFORM pg_notify('inventory_changes', payload::text);
  
  RETURN NULL; -- Result ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;

-- Attach trigger to the products table
CREATE TRIGGER inventory_change_trigger
AFTER INSERT OR UPDATE OR DELETE ON products
FOR EACH ROW
EXECUTE FUNCTION notify_inventory_change();

Now any INSERT, UPDATE, or DELETE on the products table automatically sends a notification with relevant data:

-- This UPDATE automatically triggers a notification
UPDATE products SET quantity = 150 WHERE id = 42;
-- Sends: {"operation":"UPDATE","product_id":42,"quantity":150,"timestamp":"2024-01-15T10:30:00"}

For high-traffic tables, consider using statement-level triggers (FOR EACH STATEMENT) instead of row-level to batch notifications and avoid notification storms.
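
A minimal sketch of that batching approach, assuming PostgreSQL 10+ (for transition tables); the function and trigger names here are illustrative:

-- Send one notification per statement instead of one per row
CREATE OR REPLACE FUNCTION notify_inventory_batch()
RETURNS trigger AS $$
BEGIN
  PERFORM pg_notify('inventory_changes',
    json_build_object(
      'operation', TG_OP,
      'rows_affected', (SELECT count(*) FROM changed_rows)
    )::text);
  RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER inventory_batch_trigger
AFTER UPDATE ON products
REFERENCING NEW TABLE AS changed_rows
FOR EACH STATEMENT
EXECUTE FUNCTION notify_inventory_batch();

Listeners receive a single summary notification per UPDATE statement, however many rows it touches.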

Application Integration

Consuming notifications requires maintaining a persistent database connection. Here’s how to implement listeners in different languages.

Node.js with pg:

const { Client } = require('pg');

const client = new Client({
  connectionString: 'postgresql://localhost/mydb'
});

async function startListener() {
  await client.connect();

  // Set up notification handler
  client.on('notification', (msg) => {
    console.log('Received notification:', msg.channel);
    const payload = JSON.parse(msg.payload);
    
    // Handle the event
    if (payload.operation === 'UPDATE') {
      invalidateCache(payload.product_id);
      broadcastToWebSockets(payload);
    }
  });

  // Subscribe to channel
  await client.query('LISTEN inventory_changes');
  console.log('Listening for inventory changes...');
}

startListener().catch(console.error);

Python with psycopg2:

import psycopg2
import select
import json

conn = psycopg2.connect("dbname=mydb")
conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)

cursor = conn.cursor()
cursor.execute("LISTEN inventory_changes;")

print("Listening for notifications...")

while True:
    if select.select([conn], [], [], 5) == ([], [], []):
        # Timeout - no notifications in the last 5 seconds
        print(".", end="", flush=True)
    else:
        conn.poll()
        while conn.notifies:
            notify = conn.notifies.pop(0)
            payload = json.loads(notify.payload)
            print(f"Received: {payload}")
            # Process notification
            handle_inventory_change(payload)

Go with lib/pq:

package main

import (
    "database/sql"
    "encoding/json"
    "log"
    "time"
    
    "github.com/lib/pq"
)

type InventoryChange struct {
    Operation string `json:"operation"`
    ProductID int    `json:"product_id"`
    Quantity  int    `json:"quantity"`
}

func listenForChanges(connStr string) {
    reportProblem := func(ev pq.ListenerEventType, err error) {
        if err != nil {
            log.Printf("Listener error: %v", err)
        }
    }

    listener := pq.NewListener(connStr, 10*time.Second, time.Minute, reportProblem)
    err := listener.Listen("inventory_changes")
    if err != nil {
        log.Fatal(err)
    }

    log.Println("Listening for inventory changes...")

    for {
        select {
        case n := <-listener.Notify:
            if n == nil {
                continue
            }
            
            var change InventoryChange
            if err := json.Unmarshal([]byte(n.Extra), &change); err != nil {
                log.Printf("Error parsing payload: %v", err)
                continue
            }
            
            log.Printf("Received: %+v", change)
            handleInventoryChange(change)
            
        case <-time.After(90 * time.Second):
            go listener.Ping()
        }
    }
}

Real-World Patterns and Best Practices

Payload Size Limits: Notifications are limited to 8KB. For larger data, send an identifier and let the client query for details:

-- Don't send entire record
NOTIFY order_updates, '{"order_id": 12345, "action": "updated"}';
-- Client then queries: SELECT * FROM orders WHERE id = 12345

Channel Naming: Use namespaced conventions to avoid conflicts. Channel names are ordinary SQL identifiers, so dotted names must be double-quoted:

LISTEN "myapp.orders.created";  -- quoted because of the dots
NOTIFY "myapp.orders.created", '{"order_id": 12345}';

Robust Error Handling:

class NotificationListener {
  constructor(connectionString, channel) {
    this.connectionString = connectionString;
    this.channel = channel;
    this.reconnectDelay = 1000;
    this.maxReconnectDelay = 30000;
  }

  async start() {
    while (true) {
      try {
        await this.connect();
        this.reconnectDelay = 1000; // Reset on successful connection
        await this.waitForDisconnect(); // Block until the connection drops
      } catch (err) {
        console.error('Connection failed:', err);
        await this.sleep(this.reconnectDelay);
        this.reconnectDelay = Math.min(
          this.reconnectDelay * 2,
          this.maxReconnectDelay
        );
      }
    }
  }

  async connect() {
    // channel is assumed to be a trusted identifier, not user input
    this.client = new Client({ connectionString: this.connectionString });
    this.client.on('notification', this.handleNotification.bind(this));
    await this.client.connect();
    await this.client.query(`LISTEN ${this.channel}`);
  }

  waitForDisconnect() {
    // Resolves when the connection closes and rejects on error,
    // so start() can reconnect with exponential backoff
    return new Promise((resolve, reject) => {
      this.client.on('error', reject);
      this.client.on('end', resolve);
    });
  }

  sleep(ms) {
    return new Promise(resolve => setTimeout(resolve, ms));
  }
}

Avoiding Notification Storms: For high-frequency updates, debounce notifications (this example assumes the table has a last_notified timestamp column):

CREATE OR REPLACE FUNCTION notify_inventory_change_debounced()
RETURNS trigger AS $$
BEGIN
  -- Only notify if enough time has passed
  IF (NEW.last_notified IS NULL OR 
      NEW.last_notified < NOW() - INTERVAL '5 seconds') THEN
    PERFORM pg_notify('inventory_changes', 
      json_build_object('product_id', NEW.id)::text);
    NEW.last_notified = NOW();
  END IF;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;
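
Because the function modifies NEW, it must be attached as a BEFORE trigger. The last_notified column and trigger name below are illustrative:

-- Illustrative: add a column to track the debounce window
ALTER TABLE products ADD COLUMN IF NOT EXISTS last_notified timestamptz;

CREATE TRIGGER inventory_debounce_trigger
BEFORE UPDATE ON products
FOR EACH ROW
EXECUTE FUNCTION notify_inventory_change_debounced();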

Performance Considerations and Limitations

LISTEN/NOTIFY delivers at-most-once semantics. If a client disconnects, it misses notifications sent during downtime. There’s no persistence, replay, or acknowledgment mechanism.

Notifications are transactional—they’re only sent when the triggering transaction commits. A rolled-back transaction never sends its notifications.
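
You can see this in psql, with a second session listening on order_updates:

BEGIN;
NOTIFY order_updates, 'never delivered';
ROLLBACK;  -- the notification is discarded with the transaction

BEGIN;
NOTIFY order_updates, 'delivered at commit';
COMMIT;    -- listeners receive the notification only now

PostgreSQL also collapses identical notifications (same channel and payload) queued within a single transaction into one delivery.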

Performance-wise, LISTEN/NOTIFY scales well for moderate loads. PostgreSQL can handle thousands of notifications per second, but each listening connection consumes resources. For hundreds of concurrent listeners, consider alternative architectures.

Don’t use LISTEN/NOTIFY for:

  • Critical messages requiring guaranteed delivery
  • Messages that must survive server restarts
  • High-throughput event streams (>10k events/second)
  • Complex routing or message filtering

Alternative Approaches and When to Use Each

Redis Pub/Sub: Better for high-throughput scenarios and when you need multiple subscribers per message. Simpler but lacks persistence.

RabbitMQ/Kafka: Use for guaranteed delivery, message persistence, complex routing, or when events must survive crashes. More operational overhead.

Logical Replication: For replicating entire datasets to other systems. More comprehensive but higher complexity.

Polling: Sometimes simplest for infrequent checks or when real-time isn’t critical. Adds database load but requires no persistent connections.

Use LISTEN/NOTIFY when:

  • You already use PostgreSQL and want lightweight pub/sub
  • Cache invalidation is your primary use case
  • You need real-time updates within a single application
  • At-most-once delivery is acceptable
  • Event volume is moderate (<1000/second)

PostgreSQL LISTEN/NOTIFY occupies a sweet spot: more capable than polling, simpler than message queues, and zero additional infrastructure. For many applications, that’s exactly what you need.
