How to Use Triggers in PostgreSQL

Key Insights

  • PostgreSQL triggers are event-driven functions that execute automatically in response to INSERT, UPDATE, DELETE, or TRUNCATE operations, enabling automated data validation, audit logging, and maintaining derived data without application-level code.
  • Row-level triggers fire once per affected row and provide access to OLD and NEW record variables, while statement-level triggers fire once per SQL statement regardless of rows affected—choose based on whether you need per-record logic or bulk operation validation.
  • Trigger functions must return the appropriate value (NEW from BEFORE row-level triggers to let the operation proceed; AFTER triggers conventionally return NULL, since their return value is ignored) and should avoid complex business logic that could create performance bottlenecks or cascading trigger chains that are difficult to debug.

Introduction to PostgreSQL Triggers

Triggers are database objects that automatically execute specified functions when certain events occur on a table. They fire in response to INSERT, UPDATE, DELETE, or TRUNCATE operations, either BEFORE or AFTER the event completes. This automation happens entirely within the database layer, independent of your application code.

Common use cases include audit logging (tracking who changed what and when), enforcing complex business rules that can’t be expressed as simple constraints, automatically updating denormalized data or summary tables, and validating data beyond what CHECK constraints can handle. Triggers excel when you need guaranteed execution regardless of which application or user modifies the data.

The timing matters: BEFORE triggers can modify the data being inserted or updated by changing the NEW record, while AFTER triggers work with the final committed data and are ideal for logging or cascading changes to related tables.

Basic Trigger Syntax and Structure

PostgreSQL triggers require a two-step process. First, you create a trigger function using PL/pgSQL (or another procedural language). Second, you create the trigger itself, which associates that function with a specific table and event.

Trigger functions are special: they take no arguments and must return the TRIGGER pseudo-type. For BEFORE triggers on row-level operations, returning NEW allows the operation to proceed with potentially modified data, while returning NULL silently skips the operation for that row. The return value of AFTER row-level triggers is ignored, so they conventionally return NULL.
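
For instance, returning NULL from a BEFORE row-level trigger discards the row without raising an error. This sketch assumes a hypothetical is_spam boolean column on the table:

-- Silently drop rows flagged as spam (is_spam is a hypothetical column)
CREATE OR REPLACE FUNCTION skip_spam_rows()
RETURNS TRIGGER AS $$
BEGIN
    IF NEW.is_spam THEN
        RETURN NULL;  -- cancels the INSERT for this row only
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;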

Here’s a simple trigger that automatically sets a created_at timestamp:

-- Create the trigger function
CREATE OR REPLACE FUNCTION set_created_at()
RETURNS TRIGGER AS $$
BEGIN
    NEW.created_at = NOW();
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- Create the trigger
CREATE TRIGGER users_set_created_at
    BEFORE INSERT ON users
    FOR EACH ROW
    EXECUTE FUNCTION set_created_at();

This BEFORE INSERT trigger fires once for each row being inserted. The function modifies the NEW record by setting created_at to the current timestamp, then returns NEW to allow the insert to proceed with the modified data.
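
The same pattern covers an updated_at column, assuming the users table has one:

CREATE OR REPLACE FUNCTION set_updated_at()
RETURNS TRIGGER AS $$
BEGIN
    -- Stamp every update, regardless of which columns changed
    NEW.updated_at = NOW();
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER users_set_updated_at
    BEFORE UPDATE ON users
    FOR EACH ROW
    EXECUTE FUNCTION set_updated_at();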

Working with Trigger Variables (NEW and OLD)

Trigger functions have access to special record variables: NEW and OLD. These represent the new row being inserted/updated and the old row being updated/deleted, respectively.

  • INSERT operations: Only NEW is available
  • UPDATE operations: Both NEW and OLD are available
  • DELETE operations: Only OLD is available

You can read any column from these records using dot notation (NEW.column_name) and modify NEW in BEFORE triggers to change the data being written.

Here’s a trigger that logs changes to a user’s email address:

CREATE TABLE email_change_log (
    id SERIAL PRIMARY KEY,
    user_id INTEGER,
    old_email TEXT,
    new_email TEXT,
    changed_at TIMESTAMP DEFAULT NOW()
);

CREATE OR REPLACE FUNCTION log_email_changes()
RETURNS TRIGGER AS $$
BEGIN
    -- Only log if email actually changed
    IF OLD.email IS DISTINCT FROM NEW.email THEN
        INSERT INTO email_change_log (user_id, old_email, new_email)
        VALUES (NEW.id, OLD.email, NEW.email);
        
        -- Optionally modify NEW to normalize the email
        NEW.email = LOWER(TRIM(NEW.email));
    END IF;
    
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER users_email_change
    BEFORE UPDATE ON users
    FOR EACH ROW
    EXECUTE FUNCTION log_email_changes();

This demonstrates both reading from OLD and NEW (for comparison) and writing to NEW (normalizing the email). The IS DISTINCT FROM operator properly handles NULL values.
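
To see why IS DISTINCT FROM matters, compare it with the plain <> operator, which yields NULL rather than TRUE when either operand is NULL, so an IF guarded by it would silently skip its branch:

-- Plain inequality propagates NULL; IS DISTINCT FROM treats NULL as a value
SELECT NULL <> 'a@example.com'              AS plain_inequality,  -- NULL
       NULL IS DISTINCT FROM 'a@example.com' AS null_safe;        -- TRUE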

Row-Level vs Statement-Level Triggers

The FOR EACH ROW clause determines trigger granularity. Row-level triggers execute once per affected row, while statement-level triggers (FOR EACH STATEMENT) execute once per SQL statement, regardless of how many rows are affected.

Row-level triggers have access to NEW and OLD records and are essential when you need per-record logic. Statement-level triggers don’t have access to individual row data but are more efficient for bulk operations where you only need to know that something happened.

-- Statement-level: Validate bulk operations
CREATE OR REPLACE FUNCTION prevent_bulk_deletes()
RETURNS TRIGGER AS $$
BEGIN
    -- This fires once, even if deleting 1000 rows
    RAISE NOTICE 'Delete operation executed at %', NOW();
    
    -- Block deletes during business hours (hours 9 through 17, i.e. 9:00-17:59)
    IF EXTRACT(HOUR FROM NOW()) BETWEEN 9 AND 17 THEN
        RAISE EXCEPTION 'Bulk deletes not allowed during business hours';
    END IF;
    
    RETURN NULL; -- Return value is ignored for statement-level triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER prevent_business_hours_deletes
    BEFORE DELETE ON critical_data
    FOR EACH STATEMENT
    EXECUTE FUNCTION prevent_bulk_deletes();

-- Row-level: Process each record individually
CREATE OR REPLACE FUNCTION archive_deleted_records()
RETURNS TRIGGER AS $$
BEGIN
    -- Assumes archived_data mirrors critical_data's columns, plus archived_at
    INSERT INTO archived_data
    SELECT OLD.*, NOW() AS archived_at;
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER archive_on_delete
    BEFORE DELETE ON critical_data
    FOR EACH ROW
    EXECUTE FUNCTION archive_deleted_records();

Choose statement-level for validation that applies to the entire operation or when you need better performance with bulk changes. Use row-level when you need to inspect or modify individual records.

Practical Use Cases

A comprehensive audit trail is one of the most valuable trigger applications. Here’s a production-ready implementation:

CREATE TABLE audit_log (
    id BIGSERIAL PRIMARY KEY,
    table_name TEXT NOT NULL,
    operation TEXT NOT NULL,
    user_name TEXT DEFAULT CURRENT_USER,
    changed_at TIMESTAMP DEFAULT NOW(),
    old_data JSONB,
    new_data JSONB
);

CREATE OR REPLACE FUNCTION audit_trigger_function()
RETURNS TRIGGER AS $$
BEGIN
    IF TG_OP = 'DELETE' THEN
        INSERT INTO audit_log (table_name, operation, old_data)
        VALUES (TG_TABLE_NAME, TG_OP, row_to_json(OLD)::jsonb);
        RETURN OLD;
    ELSIF TG_OP = 'UPDATE' THEN
        INSERT INTO audit_log (table_name, operation, old_data, new_data)
        VALUES (TG_TABLE_NAME, TG_OP, 
                row_to_json(OLD)::jsonb, 
                row_to_json(NEW)::jsonb);
        RETURN NEW;
    ELSIF TG_OP = 'INSERT' THEN
        INSERT INTO audit_log (table_name, operation, new_data)
        VALUES (TG_TABLE_NAME, TG_OP, row_to_json(NEW)::jsonb);
        RETURN NEW;
    END IF;
    RETURN NULL;  -- unreachable for INSERT/UPDATE/DELETE, but keeps every path covered
END;
$$ LANGUAGE plpgsql;

-- Apply to any table needing audit
CREATE TRIGGER audit_users
    AFTER INSERT OR UPDATE OR DELETE ON users
    FOR EACH ROW
    EXECUTE FUNCTION audit_trigger_function();

This single function handles all three operations using the special TG_OP variable, stores complete before/after snapshots as JSON, and automatically captures the database user and timestamp.
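
Once the trigger is in place, the JSONB snapshots can be queried directly with the ->> operator. For example, a sketch that lists every email change on users (adjust the key name to your schema):

SELECT user_name,
       changed_at,
       old_data->>'email' AS old_email,
       new_data->>'email' AS new_email
FROM audit_log
WHERE table_name = 'users'
  AND operation = 'UPDATE'
  AND old_data->>'email' IS DISTINCT FROM new_data->>'email'
ORDER BY changed_at DESC;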

Best Practices and Common Pitfalls

Keep trigger logic simple and fast. Complex business logic belongs in your application layer. Triggers execute within the transaction, so slow triggers delay commits and can create lock contention.

Avoid trigger chains that modify other tables with triggers—this creates cascading effects that are difficult to debug and can cause infinite loops. If table A’s trigger updates table B, and table B’s trigger updates table A, you’ll hit PostgreSQL’s recursion limit.
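
When a trigger legitimately needs to write to a table that also has triggers, the built-in pg_trigger_depth() function can act as a recursion guard. A sketch, where related_table and related_id are hypothetical:

CREATE OR REPLACE FUNCTION sync_without_recursion()
RETURNS TRIGGER AS $$
BEGIN
    -- pg_trigger_depth() returns the trigger nesting level;
    -- 1 means this trigger was fired directly by a user statement
    IF pg_trigger_depth() > 1 THEN
        RETURN NEW;  -- fired by another trigger: do nothing
    END IF;

    UPDATE related_table SET synced_at = NOW() WHERE id = NEW.related_id;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;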

Use conditional logic to prevent unnecessary work:

CREATE OR REPLACE FUNCTION smart_update_trigger()
RETURNS TRIGGER AS $$
BEGIN
    -- Only execute logic if specific columns changed
    IF NEW.status IS DISTINCT FROM OLD.status THEN
        -- Handle status change
        NEW.status_changed_at = NOW();
        
        IF NEW.status = 'active' THEN
            -- Specific logic for activation
            NEW.activated_count = OLD.activated_count + 1;
        END IF;
    END IF;
    
    -- Proper error handling
    IF NEW.email IS NULL THEN
        RAISE EXCEPTION 'Email cannot be null'
            USING HINT = 'Provide a valid email address';
    END IF;
    
    RETURN NEW;
EXCEPTION
    WHEN OTHERS THEN
        -- Log error details
        RAISE WARNING 'Trigger error: %', SQLERRM;
        RAISE;
END;
$$ LANGUAGE plpgsql;
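
The column check can also move into the trigger definition itself with a WHEN clause, so PostgreSQL skips the function call entirely when nothing relevant changed:

CREATE TRIGGER users_status_change
    BEFORE UPDATE ON users
    FOR EACH ROW
    WHEN (OLD.status IS DISTINCT FROM NEW.status)
    EXECUTE FUNCTION smart_update_trigger();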

Never assume trigger execution order when multiple triggers exist on the same event. PostgreSQL fires triggers in alphabetical order by trigger name, but relying on this is fragile. If order matters, combine logic into a single trigger function.

For debugging, use RAISE NOTICE to output variable values, and check PostgreSQL logs for trigger execution details.
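
A minimal debugging pattern, using the automatic variables TG_NAME, TG_OP, and TG_TABLE_NAME that PostgreSQL provides in every trigger function:

CREATE OR REPLACE FUNCTION debug_trigger()
RETURNS TRIGGER AS $$
BEGIN
    -- Emits a client-visible notice on every firing
    RAISE NOTICE 'trigger % fired: % on %, NEW = %',
        TG_NAME, TG_OP, TG_TABLE_NAME, NEW;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;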

Managing and Troubleshooting Triggers

View all triggers in your database:

-- List all triggers with their tables and functions
SELECT 
    trigger_name,
    event_object_table AS table_name,
    action_timing,
    event_manipulation AS event,
    action_statement
FROM information_schema.triggers
WHERE trigger_schema = 'public'
ORDER BY event_object_table, trigger_name;

-- More detailed view including trigger function source
SELECT 
    t.tgname AS trigger_name,
    c.relname AS table_name,
    p.proname AS function_name,
    pg_get_triggerdef(t.oid) AS trigger_definition
FROM pg_trigger t
JOIN pg_class c ON t.tgrelid = c.oid
JOIN pg_proc p ON t.tgfoid = p.oid
WHERE NOT t.tgisinternal
ORDER BY c.relname, t.tgname;

Temporarily disable triggers during bulk operations:

-- Disable specific trigger
ALTER TABLE users DISABLE TRIGGER users_email_change;

-- Disable all triggers on a table (ALL also covers internally generated
-- constraint triggers such as foreign keys, which requires superuser privileges)
ALTER TABLE users DISABLE TRIGGER ALL;

-- Re-enable
ALTER TABLE users ENABLE TRIGGER users_email_change;
ALTER TABLE users ENABLE TRIGGER ALL;

Drop triggers when no longer needed:

DROP TRIGGER IF EXISTS users_email_change ON users;
DROP FUNCTION IF EXISTS log_email_changes();

Triggers are powerful tools for maintaining data integrity and automating database-level logic. Use them judiciously for operations that must happen regardless of how data is modified, but keep the logic simple and monitor performance impact carefully.
