SQL - GENERATE_SERIES / Sequences

Key Insights

  • GENERATE_SERIES creates sequential numeric, date, or timestamp values without temporary tables or recursive CTEs, which makes it essential for filling gaps in time-series data and creating test datasets
  • PostgreSQL pioneered the function; DuckDB, CockroachDB, and SQL Server (since SQL Server 2022) now support it natively, while MySQL falls back on recursive CTEs and Oracle on CONNECT BY
  • Combining GENERATE_SERIES with joins and aggregations enables powerful analytics patterns: calendar table generation, gap detection, time bucketing, and data densification

What GENERATE_SERIES Does

GENERATE_SERIES generates a set of values from a start point to an end point with a specified step interval. Think of it as a programmatic loop that returns a table.

-- PostgreSQL/DuckDB: Generate numbers 1 through 10
SELECT * FROM GENERATE_SERIES(1, 10);

-- Result: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10

The function accepts up to three arguments: start, stop, and an optional step:

-- Generate even numbers
SELECT * FROM GENERATE_SERIES(0, 20, 2);
-- Result: 0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20

-- Generate countdown
SELECT * FROM GENERATE_SERIES(10, 1, -1);
-- Result: 10, 9, 8, 7, 6, 5, 4, 3, 2, 1
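In PostgreSQL, a numeric overload also accepts fractional steps, which is handy for generating sample points:

```sql
-- PostgreSQL: fractional steps via the numeric overload
SELECT * FROM GENERATE_SERIES(0, 1, 0.25);
-- Yields five values from 0 to 1 in steps of 0.25
```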

Date and Timestamp Sequences

The real power emerges with temporal data:

-- Generate daily dates for January 2024
SELECT GENERATE_SERIES(
    '2024-01-01'::DATE,
    '2024-01-31'::DATE,
    INTERVAL '1 day'
) AS date;

-- Generate hourly timestamps
SELECT GENERATE_SERIES(
    '2024-01-01 00:00:00'::TIMESTAMP,
    '2024-01-01 23:00:00'::TIMESTAMP,
    INTERVAL '1 hour'
) AS hour;

-- Generate monthly periods
SELECT GENERATE_SERIES(
    '2024-01-01'::DATE,
    '2024-12-01'::DATE,
    INTERVAL '1 month'
) AS month;
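One PostgreSQL quirk worth knowing: there is no DATE/INTERVAL overload, so the calls above resolve to the TIMESTAMP version and return timestamps even though the bounds are dates. Cast back when you need plain dates:

```sql
-- DATE bounds with an INTERVAL step return TIMESTAMPs in PostgreSQL; cast back to DATE
SELECT GENERATE_SERIES(
    '2024-01-01'::DATE,
    '2024-01-31'::DATE,
    INTERVAL '1 day'
)::DATE AS date;
```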

Filling Gaps in Time-Series Data

A common analytics problem: your data has gaps, but you need continuous series for visualization or calculations.

-- Sample sales data with gaps
CREATE TABLE sales (
    sale_date DATE,
    amount DECIMAL(10,2)
);

INSERT INTO sales VALUES
    ('2024-01-01', 100),
    ('2024-01-03', 150),
    ('2024-01-05', 200);

-- Fill gaps with zero values
SELECT 
    d.date,
    COALESCE(s.amount, 0) AS amount
FROM GENERATE_SERIES(
    '2024-01-01'::DATE,
    '2024-01-07'::DATE,
    INTERVAL '1 day'
) AS d(date)
LEFT JOIN sales s ON d.date = s.sale_date
ORDER BY d.date;

Result includes all dates with zeros for missing days:

2024-01-01 | 100.00
2024-01-02 | 0.00
2024-01-03 | 150.00
2024-01-04 | 0.00
2024-01-05 | 200.00
2024-01-06 | 0.00
2024-01-07 | 0.00

Building Calendar Tables

Calendar tables are fundamental for analytics. Generate one dynamically:

WITH calendar AS (
    SELECT 
        date,
        EXTRACT(YEAR FROM date) AS year,
        EXTRACT(MONTH FROM date) AS month,
        EXTRACT(DAY FROM date) AS day,
        EXTRACT(DOW FROM date) AS day_of_week,
        TO_CHAR(date, 'Day') AS day_name,
        TO_CHAR(date, 'Month') AS month_name,
        EXTRACT(QUARTER FROM date) AS quarter,
        CASE 
            WHEN EXTRACT(DOW FROM date) IN (0, 6) THEN TRUE 
            ELSE FALSE 
        END AS is_weekend
    FROM GENERATE_SERIES(
        '2024-01-01'::DATE,
        '2024-12-31'::DATE,
        INTERVAL '1 day'
    ) AS date
)
SELECT * FROM calendar
WHERE is_weekend = FALSE
LIMIT 10;

Time Bucketing and Aggregation

Group irregular timestamps into regular intervals:

CREATE TABLE events (
    event_time TIMESTAMP,
    user_id INT,
    event_type VARCHAR(50)
);

-- Sample irregular events
INSERT INTO events VALUES
    ('2024-01-01 08:15:00', 1, 'login'),
    ('2024-01-01 08:47:00', 2, 'login'),
    ('2024-01-01 09:23:00', 1, 'purchase'),
    ('2024-01-01 10:05:00', 3, 'login');

-- Bucket into hourly intervals; the LEFT JOIN keeps buckets with no events
SELECT 
    time_bucket,
    COUNT(e.event_time) AS event_count,  -- count matched events only, so empty buckets show 0 (COUNT(*) would show 1)
    COUNT(DISTINCT e.user_id) AS unique_users
FROM GENERATE_SERIES(
    '2024-01-01 08:00:00'::TIMESTAMP,
    '2024-01-01 11:00:00'::TIMESTAMP,
    INTERVAL '1 hour'
) AS time_bucket
LEFT JOIN events e 
    ON e.event_time >= time_bucket 
    AND e.event_time < time_bucket + INTERVAL '1 hour'
GROUP BY time_bucket
ORDER BY time_bucket;
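When you do not need empty buckets in the output, DATE_TRUNC alone is a simpler alternative (PostgreSQL/DuckDB syntax); the trade-off is that hours with no events simply do not appear:

```sql
-- Simpler bucketing when empty intervals can be omitted
SELECT 
    DATE_TRUNC('hour', event_time) AS time_bucket,
    COUNT(*) AS event_count,
    COUNT(DISTINCT user_id) AS unique_users
FROM events
GROUP BY DATE_TRUNC('hour', event_time)
ORDER BY time_bucket;
```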

Database-Specific Implementations

PostgreSQL and DuckDB

Native support with identical syntax:

SELECT * FROM GENERATE_SERIES(1, 5);

SQL Server 2022+

Introduced native GENERATE_SERIES:

SELECT value FROM GENERATE_SERIES(1, 10);

-- Date series requires different approach
SELECT DATEADD(DAY, value, '2024-01-01') AS date
FROM GENERATE_SERIES(0, 30);

SQL Server (Pre-2022)

Use recursive CTE:

WITH NumberSequence AS (
    SELECT 1 AS n
    UNION ALL
    SELECT n + 1
    FROM NumberSequence
    WHERE n < 100
)
SELECT n FROM NumberSequence
OPTION (MAXRECURSION 100);

MySQL

Recursive CTE approach:

WITH RECURSIVE dates AS (
    SELECT DATE('2024-01-01') AS date
    UNION ALL
    SELECT DATE_ADD(date, INTERVAL 1 DAY)
    FROM dates
    WHERE date < '2024-01-31'
)
SELECT * FROM dates;
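MySQL caps recursive CTEs at cte_max_recursion_depth iterations (1000 by default), so longer series need the limit raised for the session:

```sql
-- Raise MySQL's recursion limit before generating a long series
SET SESSION cte_max_recursion_depth = 100000;

WITH RECURSIVE numbers AS (
    SELECT 1 AS n
    UNION ALL
    SELECT n + 1 FROM numbers WHERE n < 50000
)
SELECT n FROM numbers;
```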

Oracle

Use CONNECT BY:

SELECT LEVEL AS n
FROM DUAL
CONNECT BY LEVEL <= 100;

-- Date series
SELECT DATE '2024-01-01' + LEVEL - 1 AS date
FROM DUAL
CONNECT BY LEVEL <= 31;

Detecting Gaps in Sequential Data

Identify missing values in sequences:

CREATE TABLE transactions (
    transaction_id INT,
    amount DECIMAL(10,2)
);

INSERT INTO transactions VALUES
    (1, 100), (2, 150), (3, 200), (5, 250), (7, 300);

-- Find missing transaction IDs
SELECT gs.id AS missing_id
FROM GENERATE_SERIES(
    (SELECT MIN(transaction_id) FROM transactions),
    (SELECT MAX(transaction_id) FROM transactions)
) AS gs(id)
LEFT JOIN transactions t ON gs.id = t.transaction_id
WHERE t.transaction_id IS NULL;

-- Result: 4, 6

Creating Test Data

Generate realistic test datasets:

-- Generate 1,000 random sales records (assumes an existing sales_test table with these columns)
INSERT INTO sales_test (sale_date, product_id, quantity, price)
SELECT 
    DATE '2024-01-01' + (random() * 365)::INT AS sale_date,
    (random() * 100)::INT + 1 AS product_id,
    (random() * 10)::INT + 1 AS quantity,
    (random() * 100)::DECIMAL(10,2) + 10 AS price
FROM GENERATE_SERIES(1, 1000);
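Cross joining two series is also a quick way to produce one row per combination, for example one row per product per day; this is the densification pattern mentioned earlier:

```sql
-- One row for every (product_id, day) pair, even before any sales exist
SELECT p.product_id, d.day::DATE AS day
FROM GENERATE_SERIES(1, 3) AS p(product_id)
CROSS JOIN GENERATE_SERIES(
    '2024-01-01'::DATE,
    '2024-01-03'::DATE,
    INTERVAL '1 day'
) AS d(day);
-- 9 rows: 3 products x 3 days
```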

Performance Considerations

GENERATE_SERIES is efficient for reasonable ranges, but every row of a huge sequence still has to be produced, so generate only what you need:

-- Bad: produces 1 billion rows
SELECT * FROM GENERATE_SERIES(1, 1000000000);

-- Still bad: the WHERE clause filters the output, but all rows are generated first
-- (note the alias: PostgreSQL's default column name is generate_series, not value)
SELECT value
FROM GENERATE_SERIES(1, 1000000000) AS gs(value)
WHERE value % 1000 = 0;

-- Better: use the step parameter so only the needed rows are generated
SELECT * FROM GENERATE_SERIES(1, 1000000000, 1000);

For very large date ranges, consider materializing calendar tables:

CREATE TABLE dim_calendar AS
SELECT 
    date,
    EXTRACT(YEAR FROM date) AS year,
    EXTRACT(MONTH FROM date) AS month
FROM GENERATE_SERIES(
    '2020-01-01'::DATE,
    '2030-12-31'::DATE,
    INTERVAL '1 day'
) AS date;

CREATE INDEX idx_calendar_date ON dim_calendar(date);

Combining with Window Functions

Calculate running totals over generated periods:

WITH daily_series AS (
    SELECT date
    FROM GENERATE_SERIES(
        '2024-01-01'::DATE,
        '2024-01-31'::DATE,
        INTERVAL '1 day'
    ) AS date
)
SELECT 
    ds.date,
    COALESCE(SUM(s.amount), 0) AS daily_amount,
    SUM(COALESCE(SUM(s.amount), 0)) OVER (
        ORDER BY ds.date 
        ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
    ) AS running_total
FROM daily_series ds
LEFT JOIN sales s ON ds.date = s.sale_date
GROUP BY ds.date  -- required: daily_amount is an aggregate; the window then sums the per-day totals
ORDER BY ds.date;

GENERATE_SERIES transforms complex data problems into straightforward SQL. Master it for time-series analytics, gap analysis, and report generation. The pattern of generating a complete series and left-joining your sparse data is fundamental to production analytics queries.

Liked this? There's more.

Every week: one practical technique, explained simply, with code you can use immediately.