SQL Cursor: Row-by-Row Processing
Key Insights
- Cursors enable row-by-row processing in SQL but come with significant performance overhead compared to set-based operations—use them only when business logic genuinely requires iterative processing that cannot be expressed declaratively
- Understanding cursor types (FORWARD_ONLY, STATIC, DYNAMIC, KEYSET) is critical for performance—FAST_FORWARD cursors are read-only and forward-only, offering the best performance when you must use cursors
- Most cursor use cases have superior set-based alternatives using CTEs, window functions, or temporary tables with batch processing that execute orders of magnitude faster
Introduction to SQL Cursors
SQL cursors are database objects that allow you to traverse and manipulate result sets one row at a time. They fundamentally contradict SQL’s set-based nature, which is designed to operate on entire result sets simultaneously. While the relational database engine excels at set-based operations, certain scenarios require procedural, row-by-row processing.
The performance difference between set-based and cursor-based approaches is substantial. Set-based operations leverage the query optimizer, indexes, and parallel execution plans. Cursors, by contrast, introduce overhead through context switching, row-by-row network round trips, and reduced optimization opportunities.
Here’s a simple comparison:
-- Set-based approach: Updates all matching rows in one operation
UPDATE Employees
SET Salary = Salary * 1.10
WHERE Department = 'Engineering'
AND PerformanceRating >= 4;
-- Cursor-based approach: Processes each row individually
DECLARE @EmployeeID INT, @CurrentSalary DECIMAL(10,2);
DECLARE salary_cursor CURSOR FOR
SELECT EmployeeID, Salary
FROM Employees
WHERE Department = 'Engineering'
AND PerformanceRating >= 4;
OPEN salary_cursor;
FETCH NEXT FROM salary_cursor INTO @EmployeeID, @CurrentSalary;
WHILE @@FETCH_STATUS = 0
BEGIN
UPDATE Employees
SET Salary = @CurrentSalary * 1.10
WHERE EmployeeID = @EmployeeID;
FETCH NEXT FROM salary_cursor INTO @EmployeeID, @CurrentSalary;
END;
CLOSE salary_cursor;
DEALLOCATE salary_cursor;
The set-based approach executes in milliseconds. The cursor approach might take seconds or minutes depending on row count. Use cursors only when the business logic cannot be expressed as a set operation.
Cursor Lifecycle and Syntax
Every cursor follows a five-stage lifecycle: DECLARE, OPEN, FETCH, CLOSE, and DEALLOCATE. Understanding these stages prevents resource leaks and performance issues.
DECLARE defines the cursor and its SELECT statement but doesn’t execute the query. OPEN executes the query and populates the result set. FETCH retrieves individual rows. CLOSE releases the current result set but keeps the cursor definition in memory. DEALLOCATE removes the cursor definition entirely.
Here’s a complete cursor template:
-- 1. DECLARE: Define cursor and query
DECLARE @EmployeeID INT;
DECLARE @FirstName NVARCHAR(50);
DECLARE @LastName NVARCHAR(50);
DECLARE @Salary DECIMAL(10,2);
DECLARE employee_cursor CURSOR FOR
SELECT EmployeeID, FirstName, LastName, Salary
FROM Employees
WHERE IsActive = 1
ORDER BY EmployeeID;
-- 2. OPEN: Execute query and populate result set
OPEN employee_cursor;
-- 3. FETCH: Retrieve first row
FETCH NEXT FROM employee_cursor
INTO @EmployeeID, @FirstName, @LastName, @Salary;
-- Process rows while fetch is successful
WHILE @@FETCH_STATUS = 0
BEGIN
-- Your row processing logic here
PRINT CONCAT(@EmployeeID, ': ', @FirstName, ' ', @LastName, ' - $', @Salary);
-- Fetch next row
FETCH NEXT FROM employee_cursor
INTO @EmployeeID, @FirstName, @LastName, @Salary;
END;
-- 4. CLOSE: Release result set
CLOSE employee_cursor;
-- 5. DEALLOCATE: Remove cursor definition
DEALLOCATE employee_cursor;
The @@FETCH_STATUS function reports the outcome of the most recent FETCH: 0 means success, -1 means the fetch failed or moved beyond the end of the result set, and -2 means the requested row is missing (for example, deleted out from under a KEYSET cursor). Always check this status in your WHILE loop condition.
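As a minimal sketch reusing the employee_cursor declared above, the two failure codes can also be handled explicitly rather than folded into a single zero check:

```sql
-- Sketch: loop until the cursor runs off the end (-1), skipping rows
-- that have disappeared (-2, possible with KEYSET cursors).
FETCH NEXT FROM employee_cursor INTO @EmployeeID, @FirstName, @LastName, @Salary;

WHILE @@FETCH_STATUS <> -1
BEGIN
    IF @@FETCH_STATUS = -2
        PRINT 'Row missing; skipping';
    ELSE
        PRINT CONCAT('Processing employee ', @EmployeeID);

    FETCH NEXT FROM employee_cursor INTO @EmployeeID, @FirstName, @LastName, @Salary;
END;
```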
Cursor Types and Scope
SQL Server offers four cursor types, each with different capabilities and performance characteristics:
FORWARD_ONLY (default): Scrolls only forward through the result set. Most efficient option.
STATIC: Creates a temporary copy of the result set in tempdb. Insensitive to changes in underlying tables.
DYNAMIC: Reflects all changes made to underlying data. Most flexible but slowest.
KEYSET: Reflects updates and deletes but not inserts. Maintains a set of keys in tempdb.
The FAST_FORWARD option combines FORWARD_ONLY with READ_ONLY for optimal performance.
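Declaring one only requires the option on the DECLARE line; this sketch reuses the Products table from the examples that follow:

```sql
-- FAST_FORWARD implies FORWARD_ONLY and READ_ONLY, so it cannot be
-- combined with SCROLL, FOR UPDATE, or the STATIC/KEYSET/DYNAMIC types.
DECLARE price_cursor CURSOR FAST_FORWARD FOR
    SELECT ProductID, Price
    FROM Products
    WHERE CategoryID = 1;

OPEN price_cursor;
-- ... FETCH loop as usual ...
CLOSE price_cursor;
DEALLOCATE price_cursor;
```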
Here’s a comparison of STATIC vs. DYNAMIC behavior:
-- STATIC cursor: Won't see changes made after OPEN
DECLARE static_cursor CURSOR STATIC FOR
SELECT ProductID, Price FROM Products WHERE CategoryID = 1;
OPEN static_cursor;
-- Another session updates a price
-- UPDATE Products SET Price = 29.99 WHERE ProductID = 101;
FETCH NEXT FROM static_cursor; -- Shows original price, not updated value
CLOSE static_cursor;
DEALLOCATE static_cursor;
-- DYNAMIC cursor: Sees all changes in real-time
DECLARE dynamic_cursor CURSOR DYNAMIC FOR
SELECT ProductID, Price FROM Products WHERE CategoryID = 1;
OPEN dynamic_cursor;
-- Another session updates a price
-- UPDATE Products SET Price = 29.99 WHERE ProductID = 101;
FETCH NEXT FROM dynamic_cursor; -- Shows updated price
CLOSE dynamic_cursor;
DEALLOCATE dynamic_cursor;
Cursor scope determines visibility across batches. LOCAL cursors exist only within the batch, stored procedure, or trigger that declared them, and are deallocated automatically when that scope ends. GLOBAL cursors persist across the connection until explicitly deallocated. Note that when neither keyword is specified, the default is governed by the database's CURSOR_DEFAULT option, which is GLOBAL unless changed, so declare LOCAL explicitly when that is what you want.
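A short sketch of the difference (both batches are assumed to run on the same connection):

```sql
-- LOCAL: visible only in the declaring batch, stored procedure, or
-- trigger, and automatically deallocated when that scope ends.
DECLARE local_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT EmployeeID FROM Employees;
GO

-- GLOBAL: reachable from any later batch on the same connection
-- until explicitly deallocated.
DECLARE global_cursor CURSOR GLOBAL FAST_FORWARD FOR
    SELECT EmployeeID FROM Employees;
GO

OPEN global_cursor;   -- works: the previous batch's GLOBAL cursor is still alive
CLOSE global_cursor;
DEALLOCATE global_cursor;
```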
Practical Use Cases
Despite performance concerns, cursors are sometimes justified. Here are legitimate scenarios:
Complex business logic with row-level context: Calculations that require maintaining state across rows or accessing external systems per row.
Administrative tasks: Database maintenance operations that must process objects individually.
Data migration with validation: ETL processes where each row requires validation before transformation.
Here’s a realistic example processing orders with complex discount logic:
DECLARE @OrderID INT, @CustomerID INT, @OrderTotal DECIMAL(10,2);
DECLARE @CustomerLifetimeValue DECIMAL(10,2), @DiscountPercent DECIMAL(5,2);
DECLARE @SeasonalMultiplier DECIMAL(3,2);
DECLARE order_cursor CURSOR FAST_FORWARD FOR
SELECT OrderID, CustomerID, OrderTotal
FROM Orders
WHERE OrderDate >= DATEADD(DAY, -30, GETDATE())
AND DiscountApplied IS NULL;
OPEN order_cursor;
FETCH NEXT FROM order_cursor INTO @OrderID, @CustomerID, @OrderTotal;
WHILE @@FETCH_STATUS = 0
BEGIN
-- Calculate customer lifetime value from separate table
SELECT @CustomerLifetimeValue = SUM(OrderTotal)
FROM Orders
WHERE CustomerID = @CustomerID
AND OrderStatus = 'Completed';
-- Get seasonal multiplier from configuration
-- (reset first: SELECT @var = ... leaves the variable unchanged
-- when no row matches, which would carry a stale value across rows)
SET @SeasonalMultiplier = NULL;
SELECT @SeasonalMultiplier = Multiplier
FROM SeasonalDiscounts
WHERE GETDATE() BETWEEN StartDate AND EndDate;
-- Complex discount logic based on multiple factors
SET @DiscountPercent =
CASE
WHEN @CustomerLifetimeValue > 10000 THEN 0.15
WHEN @CustomerLifetimeValue > 5000 THEN 0.10
WHEN @CustomerLifetimeValue > 1000 THEN 0.05
ELSE 0.02
END * ISNULL(@SeasonalMultiplier, 1.0);
-- Apply calculated discount
UPDATE Orders
SET DiscountApplied = @OrderTotal * @DiscountPercent,
DiscountPercent = @DiscountPercent,
ProcessedDate = GETDATE()
WHERE OrderID = @OrderID;
FETCH NEXT FROM order_cursor INTO @OrderID, @CustomerID, @OrderTotal;
END;
CLOSE order_cursor;
DEALLOCATE order_cursor;
This scenario justifies a cursor because the discount calculation depends on customer lifetime value, seasonal factors, and tiered logic that would be complex to express as a single set-based operation.
Performance Considerations and Alternatives
Cursors introduce substantial overhead. Each FETCH operation requires a round trip to the storage engine. The query optimizer cannot parallelize cursor operations effectively. Memory consumption increases with STATIC and KEYSET cursors.
Before implementing a cursor, explore these alternatives:
Common Table Expressions (CTEs) with window functions handle most aggregation and ranking scenarios.
Recursive CTEs process hierarchical data without cursors.
Temporary tables drained by a DELETE TOP pattern inside a WHILE loop handle batch processing without row-by-row fetches.
MERGE statements express complex insert/update logic as a single set-based operation.
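The temp-table pattern can be sketched as follows (a hypothetical example: staging Employees keys and stamping a LastReviewed column in chunks of 1,000):

```sql
-- Stage the keys to process.
SELECT EmployeeID
INTO #Pending
FROM Employees
WHERE IsActive = 1;

CREATE TABLE #Batch (EmployeeID INT PRIMARY KEY);

WHILE 1 = 1
BEGIN
    -- Peel off up to 1,000 keys; OUTPUT captures exactly the rows removed.
    DELETE TOP (1000) FROM #Pending
    OUTPUT deleted.EmployeeID INTO #Batch;

    IF @@ROWCOUNT = 0 BREAK;

    -- One set-based statement per batch instead of one per row.
    UPDATE e
    SET LastReviewed = GETDATE()
    FROM Employees e
    INNER JOIN #Batch b ON e.EmployeeID = b.EmployeeID;

    TRUNCATE TABLE #Batch;
END;

DROP TABLE #Pending;
DROP TABLE #Batch;
```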
Here’s a performance comparison for updating employee bonuses:
-- Cursor approach: ~8 seconds for 100,000 rows
DECLARE @EmpID INT, @Salary DECIMAL(10,2);
DECLARE bonus_cursor CURSOR FAST_FORWARD FOR
SELECT EmployeeID, Salary FROM Employees;
OPEN bonus_cursor;
FETCH NEXT FROM bonus_cursor INTO @EmpID, @Salary;
WHILE @@FETCH_STATUS = 0
BEGIN
UPDATE Employees
SET Bonus = @Salary * 0.10
WHERE EmployeeID = @EmpID;
FETCH NEXT FROM bonus_cursor INTO @EmpID, @Salary;
END;
CLOSE bonus_cursor;
DEALLOCATE bonus_cursor;
-- Set-based approach: ~0.2 seconds for 100,000 rows
UPDATE Employees
SET Bonus = Salary * 0.10;
The set-based approach is 40x faster. Even complex logic often has set-based solutions using CTEs:
WITH EmployeeMetrics AS (
SELECT
EmployeeID,
Salary,
PERCENT_RANK() OVER (PARTITION BY Department ORDER BY PerformanceScore) AS PercentileRank
FROM Employees
)
UPDATE e
SET Bonus =
CASE
WHEN em.PercentileRank >= 0.9 THEN e.Salary * 0.15
WHEN em.PercentileRank >= 0.7 THEN e.Salary * 0.10
ELSE e.Salary * 0.05
END
FROM Employees e
INNER JOIN EmployeeMetrics em ON e.EmployeeID = em.EmployeeID;
Best Practices and Common Pitfalls
Always clean up cursor resources. Unclosed cursors consume memory and locks. Undeallocated cursors persist until the connection closes.
Use FAST_FORWARD whenever you only need forward-only, read-only access. It’s significantly faster than other cursor types.
Avoid nested cursors. They multiply performance problems exponentially. Refactor to use temporary tables or set-based operations.
Implement proper error handling to ensure cleanup:
BEGIN TRY
DECLARE @ProductID INT, @Price DECIMAL(10,2);
DECLARE product_cursor CURSOR LOCAL FAST_FORWARD FOR
SELECT ProductID, Price FROM Products WHERE IsActive = 1;
OPEN product_cursor;
FETCH NEXT FROM product_cursor INTO @ProductID, @Price;
WHILE @@FETCH_STATUS = 0
BEGIN
-- Processing logic that might fail
IF @Price < 0
THROW 50000, 'Invalid price detected', 1;
UPDATE Products
SET LastReviewed = GETDATE()
WHERE ProductID = @ProductID;
FETCH NEXT FROM product_cursor INTO @ProductID, @Price;
END;
CLOSE product_cursor;
DEALLOCATE product_cursor;
END TRY
BEGIN CATCH
-- Ensure cursor cleanup even on error
IF CURSOR_STATUS('local', 'product_cursor') >= 0
BEGIN
CLOSE product_cursor;
DEALLOCATE product_cursor;
END;
-- Re-throw the error
THROW;
END CATCH;
Check cursor status with CURSOR_STATUS() before attempting to close or deallocate. This prevents errors when cleanup code runs multiple times.
Batch cursor processing when possible. Instead of committing after each row, process in batches of 100-1000 rows to reduce transaction overhead.
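One way to sketch batching without a cursor at all is UPDATE TOP in a loop, committing each chunk. This assumes unprocessed rows are identifiable, here by Bonus IS NULL:

```sql
DECLARE @Rows INT = 1;

WHILE @Rows > 0
BEGIN
    BEGIN TRANSACTION;

    -- Touch at most 1,000 rows per transaction; the WHERE clause
    -- excludes already-processed rows so the loop terminates.
    UPDATE TOP (1000) Employees
    SET Bonus = Salary * 0.10
    WHERE Bonus IS NULL;

    SET @Rows = @@ROWCOUNT;

    COMMIT TRANSACTION;
END;
```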
Document why you’re using a cursor. Future developers (including yourself) will appreciate understanding the justification for choosing row-by-row processing over set-based operations.