SQL - Table Variables vs Temp Tables
Table variables and temporary tables serve similar purposes in SQL Server—providing temporary storage for intermediate results—but their internal implementations differ significantly.
Key Insights
- Table variables are stored in tempdb just as temp tables are, but clean themselves up automatically; they carry no statistics and support only indexes declared inline, making them best suited to small datasets (roughly under 100 rows)
- Temporary tables support full indexing, statistics, and parallel operations, delivering better performance for large datasets and complex queries despite higher overhead
- Transaction behavior differs critically: table variables persist through rollbacks while temp tables respect transaction boundaries, affecting error handling strategies
Understanding the Core Differences
Table variables are declared using DECLARE and exist in tempdb just like temporary tables, but SQL Server treats them as variables with special scoping rules. Temporary tables are created using CREATE TABLE with a # prefix and behave more like regular tables.
-- Table variable declaration
DECLARE @Orders TABLE (
    OrderID INT PRIMARY KEY,
    CustomerID INT,
    OrderDate DATETIME,
    TotalAmount DECIMAL(10,2)
);
-- Temporary table creation
CREATE TABLE #Orders (
    OrderID INT PRIMARY KEY,
    CustomerID INT,
    OrderDate DATETIME,
    TotalAmount DECIMAL(10,2)
);
The scoping differs immediately. Table variables exist only within the batch, stored procedure, or function where they’re declared. Temporary tables persist across batches within the same session and can even be accessed by nested stored procedures.
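A minimal sketch of the scoping difference (the procedure and table names here are hypothetical, and the example relies on deferred name resolution for the temp table reference inside the procedure):

-- A nested procedure can read a temp table created by its caller,
-- but it cannot see the caller's table variables.
CREATE PROCEDURE dbo.InnerProc
AS
BEGIN
    SELECT COUNT(*) AS RowsSeen FROM #Scratch; -- #Scratch belongs to the session
END;
GO
CREATE TABLE #Scratch (ID INT);
INSERT INTO #Scratch VALUES (1), (2);
EXEC dbo.InnerProc; -- succeeds: the nested proc sees the caller's temp table
-- A table variable declared in this batch would be invisible inside
-- dbo.InnerProc, and would also go out of scope after the next GO.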
Statistics and Query Optimization
SQL Server maintains statistics on temporary tables, enabling the query optimizer to make informed decisions about execution plans. Table variables receive no statistics; prior to SQL Server 2019's table variable deferred compilation, the optimizer simply assumes they contain exactly one row.
-- Populate a temp table with 10,000 rows
CREATE TABLE #LargeOrders (
    OrderID INT PRIMARY KEY,
    CustomerID INT,
    OrderDate DATETIME,
    TotalAmount DECIMAL(10,2)
);
INSERT INTO #LargeOrders
SELECT TOP 10000
    ROW_NUMBER() OVER (ORDER BY (SELECT NULL)),
    ABS(CHECKSUM(NEWID())) % 1000,
    DATEADD(DAY, -ABS(CHECKSUM(NEWID())) % 365, GETDATE()),
    ABS(CHECKSUM(NEWID())) % 10000
FROM sys.all_columns a CROSS JOIN sys.all_columns b;
-- Query using temp table (optimizer knows row count)
SELECT CustomerID, COUNT(*) as OrderCount
FROM #LargeOrders
GROUP BY CustomerID
HAVING COUNT(*) > 10;
Check the execution plan for this query, and you’ll see accurate row count estimates. Now compare with a table variable:
DECLARE @LargeOrders TABLE (
    OrderID INT PRIMARY KEY,
    CustomerID INT,
    OrderDate DATETIME,
    TotalAmount DECIMAL(10,2)
);
-- Same data insertion
INSERT INTO @LargeOrders
SELECT TOP 10000
    ROW_NUMBER() OVER (ORDER BY (SELECT NULL)),
    ABS(CHECKSUM(NEWID())) % 1000,
    DATEADD(DAY, -ABS(CHECKSUM(NEWID())) % 365, GETDATE()),
    ABS(CHECKSUM(NEWID())) % 10000
FROM sys.all_columns a CROSS JOIN sys.all_columns b;
-- Same query with table variable (optimizer assumes 1 row)
SELECT CustomerID, COUNT(*) as OrderCount
FROM @LargeOrders
GROUP BY CustomerID
HAVING COUNT(*) > 10;
The execution plan shows an estimated row count of 1 (on versions before SQL Server 2019, which introduced table variable deferred compilation), leading to suboptimal join strategies and memory grants. For 10,000 rows, this creates measurable performance degradation.
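One common mitigation on older versions is a statement-level recompile, which lets the optimizer see the table variable's actual row count at execution time. A sketch, reusing the @LargeOrders variable from above:

-- OPTION (RECOMPILE) forces the plan to be built when the actual
-- cardinality of @LargeOrders is known, at the cost of a recompile
-- on every execution.
SELECT CustomerID, COUNT(*) AS OrderCount
FROM @LargeOrders
GROUP BY CustomerID
HAVING COUNT(*) > 10
OPTION (RECOMPILE);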
Indexing Capabilities
Temporary tables support the full range of indexing options available to regular tables. You can add non-clustered indexes, included columns, filtered indexes, and columnstore indexes.
CREATE TABLE #SalesData (
    SaleID INT IDENTITY(1,1) PRIMARY KEY,
    ProductID INT,
    CustomerID INT,
    SaleDate DATETIME,
    Amount DECIMAL(10,2),
    Region VARCHAR(50)
);
-- Add multiple indexes
CREATE NONCLUSTERED INDEX IX_Product ON #SalesData(ProductID) INCLUDE (Amount);
CREATE NONCLUSTERED INDEX IX_Customer ON #SalesData(CustomerID);
CREATE NONCLUSTERED INDEX IX_Region_Date ON #SalesData(Region, SaleDate);
Table variables support only constraints defined inline during declaration. You can add a primary key or unique constraint, but you cannot add indexes after declaration:
DECLARE @SalesData TABLE (
    SaleID INT IDENTITY(1,1) PRIMARY KEY,
    ProductID INT,
    CustomerID INT,
    SaleDate DATETIME,
    Amount DECIMAL(10,2),
    Region VARCHAR(50),
    INDEX IX_Product NONCLUSTERED (ProductID) -- Inline index definition
);
-- This syntax is invalid - cannot add indexes post-declaration
-- CREATE INDEX IX_Customer ON @SalesData(CustomerID);
Starting with SQL Server 2014, you can define non-clustered indexes inline, but the syntax remains limited compared to temporary tables.
Transaction Behavior
Table variables exhibit unique transaction behavior that can be either advantageous or problematic depending on your use case. Changes to table variables are not rolled back when a transaction is rolled back.
BEGIN TRANSACTION;
DECLARE @AuditLog TABLE (
    LogID INT IDENTITY(1,1),
    Action VARCHAR(100),
    LogTime DATETIME DEFAULT GETDATE()
);
CREATE TABLE #ErrorLog (
    LogID INT IDENTITY(1,1),
    Action VARCHAR(100),
    LogTime DATETIME DEFAULT GETDATE()
);
INSERT INTO @AuditLog (Action) VALUES ('Starting process');
INSERT INTO #ErrorLog (Action) VALUES ('Starting process');
-- Simulate an error
INSERT INTO @AuditLog (Action) VALUES ('Before error');
INSERT INTO #ErrorLog (Action) VALUES ('Before error');
ROLLBACK TRANSACTION;
-- Table variable retains data
SELECT * FROM @AuditLog; -- Returns 2 rows
-- Temp table is empty
SELECT * FROM #ErrorLog; -- Returns 0 rows
This behavior makes table variables useful for logging operations that should persist regardless of transaction outcome, but it can cause unexpected results in error handling scenarios.
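A sketch of that logging pattern inside TRY/CATCH, reusing the @AuditLog variable declared above (the permanent table dbo.PermanentAuditLog is hypothetical):

BEGIN TRY
    BEGIN TRANSACTION;
    INSERT INTO @AuditLog (Action) VALUES ('Step 1 complete');
    -- ... work that may fail ...
    RAISERROR('Simulated failure', 16, 1);
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
    -- @AuditLog still holds every row written before the rollback,
    -- so the audit trail can be copied to a permanent table here.
    INSERT INTO dbo.PermanentAuditLog (Action, LogTime)
    SELECT Action, LogTime FROM @AuditLog;
END CATCH;

Had #ErrorLog been used for the audit trail instead, the rollback would have discarded every row written inside the transaction.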
Parallel Execution Plans
SQL Server will not generate parallel execution plans for queries that modify table variables. Temporary tables have no such restriction.
-- Temp table allows parallelism
CREATE TABLE #ParallelTest (
    ID INT PRIMARY KEY,
    Value INT
);
INSERT INTO #ParallelTest (ID, Value)
SELECT TOP 1000000
    ROW_NUMBER() OVER (ORDER BY (SELECT NULL)),
    ABS(CHECKSUM(NEWID())) % 1000000
FROM sys.all_columns a
CROSS JOIN sys.all_columns b
CROSS JOIN sys.all_columns c;
-- This can use parallel execution
SELECT Value, COUNT(*)
FROM #ParallelTest
GROUP BY Value;
The same operation with a table variable forces serial execution for modifications, though reads may still parallelize in some scenarios.
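For comparison, a sketch of the same load targeting a table variable; inspecting the actual execution plan should show the INSERT running serially:

DECLARE @ParallelTest TABLE (
    ID INT PRIMARY KEY,
    Value INT
);
-- This INSERT cannot use a parallel plan: modifications to
-- table variables always execute serially.
INSERT INTO @ParallelTest (ID, Value)
SELECT TOP 1000000
    ROW_NUMBER() OVER (ORDER BY (SELECT NULL)),
    ABS(CHECKSUM(NEWID())) % 1000000
FROM sys.all_columns a
CROSS JOIN sys.all_columns b
CROSS JOIN sys.all_columns c;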
Practical Decision Framework
Use table variables when:
- Working with small datasets (under 100 rows)
- The data is truly temporary and limited to a single batch
- You need automatic cleanup without explicit DROP statements
- You want changes to persist through transaction rollbacks
CREATE PROCEDURE ProcessSmallBatch
AS
BEGIN
    DECLARE @BatchItems TABLE (
        ItemID INT PRIMARY KEY,
        Status VARCHAR(20)
    );
    INSERT INTO @BatchItems
    SELECT TOP 50 ItemID, Status
    FROM WorkQueue
    WHERE ProcessedDate IS NULL;
    -- Process items
    UPDATE WorkQueue
    SET ProcessedDate = GETDATE()
    WHERE ItemID IN (SELECT ItemID FROM @BatchItems);
END;
Use temporary tables when:
- Working with larger datasets (hundreds or thousands of rows)
- You need complex indexing strategies
- Query performance depends on accurate statistics
- You need to access the data across multiple batches
- Parallel execution is important
CREATE PROCEDURE ProcessLargeDataset
AS
BEGIN
    CREATE TABLE #StagingData (
        RecordID INT PRIMARY KEY,
        DataValue VARCHAR(MAX),
        ProcessedFlag BIT DEFAULT 0
    );
    CREATE NONCLUSTERED INDEX IX_Processed
    ON #StagingData(ProcessedFlag)
    WHERE ProcessedFlag = 0;
    -- Load large dataset
    INSERT INTO #StagingData (RecordID, DataValue)
    SELECT RecordID, DataValue
    FROM LargeSourceTable
    WHERE LoadDate = CAST(GETDATE() AS DATE);
    -- Process in chunks with statistics support
    WHILE EXISTS (SELECT 1 FROM #StagingData WHERE ProcessedFlag = 0)
    BEGIN
        -- Complex processing logic
        UPDATE TOP (1000) #StagingData
        SET ProcessedFlag = 1
        WHERE ProcessedFlag = 0;
    END;
    DROP TABLE #StagingData;
END;
The choice between table variables and temporary tables directly impacts query performance, transaction behavior, and code maintainability. Understanding these differences allows you to select the appropriate construct for each specific scenario rather than defaulting to one approach universally.