R - data.table Package Tutorial

Key Insights

  • data.table routinely delivers 10-100x speedups over base R data.frame operations through multithreaded C code and careful memory management
  • The DT[i, j, by] syntax combines filtering, computation, and grouping in a single operation, reducing code complexity and intermediate object creation
  • Reference semantics with the := operator enable in-place modifications without copying entire datasets, critical for working with large data

Why data.table Over data.frame

The data.table package addresses fundamental performance limitations in base R. While data.frame operations create full copies of data for each modification, data.table uses reference semantics and optimized C code to minimize memory overhead.

library(data.table)

# Create sample data
set.seed(123)
n <- 1000000
df <- data.frame(
  id = 1:n,
  group = sample(letters[1:5], n, replace = TRUE),
  value = rnorm(n)
)

dt <- as.data.table(df)

# Compare subsetting performance
system.time(df_result <- df[df$group == "a", ])
#   user  system elapsed 
#  0.045   0.008   0.053

system.time(dt_result <- dt[group == "a"])
#   user  system elapsed 
#  0.002   0.000   0.002

The performance gap widens dramatically with grouped operations and joins on large datasets.
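The gap is easiest to see on a grouped aggregation. A minimal sketch of the same comparison for a grouped mean (timings are machine-dependent, so treat the numbers as illustrative):

```r
library(data.table)

set.seed(123)
n <- 1000000
df <- data.frame(
  id = 1:n,
  group = sample(letters[1:5], n, replace = TRUE),
  value = rnorm(n)
)
dt <- as.data.table(df)

# Base R: aggregate() builds several intermediate objects
system.time(aggregate(value ~ group, data = df, FUN = mean))

# data.table: grouped mean runs in optimized, multithreaded C
system.time(dt[, mean(value), by = group])
```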

Core Syntax: i, j, by

The data.table syntax DT[i, j, by] maps to SQL-like operations: WHERE (i), SELECT/UPDATE (j), GROUP BY (by). This unified interface eliminates the need for multiple function calls.

# Sample sales data
sales <- data.table(
  date = as.Date("2024-01-01") + rep(0:29, each = 100),
  product = sample(c("A", "B", "C"), 3000, replace = TRUE),
  region = sample(c("North", "South", "East", "West"), 3000, replace = TRUE),
  revenue = runif(3000, 100, 1000),
  cost = runif(3000, 50, 500)
)

# Filter rows (i)
sales[revenue > 800]

# Select and compute columns (j)
sales[, .(total_revenue = sum(revenue), avg_profit = mean(revenue - cost))]

# Group by (by)
sales[, .(daily_revenue = sum(revenue)), by = date]

# Combine all three
sales[revenue > 500, 
      .(revenue = sum(revenue), transactions = .N), 
      by = .(product, region)]

The .() notation is an alias for list(), creating a list of expressions to evaluate.
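The equivalence is easy to verify on a toy table (a made-up two-column example, not the sales data above):

```r
library(data.table)

dt <- data.table(a = 1:4, b = c(10, 20, 30, 40))

# .(...) and list(...) produce identical results in j
r1 <- dt[, .(total = sum(b))]
r2 <- dt[, list(total = sum(b))]
identical(r1, r2)  # TRUE
```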

Reference Semantics with :=

The := operator modifies data.table columns by reference, avoiding memory copies. This is essential for large datasets where copying would exhaust memory.

# Add columns by reference
sales[, profit := revenue - cost]
sales[, margin := profit / revenue]

# Multiple columns simultaneously
sales[, `:=`(
  profit = revenue - cost,
  margin = (revenue - cost) / revenue,
  date_str = as.character(date)
)]

# Conditional updates
sales[product == "A", premium := TRUE]
sales[is.na(premium), premium := FALSE]

# Delete columns
sales[, date_str := NULL]

# Functional form for programmatic column names
col_name <- "profit_pct"
sales[, (col_name) := margin * 100]

Notice no assignment operator (<-) is needed. The modification happens in-place.
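One consequence worth internalizing: plain assignment does not copy a data.table, so a := applied through one name is visible through every other name bound to the same table. copy() makes a true duplicate. A minimal demonstration:

```r
library(data.table)

dt1 <- data.table(x = 1:3)
dt2 <- dt1           # plain assignment: both names point to the same table
dt2[, y := x * 2]    # := through dt2 modifies dt1 as well
"y" %in% names(dt1)  # TRUE

dt3 <- copy(dt1)     # explicit deep copy
dt3[, z := 0]        # := through dt3 leaves dt1 untouched
"z" %in% names(dt1)  # FALSE
```

Use copy() whenever a function should not mutate its caller's data.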

Advanced Grouping with .SD

.SD (Subset of Data) represents each group’s data within a by operation. Combined with .SDcols, it enables powerful column-wise operations.

# Apply function to multiple columns by group
sales[, lapply(.SD, mean), 
      by = product, 
      .SDcols = c("revenue", "cost", "profit")]

# First and last observation per group
sales[, .SD[c(1, .N)], by = product]

# Top 3 transactions by revenue per product
sales[order(-revenue), .SD[1:3], by = product]

# Rolling calculations within groups
sales[order(date), 
      rolling_avg := frollmean(revenue, n = 7), 
      by = product]

# Dynamic column selection
numeric_cols <- names(sales)[sapply(sales, is.numeric)]
sales[, lapply(.SD, function(x) sum(is.na(x))), .SDcols = numeric_cols]

Keys and Fast Subsetting

Setting keys enables blazing-fast binary search-based subsetting and joins. Keys automatically sort the data.table.

# Set key for fast filtering
setkey(sales, product, region)

# Binary search (extremely fast)
sales[.("A", "North")]
sales[.(c("A", "B"), "North")]

# Key-based joins
customers <- data.table(
  region = c("North", "South", "East", "West"),
  manager = c("Alice", "Bob", "Carol", "Dave"),
  target = c(50000, 60000, 55000, 65000)
)

setkey(customers, region)
setkey(sales, region)

# Join using keys
result <- sales[customers]

# Multiple key columns
products <- data.table(
  product = c("A", "B", "C"),
  category = c("Electronics", "Clothing", "Food"),
  supplier = c("Supplier1", "Supplier2", "Supplier3")
)

setkey(products, product)
setkey(sales, product)

enriched <- products[sales]

Chaining Operations

data.table supports method chaining for readable multi-step transformations without intermediate variables.

# Complex pipeline
result <- sales[revenue > 300][
  order(-date)
][
  , .(total_revenue = sum(revenue),
      avg_profit = mean(profit),
      transactions = .N),
  by = .(product, region)
][
  total_revenue > 10000
][
  order(-total_revenue)
]

# With intermediate calculations
sales[, profit_margin := profit / revenue][
  profit_margin > 0.3
][
  , .(high_margin_revenue = sum(revenue)), 
  by = product
]

Reshaping Data

data.table provides efficient alternatives to tidyr’s pivot functions through dcast and melt.

# Long to wide
wide <- dcast(sales, 
              date ~ product, 
              value.var = "revenue",
              fun.aggregate = sum)

# Multiple value columns
wide_multi <- dcast(sales,
                    date ~ product,
                    value.var = c("revenue", "profit"),
                    fun.aggregate = sum)

# Wide to long
long <- melt(sales,
             id.vars = c("date", "product"),
             measure.vars = c("revenue", "cost", "profit"),
             variable.name = "metric",
             value.name = "amount")

# Multiple value columns with patterns
melt(wide_multi,
     id.vars = "date",
     measure.vars = patterns("revenue_", "profit_"),
     value.name = c("revenue", "profit"),
     variable.name = "product")

Efficient Joins

data.table implements optimized join operations that outperform base R and dplyr for large datasets.

orders <- data.table(
  order_id = 1:1000,
  product = sample(c("A", "B", "C"), 1000, replace = TRUE),
  quantity = sample(1:10, 1000, replace = TRUE)
)

prices <- data.table(
  product = c("A", "B", "C"),
  unit_price = c(100, 150, 200)
)

# Inner join
orders[prices, on = "product", nomatch = NULL]

# Left join: X[Y] keeps every row of Y, so to keep all orders:
prices[orders, on = "product"]

# Add price column by reference
orders[prices, unit_price := i.unit_price, on = "product"]

# Non-equi joins
inventory <- data.table(
  product = rep(c("A", "B", "C"), each = 10),
  date = as.Date("2024-01-01") + rep(0:9, 3),
  stock = sample(50:200, 30, replace = TRUE)
)

# Non-equi join: match inventory rows recorded on or before each sale date
sales[inventory, on = .(product, date >= date)]
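For last-observation-carried-forward matching, data.table offers rolling joins via the roll argument (note that roll is only supported on exact-equality joins, not non-equi ones). A small sketch with hypothetical snapshot and lookup tables:

```r
library(data.table)

set.seed(1)
# Stock snapshots taken every 10 days
snapshots <- data.table(
  product = rep(c("A", "B"), each = 3),
  date = rep(as.Date("2024-01-01") + c(0, 10, 20), 2),
  stock = sample(50:200, 6)
)

# Dates we want a stock figure for, between snapshots
lookups <- data.table(
  product = c("A", "B"),
  date = as.Date(c("2024-01-15", "2024-01-25"))
)

# roll = TRUE carries the most recent snapshot on or before
# each lookup date forward
snapshots[lookups, on = .(product, date), roll = TRUE]
```

Here each lookup row receives the stock value from the latest snapshot at or before its date, a common pattern for as-of joins on time series.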

Performance Optimization Tips

# Use set() for cell-level updates inside loops; it skips the
# overhead of the full [.data.table machinery on every iteration
for (i in 1:10) {
  set(sales, i = i, j = "revenue", value = sales$revenue[i] * 1.1)
}

# Preallocate columns
sales[, new_col := numeric(.N)]

# Use fread() for fast file reading
large_file <- fread("data.csv", nThread = 4)

# Enable parallelization
setDTthreads(0)  # Use all available threads
getDTthreads()

# Memory efficient aggregation
sales[, .N, by = .(product, region)]  # Count rows
sales[, .(sum = sum(revenue)), keyby = product]  # keyby sorts result

# Avoid repeated passes over the table
# (func1, func2, and x here are placeholders)
# Bad: two separate [ calls, each a full pass
sales[, col1 := func1(x)]
sales[, col2 := func2(col1)]

# Good: one call, computing the intermediate result once
sales[, c("col1", "col2") := {
  tmp <- func1(x)
  list(tmp, func2(tmp))
}]

The data.table package represents a pragmatic approach to data manipulation in R, prioritizing performance without sacrificing code clarity. For production systems processing millions of rows, these optimizations translate directly to reduced compute costs and faster iteration cycles.
