Pandas - GroupBy with Multiple Aggregations


Key Insights

  • GroupBy with multiple aggregations allows you to compute different statistical functions across columns simultaneously using agg() with dictionaries or named aggregations for cleaner, more maintainable code
  • Named aggregations (introduced in Pandas 0.25.0) provide explicit column naming and better readability compared to MultiIndex column outputs from traditional dictionary-based aggregations
  • Custom aggregation functions, including lambda expressions and user-defined functions, can be combined with built-in aggregators to handle complex business logic within a single GroupBy operation

Basic Multiple Aggregations with Dictionary Syntax

The most straightforward approach to multiple aggregations uses a dictionary mapping column names to aggregation functions. This method works well when you need different metrics for different columns.

import pandas as pd
import numpy as np

# Sample sales data
data = {
    'region': ['North', 'North', 'South', 'South', 'East', 'East'],
    'product': ['A', 'B', 'A', 'B', 'A', 'B'],
    'sales': [100, 150, 200, 120, 180, 90],
    'units': [10, 15, 20, 12, 18, 9],
    'returns': [2, 3, 1, 2, 1, 1]
}

df = pd.DataFrame(data)

# Apply different aggregations to different columns
result = df.groupby('region').agg({
    'sales': 'sum',
    'units': 'mean',
    'returns': 'max'
})

print(result)

Output:

        sales  units  returns
region                      
East      270   13.5        1
North     250   12.5        3
South     320   16.0        2

You can also apply multiple aggregation functions to a single column by passing a list:

result = df.groupby('region').agg({
    'sales': ['sum', 'mean', 'std'],
    'units': ['min', 'max']
})

print(result)

This creates a MultiIndex column structure, which can become unwieldy for complex aggregations.
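If you prefer flat column names after a list-based aggregation, one common workaround (a sketch on a reduced copy of the sample data, not the only option) is to join the two column levels with an underscore:

```python
import pandas as pd

df = pd.DataFrame({
    'region': ['North', 'North', 'South', 'South', 'East', 'East'],
    'sales': [100, 150, 200, 120, 180, 90],
    'units': [10, 15, 20, 12, 18, 9],
})

result = df.groupby('region').agg({
    'sales': ['sum', 'mean'],
    'units': ['min', 'max']
})

# Join the two MultiIndex levels into flat 'column_func' names
result.columns = ['_'.join(col) for col in result.columns]
print(result.columns.tolist())
# ['sales_sum', 'sales_mean', 'units_min', 'units_max']
```

This keeps the list syntax while producing the same flat output that named aggregations (covered next) give you directly.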

Named Aggregations for Clean Output

Named aggregations solve the MultiIndex column problem by allowing you to specify explicit output column names. This approach significantly improves code readability and makes downstream processing easier.

result = df.groupby('region').agg(
    total_sales=('sales', 'sum'),
    avg_sales=('sales', 'mean'),
    min_units=('units', 'min'),
    max_units=('units', 'max'),
    total_returns=('returns', 'sum')
)

print(result)

Output:

        total_sales  avg_sales  min_units  max_units  total_returns
region                                                              
East            270      135.0          9         18              2
North           250      125.0         10         15              5
South           320      160.0         12         20              3

Named aggregations use tuples where the first element is the column name and the second is the aggregation function. This pattern scales well for complex analyses:

# More complex business metrics
result = df.groupby(['region', 'product']).agg(
    revenue=('sales', 'sum'),
    avg_transaction=('sales', 'mean'),
    total_volume=('units', 'sum'),
    return_rate=('returns', lambda x: x.sum() / len(x))
)

print(result)
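When grouping by several keys, the result carries a MultiIndex on the rows. Calling reset_index() turns the group keys back into ordinary columns, which often simplifies downstream merges and exports (a small sketch on a subset of the sample data):

```python
import pandas as pd

df = pd.DataFrame({
    'region': ['North', 'North', 'South', 'South'],
    'product': ['A', 'B', 'A', 'B'],
    'sales': [100, 150, 200, 120],
})

# reset_index() converts the (region, product) MultiIndex into columns
result = df.groupby(['region', 'product']).agg(
    revenue=('sales', 'sum')
).reset_index()

print(result)
# columns: region, product, revenue; one row per group
```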

Custom Aggregation Functions

Beyond built-in functions like sum, mean, and std, you can use lambda functions or define custom aggregators for specialized calculations.

def coefficient_of_variation(x):
    """Calculate CV: standard deviation / mean"""
    return x.std() / x.mean() if x.mean() != 0 else 0

def weighted_average(values, weights):
    """Custom function requiring multiple columns"""
    return np.average(values, weights=weights)

# Using lambda and custom functions
result = df.groupby('region').agg(
    total_sales=('sales', 'sum'),
    sales_cv=('sales', coefficient_of_variation),
    sales_range=('sales', lambda x: x.max() - x.min()),
    unit_median=('units', 'median')
)

print(result)

For aggregations requiring multiple columns, use apply() with GroupBy:

def weighted_avg_price(group):
    return np.average(group['sales'] / group['units'], weights=group['units'])

# Selecting only the needed columns keeps the grouping key out of the
# applied frame (recent pandas versions warn when apply() touches it)
result = df.groupby('region')[['sales', 'units']].apply(weighted_avg_price)
print(result)

Or combine with agg() using a wrapper:

result = df.groupby('region')[['sales', 'units']].apply(
    lambda x: pd.Series({
        'total_sales': x['sales'].sum(),
        'avg_price': (x['sales'] / x['units']).mean(),
        'weighted_avg_price': np.average(x['sales'] / x['units'], weights=x['units'])
    })
)

print(result)

Conditional Aggregations and Filtering

You can implement conditional logic within aggregations to handle specific business rules or data conditions.

# Create more detailed dataset
np.random.seed(42)
df_detailed = pd.DataFrame({
    'region': np.random.choice(['North', 'South', 'East', 'West'], 100),
    'category': np.random.choice(['Electronics', 'Clothing', 'Food'], 100),
    'sales': np.random.randint(50, 500, 100),
    'discount': np.random.uniform(0, 0.3, 100),
    'profit_margin': np.random.uniform(0.1, 0.4, 100)
})

# Conditional aggregations
result = df_detailed.groupby('region').agg(
    total_sales=('sales', 'sum'),
    high_value_count=('sales', lambda x: (x > 300).sum()),
    avg_discount=('discount', 'mean'),
    high_discount_sales=('sales', lambda x: x[df_detailed.loc[x.index, 'discount'] > 0.2].sum()),
    profitable_items=('profit_margin', lambda x: (x > 0.25).sum())
)

print(result)
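Indexing back into df_detailed from inside the lambda works, but it couples the aggregation to the global frame. An alternative sketch precomputes the conditional values as a helper column first, so a plain 'sum' does the rest (illustrated on a tiny hypothetical frame):

```python
import pandas as pd

df = pd.DataFrame({
    'region': ['North', 'North', 'South'],
    'sales': [100, 400, 250],
    'discount': [0.25, 0.10, 0.22],
})

# Precompute the masked values: keep sales where discount > 0.2, else 0
df['high_discount_sales'] = df['sales'].where(df['discount'] > 0.2, 0)

# Now an ordinary named aggregation suffices, no cross-frame lookup
result = df.groupby('region').agg(
    total_sales=('sales', 'sum'),
    high_discount_sales=('high_discount_sales', 'sum'),
)
print(result)
```

The helper-column version is also easier to test, since the conditional logic is visible as data before any grouping happens.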

For more complex filtering, use filter() before aggregation:

# Filter groups before aggregation
high_performers = df_detailed.groupby('region').filter(
    lambda x: x['sales'].mean() > 250
)

result = high_performers.groupby('region').agg(
    avg_sales=('sales', 'mean'),
    count=('sales', 'size')
)

print(result)
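A related tool worth knowing: filter() drops whole groups, while transform() broadcasts a group statistic back onto the original rows, giving you a reusable per-row mask instead (a minimal sketch on hypothetical data):

```python
import pandas as pd

df = pd.DataFrame({
    'region': ['North', 'North', 'South', 'South'],
    'sales': [300, 320, 100, 120],
})

# transform('mean') returns a Series aligned with df's original index
group_mean = df.groupby('region')['sales'].transform('mean')

# Keep rows belonging to groups whose mean exceeds the threshold
high_performers = df[group_mean > 250]
print(high_performers['region'].unique().tolist())  # ['North']
```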

Performance Optimization Techniques

When working with large datasets, aggregation performance matters. Here are optimization strategies:

# Prefer string aliases: 'sum', 'mean', etc. dispatch to pandas'
# optimized Cython implementations; passing np.sum/np.mean directly
# adds no speed and is deprecated in recent pandas versions
result = df_detailed.groupby('region').agg(
    total_sales=('sales', 'sum'),
    avg_sales=('sales', 'mean'),
    std_sales=('sales', 'std')
)

# Avoid repeated calculations - compute once, reuse
sales_by_region = df_detailed.groupby('region')['sales']
result = pd.DataFrame({
    'total': sales_by_region.sum(),
    'mean': sales_by_region.mean(),
    'std': sales_by_region.std()
})
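Another small knob, sketched below: groupby() sorts the group keys by default, and sort=False skips that work, preserving first-seen group order. Whether this matters depends on your data size and key cardinality:

```python
import pandas as pd

df = pd.DataFrame({
    'region': ['North', 'South', 'North', 'East'],
    'sales': [10, 20, 30, 40],
})

# sort=False skips sorting the group keys; groups keep first-seen order
result = df.groupby('region', sort=False)['sales'].sum()
print(result.index.tolist())  # ['North', 'South', 'East']
print(result.tolist())        # [40, 20, 40]
```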

# For very large datasets, consider using numba-compiled functions
from numba import jit

@jit(nopython=True)
def fast_custom_metric(arr):
    total = 0.0
    for val in arr:
        if val > 100:
            total += val * 1.1
        else:
            total += val
    return total

# Note: Direct numba functions need wrapper for GroupBy
result = df_detailed.groupby('region')['sales'].agg(
    lambda x: fast_custom_metric(x.values)
)

Combining Multiple GroupBy Levels

Multi-level grouping with different aggregations per level provides hierarchical insights:

# Multi-level grouping with comprehensive aggregations
result = df_detailed.groupby(['region', 'category']).agg(
    transaction_count=('sales', 'size'),
    total_revenue=('sales', 'sum'),
    avg_transaction=('sales', 'mean'),
    revenue_std=('sales', 'std'),
    max_discount=('discount', 'max'),
    avg_margin=('profit_margin', 'mean')
).round(2)

print(result.head(10))

# Aggregate at different levels
region_summary = result.groupby(level='region').agg({
    'transaction_count': 'sum',
    'total_revenue': 'sum',
    'avg_transaction': 'mean'
})

print(region_summary)
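Another way to view a two-level result is to unstack one level into columns, turning the long output into a region-by-category matrix (a sketch on a small hypothetical frame):

```python
import pandas as pd

df = pd.DataFrame({
    'region': ['North', 'North', 'South', 'South'],
    'category': ['Food', 'Clothing', 'Food', 'Clothing'],
    'sales': [100, 150, 200, 120],
})

# unstack('category') pivots that index level into columns
result = df.groupby(['region', 'category'])['sales'].sum().unstack('category')
print(result)
# rows: region; columns: Clothing, Food
```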

This approach enables drill-down analysis while maintaining aggregated views at multiple granularities, essential for comprehensive data analysis workflows.
