Pandas - Read CSV File (read_csv)

Key Insights

  • read_csv() handles delimiters, encoding, data types, and missing values through 50+ parameters that control parsing behavior
  • Performance optimization techniques like usecols, dtype specification, and chunking can reduce memory usage by 80%+ on large datasets
  • Understanding index handling, date parsing, and compression support prevents common data loading errors in production pipelines

Basic CSV Reading

The read_csv() function reads comma-separated value files into DataFrame objects. The simplest invocation requires only a file path:

import pandas as pd

df = pd.read_csv('data.csv')
print(df.head())

For files with different delimiters, use the sep parameter:

# Tab-separated values
df = pd.read_csv('data.tsv', sep='\t')

# Pipe-delimited
df = pd.read_csv('data.txt', sep='|')

# Multiple whitespace characters (regex separator, so use a raw string)
df = pd.read_csv('data.txt', sep=r'\s+', engine='python')

The engine parameter switches between the C-based ('c') and Python-based ('python') parsers. The Python engine supports regex separators but is slower.
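As a quick self-contained illustration (using invented in-memory data in place of a real file), the regex separator works once the Python engine is selected:

```python
import io
import pandas as pd

# In-memory stand-in for a whitespace-delimited file
raw = "name  score\nalice   10\nbob     20\n"

# The regex separator r'\s+' requires the Python engine
df = pd.read_csv(io.StringIO(raw), sep=r'\s+', engine='python')
print(df)
```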

Handling Headers and Column Names

Control header behavior with header and names parameters:

# Skip first row, use second as header
df = pd.read_csv('data.csv', header=1)

# No header in file, provide column names
df = pd.read_csv('data.csv', header=None, names=['col1', 'col2', 'col3'])

# Skip header, assign custom names
df = pd.read_csv('data.csv', header=0, names=['new_col1', 'new_col2'])

# Multi-level column headers
df = pd.read_csv('data.csv', header=[0, 1])

Use skiprows to ignore specific rows:

# Skip first 3 rows
df = pd.read_csv('data.csv', skiprows=3)

# Skip specific row numbers
df = pd.read_csv('data.csv', skiprows=[0, 2, 5])

# Skip using callable
df = pd.read_csv('data.csv', skiprows=lambda x: x % 2 == 0)
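A self-contained sketch of how skiprows, header, and names interact, using io.StringIO and made-up data in place of data.csv:

```python
import io
import pandas as pd

# Illustrative file: a banner line, then headerless data rows
raw = "exported 2024-01-01\n1,apple\n2,banana\n3,cherry\n"

# Skip the banner, then supply column names since the file has no header
df = pd.read_csv(io.StringIO(raw), skiprows=1, header=None,
                 names=['id', 'fruit'])
print(df)
```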

Data Type Specification

Explicitly defining data types prevents automatic inference errors and improves performance:

dtype_dict = {
    'user_id': 'int32',
    'amount': 'float32',
    'category': 'category',
    'description': 'string'
}

df = pd.read_csv('transactions.csv', dtype=dtype_dict)

For mixed-type columns or problematic data:

# Force a numeric dtype on one column; clean another with a converter
df = pd.read_csv('data.csv', dtype={'col1': 'float'},
                 converters={'col2': lambda x: str(x).strip()})

# Keep as object type for manual processing
df = pd.read_csv('data.csv', dtype={'problematic_col': 'object'})

Using categorical types for low-cardinality columns dramatically reduces memory:

df = pd.read_csv('data.csv', dtype={'status': 'category'})

# Memory comparison
print(df.memory_usage(deep=True))
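To see the savings concretely, here is a sketch with synthetic data: the same low-cardinality column read once as plain strings and once as a category:

```python
import io
import pandas as pd

# 9,000 rows but only 3 distinct values: ideal for the category dtype
raw = "status\n" + "\n".join(['active', 'inactive', 'pending'] * 3000)

obj_df = pd.read_csv(io.StringIO(raw))  # strings land as object dtype
cat_df = pd.read_csv(io.StringIO(raw), dtype={'status': 'category'})

obj_mem = obj_df.memory_usage(deep=True)['status']
cat_mem = cat_df.memory_usage(deep=True)['status']
print(f"object: {obj_mem} bytes, category: {cat_mem} bytes")
```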

Date Parsing

Parse date columns during read for immediate datetime operations:

# Single date column
df = pd.read_csv('data.csv', parse_dates=['timestamp'])

# Multiple columns
df = pd.read_csv('data.csv', parse_dates=['created_at', 'updated_at'])

# Combine columns into a single datetime (deprecated since pandas 2.2;
# prefer combining after the read with pd.to_datetime)
df = pd.read_csv('data.csv', 
                 parse_dates={'datetime': ['date', 'time']})

# Custom date format
df = pd.read_csv('data.csv', 
                 parse_dates=['date'],
                 date_format='%d/%m/%Y %H:%M:%S')
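A runnable sketch of date_format (available in pandas 2.0+), with invented data standing in for data.csv:

```python
import io
import pandas as pd

raw = "date,value\n15/01/2024 08:30:00,1\n16/01/2024 09:00:00,2\n"

# An explicit format avoids slow per-row format inference
df = pd.read_csv(io.StringIO(raw), parse_dates=['date'],
                 date_format='%d/%m/%Y %H:%M:%S')
print(df['date'].dtype)
```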

For complex date formats, use converters:

from datetime import datetime

def parse_custom_date(date_str):
    return datetime.strptime(date_str, '%Y%m%d')

df = pd.read_csv('data.csv', 
                 converters={'date': parse_custom_date})
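Putting the converter to work on in-memory sample data (the compact YYYYMMDD format here is illustrative):

```python
import io
from datetime import datetime

import pandas as pd

raw = "date,value\n20240115,10\n20240116,20\n"

# The converter runs per cell during parsing, yielding datetime objects
df = pd.read_csv(io.StringIO(raw),
                 converters={'date': lambda s: datetime.strptime(s, '%Y%m%d')})
print(df['date'].iloc[0])
```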

Missing Value Handling

Control how missing values are identified and represented:

# Recognize custom NA values
df = pd.read_csv('data.csv', 
                 na_values=['NA', 'null', 'N/A', '-', ''])

# Different NA values per column
na_dict = {
    'col1': ['NA', 'missing'],
    'col2': ['-999', '0']
}
df = pd.read_csv('data.csv', na_values=na_dict)

# Keep default NA values and add custom ones
df = pd.read_csv('data.csv', 
                 na_values=['custom_na'],
                 keep_default_na=True)
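A minimal, self-contained example with invented data: '-' is declared as a custom marker, while 'N/A' is already on the default list:

```python
import io
import pandas as pd

raw = "id,amount\n1,100\n2,-\n3,N/A\n"

# '-' becomes NaN via na_values; 'N/A' is a default NA marker
df = pd.read_csv(io.StringIO(raw), na_values=['-'])
print(df['amount'].isna().sum())
```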

Index Configuration

Set index columns during read instead of post-processing:

# Single column index
df = pd.read_csv('data.csv', index_col='id')

# Multi-level index
df = pd.read_csv('data.csv', index_col=['category', 'subcategory'])

# Use first column as index
df = pd.read_csv('data.csv', index_col=0)
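For instance, with a small in-memory file standing in for data.csv:

```python
import io
import pandas as pd

raw = "id,name\n10,alice\n20,bob\n"

# 'id' becomes the index rather than a regular column
df = pd.read_csv(io.StringIO(raw), index_col='id')
print(df.loc[10, 'name'])
```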

Memory Optimization with Column Selection

Read only required columns to reduce memory footprint:

# Select specific columns
df = pd.read_csv('large_file.csv', 
                 usecols=['user_id', 'amount', 'timestamp'])

# Select using callable
df = pd.read_csv('large_file.csv',
                 usecols=lambda col: col.startswith('metric_'))

# Select by position
df = pd.read_csv('large_file.csv', usecols=[0, 2, 5])
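A quick sketch with made-up columns showing that only the requested columns are parsed and kept:

```python
import io
import pandas as pd

raw = "user_id,name,amount,notes\n1,alice,10,x\n2,bob,20,y\n"

# 'name' and 'notes' are never materialized in the DataFrame
df = pd.read_csv(io.StringIO(raw), usecols=['user_id', 'amount'])
print(list(df.columns))
```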

Chunking Large Files

Process files larger than available memory in chunks:

chunk_size = 10000
chunks = []

for chunk in pd.read_csv('huge_file.csv', chunksize=chunk_size):
    # Process each chunk
    filtered = chunk[chunk['amount'] > 1000]
    chunks.append(filtered)

df = pd.concat(chunks, ignore_index=True)
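When only an aggregate is needed, you can fold each chunk into a running total instead of collecting chunks. A self-contained sketch with synthetic data standing in for huge_file.csv:

```python
import io
import pandas as pd

# Small in-memory file standing in for a huge one
raw = "amount\n" + "\n".join(str(i) for i in range(1, 11))

# Aggregate per chunk; memory stays bounded by the chunk size
total = 0
for chunk in pd.read_csv(io.StringIO(raw), chunksize=3):
    total += chunk['amount'].sum()

print(total)  # sum of 1..10
```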

Alternative approach with iterator:

reader = pd.read_csv('huge_file.csv', iterator=True)
chunk = reader.get_chunk(5000)

# Process chunk, then release the file handle
print(chunk.describe())
reader.close()

Compression Support

Read compressed files directly without manual decompression:

# Automatic detection from extension
df = pd.read_csv('data.csv.gz')
df = pd.read_csv('data.csv.zip')
df = pd.read_csv('data.csv.bz2')

# Explicit compression type
df = pd.read_csv('data.csv.compressed', compression='gzip')

# ZIP archives must contain exactly one data file
df = pd.read_csv('archive.zip', compression='zip')

Encoding Handling

Specify encoding for non-UTF-8 files:

# Common encodings
df = pd.read_csv('data.csv', encoding='latin1')
df = pd.read_csv('data.csv', encoding='iso-8859-1')
df = pd.read_csv('data.csv', encoding='cp1252')

# Handle encoding errors
df = pd.read_csv('data.csv', 
                 encoding='utf-8',
                 encoding_errors='replace')  # or 'ignore'
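A self-contained illustration using io.BytesIO to simulate a Latin-1 file (the content is invented):

```python
import io
import pandas as pd

# 'café' encoded as Latin-1 bytes, as an old export might produce
raw_bytes = "name\ncafé\n".encode('latin1')

# Declaring the encoding decodes the bytes correctly
df = pd.read_csv(io.BytesIO(raw_bytes), encoding='latin1')
print(df['name'].iloc[0])
```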

Reading from URLs and S3

Read directly from remote sources:

# HTTP/HTTPS
url = 'https://example.com/data.csv'
df = pd.read_csv(url)

# S3 (requires s3fs)
df = pd.read_csv('s3://bucket-name/path/to/file.csv')

# With storage options
df = pd.read_csv('s3://bucket/file.csv',
                 storage_options={'key': 'access_key',
                                'secret': 'secret_key'})

Advanced Parsing Options

Handle edge cases with specialized parameters:

# Skip blank lines
df = pd.read_csv('data.csv', skip_blank_lines=True)

# Handle thousands separator
df = pd.read_csv('data.csv', thousands=',')

# Decimal separator
df = pd.read_csv('data.csv', decimal=',')

# Comment lines
df = pd.read_csv('data.csv', comment='#')

# Quote character
df = pd.read_csv('data.csv', quotechar='"', escapechar='\\')

# Number of rows to read
df = pd.read_csv('data.csv', nrows=1000)

Error Handling

Control behavior when encountering malformed rows:

# Silently skip bad lines
df = pd.read_csv('data.csv', on_bad_lines='skip')

# Emit a warning for each bad line, then skip it
df = pd.read_csv('data.csv', on_bad_lines='warn')

# Custom handler (Python engine only); receives the bad line's fields
def handle_bad_line(fields):
    print(f"Bad line: {fields}")
    return None  # returning None drops the line

df = pd.read_csv('data.csv', on_bad_lines=handle_bad_line, engine='python')
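Here is the same idea as a self-contained, runnable sketch (pandas 1.4+) with an intentionally malformed row:

```python
import io
import pandas as pd

# The second data row has an extra field
raw = "a,b\n1,2\n3,4,5\n6,7\n"

bad = []
def handle_bad_line(fields):
    # Receives the offending row's fields as a list of strings;
    # returning None drops the row
    bad.append(fields)
    return None

df = pd.read_csv(io.StringIO(raw), on_bad_lines=handle_bad_line,
                 engine='python')
print(len(df), bad)
```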

Performance Comparison

Optimize read performance with proper parameter selection:

import time

# Baseline
start = time.time()
df1 = pd.read_csv('large_file.csv')
print(f"Basic read: {time.time() - start:.2f}s")

# Optimized
start = time.time()
df2 = pd.read_csv('large_file.csv',
                  usecols=['col1', 'col2', 'col3'],
                  dtype={'col1': 'int32', 'col2': 'category'},
                  parse_dates=['col3'])
print(f"Optimized read: {time.time() - start:.2f}s")

print(f"Memory usage - Basic: {df1.memory_usage(deep=True).sum() / 1024**2:.2f} MB")
print(f"Memory usage - Optimized: {df2.memory_usage(deep=True).sum() / 1024**2:.2f} MB")

These techniques enable efficient CSV processing, from small files to datasets larger than memory, while maintaining data integrity and type safety.
