# Python - Append to File
## Key Insights

- Python offers multiple file modes for appending (`'a'`, `'a+'`, `'ab'`) that preserve existing content while adding new data to the end of files
- Context managers with `with` statements ensure proper file handling and automatic resource cleanup, preventing data corruption and resource leaks
- Appending to files requires understanding encoding, buffering, and atomic operations to maintain data integrity in production environments
## Basic Append Operations

The most straightforward way to append to a file uses the `'a'` mode with a context manager:

```python
with open('logfile.txt', 'a') as f:
    f.write('New log entry\n')
```

This approach automatically handles file closing, even if exceptions occur. The `'a'` mode creates the file if it doesn't exist and positions the file pointer at the end.
For multiple append operations:

```python
entries = [
    'Error: Connection timeout',
    'Warning: High memory usage',
    'Info: Process completed',
]

with open('application.log', 'a') as f:
    for entry in entries:
        f.write(f'{entry}\n')
```
## Append Mode Variants

Python provides several append modes with distinct behaviors:

```python
# 'a' - Append text mode
with open('data.txt', 'a') as f:
    f.write('Text content\n')

# 'a+' - Append and read
with open('data.txt', 'a+') as f:
    f.write('New line\n')
    f.seek(0)  # Move to beginning to read
    content = f.read()

# 'ab' - Append binary mode
with open('data.bin', 'ab') as f:
    f.write(b'\x00\x01\x02\x03')
```

The `'a+'` mode enables reading existing content while appending new data. However, you must explicitly seek to the desired position for read operations, since the file pointer starts at the end.
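This pointer behavior is easy to demonstrate directly (the filename below is illustrative):

```python
import os

path = 'aplus_demo.txt'
with open(path, 'w', encoding='utf-8') as f:
    f.write('first line\n')

with open(path, 'a+', encoding='utf-8') as f:
    f.write('second line\n')
    # The pointer sits at the end, so reading here returns an empty string
    assert f.read() == ''
    f.seek(0)  # rewind before reading
    content = f.read()

print(content)  # both lines
os.remove(path)
```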
## Appending with Encoding Control

Explicit encoding prevents issues when working with non-ASCII characters:

```python
import datetime

with open('events.log', 'a', encoding='utf-8') as f:
    timestamp = datetime.datetime.now().isoformat()
    f.write(f'{timestamp} - User: José García - Action: Login ✓\n')
```

Without an explicit encoding, Python uses the platform default, which causes problems when files move between systems:

```python
# Bad practice - encoding depends on system locale
with open('data.txt', 'a') as f:
    f.write('Данные\n')  # May fail on some systems

# Good practice - explicit encoding
with open('data.txt', 'a', encoding='utf-8') as f:
    f.write('Данные\n')  # Works consistently
```
## Buffering Strategies

Control when data is physically written out using the buffering parameter:

```python
# Line buffering - flushes after each newline (text mode only)
with open('realtime.log', 'a', buffering=1) as f:
    f.write('Critical alert\n')  # Flushed to the OS immediately

# Full buffering with a specific size (bytes)
with open('bulk_data.txt', 'a', buffering=8192) as f:
    for i in range(1000):
        f.write(f'Record {i}\n')  # Buffered until 8 KB accumulates

# Unbuffered (binary mode only)
with open('sensor.dat', 'ab', buffering=0) as f:
    f.write(b'\x42')  # Written immediately
```

For critical logging, force immediate writes:

```python
with open('transaction.log', 'a') as f:
    f.write('Payment processed: $500\n')
    f.flush()  # Force the data out of Python's buffer immediately
```
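Note that `flush()` only pushes Python's internal buffer to the operating system; the OS may still hold the data in its own cache. For durability against crashes or power loss, pair it with `os.fsync()` (a minimal sketch; the log name is illustrative):

```python
import os

with open('transaction.log', 'a', encoding='utf-8') as f:
    f.write('Payment processed: $500\n')
    f.flush()              # push Python's buffer to the OS
    os.fsync(f.fileno())   # ask the OS to commit the data to stable storage
```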
## Atomic Append Operations

For concurrent access scenarios, use an atomic replace so readers never see a partially written file:

```python
import os
import tempfile

def atomic_append(filepath, content):
    """Append content atomically using a temp file and rename."""
    # Create the temp file in the same directory so the final rename
    # stays on one filesystem (os.replace is atomic only within one)
    dir_name = os.path.dirname(filepath) or '.'
    with tempfile.NamedTemporaryFile(
        mode='w',
        dir=dir_name,
        delete=False,
        encoding='utf-8'
    ) as tmp_file:
        tmp_path = tmp_file.name
        # Copy existing content
        if os.path.exists(filepath):
            with open(filepath, 'r', encoding='utf-8') as original:
                tmp_file.write(original.read())
        # Append new content
        tmp_file.write(content)
    # Atomic rename, after the temp file has been flushed and closed
    os.replace(tmp_path, filepath)

# Usage
atomic_append('critical_data.txt', 'New transaction record\n')
```

Note that this pattern rewrites the entire file: it protects readers from partial writes, but concurrent writers can still overwrite each other's appends, so combine it with file locking when several processes write at once.
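For plain appends of short records there is often a simpler route: on POSIX systems, Python's `'a'` mode opens the file with the `O_APPEND` flag, so each `write` call lands at the current end of file even when several processes share it, and short single writes are in practice not interleaved. A sketch using the raw descriptor directly (the filename is illustrative):

```python
import os

def o_append_write(filepath, data: bytes):
    """Append via a raw O_APPEND descriptor; each write() lands at EOF."""
    fd = os.open(filepath, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
    try:
        os.write(fd, data)
    finally:
        os.close(fd)

o_append_write('append_demo.log', b'entry 1\n')
o_append_write('append_demo.log', b'entry 2\n')
```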
## Appending Structured Data

When appending JSON lines (JSONL format):

```python
import json

def append_json_record(filepath, record):
    """Append a JSON record to a JSONL file."""
    with open(filepath, 'a', encoding='utf-8') as f:
        json.dump(record, f, ensure_ascii=False)
        f.write('\n')

# Usage
records = [
    {'user_id': 101, 'action': 'login', 'timestamp': '2024-01-15T10:30:00'},
    {'user_id': 102, 'action': 'purchase', 'amount': 49.99}
]
for record in records:
    append_json_record('events.jsonl', record)
```
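One advantage of JSONL is that the file can be streamed back a record at a time without loading it all into memory. A small reader sketch (the filename and records are illustrative):

```python
import json

def iter_json_records(filepath):
    """Yield one dict per non-empty line of a JSONL file."""
    with open(filepath, 'r', encoding='utf-8') as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

# Demo: append two records, then stream them back
path = 'events_demo.jsonl'
for record in ({'user_id': 101}, {'user_id': 102}):
    with open(path, 'a', encoding='utf-8') as f:
        f.write(json.dumps(record, ensure_ascii=False) + '\n')

records = list(iter_json_records(path))
```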
For CSV files, use the csv module:

```python
import csv

def append_csv_row(filepath, row_data):
    """Append a row to a CSV file."""
    with open(filepath, 'a', newline='', encoding='utf-8') as f:
        writer = csv.writer(f)
        writer.writerow(row_data)

# Usage
append_csv_row('sales.csv', ['2024-01-15', 'Product A', 150.00])
```
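A common refinement is to write a header row only when the file is being created, so repeated appends don't duplicate it. A sketch (the filename and column names are illustrative):

```python
import csv
import os

def append_csv_row_with_header(filepath, row_data, header):
    """Append a row, writing the header first if the file is new or empty."""
    is_new = not os.path.exists(filepath) or os.path.getsize(filepath) == 0
    with open(filepath, 'a', newline='', encoding='utf-8') as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(header)
        writer.writerow(row_data)

append_csv_row_with_header(
    'sales_demo.csv',
    ['2024-01-15', 'Product A', 150.00],
    header=['date', 'product', 'amount'],
)
```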
## Error Handling and Recovery

Implement robust error handling for production systems:

```python
import logging
import os
import time
from pathlib import Path

def safe_append(filepath, content, max_retries=3):
    """Append with retry logic and error handling."""
    filepath = Path(filepath)
    for attempt in range(max_retries):
        try:
            # Ensure the parent directory exists
            filepath.parent.mkdir(parents=True, exist_ok=True)
            with open(filepath, 'a', encoding='utf-8') as f:
                f.write(content)
                f.flush()
                os.fsync(f.fileno())  # Force OS-level write
            return True
        except PermissionError:
            logging.error(f'Permission denied: {filepath}')
            return False
        except OSError as e:
            logging.warning(f'Attempt {attempt + 1} failed: {e}')
            if attempt == max_retries - 1:
                logging.error(f'Failed to append after {max_retries} attempts')
                return False
            time.sleep(0.1 * (attempt + 1))  # Back off a little longer each retry
    return False
```
## Performance Considerations

For high-volume append operations, batch writes instead of opening the file repeatedly:

```python
# Inefficient - opens the file 10,000 times
for i in range(10000):
    with open('output.txt', 'a') as f:
        f.write(f'Line {i}\n')

# Efficient - single file open
with open('output.txt', 'a') as f:
    for i in range(10000):
        f.write(f'Line {i}\n')
```

Use `writelines()` for multiple strings:

```python
lines = [f'Record {i}\n' for i in range(1000)]
with open('bulk_output.txt', 'a') as f:
    f.writelines(lines)  # Faster than looping with write()
```
## File Locking for Concurrent Access

When multiple processes append to the same file, implement file locking (note that the `fcntl` module is Unix-only):

```python
import fcntl

def append_with_lock(filepath, content):
    """Append with an exclusive file lock."""
    with open(filepath, 'a', encoding='utf-8') as f:
        # Acquire an exclusive lock (blocks until available)
        fcntl.flock(f.fileno(), fcntl.LOCK_EX)
        try:
            f.write(content)
            f.flush()
        finally:
            # Release the lock
            fcntl.flock(f.fileno(), fcntl.LOCK_UN)

# Usage in a multi-process environment
append_with_lock('shared.log', 'Process 1 log entry\n')
```
For Windows compatibility, use the portalocker library or platform-specific implementations. Understanding these patterns ensures reliable file append operations across different scenarios, from simple logging to high-concurrency production systems.
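A rough cross-platform fallback along those lines can be sketched as follows. This is an illustration, not a substitute for portalocker: `msvcrt.locking` locks a byte range starting at the current file position, which has subtly different semantics from `flock`.

```python
try:
    import fcntl  # POSIX

    def _lock(f):
        fcntl.flock(f.fileno(), fcntl.LOCK_EX)

    def _unlock(f):
        fcntl.flock(f.fileno(), fcntl.LOCK_UN)
except ImportError:  # Windows
    import msvcrt

    def _lock(f):
        msvcrt.locking(f.fileno(), msvcrt.LK_LOCK, 1)

    def _unlock(f):
        msvcrt.locking(f.fileno(), msvcrt.LK_UNLCK, 1)

def append_locked(filepath, content):
    """Append under an exclusive lock on either platform."""
    with open(filepath, 'a', encoding='utf-8') as f:
        _lock(f)
        try:
            f.write(content)
            f.flush()
        finally:
            _unlock(f)

append_locked('shared_demo.log', 'cross-platform entry\n')
```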