Linux Memory Management: free, vmstat, and /proc/meminfo
Key Insights
- Linux aggressively uses “free” memory for caching, making traditional memory metrics misleading—focus on “available” memory, not “free” memory, to understand actual resource constraints.
- The free command provides snapshots, vmstat reveals memory pressure through swap activity, and /proc/meminfo offers granular data for deep analysis—use all three tools together for complete visibility.
- High memory usage is normal Linux behavior; problems manifest as sustained swap I/O (si/so in vmstat), growing Dirty pages, or OOM killer events, not simply low free memory.
Understanding Linux Memory Management Fundamentals
Linux treats RAM as a resource to be fully utilized, not conserved. This philosophy confuses administrators coming from other operating systems where free memory is considered healthy. The kernel uses “unused” memory for page cache and buffers, dramatically improving I/O performance by keeping recently accessed files in RAM. When applications need memory, the kernel instantly reclaims these caches.
This design means you’ll rarely see significant “free” memory on a healthy Linux system. Instead, you need to understand the difference between memory that’s technically in use (cached data) and memory that’s genuinely unavailable for new processes. The three tools we’ll explore—free, vmstat, and /proc/meminfo—each provide different perspectives on this complex picture.
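You can watch this reclaim-friendly caching from the shell. A quick sketch (the log path is just an example; any large readable file works): reading data grows buff/cache rather than used memory:

```shell
#!/bin/bash
# Column 6 of free's "Mem:" row is buff/cache (MiB with -m).
before=$(free -m | awk 'NR==2 {print $6}')
cat /var/log/syslog > /dev/null 2>&1   # example file; substitute any large file
after=$(free -m | awk 'NR==2 {print $6}')
echo "buff/cache: ${before} MiB -> ${after} MiB"
```

If the file was already cached, the numbers barely move—that is the cache doing its job.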
The free Command: Your First Line of Defense
The free command provides a human-readable snapshot of memory usage. Here’s what you’ll typically see:
$ free -h
total used free shared buff/cache available
Mem: 15Gi 8.2Gi 1.1Gi 324Mi 6.2Gi 6.8Gi
Swap: 8.0Gi 512Mi 7.5Gi
Let’s decode each column:
- total: Physical RAM installed in your system
- used: Memory consumed by processes and the kernel (excluding caches)
- free: Completely unused memory (usually small)
- shared: Memory used by tmpfs filesystems
- buff/cache: Memory used for buffers and page cache (reclaimable)
- available: Realistic estimate of memory available for new processes without swapping
The critical metric is available, not free. In the example above, despite only 1.1GB being technically “free,” the system has 6.8GB available because the kernel can instantly reclaim most of the cache.
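For scripts, the available column turns into a percentage directly. A sketch assuming the standard free layout, where column 2 is total and column 7 is available on the Mem: row:

```shell
# Percentage of RAM realistically available for new allocations.
free -m | awk 'NR==2 {printf "available: %.1f%%\n", $7/$2*100}'
```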
Monitor memory over time with continuous updates:
$ free -h -s 5
This refreshes every 5 seconds, helping you spot trends. For scripting and precise monitoring, use -m for megabytes or -g for gigabytes:
$ free -m
total used free shared buff/cache available
Mem: 15872 8396 1089 324 6386 6945
Swap: 8192 512 7680
The swap line shows swap space usage. Swap presence doesn’t indicate problems—swap I/O activity does. A system might have allocated swap without actively swapping pages in and out.
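To see which devices or files back your swap and how much of each is in use, util-linux provides swapon --show; /proc/swaps is the raw kernel source for the same data:

```shell
# List configured swap areas with size and current usage.
swapon --show
# Raw kernel view:
cat /proc/swaps
```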
Using vmstat for Dynamic Memory Analysis
While free provides snapshots, vmstat reveals memory behavior over time. This tool shows system-wide statistics including memory, swap, I/O, and CPU activity.
$ vmstat 1 10
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
2 0 524288 1115648 102400 6553600 0 0 5 12 150 320 5 2 93 0 0
1 0 524288 1112576 102400 6556672 0 0 0 128 245 580 8 3 89 0 0
0 0 524288 1109504 102400 6559744 0 0 0 0 198 445 4 1 95 0 0
This runs for 10 iterations at 1-second intervals. Focus on these memory-related fields:
- swpd: Swap space currently used (in KB)
- free: Idle memory
- buff: Buffer cache
- cache: Page cache
- si: Memory swapped in from disk (KB/s)
- so: Memory swapped out to disk (KB/s)
The swap I/O columns (si/so) are critical. Non-zero values indicate memory pressure—the kernel is moving pages between RAM and disk. Occasional small values are normal, but sustained swap activity kills performance.
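A quick check for current swap activity: take the second vmstat sample (the first line reports averages since boot) and test the si/so columns, which are fields 7 and 8 in the default layout. A sketch:

```shell
# Flag any swap I/O in the latest one-second sample.
vmstat 1 2 | tail -1 | \
  awk '{if ($7 + $8 > 0) print "swap activity:", $7, "in,", $8, "out (KB/s)"; else print "no swap I/O"}'
```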
For a comprehensive memory summary:
$ vmstat -s
16273408 K total memory
8597504 K used memory
9235456 K active memory
5632000 K inactive memory
1115648 K free memory
102400 K buffer memory
6557856 K swap cache
8388608 K total swap
524288 K used swap
7864320 K free swap
View active versus inactive memory pages:
$ vmstat -a
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free inact active si so bi bo in cs us sy id wa st
1 0 524288 1115648 5632000 9235456 0 0 5 12 150 320 5 2 93 0 0
Active memory contains pages recently accessed; inactive pages are candidates for reclamation. High inactive memory with low free memory is perfectly healthy—it’s the kernel’s cache working as designed.
Deep Dive into /proc/meminfo
Both free and vmstat read from /proc/meminfo, the kernel’s raw memory statistics interface. Accessing it directly provides maximum detail:
$ cat /proc/meminfo
MemTotal: 16273408 kB
MemFree: 1115648 kB
MemAvailable: 7110656 kB
Buffers: 102400 kB
Cached: 6455456 kB
SwapCached: 98304 kB
Active: 9235456 kB
Inactive: 5632000 kB
Active(anon): 5312000 kB
Inactive(anon): 2048000 kB
Active(file): 3923456 kB
Inactive(file): 3584000 kB
Dirty: 51200 kB
Writeback: 0 kB
Slab: 524288 kB
SReclaimable: 312000 kB
SUnreclaim: 212288 kB
Key fields explained:
- MemAvailable: Kernel’s estimate of available memory (added in Linux 3.14)
- Buffers: Metadata for block devices
- Cached: Page cache for file contents
- Dirty: Modified pages waiting to be written to disk
- Writeback: Pages currently being written to disk
- Slab: Kernel data structures (SReclaimable can be freed under pressure)
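These fields combine into a rough estimate of how much memory the kernel could reclaim under pressure (page cache plus reclaimable slab). Treat it as a sketch—MemAvailable is the authoritative figure, since it also accounts for watermarks and unreclaimable portions of the cache:

```shell
# Rough reclaimable total: page cache + reclaimable slab, in MB.
awk '/^Cached:/ {c=$2} /^SReclaimable:/ {s=$2} END {printf "roughly reclaimable: %d MB\n", (c+s)/1024}' /proc/meminfo
```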
Extract specific metrics with grep:
$ grep -E 'MemTotal|MemFree|MemAvailable|Dirty|Writeback' /proc/meminfo
MemTotal: 16273408 kB
MemFree: 1115648 kB
MemAvailable: 7110656 kB
Dirty: 51200 kB
Writeback: 0 kB
Calculate memory usage percentage:
$ awk '/MemTotal/ {total=$2} /MemAvailable/ {avail=$2} END {used=total-avail; printf "Memory Usage: %.1f%%\n", (used/total)*100}' /proc/meminfo
Memory Usage: 56.3%
Practical Troubleshooting Scenarios
Identifying Memory Pressure
Combine tools for comprehensive monitoring:
$ watch -n 1 'free -h && echo && vmstat 1 2 | tail -1'
This updates every second, showing both free output and the latest vmstat line. Look for:
- Decreasing “available” memory in free
- Non-zero si/so values in vmstat
- Increasing swap usage
Detecting Swap Thrashing
Swap thrashing occurs when the system constantly moves pages between RAM and disk:
$ vmstat 1
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
3 2 2048000 102400 51200 512000 4096 4096 8192 8192 450 920 20 30 30 20 0
4 1 2150000 98304 51200 480000 5120 3072 6144 4096 520 1050 25 35 25 15 0
Sustained high si/so values (measured in KB/s) combined with high wa (I/O wait) indicate thrashing. This requires either adding RAM, reducing workload, or optimizing memory-hungry applications.
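Before adding hardware, check vm.swappiness, which controls how eagerly the kernel swaps anonymous pages versus dropping cache (default 60). Lowering it is a common first mitigation; a sketch:

```shell
# Current swappiness (0-200 on recent kernels; default 60).
cat /proc/sys/vm/swappiness
# Lower it temporarily (requires root); persist via /etc/sysctl.d/ if it helps:
# sudo sysctl vm.swappiness=10
```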
Finding Memory-Hungry Processes
Identify top memory consumers:
$ ps aux --sort=-%mem | head -10
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
postgres 1234 2.5 15.3 4096000 2489344 ? Ss 10:23 5:42 /usr/bin/postgres
java 5678 8.2 12.1 8192000 1966080 ? Sl 09:15 25:18 /usr/bin/java -Xmx2g
The RSS (Resident Set Size) column shows actual RAM usage. For deeper analysis:
$ grep -E 'VmRSS|VmSwap' /proc/1234/status
VmRSS: 2489344 kB
VmSwap: 51200 kB
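Rather than checking PIDs one at a time, you can scan the VmSwap lines across /proc to rank processes by swapped-out memory. A sketch (VmSwap requires a reasonably modern kernel; processes that exit mid-scan are silently skipped):

```shell
# Top 5 processes by swap usage (VmSwap, KB).
for f in /proc/[0-9]*/status; do
    awk '/^Name:/ {n=$2} /^VmSwap:/ && $2 > 0 {print $2 " KB", n}' "$f" 2>/dev/null
done | sort -rn | head -5
```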
Custom Monitoring Script
Create a simple monitoring script combining all three tools:
#!/bin/bash
echo "=== Memory Report $(date) ==="
echo
echo "--- Free Memory ---"
free -h
echo
echo "--- Memory Statistics (5 samples) ---"
vmstat 1 5
echo
echo "--- Critical /proc/meminfo Values ---"
grep -E 'MemTotal|MemAvailable|SwapTotal|SwapFree|Dirty|Writeback' /proc/meminfo
echo
echo "--- Top 5 Memory Consumers ---"
ps aux --sort=-%mem | head -6
Best Practices and Monitoring Tips
Establish baselines: Run free -h and vmstat 5 12 during normal operations to understand your system’s typical memory profile. High cache usage is expected; focus on available memory trends.
Set meaningful alerts: Don’t alert on low free memory. Alert on:
- Available memory below 10% of total
- Sustained swap I/O (si/so > 1000 KB/s for more than 1 minute)
- Dirty pages exceeding 20% of RAM
- OOM killer events in system logs
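The first of these thresholds is easy to script against /proc/meminfo. A minimal sketch suitable for cron (the 10% cutoff is an assumption here; tune it to your workload):

```shell
#!/bin/bash
# Alert when MemAvailable drops below 10% of MemTotal.
total=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
avail=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)
pct=$((avail * 100 / total))
if [ "$pct" -lt 10 ]; then
    echo "ALERT: only ${pct}% of memory available"
fi
```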
Monitor Dirty pages: Excessive dirty pages indicate I/O bottlenecks:
$ watch -n 1 "grep -E 'Dirty|Writeback' /proc/meminfo"
If Dirty consistently exceeds several hundred MB, your storage can’t keep up with write demands.
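The kernel's writeback thresholds explain when dirty data starts to hurt: background writeback begins at vm.dirty_background_ratio percent of reclaimable memory, and writers block outright at vm.dirty_ratio. Check both alongside the current Dirty total:

```shell
# Writeback trigger points (percent of reclaimable memory).
cat /proc/sys/vm/dirty_background_ratio /proc/sys/vm/dirty_ratio
# Current amount of dirty data, in MB.
awk '/^Dirty:/ {printf "%.1f MB dirty\n", $2/1024}' /proc/meminfo
```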
Understand OOM behavior: When the kernel exhausts memory, the Out-Of-Memory killer terminates processes. Check logs:
$ dmesg | grep -i "out of memory"
$ journalctl -k | grep -i "oom"
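The OOM killer picks its victim by a per-process badness score, which you can inspect directly (shown here for the current shell via $$; substitute any PID):

```shell
# Higher score = more likely to be killed under OOM.
cat /proc/$$/oom_score
# Tunable bias, -1000 (never kill) to 1000 (kill first):
cat /proc/$$/oom_score_adj
```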
Historical analysis: Log memory metrics for trend analysis:
$ while true; do echo "$(date +%s),$(awk '/MemAvailable/ {print $2}' /proc/meminfo)" >> /var/log/memlog.csv; sleep 60; done
Remember: Linux memory management is designed to use all available RAM. A healthy system shows high memory usage with substantial buff/cache. Problems manifest as swap activity, not merely low free memory. Use these tools together to distinguish normal caching behavior from genuine memory pressure.