Dynamic Array: Implementation in Python, Go, Rust, and JavaScript

Key Insights

  • Dynamic arrays achieve amortized O(1) append performance by over-allocating capacity and doubling (or using a 1.5x factor) when full, trading memory for speed
  • Each language handles memory differently: Python requires ctypes for raw allocation, Go uses slices, Rust enforces ownership rules, and JavaScript abstracts everything behind flexible arrays
  • Understanding dynamic array internals helps you make informed decisions about when to pre-allocate capacity and avoid performance pitfalls in production code

Introduction: What is a Dynamic Array?

A dynamic array is a resizable array data structure that automatically grows when you add elements beyond its current capacity. Unlike fixed-size arrays where you must declare the size upfront, dynamic arrays handle memory management transparently, giving you the convenience of unbounded growth with near-constant-time insertions.

Every mainstream language provides dynamic arrays as a built-in: Python’s list, Go’s slice, Rust’s Vec<T>, and JavaScript’s Array. But understanding how they work under the hood transforms you from a user into someone who can reason about performance, predict memory behavior, and optimize critical paths.

Real-world applications are everywhere. Message queues buffer incoming events. Web servers accumulate request headers. Game engines track active entities. Any time you don’t know the final size upfront but need fast appends, you’re reaching for a dynamic array.

How Dynamic Arrays Work: The Growth Strategy

The magic of dynamic arrays lies in amortized analysis. Individual append operations aren’t always O(1)—occasionally, you hit capacity and must allocate a larger buffer, copy everything over, and free the old memory. That resize operation is O(n). But by growing the capacity geometrically (typically doubling), you ensure that expensive resizes happen infrequently enough that the average cost per operation remains constant.

The key distinction is capacity versus length. Length is how many elements you’ve stored. Capacity is how many elements the underlying buffer can hold before needing to resize. When length equals capacity and you append, the array allocates a new buffer (usually 2x the current capacity), copies existing elements, and continues.

Why 2x? It’s a trade-off. A growth factor of 2 means you can waste up to 50% of allocated memory but resize rarely. A factor of 1.5 wastes less memory but resizes more often. CPython’s list grows by only about 1.125x (plus a small constant), accepting more frequent resizes to keep memory overhead low. Go doubles small slices and tapers toward a 1.25x factor as they grow.
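You can watch this over-allocation happen in CPython itself. This quick probe (CPython-specific, and separate from the implementations below) records the list’s allocated size after each append; the size stays flat while spare capacity is consumed, then jumps at each resize:

```python
import sys

# Append one element at a time and record the list's allocated size in
# bytes. Flat stretches mean appends are reusing spare capacity; jumps
# mean the buffer was reallocated.
sizes = []
lst = []
for _ in range(64):
    lst.append(None)
    sizes.append(sys.getsizeof(lst))

# Far fewer distinct allocation sizes than appends: most appends never
# touch the allocator.
distinct = len(set(sizes))
print(f"64 appends, {distinct} distinct allocation sizes")
```

The exact sequence of sizes depends on the Python version, but the pattern of long flat runs punctuated by jumps is stable.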

Here’s the conceptual resize logic:

function append(element):
    if length == capacity:
        new_capacity = capacity * 2
        new_buffer = allocate(new_capacity)
        copy(buffer, new_buffer)
        free(buffer)
        buffer = new_buffer
        capacity = new_capacity
    buffer[length] = element
    length = length + 1
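The amortized claim is easy to check by simulation. This sketch mirrors the pseudocode above and counts every element copy performed by resizes across n appends; the total stays below 2n, which is why the average cost per append is constant:

```python
# Simulate the doubling strategy and count element copies done by
# resizes. With growth factor 2, total copy work for n appends is
# 1 + 2 + 4 + ... < 2 * n.
def total_copies(n, growth=2):
    capacity, length, copies = 1, 0, 0
    for _ in range(n):
        if length == capacity:
            copies += length        # a resize copies every stored element
            capacity *= growth
        length += 1
    return copies

for n in (1_000, 1_000_000):
    print(f"{n} appends -> {total_copies(n)} copies "
          f"({total_copies(n) / n:.2f} per append)")
```

Swapping `growth=2` for `1.5`-style factors (rounding up) shows the trade-off from the previous section: more total copies, but less slack capacity at any moment.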

Python Implementation

Python’s list is already a dynamic array, but building one from scratch reveals the mechanics. We’ll use ctypes to allocate a fixed-size array of object references, sidestepping list’s own resizing logic so we can implement it ourselves.

import ctypes

class DynamicArray:
    def __init__(self):
        self._length = 0
        self._capacity = 1
        self._array = self._make_array(self._capacity)
    
    def _make_array(self, capacity):
        return (capacity * ctypes.py_object)()
    
    def __len__(self):
        return self._length
    
    def __getitem__(self, index):
        if not 0 <= index < self._length:
            raise IndexError("Index out of bounds")
        return self._array[index]
    
    def __setitem__(self, index, value):
        if not 0 <= index < self._length:
            raise IndexError("Index out of bounds")
        self._array[index] = value
    
    def append(self, value):
        if self._length == self._capacity:
            self._resize(2 * self._capacity)
        self._array[self._length] = value
        self._length += 1
    
    def _resize(self, new_capacity):
        new_array = self._make_array(new_capacity)
        for i in range(self._length):
            new_array[i] = self._array[i]
        self._array = new_array
        self._capacity = new_capacity
    
    def capacity(self):
        return self._capacity

The ctypes.py_object type lets us store arbitrary Python objects in a C-style array. The _resize method handles the allocation and copy. Note that we don’t explicitly free the old array—Python’s garbage collector handles that when no references remain.

Go Implementation

Go’s slices are dynamic arrays with a twist: they’re views into underlying arrays with their own length and capacity tracking. We’ll build a generic version that exposes this behavior explicitly.

package main

type DynamicArray[T any] struct {
    data     []T
    length   int
    capacity int
}

func NewDynamicArray[T any]() *DynamicArray[T] {
    return &DynamicArray[T]{
        data:     make([]T, 1),
        length:   0,
        capacity: 1,
    }
}

func (da *DynamicArray[T]) Append(value T) {
    if da.length == da.capacity {
        da.resize(da.capacity * 2)
    }
    da.data[da.length] = value
    da.length++
}

func (da *DynamicArray[T]) Get(index int) (T, bool) {
    var zero T
    if index < 0 || index >= da.length {
        return zero, false
    }
    return da.data[index], true
}

func (da *DynamicArray[T]) Set(index int, value T) bool {
    if index < 0 || index >= da.length {
        return false
    }
    da.data[index] = value
    return true
}

func (da *DynamicArray[T]) resize(newCapacity int) {
    newData := make([]T, newCapacity)
    copy(newData, da.data[:da.length])
    da.data = newData
    da.capacity = newCapacity
}

func (da *DynamicArray[T]) Len() int {
    return da.length
}

func (da *DynamicArray[T]) Cap() int {
    return da.capacity
}

Go’s built-in copy function handles the element transfer efficiently. The generic type parameter [T any] (available since Go 1.18) lets this work with any type. In practice, you’d just use slices directly with append(), but this implementation shows what’s happening underneath.

Rust Implementation

Rust’s ownership model makes a dynamic array implementation more interesting. We must think carefully about who owns the memory and when it is freed. Here’s a simplified version built directly on the global allocator (std::alloc):

use std::alloc::{alloc, dealloc, realloc, Layout};
use std::ptr;

pub struct DynamicArray<T> {
    ptr: *mut T,
    length: usize,
    capacity: usize,
}

impl<T> DynamicArray<T> {
    pub fn new() -> Self {
        DynamicArray {
            ptr: ptr::null_mut(),
            length: 0,
            capacity: 0,
        }
    }

    pub fn push(&mut self, value: T) {
        if self.length == self.capacity {
            self.grow();
        }
        unsafe {
            ptr::write(self.ptr.add(self.length), value);
        }
        self.length += 1;
    }

    pub fn get(&self, index: usize) -> Option<&T> {
        if index >= self.length {
            None
        } else {
            unsafe { Some(&*self.ptr.add(index)) }
        }
    }

    pub fn len(&self) -> usize {
        self.length
    }

    pub fn capacity(&self) -> usize {
        self.capacity
    }

    fn grow(&mut self) {
        let new_capacity = if self.capacity == 0 { 1 } else { self.capacity * 2 };
        let new_layout = Layout::array::<T>(new_capacity).unwrap();

        let new_ptr = if self.capacity == 0 {
            unsafe { alloc(new_layout) as *mut T }
        } else {
            let old_layout = Layout::array::<T>(self.capacity).unwrap();
            unsafe { realloc(self.ptr as *mut u8, old_layout, new_layout.size()) as *mut T }
        };

        if new_ptr.is_null() {
            panic!("Allocation failed");
        }

        self.ptr = new_ptr;
        self.capacity = new_capacity;
    }
}

impl<T> Drop for DynamicArray<T> {
    fn drop(&mut self) {
        if self.capacity > 0 {
            for i in 0..self.length {
                unsafe { ptr::drop_in_place(self.ptr.add(i)); }
            }
            let layout = Layout::array::<T>(self.capacity).unwrap();
            unsafe { dealloc(self.ptr as *mut u8, layout); }
        }
    }
}

The unsafe blocks are necessary because we’re doing manual memory management. The Drop trait implementation ensures we properly deallocate memory and drop contained elements when the array goes out of scope. In production, just use Vec<T>—this implementation demonstrates the underlying mechanics.

JavaScript Implementation

JavaScript arrays are already dynamic (and may even be backed by sparse storage), but we can simulate explicit length-and-capacity tracking with a plain Array used as a fixed-size buffer:

class DynamicArray {
    #data;
    #length;
    #capacity;

    constructor() {
        this.#capacity = 1;
        this.#length = 0;
        this.#data = new Array(this.#capacity);
    }

    get length() {
        return this.#length;
    }

    get capacity() {
        return this.#capacity;
    }

    append(value) {
        if (this.#length === this.#capacity) {
            this.#resize(this.#capacity * 2);
        }
        this.#data[this.#length] = value;
        this.#length++;
    }

    get(index) {
        if (index < 0 || index >= this.#length) {
            throw new RangeError("Index out of bounds");
        }
        return this.#data[index];
    }

    set(index, value) {
        if (index < 0 || index >= this.#length) {
            throw new RangeError("Index out of bounds");
        }
        this.#data[index] = value;
    }

    #resize(newCapacity) {
        const newData = new Array(newCapacity);
        for (let i = 0; i < this.#length; i++) {
            newData[i] = this.#data[i];
        }
        this.#data = newData;
        this.#capacity = newCapacity;
    }

    *[Symbol.iterator]() {
        for (let i = 0; i < this.#length; i++) {
            yield this.#data[i];
        }
    }
}

The Symbol.iterator implementation makes our array work with for...of loops and spread syntax. Private fields (#data, #length, #capacity) encapsulate the implementation details.

Performance Comparison and Conclusion

Here are simple benchmarks appending one million integers:

# Python benchmark
import time
arr = DynamicArray()
start = time.perf_counter()
for i in range(1_000_000):
    arr.append(i)
print(f"Python: {time.perf_counter() - start:.3f}s")

// Go benchmark
arr := NewDynamicArray[int]()
start := time.Now()
for i := 0; i < 1_000_000; i++ {
    arr.Append(i)
}
fmt.Printf("Go: %v\n", time.Since(start))

// Rust benchmark
let mut arr = DynamicArray::new();
let start = std::time::Instant::now();
for i in 0..1_000_000 {
    arr.push(i);
}
println!("Rust: {:?}", start.elapsed());

// JavaScript benchmark
const arr = new DynamicArray();
const start = performance.now();
for (let i = 0; i < 1_000_000; i++) {
    arr.append(i);
}
console.log(`JavaScript: ${(performance.now() - start).toFixed(1)}ms`);

Typical results show Rust and Go completing in milliseconds, Python in hundreds of milliseconds, and JavaScript somewhere in between. But these custom implementations will always be slower than built-in alternatives that leverage low-level optimizations.
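To make that last point concrete, here’s a rough Python comparison (timings are illustrative and vary by machine and interpreter; MiniDynamicArray is a hypothetical pared-down variant, not the class from earlier). A pure-Python doubling array does the same bookkeeping as list, but in interpreted code rather than C, so the built-in wins:

```python
import timeit

# A hypothetical, pared-down pure-Python doubling array (a plain list
# serves as the raw buffer) to compare against the built-in list, which
# performs the same bookkeeping in C.
class MiniDynamicArray:
    def __init__(self):
        self._buf = [None]
        self._len = 0
        self._cap = 1

    def append(self, value):
        if self._len == self._cap:
            self._buf.extend([None] * self._cap)  # double the buffer
            self._cap *= 2
        self._buf[self._len] = value
        self._len += 1

N = 100_000

def custom():
    arr = MiniDynamicArray()
    for i in range(N):
        arr.append(i)

def builtin():
    arr = []
    for i in range(N):
        arr.append(i)

print(f"custom : {timeit.timeit(custom, number=10):.3f}s")
print(f"builtin: {timeit.timeit(builtin, number=10):.3f}s")
```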

When should you use custom implementations? Almost never in production. Built-in dynamic arrays are battle-tested, optimized, and correct. But understanding the internals helps you make better decisions: pre-allocate capacity when you know the size, choose appropriate growth factors for memory-constrained environments, and recognize when array operations become bottlenecks.
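As one example of that pre-allocation advice in Python (a micro-benchmark sketch; exact numbers depend on your machine and Python version):

```python
import timeit

# Compare appending in a loop against pre-sizing the list when the final
# length is known up front. Pre-sizing allocates the buffer once, so no
# intermediate resizes occur.
N = 100_000

def with_append():
    out = []
    for i in range(N):
        out.append(i)
    return out

def preallocated():
    out = [None] * N          # buffer sized once, filled in place
    for i in range(N):
        out[i] = i
    return out

t_append = timeit.timeit(with_append, number=20)
t_prealloc = timeit.timeit(preallocated, number=20)
print(f"append loop : {t_append:.3f}s")
print(f"preallocated: {t_prealloc:.3f}s")
```

Both functions build identical lists; only the allocation pattern differs. The same idea applies elsewhere: `make([]T, 0, n)` in Go and `Vec::with_capacity(n)` in Rust.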

The real value is conceptual. Once you understand that appending might trigger a resize, you’ll think twice before appending in a hot loop without pre-allocation. That’s the engineering intuition that separates good code from great code.
