Go Database: database/sql and sqlx

Key Insights

  • Go’s database/sql provides a robust foundation with connection pooling and prepared statements, but requires verbose manual scanning of query results into structs
  • sqlx eliminates boilerplate by adding struct scanning, named parameters, and helper methods while maintaining full compatibility with database/sql
  • For production applications, use sqlx for productivity gains unless you need absolute minimal dependencies—the performance overhead is negligible and the developer experience improvement is substantial

Understanding Go’s Database Landscape

Go’s database/sql package is the standard library’s answer to database access. It provides a generic interface around SQL databases, handling connection pooling, prepared statements, and transactions. However, anyone who’s written production Go code knows the pain: manually scanning each column into variables, writing repetitive error handling, and dealing with NULL values becomes tedious fast.

This is where sqlx comes in. Written by Jason Moiron (jmoiron), sqlx wraps database/sql and adds the conveniences developers actually want: scanning rows directly into structs, named parameters, and helper methods that reduce three lines of code to one.

Here’s what you need to get started:

import (
    "database/sql"
    _ "github.com/lib/pq" // PostgreSQL driver
    
    "github.com/jmoiron/sqlx"
)

// Register happens automatically via driver's init()

The underscore import registers the driver with database/sql. You’ll use the same pattern regardless of which database you choose—MySQL, PostgreSQL, SQLite, or others.

Core database/sql Patterns

The database/sql package forces you to be explicit about everything. This isn’t necessarily bad—it makes you think about connection management, query execution, and error handling.

func GetUserByID(db *sql.DB, id int) (*User, error) {
    var user User
    
    query := `SELECT id, email, name, created_at FROM users WHERE id = $1`
    
    err := db.QueryRow(query, id).Scan(
        &user.ID,
        &user.Email,
        &user.Name,
        &user.CreatedAt,
    )
    
    if errors.Is(err, sql.ErrNoRows) {
        return nil, fmt.Errorf("user not found")
    }
    if err != nil {
        return nil, err
    }
    
    return &user, nil
}

Notice the manual scanning. Every column must be mapped to a struct field in the exact order. Add a column to your query? Update the Scan call. Reorder columns? Update the Scan call. This gets old quickly.

For multiple rows, the pattern involves a loop:

func GetActiveUsers(db *sql.DB) ([]User, error) {
    query := `SELECT id, email, name, created_at FROM users WHERE active = true`
    
    rows, err := db.Query(query)
    if err != nil {
        return nil, err
    }
    defer rows.Close()
    
    var users []User
    for rows.Next() {
        var u User
        if err := rows.Scan(&u.ID, &u.Email, &u.Name, &u.CreatedAt); err != nil {
            return nil, err
        }
        users = append(users, u)
    }
    
    if err := rows.Err(); err != nil {
        return nil, err
    }
    
    return users, nil
}

This works, but it’s verbose. You’re writing the same boilerplate for every query.

Prepared Statements and Transaction Management

Prepared statements are critical for both security and performance. The database/sql package handles parameterization automatically when you use placeholders:

// PostgreSQL uses $1, $2, etc.
query := `INSERT INTO users (email, name) VALUES ($1, $2)`
result, err := db.Exec(query, email, name)

// MySQL uses ?
query := `INSERT INTO users (email, name) VALUES (?, ?)`
result, err := db.Exec(query, email, name)

Never concatenate user input into SQL strings. Always use parameters.
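To see why, here is what concatenation does to the query text (a contrived helper for illustration):

```go
import "fmt"

// buildQueryUnsafe concatenates user input into SQL. Never do this.
func buildQueryUnsafe(email string) string {
    return fmt.Sprintf("SELECT id FROM users WHERE email = '%s'", email)
}

// With email = "x' OR '1'='1", the result is
//   SELECT id FROM users WHERE email = 'x' OR '1'='1'
// which matches every row. With a placeholder, the value travels
// separately from the SQL text and can never rewrite the query:
//   db.QueryRow(`SELECT id FROM users WHERE email = $1`, email)
```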

For transactions, wrap operations in Begin/Commit/Rollback:

func TransferMoney(db *sql.DB, fromID, toID int, amount decimal.Decimal) error {
    tx, err := db.Begin()
    if err != nil {
        return err
    }
    defer tx.Rollback() // Rollback if we don't commit
    
    // Deduct from sender
    _, err = tx.Exec(
        `UPDATE accounts SET balance = balance - $1 WHERE id = $2`,
        amount, fromID,
    )
    if err != nil {
        return err
    }
    
    // Add to receiver
    _, err = tx.Exec(
        `UPDATE accounts SET balance = balance + $1 WHERE id = $2`,
        amount, toID,
    )
    if err != nil {
        return err
    }
    
    return tx.Commit()
}

The deferred Rollback is safe to leave in place: after a successful Commit it returns sql.ErrTxDone, which can be ignored. This pattern ensures cleanup even if you return early.

Always use context for production code:

ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()

err := db.QueryRowContext(ctx, query, id).Scan(&user.ID, &user.Email)

Why sqlx Changes Everything

sqlx eliminates the scanning boilerplate. Define your struct with db tags and let sqlx handle the mapping:

type User struct {
    ID        int       `db:"id"`
    Email     string    `db:"email"`
    Name      string    `db:"name"`
    CreatedAt time.Time `db:"created_at"`
}

func GetUserByID(db *sqlx.DB, id int) (*User, error) {
    var user User
    
    query := `SELECT id, email, name, created_at FROM users WHERE id = $1`
    
    err := db.Get(&user, query, id)
    if err != nil {
        return nil, err
    }
    
    return &user, nil
}

One call instead of a multi-line Scan. The Get method handles the scan automatically, matching column names to struct tags.

For multiple rows, use Select:

func GetActiveUsers(db *sqlx.DB) ([]User, error) {
    var users []User
    
    query := `SELECT id, email, name, created_at FROM users WHERE active = true`
    
    err := db.Select(&users, query)
    if err != nil {
        return nil, err
    }
    
    return users, nil
}

The entire loop, scan, and error checking is handled internally.

Named parameters make complex queries readable:

type UserFilter struct {
    MinAge    int    `db:"min_age"`
    MaxAge    int    `db:"max_age"`
    Country   string `db:"country"`
}

func FindUsers(db *sqlx.DB, filter UserFilter) ([]User, error) {
    var users []User
    
    query := `
        SELECT id, email, name, created_at 
        FROM users 
        WHERE age >= :min_age 
          AND age <= :max_age 
          AND country = :country
    `
    
    rows, err := db.NamedQuery(query, filter)
    if err != nil {
        return nil, err
    }
    defer rows.Close()
    
    for rows.Next() {
        var u User
        if err := rows.StructScan(&u); err != nil {
            return nil, err
        }
        users = append(users, u)
    }
    
    if err := rows.Err(); err != nil {
        return nil, err
    }
    
    return users, nil
}

Named parameters map directly to struct fields. No counting placeholders or maintaining parameter order.

Advanced sqlx Capabilities

The IN clause is notoriously annoying in SQL. You can’t use a single parameter for multiple values. sqlx.In() solves this:

func GetUsersByIDs(db *sqlx.DB, ids []int) ([]User, error) {
    var users []User
    
    query := `SELECT id, email, name, created_at FROM users WHERE id IN (?)`
    
    query, args, err := sqlx.In(query, ids)
    if err != nil {
        return nil, err
    }
    
    // Rebind for PostgreSQL ($1 instead of ?)
    query = db.Rebind(query)
    
    err = db.Select(&users, query, args...)
    if err != nil {
        return nil, err
    }
    
    return users, nil
}

sqlx.In() expands the slice into multiple placeholders. Rebind() converts placeholder syntax for your specific database driver.

For complex data models, nest structs:

type UserWithProfile struct {
    User
    Bio      string `db:"bio"`
    Website  string `db:"website"`
    Location string `db:"location"`
}

func GetUserWithProfile(db *sqlx.DB, id int) (*UserWithProfile, error) {
    var result UserWithProfile
    
    query := `
        SELECT u.id, u.email, u.name, u.created_at,
               p.bio, p.website, p.location
        FROM users u
        JOIN profiles p ON u.id = p.user_id
        WHERE u.id = $1
    `
    
    err := db.Get(&result, query, id)
    return &result, err
}

The embedded User struct fields are populated alongside the profile fields.

Performance and Production Considerations

Connection pooling is automatic but should be tuned for your workload:

func NewDB(dataSourceName string) (*sqlx.DB, error) {
    db, err := sqlx.Connect("postgres", dataSourceName)
    if err != nil {
        return nil, err
    }
    
    // Maximum open connections (default: unlimited)
    db.SetMaxOpenConns(25)
    
    // Maximum idle connections (default: 2)
    db.SetMaxIdleConns(25)
    
    // Maximum connection lifetime (default: unlimited)
    db.SetConnMaxLifetime(5 * time.Minute)
    
    // Maximum idle time (default: unlimited)
    db.SetConnMaxIdleTime(5 * time.Minute)
    
    return db, nil
}

For most applications, set MaxOpenConns to your database’s connection limit divided by the number of application instances: a database allowing 100 connections behind 4 instances means a cap of 25 each. Set MaxIdleConns equal to MaxOpenConns to avoid constant connection churn.

The performance difference between database/sql and sqlx is negligible. sqlx uses reflection for struct scanning, but this overhead is minimal compared to network I/O and query execution time. In benchmarks, the difference is typically under 5%.

Use database/sql when:

  • You’re building a library that shouldn’t impose dependencies
  • You need absolute control over every operation
  • Your queries are simple and don’t benefit from struct scanning

Use sqlx when:

  • You’re building an application (which is most of the time)
  • You want cleaner, more maintainable code
  • You work with complex queries and data models

Error handling deserves attention. When you expect exactly one result, check for sql.ErrNoRows with errors.Is, which also matches wrapped errors:

err := db.Get(&user, query, id)
if errors.Is(err, sql.ErrNoRows) {
    return nil, ErrUserNotFound
}
if err != nil {
    return nil, fmt.Errorf("database error: %w", err)
}

Making the Right Choice

Start with sqlx for new projects. The productivity gain is real, and you can always drop down to database/sql for specific operations since sqlx.DB embeds sql.DB.

Migrating from database/sql to sqlx is straightforward—change sql.DB to sqlx.DB and gradually replace manual scanning with struct-based methods. You can mix approaches in the same codebase.

If you need more abstraction, consider full ORMs like GORM or ent. But for most applications, sqlx hits the sweet spot: enough convenience to eliminate boilerplate, not so much magic that you lose control over your SQL.

The Go database ecosystem is mature and battle-tested. Whether you choose database/sql or sqlx, you’re building on solid foundations. Just remember: explicit is better than implicit, but boilerplate is worse than both.
