How to Connect to PostgreSQL in Go
Key Insights
- Use the pgx driver for new projects—it’s pure Go, faster than lib/pq, and actively maintained with better PostgreSQL feature support
- Always configure connection pool settings (SetMaxOpenConns, SetMaxIdleConns, SetConnMaxLifetime) for production; the defaults are rarely optimal
- Use context.Context with timeouts for all database operations to prevent hanging queries and enable graceful cancellation
Why PostgreSQL and Go Work Well Together
PostgreSQL is one of the most popular relational databases, and Go’s database/sql package provides a clean, idiomatic interface for working with it. The standard library handles connection pooling, prepared statements, and query lifecycle management, letting you focus on your application logic rather than database plumbing.
The key to effective PostgreSQL integration in Go is understanding the driver ecosystem and connection management. Get these fundamentals right, and you’ll avoid the most common pitfalls that plague production applications.
Choosing and Installing a PostgreSQL Driver
Go’s database/sql package is driver-agnostic. You need a PostgreSQL-specific driver that implements the database/sql/driver interface.
Two drivers dominate the ecosystem:
lib/pq: The original PostgreSQL driver. It’s stable and widely used but in maintenance mode, receiving bug fixes rather than new features; its own README recommends pgx for new projects.
pgx: A pure Go driver that’s faster, more feature-rich, and actively developed. It supports PostgreSQL-specific types better and offers both database/sql compatibility and a native interface.
For new projects, use pgx. Install the database/sql compatible version:
go get github.com/jackc/pgx/v5/stdlib
If you’re maintaining legacy code with lib/pq:
go get github.com/lib/pq
The rest of this article uses pgx, but the code patterns apply to both drivers with minimal changes.
Establishing Your First Connection
PostgreSQL connections require a connection string (DSN - Data Source Name) containing your database credentials and configuration. The format follows this pattern:
postgres://username:password@localhost:5432/database_name?sslmode=disable
Here’s a complete example that opens a connection, verifies it works, and handles errors properly:
package main
import (
"database/sql"
"fmt"
"log"
_ "github.com/jackc/pgx/v5/stdlib"
)
func main() {
connStr := "postgres://postgres:password@localhost:5432/myapp?sslmode=disable"
db, err := sql.Open("pgx", connStr)
if err != nil {
log.Fatalf("Unable to parse connection string: %v", err)
}
defer db.Close()
// sql.Open() doesn't actually connect - verify with Ping()
if err := db.Ping(); err != nil {
log.Fatalf("Unable to connect to database: %v", err)
}
fmt.Println("Successfully connected to PostgreSQL!")
}
Critical points: sql.Open() doesn’t establish a connection—it just validates the DSN format. Always call db.Ping() to verify connectivity. The defer db.Close() ensures cleanup when your application exits.
In production, never hardcode credentials. Use environment variables:
import (
	"fmt"
	"os"
)
connStr := fmt.Sprintf(
"postgres://%s:%s@%s:%s/%s?sslmode=require",
os.Getenv("DB_USER"),
os.Getenv("DB_PASSWORD"),
os.Getenv("DB_HOST"),
os.Getenv("DB_PORT"),
os.Getenv("DB_NAME"),
)
Configuring the Connection Pool
The sql.DB object is a connection pool, not a single connection. Go manages this pool automatically, but the defaults aren’t suitable for production workloads.
Configure these three critical settings:
db, err := sql.Open("pgx", connStr)
if err != nil {
log.Fatal(err)
}
// Maximum number of open connections to the database
db.SetMaxOpenConns(25)
// Maximum number of idle connections in the pool
db.SetMaxIdleConns(5)
// Maximum lifetime of a connection (prevents stale connections)
db.SetConnMaxLifetime(5 * time.Minute)
SetMaxOpenConns(25): Limits total connections. Set this based on your PostgreSQL max_connections setting (default 100) divided by the number of application instances. If you have 4 app servers and PostgreSQL allows 100 connections, use 20-25 per instance.
SetMaxIdleConns(5): Keeps connections ready for reuse. Too low and you’ll constantly reconnect. Too high and you waste resources. Start with 20-25% of MaxOpenConns.
SetConnMaxLifetime(5m): Forces connection recycling. This prevents issues when PostgreSQL closes long-lived connections or when you need to rebalance load across database replicas.
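The sizing arithmetic above can be captured in a small helper; poolSizeForInstance is a name we made up for illustration, not a library function:

```go
package main

import "fmt"

// poolSizeForInstance applies the sizing rule above: divide the server's
// max_connections budget (minus a reserve for admin sessions) evenly
// across application instances, never dropping below one connection.
func poolSizeForInstance(maxConnections, instances, reserve int) int {
	if instances < 1 {
		instances = 1
	}
	n := (maxConnections - reserve) / instances
	if n < 1 {
		n = 1
	}
	return n
}

func main() {
	// 4 app servers sharing the default max_connections of 100,
	// keeping 4 connections back for superuser access.
	fmt.Println(poolSizeForInstance(100, 4, 4)) // 24
}
```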
Executing Queries and Handling Results
Go provides three methods for executing SQL, each with specific use cases:
Query(): Returns multiple rows. Use for SELECT statements that return result sets.
QueryRow(): Returns a single row. Use for SELECT statements where you expect one result.
Exec(): Returns no rows. Use for INSERT, UPDATE, DELETE, and DDL statements.
Here’s a practical example with a users table:
type User struct {
ID int
Email string
FirstName string
LastName sql.NullString // Handles NULL values
CreatedAt time.Time
}
// Insert a new user
func createUser(db *sql.DB, email, firstName string) (int, error) {
var id int
query := `
INSERT INTO users (email, first_name, created_at)
VALUES ($1, $2, NOW())
RETURNING id
`
err := db.QueryRow(query, email, firstName).Scan(&id)
return id, err
}
// Get a single user by ID
func getUser(db *sql.DB, id int) (*User, error) {
user := &User{}
query := `
SELECT id, email, first_name, last_name, created_at
FROM users
WHERE id = $1
`
err := db.QueryRow(query, id).Scan(
&user.ID,
&user.Email,
&user.FirstName,
&user.LastName,
&user.CreatedAt,
)
if errors.Is(err, sql.ErrNoRows) {
return nil, fmt.Errorf("user not found")
}
return user, err
}
// Get multiple users
func getUsers(db *sql.DB, limit int) ([]*User, error) {
query := `
SELECT id, email, first_name, last_name, created_at
FROM users
ORDER BY created_at DESC
LIMIT $1
`
rows, err := db.Query(query, limit)
if err != nil {
return nil, err
}
defer rows.Close()
var users []*User
for rows.Next() {
user := &User{}
err := rows.Scan(
&user.ID,
&user.Email,
&user.FirstName,
&user.LastName,
&user.CreatedAt,
)
if err != nil {
return nil, err
}
users = append(users, user)
}
// Check for errors during iteration
if err = rows.Err(); err != nil {
return nil, err
}
return users, nil
}
Notice the use of sql.NullString for LastName. This handles NULL values from the database. Without it, scanning a NULL into a string causes an error.
PostgreSQL uses $1, $2 placeholders for parameters (not ? like MySQL). This prevents SQL injection—never use string concatenation for queries.
Context-Aware Database Operations
Production applications need timeouts and cancellation. Use the context-aware methods: QueryContext(), QueryRowContext(), and ExecContext().
func getUserWithTimeout(db *sql.DB, id int) (*User, error) {
// Create a context with 3-second timeout
ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
defer cancel()
user := &User{}
query := `
SELECT id, email, first_name, last_name, created_at
FROM users
WHERE id = $1
`
err := db.QueryRowContext(ctx, query, id).Scan(
&user.ID,
&user.Email,
&user.FirstName,
&user.LastName,
&user.CreatedAt,
)
if errors.Is(err, context.DeadlineExceeded) {
return nil, fmt.Errorf("query timed out after 3 seconds")
}
return user, err
}
This prevents runaway queries from blocking your application. If a query takes too long, the context cancels it and returns an error. In HTTP handlers, pass the request context to enable cancellation when clients disconnect.
Connection Management Best Practices
Don’t create multiple sql.DB instances: The sql.DB object is designed to be long-lived. Create one at application startup and reuse it. Creating multiple pools fragments your connection limit and degrades performance.
Always close Rows: Failing to close Rows objects leaks connections. Use defer rows.Close() immediately after checking the error from Query().
Use prepared statements for repeated queries: If you execute the same query repeatedly with different parameters, prepare it once:
stmt, err := db.Prepare("SELECT * FROM users WHERE email = $1")
if err != nil {
log.Fatal(err)
}
defer stmt.Close()
// Use stmt.QueryRow() or stmt.Query() multiple times
Monitor your connection pool: Log pool statistics periodically to catch connection leaks:
stats := db.Stats()
log.Printf("Open connections: %d, In use: %d, Idle: %d",
stats.OpenConnections,
stats.InUse,
stats.Idle,
)
Common Pitfalls and Troubleshooting
“pq: sorry, too many clients already”: You’ve exceeded PostgreSQL’s max_connections. Lower SetMaxOpenConns() or increase PostgreSQL’s limit.
Slow queries after idle periods: Connections might be timing out. Reduce SetConnMaxLifetime() or implement connection health checks.
“sql: expected 5 destination arguments in Scan, not 4”: Your Scan() arguments don’t match the SELECT columns. Count them carefully.
Context deadline exceeded on simple queries: Check network latency, PostgreSQL performance, or whether your timeout is too aggressive.
Next Steps
You now have the foundation for PostgreSQL integration in Go. From here, explore:
- Transactions: Use db.Begin() for multi-statement operations that need atomicity
- Database migrations: Tools like golang-migrate or goose manage schema changes
- ORMs and query builders: sqlc generates type-safe code from SQL, while GORM provides a full ORM if you prefer that approach
- Connection pooling with PgBouncer: For applications with many short-lived connections
The patterns shown here scale from simple scripts to production systems handling thousands of requests per second. Master connection management and query execution, and you’ll build reliable, performant database-backed applications.