GORM: Object-Relational Mapping in Go

Key Insights

  • GORM eliminates SQL boilerplate while maintaining type safety, but introduces abstraction overhead—use it for CRUD-heavy applications, not performance-critical queries requiring fine-tuned SQL
  • Relationships and migrations are GORM’s killer features: defining has-many or belongs-to associations with struct tags beats manually managing foreign keys and join tables
  • The N+1 query problem will bite you in production—always use Preload or Joins for related data instead of lazy loading

Introduction to GORM and ORMs in Go

Object-Relational Mapping (ORM) libraries bridge the gap between your application’s object-oriented code and relational databases. Instead of writing SQL strings and manually scanning results into structs, ORMs let you work with Go types directly.

GORM is Go’s most mature and widely adopted ORM, offering a developer-friendly API that handles common database operations without sacrificing Go’s type safety. It supports PostgreSQL, MySQL, SQLite, and SQL Server out of the box.

The decision between GORM and raw SQL isn’t binary. Use GORM for standard CRUD operations, migrations, and relationship management. Reach for raw SQL when you need complex queries, specific database features, or maximum performance.

Here’s the difference in practice:

// Raw SQL approach
rows, err := db.Query("SELECT id, email, name FROM users WHERE age > ?", 18)
if err != nil {
    return err
}
defer rows.Close()

var users []User
for rows.Next() {
    var u User
    if err := rows.Scan(&u.ID, &u.Email, &u.Name); err != nil {
        return err
    }
    users = append(users, u)
}

// GORM approach
var users []User
db.Where("age > ?", 18).Find(&users)

GORM eliminates the boilerplate while keeping your code readable and maintainable.

Setting Up GORM and Database Connections

Install GORM and your database driver:

go get -u gorm.io/gorm
go get -u gorm.io/driver/postgres
go get -u gorm.io/driver/mysql
go get -u gorm.io/driver/sqlite

Connection setup varies by database:

package main

import (
    "gorm.io/driver/postgres"
    "gorm.io/driver/mysql"
    "gorm.io/gorm"
    "log"
)

// PostgreSQL connection
func connectPostgres() (*gorm.DB, error) {
    dsn := "host=localhost user=myuser password=mypass dbname=mydb port=5432 sslmode=disable"
    db, err := gorm.Open(postgres.Open(dsn), &gorm.Config{})
    if err != nil {
        return nil, err
    }
    
    // Configure connection pool
    sqlDB, err := db.DB()
    if err != nil {
        return nil, err
    }
    sqlDB.SetMaxIdleConns(10)
    sqlDB.SetMaxOpenConns(100)
    
    return db, nil
}

// MySQL connection
func connectMySQL() (*gorm.DB, error) {
    dsn := "user:pass@tcp(127.0.0.1:3306)/dbname?charset=utf8mb4&parseTime=True&loc=Local"
    return gorm.Open(mysql.Open(dsn), &gorm.Config{})
}

Connection pooling configuration matters for production applications. Set MaxIdleConns to maintain warm connections and MaxOpenConns to prevent database overload.

Defining Models and Schema Migrations

GORM models are standard Go structs with tags defining database behavior:

type User struct {
    ID        uint           `gorm:"primaryKey"`
    Email     string         `gorm:"uniqueIndex;not null"`
    Name      string         `gorm:"size:100"`
    Age       int            `gorm:"default:0"`
    Posts     []Post         `gorm:"foreignKey:AuthorID"`
    CreatedAt time.Time
    UpdatedAt time.Time
    DeletedAt gorm.DeletedAt `gorm:"index"`
}

type Post struct {
    ID        uint      `gorm:"primaryKey"`
    Title     string    `gorm:"size:200;not null"`
    Content   string    `gorm:"type:text"`
    AuthorID  uint      `gorm:"not null;index"`
    Author    User      `gorm:"foreignKey:AuthorID"`
    Tags      []Tag     `gorm:"many2many:post_tags;"`
    CreatedAt time.Time
    UpdatedAt time.Time
}

type Tag struct {
    ID    uint   `gorm:"primaryKey"`
    Name  string `gorm:"size:50;uniqueIndex"`
    Posts []Post `gorm:"many2many:post_tags;"`
}

GORM conventions simplify model definitions: ID becomes the primary key, CreatedAt and UpdatedAt are auto-managed, and DeletedAt enables soft deletes.

Run migrations with AutoMigrate:

func main() {
    db, err := connectPostgres()
    if err != nil {
        log.Fatal(err)
    }
    
    // Create/update tables based on models
    if err := db.AutoMigrate(&User{}, &Post{}, &Tag{}); err != nil {
        log.Fatal(err)
    }
}

AutoMigrate is safe for development but limited—it won’t drop columns or modify constraints. For production, consider dedicated migration tools like golang-migrate.

CRUD Operations and Querying

GORM’s API covers all standard database operations:

// CREATE
user := User{
    Email: "alice@example.com",
    Name:  "Alice",
    Age:   28,
}
result := db.Create(&user)
// user.ID is now populated; check result.Error and result.RowsAffected

// Batch insert
users := []User{
    {Email: "bob@example.com", Name: "Bob"},
    {Email: "carol@example.com", Name: "Carol"},
}
db.Create(&users)

// READ
var user User
db.First(&user, 1) // Find by primary key
db.First(&user, "email = ?", "alice@example.com")

var users []User
db.Where("age > ?", 25).Find(&users)
db.Where("name LIKE ?", "%ali%").Order("created_at desc").Limit(10).Find(&users)

// UPDATE
db.Model(&user).Update("age", 29)
db.Model(&user).Updates(User{Name: "Alice Smith", Age: 29})
db.Model(&user).Updates(map[string]interface{}{"name": "Alice", "age": 29})

// DELETE
db.Delete(&user, 1)
db.Where("age < ?", 18).Delete(&User{})

// Soft delete (if DeletedAt field exists)
db.Delete(&user) // Sets DeletedAt to current time
db.Unscoped().Delete(&user) // Permanent delete

Querying related data:

// Find user with their posts
var user User
db.Preload("Posts").First(&user, 1)

// Find posts with authors and tags
var posts []Post
db.Preload("Author").Preload("Tags").Find(&posts)

Advanced Features

Hooks let you execute logic at specific lifecycle points:

func (u *User) BeforeCreate(tx *gorm.DB) error {
    // Hash password, validate data, etc.
    if u.Email == "" {
        return errors.New("email required")
    }
    return nil
}

func (u *User) AfterCreate(tx *gorm.DB) error {
    // Send welcome email, create audit log, etc.
    log.Printf("User created: %s", u.Email)
    return nil
}

Transactions ensure data consistency:

// transferCredits assumes User also has a Credits int field.
func transferCredits(db *gorm.DB, fromUserID, toUserID uint, amount int) error {
    return db.Transaction(func(tx *gorm.DB) error {
        var fromUser, toUser User
        
        if err := tx.First(&fromUser, fromUserID).Error; err != nil {
            return err
        }
        if err := tx.First(&toUser, toUserID).Error; err != nil {
            return err
        }
        
        if fromUser.Credits < amount {
            return errors.New("insufficient credits")
        }
        
        if err := tx.Model(&fromUser).Update("credits", fromUser.Credits-amount).Error; err != nil {
            return err
        }
        if err := tx.Model(&toUser).Update("credits", toUser.Credits+amount).Error; err != nil {
            return err
        }
        
        return nil
    })
}

Scopes enable reusable query logic:

// ActiveUsers assumes an active boolean column. Rows soft-deleted via
// gorm.DeletedAt are already excluded by GORM, so no manual
// deleted_at IS NULL filter is needed.
func ActiveUsers(db *gorm.DB) *gorm.DB {
    return db.Where("active = ?", true)
}

func RecentPosts(db *gorm.DB) *gorm.DB {
    return db.Where("created_at > ?", time.Now().AddDate(0, 0, -7))
}

// Usage
var users []User
db.Scopes(ActiveUsers).Where("age > ?", 18).Find(&users)

var posts []Post
db.Scopes(RecentPosts).Order("created_at desc").Find(&posts)

Performance Optimization and Best Practices

The N+1 query problem destroys performance. This happens when you load related data in a loop:

// BAD: N+1 queries (1 query for posts + N queries for authors)
var posts []Post
db.Find(&posts)
for _, post := range posts {
    var author User
    db.First(&author, post.AuthorID) // Separate query each iteration!
    fmt.Println(author.Name)
}

// GOOD: 2 queries total
var posts []Post
db.Preload("Author").Find(&posts)
for _, post := range posts {
    fmt.Println(post.Author.Name)
}

// BETTER: 1 query with JOIN
var posts []Post
db.Joins("Author").Find(&posts)

Use Preload for separate queries or Joins for a single query. Joins works best for belongs-to and has-one associations; for one-to-many relationships prefer Preload, since a SQL join duplicates the parent row for every child.

Add indexes for frequently queried columns:

type User struct {
    Email string `gorm:"uniqueIndex"`
    Name  string `gorm:"index"`
}

// Or programmatically
db.Exec("CREATE INDEX idx_users_email ON users(email)")

Tune connection pools based on your workload:

sqlDB, _ := db.DB()
sqlDB.SetMaxIdleConns(25)      // Keep connections warm
sqlDB.SetMaxOpenConns(100)     // Prevent overwhelming the database
sqlDB.SetConnMaxLifetime(5 * time.Minute)

Common pitfalls to avoid:

  • Don’t ignore errors—always check result.Error
  • Use Select to load only needed fields for large tables
  • Avoid AutoMigrate in production code; use proper migrations
  • Use prepared statements (GORM does this automatically) to prevent SQL injection
  • Don’t use Unscoped() unless you need hard deletes or to query soft-deleted records

Conclusion

GORM strikes a practical balance between developer productivity and control. Its struct-based models, automatic migrations, and relationship handling eliminate tedious SQL boilerplate while maintaining Go’s type safety.

Use GORM when building CRUD-heavy applications, REST APIs, or admin dashboards. Consider alternatives like sqlx or raw database/sql for analytics queries, high-performance services, or when you need complete control over SQL execution.

The library’s extensive documentation at gorm.io covers advanced topics like custom data types, plugins, and database-specific features. For production applications, combine GORM with proper migration tools, monitoring, and connection pool tuning to build reliable data layers.
