Go WebSocket: Real-Time Communication in Go

Key Insights

  • WebSockets maintain persistent bidirectional connections, making them ideal for real-time features like chat, live dashboards, and collaborative editing—but they require careful connection lifecycle management and resource cleanup.
  • The hub pattern centralizes message broadcasting in Go, using goroutines and channels to safely manage concurrent WebSocket connections without race conditions.
  • Production WebSocket deployments need health monitoring via ping/pong, proper origin validation, and stateless architecture considerations for horizontal scaling.

Understanding WebSockets vs HTTP

WebSockets solve a fundamental problem with traditional HTTP: the request-response model isn’t designed for real-time bidirectional communication. With HTTP, the client must constantly poll the server for updates, wasting bandwidth and introducing latency. WebSockets establish a persistent connection that allows both client and server to push data at any time.

The WebSocket protocol starts as an HTTP request with an Upgrade header, then switches to a persistent TCP connection. This makes WebSockets perfect for chat applications, live sports scores, collaborative document editing, stock tickers, and multiplayer games—anywhere you need instant updates without polling overhead.
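
Concretely, the upgrade is a single HTTP round trip. A typical client request and the server's 101 response look like this (the key/accept values are the example pair from RFC 6455):

```http
GET /ws HTTP/1.1
Host: example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Version: 13

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

After the 101 response, the same TCP connection carries WebSocket frames in both directions.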

Go’s gorilla/websocket package is the de facto standard for WebSocket implementations. It’s production-ready, actively maintained, and handles the protocol details while giving you control over connection management.

Building Your First WebSocket Server

Let’s start with a minimal WebSocket server. First, install the dependency:

go get github.com/gorilla/websocket

Here’s a basic server that echoes messages back to the client:

package main

import (
    "log"
    "net/http"
    
    "github.com/gorilla/websocket"
)

var upgrader = websocket.Upgrader{
    ReadBufferSize:  1024,
    WriteBufferSize: 1024,
    CheckOrigin: func(r *http.Request) bool {
        return true // Allow all origins for development
    },
}

func handleWebSocket(w http.ResponseWriter, r *http.Request) {
    conn, err := upgrader.Upgrade(w, r, nil)
    if err != nil {
        log.Printf("Upgrade error: %v", err)
        return
    }
    defer conn.Close()
    
    for {
        messageType, message, err := conn.ReadMessage()
        if err != nil {
            log.Printf("Read error: %v", err)
            break
        }
        
        log.Printf("Received: %s", message)
        
        if err := conn.WriteMessage(messageType, message); err != nil {
            log.Printf("Write error: %v", err)
            break
        }
    }
}

func main() {
    http.HandleFunc("/ws", handleWebSocket)
    log.Println("Server starting on :8080")
    log.Fatal(http.ListenAndServe(":8080", nil))
}

The upgrader.Upgrade() method performs the WebSocket handshake, transforming the HTTP connection into a WebSocket connection. The infinite loop reads messages and echoes them back. When an error occurs (like a client disconnect), we break the loop and close the connection.

Critical: Always set CheckOrigin appropriately. Returning true allows any origin—fine for development, dangerous for production. In production, validate the origin against your allowed domains.

Implementing the Hub Pattern for Broadcasting

A single echo server is trivial. Real applications need to broadcast messages to multiple connected clients. The hub pattern solves this with a central coordinator that manages all connections using Go’s concurrency primitives.

Here’s a production-ready hub implementation:

package main

import (
    "log"
    "net/http"

    "github.com/gorilla/websocket"
)

type Client struct {
    hub  *Hub
    conn *websocket.Conn
    send chan []byte
}

type Hub struct {
    clients    map[*Client]bool
    broadcast  chan []byte
    register   chan *Client
    unregister chan *Client
}

func newHub() *Hub {
    return &Hub{
        clients:    make(map[*Client]bool),
        broadcast:  make(chan []byte),
        register:   make(chan *Client),
        unregister: make(chan *Client),
    }
}

func (h *Hub) run() {
    for {
        select {
        case client := <-h.register:
            h.clients[client] = true
            
        case client := <-h.unregister:
            if _, ok := h.clients[client]; ok {
                delete(h.clients, client)
                close(client.send)
            }
            
        case message := <-h.broadcast:
            for client := range h.clients {
                select {
                case client.send <- message:
                default:
                    close(client.send)
                    delete(h.clients, client)
                }
            }
        }
    }
}

func (c *Client) readPump() {
    defer func() {
        c.hub.unregister <- c
        c.conn.Close()
    }()
    
    for {
        _, message, err := c.conn.ReadMessage()
        if err != nil {
            break
        }
        c.hub.broadcast <- message
    }
}

func (c *Client) writePump() {
    defer c.conn.Close()
    
    for message := range c.send {
        if err := c.conn.WriteMessage(websocket.TextMessage, message); err != nil {
            break
        }
    }
}

func serveWs(hub *Hub, w http.ResponseWriter, r *http.Request) {
    conn, err := upgrader.Upgrade(w, r, nil)
    if err != nil {
        log.Println(err)
        return
    }
    
    client := &Client{
        hub:  hub,
        conn: conn,
        send: make(chan []byte, 256),
    }
    
    client.hub.register <- client
    
    go client.writePump()
    go client.readPump()
}

This architecture separates concerns cleanly. The hub runs in a single goroutine (start it with go hub.run() before accepting connections), so only that goroutine ever touches the clients map and no mutex is needed. Each client gets two goroutines: readPump reads from the WebSocket and forwards messages to the hub; writePump drains the buffered send channel back to the socket.

The buffered send channel (256 messages) keeps a slow client from blocking the hub. If a client's buffer fills, the hub closes the channel and drops the client rather than stalling the broadcast loop.

JavaScript Client Integration

Here’s a minimal HTML client to connect to your Go server:

<!DOCTYPE html>
<html>
<head>
    <title>WebSocket Chat</title>
</head>
<body>
    <div id="messages"></div>
    <input type="text" id="messageInput" placeholder="Type a message">
    <button onclick="sendMessage()">Send</button>

    <script>
        const ws = new WebSocket('ws://localhost:8080/ws');
        const messages = document.getElementById('messages');
        
        ws.onopen = () => {
            console.log('Connected to server');
        };
        
        ws.onmessage = (event) => {
            const msg = document.createElement('div');
            msg.textContent = event.data;
            messages.appendChild(msg);
        };
        
        ws.onerror = (error) => {
            console.error('WebSocket error:', error);
        };
        
        ws.onclose = () => {
            console.log('Disconnected from server');
        };
        
        function sendMessage() {
            const input = document.getElementById('messageInput');
            if (input.value && ws.readyState === WebSocket.OPEN) {
                ws.send(input.value);
                input.value = '';
            }
        }
    </script>
</body>
</html>

Always check ws.readyState === WebSocket.OPEN before sending. Attempting to send on a closed connection throws an error.

Production-Ready Patterns

Ping/Pong for Connection Health

WebSocket connections can silently die due to network issues. Implement ping/pong to detect dead connections:

import "time"

const (
    writeWait      = 10 * time.Second
    pongWait       = 60 * time.Second
    pingPeriod     = (pongWait * 9) / 10
    maxMessageSize = 512
)

func (c *Client) readPump() {
    defer func() {
        c.hub.unregister <- c
        c.conn.Close()
    }()
    
    c.conn.SetReadDeadline(time.Now().Add(pongWait))
    c.conn.SetPongHandler(func(string) error {
        c.conn.SetReadDeadline(time.Now().Add(pongWait))
        return nil
    })
    c.conn.SetReadLimit(maxMessageSize)
    
    for {
        _, message, err := c.conn.ReadMessage()
        if err != nil {
            break
        }
        c.hub.broadcast <- message
    }
}

func (c *Client) writePump() {
    ticker := time.NewTicker(pingPeriod)
    defer func() {
        ticker.Stop()
        c.conn.Close()
    }()
    
    for {
        select {
        case message, ok := <-c.send:
            c.conn.SetWriteDeadline(time.Now().Add(writeWait))
            if !ok {
                c.conn.WriteMessage(websocket.CloseMessage, []byte{})
                return
            }
            
            if err := c.conn.WriteMessage(websocket.TextMessage, message); err != nil {
                return
            }
            
        case <-ticker.C:
            c.conn.SetWriteDeadline(time.Now().Add(writeWait))
            if err := c.conn.WriteMessage(websocket.PingMessage, nil); err != nil {
                return
            }
        }
    }
}

The server sends pings every 54 seconds (9/10 of the 60-second pong wait). If the client doesn’t respond with a pong within 60 seconds, the read deadline expires and the connection closes.

Security Considerations

Never skip origin validation in production:

var upgrader = websocket.Upgrader{
    CheckOrigin: func(r *http.Request) bool {
        origin := r.Header.Get("Origin")
        return origin == "https://yourdomain.com" || 
               origin == "https://www.yourdomain.com"
    },
}

For authentication, validate tokens before upgrading the connection. Be aware that query-string tokens can end up in server and proxy logs; a header or a short-lived, single-use ticket is safer:

func serveWs(hub *Hub, w http.ResponseWriter, r *http.Request) {
    token := r.URL.Query().Get("token")
    if !validateToken(token) {
        http.Error(w, "Unauthorized", http.StatusUnauthorized)
        return
    }
    
    conn, err := upgrader.Upgrade(w, r, nil)
    // ... rest of handler
}

Scaling Considerations

The hub pattern works great for a single server, but horizontal scaling requires external message distribution. Redis pub/sub is the common solution:

// Assumes a go-redis client stored on the hub (h.redisClient) and a
// package-level ctx; the API shown is github.com/redis/go-redis/v9.
func (h *Hub) publishMessage(message []byte) {
    h.redisClient.Publish(ctx, "chat", message)
}

func (h *Hub) subscribeToMessages() {
    pubsub := h.redisClient.Subscribe(ctx, "chat")
    defer pubsub.Close()
    ch := pubsub.Channel()

    // Fan every cross-instance message into the local hub's broadcast.
    for msg := range ch {
        h.broadcast <- []byte(msg.Payload)
    }
}

Each server instance subscribes to the Redis channel and broadcasts received messages to its local clients. This allows you to run multiple WebSocket servers behind a load balancer.

Use sticky sessions (session affinity) at the load balancer level to ensure a client’s requests always route to the same backend server, or implement connection state synchronization.
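
As a hedged example, sticky sessions plus the header forwarding that WebSocket upgrades require might look like this in nginx (the upstream addresses are placeholders):

```nginx
upstream websocket_backend {
    ip_hash;  # sticky sessions: route each client IP to the same backend
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
}

server {
    listen 80;

    location /ws {
        proxy_pass http://websocket_backend;
        # WebSocket upgrades need HTTP/1.1 and the Upgrade headers passed through.
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

ip_hash is the simplest affinity mechanism; cookie-based affinity or a cloud load balancer's session-affinity setting are common alternatives.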

Testing WebSocket Handlers

Test WebSocket handlers using the httptest package:

import (
    "bytes"
    "net/http"
    "net/http/httptest"
    "strings"
    "testing"

    "github.com/gorilla/websocket"
)

func TestWebSocketEcho(t *testing.T) {
    s := httptest.NewServer(http.HandlerFunc(handleWebSocket))
    defer s.Close()
    
    url := "ws" + strings.TrimPrefix(s.URL, "http") + "/ws"
    ws, _, err := websocket.DefaultDialer.Dial(url, nil)
    if err != nil {
        t.Fatalf("Dial error: %v", err)
    }
    defer ws.Close()
    
    testMessage := []byte("hello")
    if err := ws.WriteMessage(websocket.TextMessage, testMessage); err != nil {
        t.Fatalf("Write error: %v", err)
    }
    
    _, message, err := ws.ReadMessage()
    if err != nil {
        t.Fatalf("Read error: %v", err)
    }
    
    if !bytes.Equal(message, testMessage) {
        t.Errorf("Expected %s, got %s", testMessage, message)
    }
}

Go's concurrency model makes WebSocket connection management straightforward: goroutines and channels map naturally onto per-connection read/write loops and a central hub. The hub pattern, ping/pong health monitoring, and strict origin validation are essential for production deployments. Start simple, test thoroughly, and scale when needed.
