April 13, 2026 9 min read

Go database/sql Patterns for Financial Services

Go's database/sql package is deceptively simple. The defaults that work fine for a blog engine will silently wreck a payment system. Here are the patterns I've settled on after running financial workloads in production.

The Defaults Will Hurt You

Out of the box, database/sql gives you unlimited open connections, no idle connection limit, and connections that live forever. For a CRUD app serving a few hundred requests per minute, that's fine. For a payment service processing thousands of transactions with strict latency requirements, it's a recipe for connection exhaustion, stale connections, and the kind of database outages that wake up the entire on-call rotation.

I learned this the hard way on a payment reconciliation service. We'd been running fine for weeks, then one morning the database hit its max_connections limit. Turns out we had a slow query path that was holding connections open for 30+ seconds during peak settlement windows. With no cap on open connections, the pool just kept growing until PostgreSQL refused new connections entirely. Every service sharing that database cluster went down — not just ours.

The fix wasn't complicated, but it required understanding what each pool setting actually does and picking numbers that match your workload, not the defaults.

Connection Pool Tuning

There are three knobs that matter most: SetMaxOpenConns, SetMaxIdleConns, and SetConnMaxLifetime (Go 1.15 added a fourth, SetConnMaxIdleTime, which appears in the snippet below). Here's how they interact:

[Connection lifecycle: New Request (incoming query) -> Idle Pool (waiting connections) -> Active/In-Use (running queries) -> Returned to Idle (if under MaxIdleConns) or Closed (over limit or MaxLifetime exceeded)]

When a query comes in, the pool first checks for an idle connection. If one's available, it's reused. If not, and we're under MaxOpenConns, a new connection is created. If we're at the limit, the caller blocks until a connection is returned. When a connection finishes its work, it goes back to the idle pool — unless we already have MaxIdleConns idle connections, in which case it's closed. And ConnMaxLifetime ensures no connection lives forever, which matters when your database is behind a load balancer or you're doing rolling restarts.

Here's what I use for a payment service handling around 2,000 transactions per minute against a PostgreSQL instance with max_connections = 200 shared across four services:

db, err := sql.Open("postgres", connString)
if err != nil {
    log.Fatal(err)
}
// sql.Open only validates the DSN; ping to verify connectivity up front.
if err := db.PingContext(context.Background()); err != nil {
    log.Fatal(err)
}

db.SetMaxOpenConns(25)
db.SetMaxIdleConns(25)
db.SetConnMaxLifetime(5 * time.Minute)
db.SetConnMaxIdleTime(2 * time.Minute)
In short: MaxOpenConns 25 (per service instance), MaxIdleConns 25 (match open to avoid churn), ConnMaxLifetime 5m (rotate before the load balancer drops the connection).

A few things to note. I set MaxIdleConns equal to MaxOpenConns. If your idle limit is lower than your open limit, you're constantly closing and reopening connections during traffic spikes, paying TCP handshake overhead plus TLS negotiation on every new connection. For financial workloads where latency matters, keep them equal. The 5-minute lifetime is tuned to our load balancer's idle timeout (typically 5-10 minutes for cloud-managed databases); you want connections to recycle before the infrastructure kills them. SetConnMaxIdleTime complements the lifetime cap: it closes connections that have sat idle for two minutes, so the pool shrinks during quiet periods without churning under load.

The "Defer Rollback" Transaction Pattern

This is the single most important pattern for financial database code. Every transaction should start with a deferred rollback. It sounds paranoid, but it's saved me from partial commits more times than I can count.

func TransferFunds(ctx context.Context, db *sql.DB, from, to string, amount int64) error {
    tx, err := db.BeginTx(ctx, &sql.TxOptions{
        Isolation: sql.LevelSerializable,
    })
    if err != nil {
        return fmt.Errorf("begin transaction: %w", err)
    }
    defer tx.Rollback() // no-op if tx.Commit() succeeds

    // Debit source account
    var balance int64
    err = tx.QueryRowContext(ctx,
        "SELECT balance FROM accounts WHERE id = $1 FOR UPDATE", from,
    ).Scan(&balance)
    if err != nil {
        return fmt.Errorf("query source balance: %w", err)
    }

    if balance < amount {
        return fmt.Errorf("insufficient funds: have %d, need %d", balance, amount)
    }

    _, err = tx.ExecContext(ctx,
        "UPDATE accounts SET balance = balance - $1 WHERE id = $2", amount, from)
    if err != nil {
        return fmt.Errorf("debit source: %w", err)
    }

    // Credit destination account
    _, err = tx.ExecContext(ctx,
        "UPDATE accounts SET balance = balance + $1 WHERE id = $2", amount, to)
    if err != nil {
        return fmt.Errorf("credit destination: %w", err)
    }

    // Insert ledger entries for both sides
    _, err = tx.ExecContext(ctx,
        `INSERT INTO ledger_entries (account_id, amount, entry_type, reference)
         VALUES ($1, $2, 'debit', $3), ($4, $5, 'credit', $3)`,
        from, amount, fmt.Sprintf("transfer-%s-%s", from, to),
        to, amount)
    if err != nil {
        return fmt.Errorf("insert ledger entries: %w", err)
    }

    return tx.Commit()
}

The defer tx.Rollback() immediately after BeginTx is the key. If anything panics, if any error causes an early return, if you forget a check somewhere, the transaction rolls back. After tx.Commit() succeeds, the deferred Rollback returns sql.ErrTxDone and does nothing. It's a safety net that costs nothing.

Notice the FOR UPDATE on the balance query. Without it, two concurrent transfers from the same account could both read the same balance, both decide there's enough money, and both succeed, overdrawing the account. FOR UPDATE takes a row-level lock that serializes access. Combined with sql.LevelSerializable, this gives you the strongest consistency guarantee PostgreSQL offers, with one caveat: serializable transactions can abort under contention with a serialization failure (SQLSTATE 40001), so callers should be prepared to retry.
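One caveat with serializable isolation: PostgreSQL can abort a serializable transaction with a serialization failure (SQLSTATE 40001) under contention, and the documented remedy is to retry the whole transaction. Here's a minimal retry wrapper, sketched with a stand-in sentinel error; in real code you'd detect the failure from your driver's error type (lib/pq and pgx both expose the SQLSTATE code) rather than a sentinel:

```go
package main

import (
	"errors"
	"fmt"
)

// errSerialization stands in for PostgreSQL's serialization failure
// (SQLSTATE 40001). Match your driver's error code in real code.
var errSerialization = errors.New("serialization failure (40001)")

// retrySerializable retries fn while it fails with a serialization error,
// up to attempts times. Serializable transactions are expected to abort
// occasionally under contention; retrying the whole transaction is the fix.
func retrySerializable(attempts int, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		err = fn()
		if err == nil || !errors.Is(err, errSerialization) {
			return err
		}
	}
	return err
}

func main() {
	calls := 0
	err := retrySerializable(3, func() error {
		calls++
		if calls < 3 {
			return errSerialization // first two attempts conflict
		}
		return nil // third attempt commits
	})
	fmt.Println(err == nil, calls) // true 3
}
```

The function you pass in must re-run the entire transaction, BeginTx included; retrying individual statements inside an aborted transaction doesn't work.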

Common pitfall: Don't use db.Begin() without context in financial code. Always use db.BeginTx(ctx, opts) so your transaction respects context deadlines. A transaction without a timeout can hold locks indefinitely if the calling goroutine gets stuck — and in a payment system, that means other transactions pile up behind it until the database grinds to a halt.

Prepared Statements vs. Query Strings

Go's database/sql gives you two ways to run queries: inline with db.QueryContext or via prepared statements with db.PrepareContext. For financial services, the choice depends on the query pattern.

Approach          | Best For                                   | Trade-off
db.QueryContext   | Ad-hoc queries, reporting, one-off lookups | Parse + plan on every call
db.PrepareContext | Hot-path queries run thousands of times    | Prepared on each connection; connection-affinity issues
tx.QueryContext   | Queries inside transactions                | Scoped to transaction lifetime; no leaks

For the hot path — balance checks, ledger inserts, transaction status updates — I prepare statements at service startup and reuse them. For reporting queries that run a few times a day, inline queries are fine. The performance difference on a single call is negligible, but at a couple thousand transactions per minute, each touching several queries, the parse overhead adds up.

type PaymentStore struct {
    db          *sql.DB
    stmtBalance *sql.Stmt
    stmtDebit   *sql.Stmt
    stmtCredit  *sql.Stmt
}

func NewPaymentStore(ctx context.Context, db *sql.DB) (*PaymentStore, error) {
    stmtBalance, err := db.PrepareContext(ctx,
        "SELECT balance FROM accounts WHERE id = $1 FOR UPDATE")
    if err != nil {
        return nil, fmt.Errorf("prepare balance query: %w", err)
    }

    stmtDebit, err := db.PrepareContext(ctx,
        "UPDATE accounts SET balance = balance - $1 WHERE id = $2")
    if err != nil {
        return nil, fmt.Errorf("prepare debit: %w", err)
    }

    stmtCredit, err := db.PrepareContext(ctx,
        "UPDATE accounts SET balance = balance + $1 WHERE id = $2")
    if err != nil {
        return nil, fmt.Errorf("prepare credit: %w", err)
    }

    return &PaymentStore{
        db: db, stmtBalance: stmtBalance,
        stmtDebit: stmtDebit, stmtCredit: stmtCredit,
    }, nil
}

One gotcha: prepared statements in database/sql are re-prepared on each underlying connection. The pool manages this transparently, but it means your first call on a new connection pays the prepare cost. With the pool tuning above (idle == open), connections are stable and this rarely matters in practice.
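A related detail: stmtBalance above was prepared with FOR UPDATE, which only does useful work inside a transaction. database/sql covers this with Tx.StmtContext, which returns a copy of a db-level prepared statement scoped to the transaction. A sketch of that pattern (transferDebit is a hypothetical method, and the struct is trimmed to the fields this sketch needs):

```go
package main

import (
	"context"
	"database/sql"
	"fmt"
)

// Trimmed copy of the store; only the fields this sketch uses.
type PaymentStore struct {
	db          *sql.DB
	stmtBalance *sql.Stmt
	stmtDebit   *sql.Stmt
}

// transferDebit rebinds statements prepared at startup onto this
// transaction via tx.StmtContext, so the FOR UPDATE lock is taken
// (and released) inside the transaction rather than on a bare connection.
func (s *PaymentStore) transferDebit(ctx context.Context, from string, amount int64) error {
	tx, err := s.db.BeginTx(ctx, &sql.TxOptions{Isolation: sql.LevelSerializable})
	if err != nil {
		return fmt.Errorf("begin: %w", err)
	}
	defer tx.Rollback()

	var balance int64
	// tx.StmtContext returns a transaction-scoped copy of stmtBalance.
	if err := tx.StmtContext(ctx, s.stmtBalance).QueryRowContext(ctx, from).Scan(&balance); err != nil {
		return fmt.Errorf("lock balance: %w", err)
	}
	if balance < amount {
		return fmt.Errorf("insufficient funds: have %d, need %d", balance, amount)
	}
	if _, err := tx.StmtContext(ctx, s.stmtDebit).ExecContext(ctx, amount, from); err != nil {
		return fmt.Errorf("debit: %w", err)
	}
	return tx.Commit()
}

func main() {
	// No live database in this sketch; the point is the call shape.
	_ = (&PaymentStore{}).transferDebit
	fmt.Println("ok")
}
```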

Handling NULL Values Safely

Financial databases are full of nullable columns. A merchant might not have a settlement account yet. A transaction might not have a refund reference. If you scan a NULL into a plain string or int64, Go won't panic — it'll silently give you a zero value. In a payment system, the difference between "zero dollars" and "no value" is the difference between a correct ledger and a reconciliation nightmare.

type Transaction struct {
    ID          string
    Amount      int64
    Currency    string
    RefundRef   sql.NullString  // might not exist yet
    SettledAt   sql.NullTime    // NULL until settlement
    FeeAmount   sql.NullInt64   // some txns have no fees
}

func (s *PaymentStore) GetTransaction(ctx context.Context, id string) (*Transaction, error) {
    var t Transaction
    err := s.db.QueryRowContext(ctx,
        `SELECT id, amount, currency, refund_ref, settled_at, fee_amount
         FROM transactions WHERE id = $1`, id,
    ).Scan(&t.ID, &t.Amount, &t.Currency,
        &t.RefundRef, &t.SettledAt, &t.FeeAmount)
    if err != nil {
        if errors.Is(err, sql.ErrNoRows) {
            return nil, fmt.Errorf("transaction %s not found: %w", id, sql.ErrNoRows)
        }
        return nil, fmt.Errorf("query transaction: %w", err)
    }
    return &t, nil
}

The sql.Null* types carry a Valid boolean that tells you whether the value was actually present. Always check it before using the value. I've seen code that does t.FeeAmount.Int64 without checking Valid — it works until someone queries a transaction with no fee and the downstream calculation produces wrong numbers.

Row Scanning That Doesn't Drop Errors

Here's a pattern I see constantly in Go database code, and it's subtly broken:

// DON'T do this
rows, err := db.QueryContext(ctx, "SELECT id, amount FROM transactions WHERE status = $1", "pending")
if err != nil {
    return err
}
defer rows.Close()

var results []Transaction
for rows.Next() {
    var t Transaction
    rows.Scan(&t.ID, &t.Amount) // error ignored!
    results = append(results, t)
}
// rows.Err() never checked!

Two problems. First, rows.Scan can fail — type mismatches, connection drops mid-result-set, column count mismatches after a schema migration. Ignoring that error means you silently append zero-valued structs to your results. Second, rows.Err() catches errors that occurred during iteration (network issues, context cancellation). If you skip it, you might process a partial result set thinking it's complete.

The correct pattern:

rows, err := db.QueryContext(ctx, "SELECT id, amount FROM transactions WHERE status = $1", "pending")
if err != nil {
    return nil, fmt.Errorf("query pending transactions: %w", err)
}
defer rows.Close()

var results []Transaction
for rows.Next() {
    var t Transaction
    if err := rows.Scan(&t.ID, &t.Amount); err != nil {
        return nil, fmt.Errorf("scan transaction row: %w", err)
    }
    results = append(results, t)
}
if err := rows.Err(); err != nil {
    return nil, fmt.Errorf("iterate transaction rows: %w", err)
}
return results, nil

In financial code, a partial result set is worse than no result set. If you're summing up pending settlements and you only got half the rows because the connection dropped, you'll settle the wrong amount. Always check both Scan errors and rows.Err().

rows.Err(): the most commonly skipped check in Go database code, and the most dangerous to skip in financial systems.

Context Timeouts on Queries

Every database call in a financial service should have a context with a timeout. No exceptions. A query without a timeout is a query that can hold a connection forever, and in a pool with 25 connections, it only takes a few runaway queries to starve the entire service.

func (s *PaymentStore) GetBalance(ctx context.Context, accountID string) (int64, error) {
    // Enforce a 3-second timeout even if the parent context is longer
    queryCtx, cancel := context.WithTimeout(ctx, 3*time.Second)
    defer cancel()

    var balance int64
    err := s.db.QueryRowContext(queryCtx,
        "SELECT balance FROM accounts WHERE id = $1", accountID,
    ).Scan(&balance)
    if err != nil {
        if errors.Is(err, context.DeadlineExceeded) {
            return 0, fmt.Errorf("balance query timed out for account %s: %w", accountID, err)
        }
        return 0, fmt.Errorf("query balance: %w", err)
    }
    return balance, nil
}

I use different timeouts for different query types. Simple lookups get 3 seconds. Batch operations like settlement queries get 30 seconds. The transfer function from earlier inherits its timeout from the incoming request context, which is typically 10 seconds for API calls. If a transfer can't complete in 10 seconds, something is wrong and we'd rather fail fast than hold locks.

The context.DeadlineExceeded check lets you distinguish between "the database is slow" and "the query hit an actual error." That distinction matters for alerting — a spike in timeouts means you need to look at query performance or database load, while a spike in other errors might mean a schema problem or connection issue.

Putting It Together: A Production Transfer Function

Here's the full pattern I use for account-to-account transfers. It combines everything above — pool-aware connections, deferred rollback, context timeouts, proper NULL handling, and row-level locking:

func (s *PaymentStore) Transfer(ctx context.Context, req TransferRequest) error {
    // Hard timeout for the entire transfer operation
    txCtx, cancel := context.WithTimeout(ctx, 10*time.Second)
    defer cancel()

    tx, err := s.db.BeginTx(txCtx, &sql.TxOptions{
        Isolation: sql.LevelSerializable,
    })
    if err != nil {
        return fmt.Errorf("begin transfer tx: %w", err)
    }
    defer tx.Rollback()

    // Lock both rows in consistent order to prevent deadlocks
    first, second := req.FromAccount, req.ToAccount
    if first > second {
        first, second = second, first
    }

    var bal1, bal2 int64
    err = tx.QueryRowContext(txCtx,
        "SELECT balance FROM accounts WHERE id = $1 FOR UPDATE", first,
    ).Scan(&bal1)
    if err != nil {
        return fmt.Errorf("lock account %s: %w", first, err)
    }

    err = tx.QueryRowContext(txCtx,
        "SELECT balance FROM accounts WHERE id = $1 FOR UPDATE", second,
    ).Scan(&bal2)
    if err != nil {
        return fmt.Errorf("lock account %s: %w", second, err)
    }

    // Check sufficient funds on the source
    srcBalance := bal1
    if first != req.FromAccount {
        srcBalance = bal2
    }
    if srcBalance < req.Amount {
        return fmt.Errorf("insufficient funds: have %d, need %d", srcBalance, req.Amount)
    }

    // Execute the transfer
    _, err = tx.ExecContext(txCtx,
        "UPDATE accounts SET balance = balance - $1 WHERE id = $2",
        req.Amount, req.FromAccount)
    if err != nil {
        return fmt.Errorf("debit %s: %w", req.FromAccount, err)
    }

    _, err = tx.ExecContext(txCtx,
        "UPDATE accounts SET balance = balance + $1 WHERE id = $2",
        req.Amount, req.ToAccount)
    if err != nil {
        return fmt.Errorf("credit %s: %w", req.ToAccount, err)
    }

    return tx.Commit()
}

The subtle detail here is the deadlock prevention. If two goroutines try to transfer money between the same two accounts in opposite directions simultaneously, they can deadlock — each holding one row lock and waiting for the other. By always locking accounts in a consistent order (sorted by ID), we guarantee that can't happen. It's a small thing, but in a system doing thousands of transfers per minute, deadlocks are not theoretical — they're Tuesday.
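The ordering rule is worth extracting so no call site can forget it. lockOrder is a hypothetical helper, not part of the store above:

```go
package main

import "fmt"

// lockOrder returns two account IDs in a deterministic order, so every
// transaction acquires row locks in the same sequence regardless of
// transfer direction.
func lockOrder(a, b string) (first, second string) {
	if a > b {
		return b, a
	}
	return a, b
}

func main() {
	f1, s1 := lockOrder("acct-9", "acct-1")
	f2, s2 := lockOrder("acct-1", "acct-9")
	// Both directions lock in the same order, so two opposing transfers
	// can never each hold one lock while waiting on the other.
	fmt.Println(f1 == f2 && s1 == s2) // true
}
```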

Monitor your pool stats in production. db.Stats() gives you OpenConnections, InUse, Idle, WaitCount, and WaitDuration. Export these to Prometheus or your metrics system. A rising WaitCount means your pool is too small or your queries are too slow — either way, you want to know before it becomes an outage.

The code examples in this article are simplified for clarity and do not represent production-ready implementations. Always conduct thorough testing, security review, and compliance validation before deploying database logic in financial systems. Transaction isolation levels and locking strategies should be validated against your specific database engine and workload characteristics.