
Engineering Trade-Offs Every Backend Engineer Must Understand

Engineering is trade-offs. Every decision optimizes for one thing at the expense of another. Senior engineers don’t make “correct” decisions — they make informed trade-offs and communicate them clearly.

Consistency vs Availability

The foundational trade-off. When a network partition happens, do you:

  • Refuse to serve until all nodes agree (consistency)
  • Serve possibly stale data to keep the system running (availability)

Most real systems need both, for different operations:

Operation          Priority       Why
Account balance    Consistency    Can’t show a wrong balance
Product catalog    Availability   A stale price for 30s is OK
Order placement    Consistency    Can’t double-charge
Recommendations    Availability   Stale recs are fine
Inventory count    Depends        Overselling is bad, but so is blocking purchases

Don’t pick one globally. Pick per-operation based on the business impact of getting it wrong.

Latency vs Throughput

You can’t maximize both. Lower latency often means processing fewer requests per second (dedicated resources per request). Higher throughput often means batching (which adds latency).

Optimize for latency when:

  • User-facing API responses
  • Interactive operations
  • Real-time data (chat, notifications)

Optimize for throughput when:

  • Batch processing
  • Data ingestion pipelines
  • Background jobs
  • Analytics queries

// Latency-optimized: process immediately
func handleRequest(w http.ResponseWriter, r *http.Request) {
    result := process(r)
    json.NewEncoder(w).Encode(result)
}

// Throughput-optimized: batch and process
func batchProcessor(events <-chan Event) {
    batch := make([]Event, 0, 1000)
    ticker := time.NewTicker(time.Second)
    defer ticker.Stop()

    for {
        select {
        case event := <-events:
            batch = append(batch, event)
            if len(batch) >= 1000 {
                processBatch(batch)
                batch = batch[:0]
            }
        case <-ticker.C:
            // Flush partial batches so no event waits longer than a second.
            if len(batch) > 0 {
                processBatch(batch)
                batch = batch[:0]
            }
        }
    }
}

Simplicity vs Flexibility

The most underappreciated trade-off. Flexible systems handle future requirements but are complex today. Simple systems are easy to understand but may need rewriting later.

My rule: choose simplicity unless you have evidence that flexibility is needed.

// Simple: hardcoded behavior
func calculateShipping(weight float64) float64 {
    if weight < 1.0 {
        return 5.00
    }
    return 5.00 + (weight-1.0)*2.50
}

// Flexible: rule engine
func calculateShipping(weight float64, rules []ShippingRule) float64 {
    for _, rule := range rules {
        if rule.Matches(weight) {
            return rule.Calculate(weight)
        }
    }
    return defaultRate
}

The flexible version is better if shipping rules change weekly. The simple version is better if they change yearly. Know your change frequency before choosing.

Read Performance vs Write Performance

Optimizing for reads often hurts writes, and vice versa:

  • More indexes = faster reads, slower writes
  • Denormalization = faster reads, harder writes (must update multiple places)
  • Materialized views = instant reads, background computation cost
  • Normalization = simple writes, complex reads (joins)

Most applications are read-heavy (10:1 to 100:1 read-to-write ratio). Optimize for reads by default.

Strong vs Eventual Consistency

Stronger consistency costs more — more coordination, more latency, lower throughput.

Strong consistency:     Distributed lock + synchronous replication
                        Latency: 50-200ms per write
                        Safe for: financial transactions

Causal consistency:     Version vectors + async replication
                        Latency: 5-50ms per write
                        Safe for: social feeds, collaborative editing

Eventual consistency:   Async replication, no coordination
                        Latency: <5ms per write
                        Safe for: caches, analytics, non-critical reads

Build vs Buy

The most expensive trade-off to get wrong.

Build when:

  • It’s your core competency
  • Off-the-shelf doesn’t fit your constraints
  • You need deep customization
  • The maintenance cost is worth the control

Buy (or use managed service) when:

  • It’s commodity infrastructure (databases, queues, caches)
  • The team’s time is better spent on product features
  • The managed service has better reliability than you can achieve
  • The cost is less than engineering time to build and maintain

I’ve seen teams spend months building custom message queues, task schedulers, and deployment pipelines that were worse than what’s available for free. Build your differentiator. Buy your infrastructure.

Monolith vs Microservices

Not a binary choice. It’s a spectrum:

Monolith → Modular monolith → Service-oriented → Microservices

Move right when you feel the pain of the current position:

  • Can’t deploy independently → split
  • Teams stepping on each other → split
  • Different scaling requirements → split

Don’t move right preemptively. The coordination cost of microservices is real and ongoing.

How to Make Trade-Off Decisions

  1. Identify the trade-off explicitly. “We’re choosing X at the expense of Y.”
  2. Quantify the cost. How much latency? How much complexity? How much engineering time?
  3. Consider the blast radius. What happens if this decision is wrong?
  4. Make it reversible if possible. Can we change this later without a rewrite?
  5. Document the decision. Write an ADR with the context and reasoning.
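For step 5, a lightweight ADR covering all five points might look like this — every detail below (the numbering, the latencies, the caching decision) is a made-up example, not a prescription:

```markdown
# ADR-012: Cache product catalog reads

## Status
Accepted

## Context
Catalog reads outnumber writes roughly 40:1; p99 read latency is 300ms
under load. Prices may be up to 30s stale without business impact.

## Decision
Serve catalog reads from a cache with a 30s TTL — availability over
consistency for this operation.

## Consequences
- Reads drop to ~5ms; database read load falls sharply.
- Clients may see a price up to 30s old; checkout re-validates the price.
- Reversible: the cache layer can be removed without schema changes.
```

The value is less the template than the habit: the trade-off, its cost, and its reversibility are written down where the next engineer will find them.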

The best engineers I’ve worked with don’t have better answers. They ask better questions about trade-offs. They make the implicit explicit, quantify the costs, and communicate the reasoning clearly.

Every system is a collection of trade-offs. Understanding them is the difference between engineering and guessing.