grep "#performance" posts/*
9 results
Rate Limiting in Go for High-Traffic APIs
Implementing rate limiting in Go — token bucket, sliding window, distributed rate limiting with Redis, and per-user vs. global strategies.
Memory Optimization Techniques for High-Traffic Go Services
Practical memory optimization in Go — sync.Pool, arena patterns, slice pre-allocation, string interning, and reducing GC pressure.
Reducing Latency in Go APIs: Lessons from Production
Techniques that cut our API P99 latency from 800ms to under 100ms — connection pooling, query optimization, caching layers, and async processing.
Practical Go Performance Tuning in Real Production Systems
A field guide to Go performance tuning — GC tuning, allocation reduction, efficient I/O, and the benchmarks that actually matter.
Understanding Go Scheduler Behavior in High-Concurrency Systems
How the Go scheduler works under heavy load — GOMAXPROCS, goroutine scheduling, preemption, and what happens when you spawn a million goroutines.
Optimizing Go Services Handling Millions of Requests
Concrete optimizations for Go HTTP services at scale — connection reuse, zero-alloc patterns, efficient serialization, and database query tuning.
Profiling Go Services in Production: CPU, Memory, and Goroutine Leaks
Practical guide to profiling Go services using pprof, trace, and runtime metrics — finding CPU hotspots, memory leaks, and goroutine leaks in production.
Designing a High-Throughput Event Processing Pipeline in Go
Architecture and implementation of a Go event pipeline handling millions of events — fan-out, batching, buffering, and backpressure.
How I Design Go Microservices for High Throughput Systems
Architectural patterns for building Go microservices that handle thousands of requests per second — connection pooling, batching, and backpressure.