
Request Context Propagation in Node.js Microservices

When a single user action hits your API gateway and fans out across five microservices, how do you trace what happened? How do you correlate logs from the auth service with logs from the payment service for the same request?

This is the problem of request context propagation — and getting it right is one of the most impactful things you can do for your observability story.

The Problem

In a monolith, tracing is trivial. One process, one request, one log stream. But in microservices:

  • A single API call triggers downstream HTTP/gRPC calls
  • Each service has its own logs
  • Debugging a failed request means grepping across multiple log stores
  • Without correlation, you’re blind

Correlation IDs: The Foundation

The simplest approach: generate a unique ID at the edge and pass it through every service.

import { randomUUID } from "node:crypto";
import { Request, Response, NextFunction } from "express";

// Augment Express's Request type so req.correlationId type-checks
declare global {
  namespace Express {
    interface Request {
      correlationId?: string;
    }
  }
}

function correlationMiddleware(req: Request, res: Response, next: NextFunction) {
  // Reuse the caller's ID if one arrived; otherwise mint one at the edge
  const correlationId = (req.headers["x-correlation-id"] as string) || randomUUID();
  req.correlationId = correlationId;
  res.setHeader("x-correlation-id", correlationId);
  next();
}

Simple. But now every function that needs the correlation ID must receive it as a parameter. This gets ugly fast — you end up threading context through 10 layers of function calls.
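To make the pain concrete, here's a toy sketch of a hypothetical order flow. Only the innermost function actually uses the ID, but every layer above it has to accept and forward it:

```typescript
// Every layer takes correlationId only to hand it to the next layer down
function handleRequest(correlationId: string, orderId: string): string {
  return processOrder(correlationId, orderId);
}

function processOrder(correlationId: string, orderId: string): string {
  return chargeCustomer(correlationId, orderId);
}

function chargeCustomer(correlationId: string, orderId: string): string {
  // The only place the ID is actually needed — for logging
  return `[${correlationId}] charged for order ${orderId}`;
}
```

Multiply that by every entry point and every layer, and the signatures drown in plumbing.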

AsyncLocalStorage: The Game Changer

Node.js ships with AsyncLocalStorage (available since v12.17, stable since v16) — a way to store context that follows the async execution chain without passing it explicitly.

import { AsyncLocalStorage } from "node:async_hooks";
import { randomUUID } from "node:crypto";
import { Request, Response, NextFunction } from "express";

export interface RequestContext {
  correlationId: string;
  userId?: string;
  traceId: string;
  spanId: string;
}

export const requestContext = new AsyncLocalStorage<RequestContext>();

// Middleware: create context for each request
function contextMiddleware(req: Request, res: Response, next: NextFunction) {
  const context: RequestContext = {
    correlationId: (req.headers["x-correlation-id"] as string) || randomUUID(),
    traceId: (req.headers["x-trace-id"] as string) || randomUUID(),
    spanId: randomUUID(),
    userId: req.user?.id, // assumes an auth middleware has attached req.user
  };

  requestContext.run(context, () => next());
}

Now any function anywhere in the call chain can access the context:

import { requestContext } from "./context";

function getContext(): RequestContext {
  const ctx = requestContext.getStore();
  if (!ctx) throw new Error("No request context available");
  return ctx;
}

// Use it in any service layer — no parameter drilling needed
async function processOrder(order: Order) {
  const { correlationId, userId } = getContext();
  logger.info("Processing order", { correlationId, userId, orderId: order.id });
  // ...
}

Propagating Across Service Boundaries

Context within a single service is solved. But what about service-to-service calls? You need to inject context into outgoing requests.

import axios, { InternalAxiosRequestConfig } from "axios";

const serviceClient = axios.create();

serviceClient.interceptors.request.use((config: InternalAxiosRequestConfig) => {
  const ctx = requestContext.getStore();
  if (ctx) {
    config.headers["x-correlation-id"] = ctx.correlationId;
    config.headers["x-trace-id"] = ctx.traceId;
    config.headers["x-parent-span-id"] = ctx.spanId;
    if (ctx.userId) {
      config.headers["x-user-id"] = ctx.userId;
    }
  }
  return config;
});

On the receiving service, the middleware picks up these headers and reconstructs the context. The chain is unbroken.
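That receiving side isn't shown above, so here's a minimal sketch of it. It's typed structurally so it stands alone; in a real app it would reuse the requestContext module from earlier, and the header names mirror the outgoing interceptor:

```typescript
import { AsyncLocalStorage } from "node:async_hooks";
import { randomUUID } from "node:crypto";

interface RequestContext {
  correlationId: string;
  traceId: string;
  spanId: string;
  parentSpanId?: string;
  userId?: string;
}

export const requestContext = new AsyncLocalStorage<RequestContext>();

// Node header values can be string | string[] | undefined; normalize to one string
function header(
  headers: Record<string, string | string[] | undefined>,
  name: string,
): string | undefined {
  const value = headers[name];
  return Array.isArray(value) ? value[0] : value;
}

// Rebuild the context from the propagation headers before the pipeline runs
export function receiveContextMiddleware(
  req: { headers: Record<string, string | string[] | undefined> },
  _res: unknown,
  next: () => void,
) {
  const context: RequestContext = {
    correlationId: header(req.headers, "x-correlation-id") ?? randomUUID(),
    traceId: header(req.headers, "x-trace-id") ?? randomUUID(),
    parentSpanId: header(req.headers, "x-parent-span-id"),
    spanId: randomUUID(), // new span for this service's own work
    userId: header(req.headers, "x-user-id"),
  };
  requestContext.run(context, next);
}
```

Note the asymmetry: correlation ID and trace ID are inherited, but each service mints a fresh span ID and records the caller's as the parent.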

Structured Logging with Context

The real payoff: every log line automatically includes the correlation ID.

import pino from "pino";
import { requestContext } from "./context";

const baseLogger = pino();

// Only pino's level methods take a mergeable object as their first argument;
// everything else (child, flush, ...) should pass through untouched
const LEVEL_METHODS = new Set(["trace", "debug", "info", "warn", "error", "fatal"]);

export const logger = new Proxy(baseLogger, {
  get(target, prop) {
    const method = target[prop as keyof typeof target];
    if (typeof method !== "function") return method;
    if (!LEVEL_METHODS.has(String(prop))) return (method as Function).bind(target);

    return (...args: unknown[]) => {
      const ctx = requestContext.getStore();
      const contextFields = ctx
        ? { correlationId: ctx.correlationId, traceId: ctx.traceId, spanId: ctx.spanId }
        : {};

      // pino accepts (obj, msg) or (msg); merge context into the object form
      if (typeof args[0] === "object" && args[0] !== null) {
        args[0] = { ...contextFields, ...(args[0] as object) };
      } else {
        args.unshift(contextFields);
      }

      return (method as Function).apply(target, args);
    };
  },
});

Now logger.info("Payment processed") automatically emits:

{
  "level": "info",
  "msg": "Payment processed",
  "correlationId": "abc-123",
  "traceId": "def-456",
  "spanId": "ghi-789"
}

Moving to OpenTelemetry

Once you outgrow custom correlation IDs, OpenTelemetry is the standard. It handles context propagation, tracing, and metrics with vendor-neutral instrumentation.

import { NodeSDK } from "@opentelemetry/sdk-node";
import { getNodeAutoInstrumentations } from "@opentelemetry/auto-instrumentations-node";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";

const sdk = new NodeSDK({
  traceExporter: new OTLPTraceExporter({
    url: "http://otel-collector:4318/v1/traces",
  }),
  instrumentations: [getNodeAutoInstrumentations()],
});

sdk.start();

The auto-instrumentation hooks into HTTP, gRPC, database drivers, and message queues — automatically propagating W3C Trace Context headers across service boundaries.

Pitfalls I’ve Hit

1. Context loss in worker threads. AsyncLocalStorage doesn’t propagate across worker_threads. If you offload work to a thread pool, you need to manually pass context.
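One workaround sketch, assuming a hypothetical worker.ts entry point: snapshot the store with getStore(), ship it through workerData, and re-enter it with run() on the other side.

```typescript
import { AsyncLocalStorage } from "node:async_hooks";
import { Worker } from "node:worker_threads";

interface Ctx {
  correlationId: string;
}

const requestContext = new AsyncLocalStorage<Ctx>();

// Main thread: snapshot the current context and hand it to the worker explicitly,
// since AsyncLocalStorage does not cross the thread boundary on its own
function runInWorker(taskFile: string, payload: unknown): Promise<unknown> {
  const ctx = requestContext.getStore();
  return new Promise((resolve, reject) => {
    const worker = new Worker(taskFile, { workerData: { ctx, payload } });
    worker.once("message", resolve);
    worker.once("error", reject);
  });
}

// Inside the hypothetical worker.ts, re-enter the context before doing work:
//
//   import { workerData, parentPort } from "node:worker_threads";
//   requestContext.run(workerData.ctx, () => {
//     parentPort!.postMessage(doWork(workerData.payload));
//   });
```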

2. Context loss in message queues. When you publish to RabbitMQ/Kafka, the context doesn’t magically appear on the consumer side. Serialize it into message headers.

import type { Channel } from "amqplib";

function publishEvent(channel: Channel, exchange: string, routingKey: string, payload: unknown) {
  const ctx = requestContext.getStore();
  const headers = ctx
    ? { "x-correlation-id": ctx.correlationId, "x-trace-id": ctx.traceId }
    : {};

  // amqplib's publish is synchronous; it returns false when the write buffer is full
  return channel.publish(exchange, routingKey, Buffer.from(JSON.stringify(payload)), { headers });
}

3. Performance overhead. AsyncLocalStorage has low overhead in modern Node.js, and it keeps improving across releases. Earlier approaches built directly on async_hooks had measurable cost. Make sure you're on a recent version before worrying about it.

Key Takeaways

  • Start with correlation IDs — they’re simple and immediately useful
  • Use AsyncLocalStorage to avoid parameter drilling
  • Inject context into all outgoing HTTP calls, gRPC calls, and message queue publishes
  • Structured logging with automatic context injection is the biggest quality-of-life win
  • Graduate to OpenTelemetry when you need distributed tracing with visualization

Request context propagation isn’t glamorous, but it’s the foundation of debuggable microservices. When something breaks at 2 AM, you’ll be glad every log line tells you exactly which request it belongs to.