PostgreSQL LISTEN/NOTIFY: real-time invalidation, delivery semantics, and when to stop

Use LISTEN/NOTIFY for lightweight pub/sub from Postgres to app servers: payload limits, no persistence, connection pooling pitfalls, and patterns that stay correct under load.

Author: Matheus Palma · 7 min read
Tags: PostgreSQL, Backend, Software engineering, Node.js, Architecture, Caching

You ship a dashboard backed by a read-heavy API and a short-lived cache in each app instance. Writes land in PostgreSQL; within a second, some users still see stale totals. Polling the database on every read is honest but expensive; wiring Redis pub/sub adds another moving part you must operate. LISTEN and NOTIFY sit in the middle: Postgres itself becomes a tiny event bus—no extra broker, sub-millisecond fan-out on a single host, and a mental model many teams already trust because the source of truth is the same database that accepted the write.

The catch is that LISTEN/NOTIFY is not a message queue. Notifications can be dropped, payloads are tiny, and every pattern you choose must survive connection churn, poolers, and bursty traffic. In client projects where the goal is production-ready backends without sprawling infrastructure, I reach for NOTIFY when the requirement is best-effort freshness with a clear fallback—not when the business literally depends on every event being processed exactly once.

This article explains how LISTEN/NOTIFY behaves at the protocol level, how to integrate it safely with Node.js and common pools, where it shines for cache invalidation and live UI hints, and which problems deserve a dedicated queue or change-data-capture pipeline instead.

What LISTEN/NOTIFY actually guarantees

NOTIFY channel, 'payload' asks the server to deliver a message to every session that has executed LISTEN channel in the same database (notifications do not cross databases, even within one cluster). Important properties:

  • No durability. If no session is listening when NOTIFY runs, the message is not stored for later. There is no backlog on the channel.
  • Small payloads. With the default 8 kB block size, a NOTIFY payload must be shorter than 8000 bytes; longer payloads are rejected with an error. Large events belong elsewhere—store a row id or version and let consumers fetch details.
  • Bounded buffering for connected listeners. Committed notifications wait in a server-side queue (about 8 GB in a standard installation) until every listening session has read them. If a listener stops consuming, the queue fills, pg_notification_queue_usage() climbs toward 1, and further transactions calling NOTIFY begin to fail at commit. The database will not buffer an unbounded backlog of notifications.
  • No global ordering. A listening session receives notifications in transaction-commit order, and in send order within a transaction; there is no ordering guarantee across channels or producers. If you need global ordering of domain events, you still need a log (outbox table, WAL, Kafka, etc.).

Understanding those limits upfront prevents the classic failure mode: treating NOTIFY like “INSERT into an events table that also magically pushes,” then silently losing consistency when the listener restarts during a deploy.

Integration patterns that survive production

1. NOTIFY as a hint, not the full event

The robust pattern is: commit the write, then NOTIFY with a compact key—tenant id, cache segment, aggregate id, or a monotonic version you also store in a row the listener can read.

-- Single transaction: persist the write, then emit one compact hint
WITH u AS (
  UPDATE tenant_dashboard_stats
  SET revenue_cents = revenue_cents + $1,
      stats_version = stats_version + 1
  WHERE tenant_id = $2
  RETURNING tenant_id, stats_version
)
SELECT pg_notify(
  'tenant_stats_changed',
  json_build_object('tenant_id', tenant_id::text, 'v', stats_version)::text
)
FROM u;

Listeners invalidate in-process caches or bump a local version so the next read misses the cache and refetches. If a notification is missed, the next user-driven refresh, a periodic reconciliation job, or a version check on read still converges. You traded strong real-time for eventual consistency with a fast path—which is usually the right trade for dashboards and derived aggregates.

2. Triggers versus application code

You can emit NOTIFY from a trigger after row changes so all writers automatically signal consumers. That centralizes signaling but couples notification volume to write volume: a bulk import becomes a thundering herd on every listener. Application-level NOTIFY after intentional writes lets you debounce (e.g. one notify per transaction batch) and attach richer routing metadata. Triggers are fine for low-volume tables; for hot paths, prefer explicit calls or a single trigger that batches via transition tables (PostgreSQL 10+) with care.
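The debouncing idea can be sketched application-side with a hypothetical coalescer (NotifyCoalescer and the channel wiring are illustrative names): writers record which tenants a batch touched, and after COMMIT the app emits at most one hint per tenant instead of one per row.

```typescript
// Hypothetical per-batch coalescer: a bulk job touching one tenant 10,000
// times produces a single pg_notify call for that tenant.
class NotifyCoalescer {
  private pending = new Map<string, number>(); // tenantId -> highest version seen

  record(tenantId: string, version: number): void {
    this.pending.set(tenantId, Math.max(this.pending.get(tenantId) ?? 0, version));
  }

  // Returns the deduplicated hints and clears the buffer; the caller runs
  // SELECT pg_notify('tenant_stats_changed', ...) once per entry after COMMIT.
  flush(): Array<{ tenantId: string; version: number }> {
    const out = [...this.pending.entries()].map(([tenantId, version]) => ({ tenantId, version }));
    this.pending.clear();
    return out;
  }
}
```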

3. One dedicated subscriber connection per process

LISTEN/NOTIFY is bound to a single backend session. Generic connection pools rotate connections across requests; a LISTEN on a borrowed socket is wrong because the next checkout might be a different session—or the same session might be used for unrelated queries, interleaving protocol traffic awkwardly.

Production pattern:

  • Reserve one long-lived client (or a small fixed set) whose only job is to LISTEN and push work into an in-memory queue consumed by the rest of the app.
  • Run ordinary queries through the normal pool.

In Node.js with pg, that is typically a Client for subscriptions and a Pool for request-scoped work.

4. Poolers: PgBouncer and transaction pooling

Transaction pooling mode in PgBouncer breaks session affinity: after each COMMIT, you may land on another server session, so LISTEN is effectively unusable through that pool. Use session pooling for the subscriber connection, or connect the listener directly to Postgres, bypassing PgBouncer for that one role. Document this in runbooks; it is a frequent source of “works on my laptop, silent in staging” bugs.
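One way to express that split is PgBouncer's per-database pool_mode override, so the subscriber role connects through a session-pooled entry while everything else stays in transaction mode. A minimal sketch (hostnames, database names, and the appdb_listener alias are placeholders):

```ini
; Illustrative pgbouncer.ini fragment, not a complete config.
[databases]
appdb          = host=10.0.0.5 dbname=app pool_mode=transaction
appdb_listener = host=10.0.0.5 dbname=app pool_mode=session

[pgbouncer]
listen_port = 6432
```

The listener process then connects to appdb_listener; ordinary request-scoped pools keep using appdb.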

Practical example: Node.js listener with graceful lifecycle

The following sketch shows a dedicated subscriber client, safe payload parsing, and forwarding into an application-level invalidation bus. It is intentionally minimal—real code adds metrics, structured logging, and reconnection backoff with jitter.

import { Client } from "pg";

export type TenantStatsHint = { tenantId: string; version: number };

export function createTenantStatsSubscriber(connectionString: string, emit: (hint: TenantStatsHint) => void) {
  const client = new Client({ connectionString });

  // Without an "error" listener, a dropped socket becomes an uncaught
  // exception; real code reconnects here with backoff and jitter.
  client.on("error", (err) => {
    console.error("tenant stats subscriber connection error", err);
  });

  client.on("notification", (msg) => {
    if (msg.channel !== "tenant_stats_changed") return;
    if (!msg.payload) return;
    try {
      const parsed = JSON.parse(msg.payload) as { tenant_id?: string; v?: number };
      if (!parsed.tenant_id || typeof parsed.v !== "number") return;
      emit({ tenantId: parsed.tenant_id, version: parsed.v });
    } catch {
      // Malformed payload: log in real services; never throw into pg client
    }
  });

  return {
    async start() {
      await client.connect();
      await client.query("LISTEN tenant_stats_changed");
    },
    async stop() {
      // UNLISTEN can fail if the connection already dropped; still close.
      try {
        await client.query("UNLISTEN tenant_stats_changed");
      } finally {
        await client.end();
      }
    },
  };
}

Invalidation handlers should be cheap: mark keys dirty, bump a generation counter, or enqueue work to a worker that already uses SKIP LOCKED or a real queue. Heavy work still belongs off the hot path.
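As a sketch of that discipline, the handler below only adds tenants to a dirty set; a separate drain step, run by a worker loop or interval, picks up each tenant once no matter how many notifications arrived in between (DirtySet is an illustrative name, not a library API).

```typescript
// Cheap notification handling: mark() is O(1) and safe to call from the
// subscriber's notification callback; drain() coalesces bursts so the
// expensive refresh work happens once per tenant, off the hot path.
class DirtySet {
  private dirty = new Set<string>();

  mark(tenantId: string): void {
    this.dirty.add(tenantId);
  }

  // Called by a worker; returns each tenant at most once per drain.
  drain(): string[] {
    const batch = [...this.dirty];
    this.dirty.clear();
    return batch;
  }
}
```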

Trade-offs versus alternatives

  • LISTEN/NOTIFY. Strengths: no new infra; low latency; simple for hints. Weaknesses: no persistence; small payloads; one session per listener; pooler traps.
  • Polling (SELECT max(updated_at)). Strengths: easy to reason about; works through pools. Weaknesses: latency/cost trade-off; not great for large fan-out.
  • Logical replication / CDC. Strengths: durable ordered streams; great for projections. Weaknesses: operational complexity; more moving parts.
  • Dedicated broker (Redis, NATS, Kafka). Strengths: backlog, fan-out, consumer groups. Weaknesses: another system to secure, monitor, and upgrade.

For systems I help teams harden for scale, NOTIFY is the wedge: it removes obvious staleness without new services. When product requirements grow into audit trails, replay, or multi-region consumers, we graduate the same domain events into an outbox table or CDC and keep NOTIFY only as an optional optimization—or drop it entirely.

Common mistakes and pitfalls

  1. Calling LISTEN on pooled request connections. Subscriptions must stay on stable sessions; otherwise you subscribe on the wrong connection or lose the subscription silently once the connection returns to the pool.
  2. Assuming at-least-once delivery. There is no redelivery. Design caches and UIs so a missed notification only delays freshness, not correctness.
  3. Large payloads in NOTIFY. You will hit the size cap (just under 8 kB) and runtime errors; keep payloads under a kilobyte in practice.
  4. Emitting NOTIFY from hot triggers on bulk operations. A million-row migration can DoS your own app servers with notifications; gate bulk jobs or disable triggers during controlled migrations.
  5. Ignoring authentication and exposure. Any role that can NOTIFY on a channel can reach listeners for that channel; channel names are not a security boundary. Treat payloads as untrusted strings and authorize when acting on them.
  6. PgBouncer transaction mode. The listener appears connected but never receives events—or receives them on the wrong session lifecycle. Always verify pool mode for subscriber roles.
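To catch pitfall 3 before it reaches the server, a small guard can measure the UTF-8 byte length of the payload before sending. This is a sketch: the 7500-byte margin and the function name are illustrative choices, not pg behavior; the server itself rejects payloads of 8000 bytes or more with the default block size.

```typescript
// Guard NOTIFY payload size client-side. Byte length matters, not string
// length: multi-byte UTF-8 characters count more than one byte each.
const MAX_NOTIFY_PAYLOAD_BYTES = 7500; // arbitrary safety margin under 8000

function buildNotifyPayload(obj: unknown): string {
  const payload = JSON.stringify(obj);
  const bytes = Buffer.byteLength(payload, "utf8");
  if (bytes > MAX_NOTIFY_PAYLOAD_BYTES) {
    // Better to fail loudly here (or fall back to sending just an id the
    // listener resolves with a SELECT) than to error inside the transaction.
    throw new Error(`notify payload too large: ${bytes} bytes`);
  }
  return payload;
}
```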

Conclusion

PostgreSQL LISTEN/NOTIFY is a sharp tool: low ceremony, low latency, and tightly aligned with the database you already run. It fits hint-based invalidation and lightweight internal signaling when you accept best-effort delivery and keep payloads small. It does not replace a log, a queue, or CDC when you need durability, ordering across aggregates, or cross-region fan-out.

The operational recipe that has held up across consulting engagements is simple: notify with ids and versions, listen on dedicated connections outside transaction pooling, and design reads so correctness does not depend on hearing every message. That combination gives you fresher UIs and saner caches without pretending Postgres is something it is not—and leaves a clean upgrade path when traffic or compliance eventually demands a real event backbone.
