PostgreSQL advisory locks: coordination without touching rows
When and how to use session and transaction advisory locks in Postgres for jobs, migrations, and cross-request serialization, with pooling caveats and a Node.js example.
You need exactly one worker to process a nightly reconciliation job across several app instances, but there is no natural row to SELECT … FOR UPDATE. Alternatively, two HTTP handlers must never run the same expensive migration step in parallel, and you want the guard in the database—not only in Redis—so every client sees the same rule. Row-level locks are the default tool, yet they require a row to exist and can interact awkwardly with connection pools and long-lived transactions.
PostgreSQL advisory locks are application-defined locks: you choose numeric keys; Postgres enforces mutual exclusion for those keys. They are powerful for logical coordination when correctness depends on serialization but the data model does not map cleanly to a single table row. In client work and internal platforms, they often appear in migration runners, singleton cron semantics, and serializing writes around keys that are not primary keys in a table. This article explains session versus transaction locks, how they behave under pooling, trade-offs versus row locks and external lock services, and a practical pattern you can drop into a Node service.
Session locks versus transaction locks
Postgres exposes two families: session-level and transaction-level advisory locks.
Session-level locks (pg_advisory_lock, pg_try_advisory_lock, …)
A session lock is held until explicitly released or until the session ends. It is not tied to the SQL transaction boundary: COMMIT does not release it. That makes session locks dangerous in typical stateless HTTP + pool architectures unless you acquire and release in the same borrowed connection and guarantee no mid-request connection swap—usually by running the whole critical section inside one explicit client transaction on one connection.
Session locks are appropriate when the critical section spans multiple statements or transactions by design (for example, an interactive admin tool holding a maintenance lock), or when you wrap the entire unit of work in a dedicated connection you control.
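As a sketch of that pattern, here is a session-level lock held across several transactions on one owned connection (the key values are arbitrary examples):

```sql
-- On a dedicated connection you control, not a pooled one:
SELECT pg_advisory_lock(42, 1001);    -- blocks until acquired; COMMIT will not release it
BEGIN;
-- maintenance step 1
COMMIT;
BEGIN;
-- maintenance step 2
COMMIT;
SELECT pg_advisory_unlock(42, 1001);  -- must run on the same connection that acquired it
-- If the connection drops before the unlock, Postgres releases the lock automatically.
```

Note that pg_advisory_unlock returns false (with a warning) if the session does not actually hold the lock, which is worth checking in scripts.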
Transaction-level locks (pg_advisory_xact_lock, …)
A transaction lock is held from acquisition until transaction end (COMMIT or ROLLBACK). It fits most web handlers: BEGIN, take the lock, do work, COMMIT. If the process dies, ROLLBACK on disconnect releases the lock. This is usually the default choice for API backends.
Both families come in blocking and non-blocking (pg_try_*) variants, and in single bigint or two integer key forms. Two-int keys are convenient when you namespace by “domain” and “resource id” without bit-packing.
Why advisory locks instead of row locks?
Row locks (FOR UPDATE, FOR SHARE) require a row. You can invent a locks table and insert one row per logical resource, then lock that row—sometimes called a lock striping table. That works well when you already maintain lock rows or can justify the extra table and cleanup.
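For comparison, a minimal striping-table sketch (the table and row names are hypothetical):

```sql
-- One sentinel row per logical resource; created once, reused for locking.
CREATE TABLE IF NOT EXISTS app_locks (name text PRIMARY KEY);
INSERT INTO app_locks (name) VALUES ('nightly-reconciliation')
ON CONFLICT (name) DO NOTHING;

BEGIN;
-- Blocks until no other transaction holds this row lock; released at COMMIT/ROLLBACK.
SELECT 1 FROM app_locks WHERE name = 'nightly-reconciliation' FOR UPDATE;
-- critical section
COMMIT;
```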
Advisory locks avoid storage churn: no insert/delete of sentinel rows, no unique constraint races on “create lock row first.” They are also useful when the resource identifier is not stored as a row you want to couple to the lock (for example, locking a logical “tenant export job” id that spans many tables).
Trade-offs:
- Limited visibility: advisory locks do appear in pg_locks, but only as raw numeric keys with no descriptive metadata beyond what you encode in the key; operational playbooks must document key ranges.
- Key collisions are your responsibility: two subsystems reusing the same bigint key will block each other incorrectly.
- Connection affinity: session locks require the same physical connection for acquire and release; transaction locks require the same connection for the whole transaction—both conflict with naive “one query per request from the pool” patterns.
Key encoding and namespaces
Advisory lock APIs take either one bigint or two int4 values. If you derive keys from strings (for example, a slug), use a stable hash into 63 bits (keep the high bit clear so the value stays a non-negative signed bigint) or map to two integers with a documented scheme.
Reserve ranges so payroll never shares keys with inventory: e.g. domain id 42 in the high bits and a hashed resource id in the low bits, or always use the two-argument form (domain_id, resource_hash).
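A sketch of one such scheme in TypeScript, assuming a 15/48-bit split and FNV-1a as the stable hash (both are illustrative choices, not a standard):

```typescript
// Sketch: derive a stable 63-bit advisory key from a domain id and a string.
// FNV-1a is used here for illustration; any stable hash works, as long as
// every client that coordinates on these keys uses the same one.

const RESOURCE_BITS = 48n; // low 48 bits hold the hashed resource id

function fnv1a48(input: string): bigint {
  let hash = 0xcbf29ce484222325n; // FNV-1a 64-bit offset basis
  for (const ch of input) {
    hash ^= BigInt(ch.codePointAt(0)!);
    hash = (hash * 0x100000001b3n) & 0xffffffffffffffffn; // FNV prime, wrap to 64 bits
  }
  return hash & ((1n << RESOURCE_BITS) - 1n); // keep the low 48 bits
}

// Pack: domain id in bits 48..62, hashed resource in bits 0..47.
// Bit 63 stays clear so the result fits a non-negative signed bigint.
export function advisoryKey(domainId: number, resource: string): bigint {
  if (domainId < 0 || domainId > 0x7fff) {
    throw new RangeError("domainId must fit in 15 bits");
  }
  return (BigInt(domainId) << RESOURCE_BITS) | fnv1a48(resource);
}
```

Pass the result as a string parameter to the one-argument lock functions, e.g. `client.query("SELECT pg_try_advisory_xact_lock($1)", [advisoryKey(42, "tenant-7:export").toString()])`, since JavaScript numbers cannot represent all 63-bit values exactly.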
Interaction with connection pooling
PgBouncer transaction pooling returns the server session to the pool after each transaction. A session advisory lock acquired on that session would leak to the next unrelated client using the same backend—catastrophic. Rule: do not use session advisory locks through transaction-pooled PgBouncer unless you use a dedicated session-mode pool or a direct connection for that code path.
Transaction advisory locks are safe with transaction pooling if the entire BEGIN … pg_advisory_xact_lock … COMMIT runs on one checkout from the pool without interleaving other queries on other connections for the same logical operation.
Application-side pools (as in most Node client libraries) keep one connection pinned per in-flight transaction if you run explicit BEGIN/COMMIT on a single checked-out client—verify your driver’s behavior. If your ORM opens implicit transactions per statement, you may need an explicit transaction wrapper.
Compared to Redis or ZooKeeper locks
Distributed lock services handle cross-database and multi-region exclusion. Advisory locks are single-Postgres-cluster (one primary; replicas do not participate in the same lock memory). For single-database workflows, advisory locks reduce moving parts: no extra network hop, same failure domain as your data, and ACID transaction coupling with writes.
If you need exclusion across services that do not all talk to Postgres, or across regions with independent databases, use a dedicated coordinator or lease-based locks with fencing tokens—not advisory locks alone.
Practical example: serializing a maintenance task
The following pattern uses a transaction-level non-blocking lock so a scheduled job skips cleanly if another instance already runs the task. It uses pg_try_advisory_xact_lock so you do not block a worker thread forever.
BEGIN;
SELECT pg_try_advisory_xact_lock(8739201, 1) AS acquired;
-- If acquired is false, ROLLBACK and exit; another session holds the lock.
-- Run the critical section: DDL, batch update, etc.
COMMIT;
In Node.js with pg, wrap the critical section in one explicit transaction on one client:
import pg from "pg";
const ADVISORY_DOMAIN = 8739201; // reserved for "nightly-reconciliation" subsystem
const ADVISORY_RESOURCE = 1;
export async function runReconciliationIfLeader(pool: pg.Pool): Promise<boolean> {
  const client = await pool.connect();
  try {
    await client.query("BEGIN");
    const { rows } = await client.query<{ acquired: boolean }>(
      "SELECT pg_try_advisory_xact_lock($1, $2) AS acquired",
      [ADVISORY_DOMAIN, ADVISORY_RESOURCE]
    );
    if (!rows[0]?.acquired) {
      await client.query("ROLLBACK");
      return false;
    }
    // await reconcileOrders(client); // your workload using the same client/tx
    await client.query("COMMIT");
    return true;
  } catch (e) {
    try {
      await client.query("ROLLBACK");
    } catch {
      /* ignore */
    }
    throw e;
  } finally {
    client.release();
  }
}
Using the same client for the advisory lock and subsequent queries keeps the lock inside one transaction. Mixing pool.query for the lock and a different connection for updates would break correctness.
For blocking acquisition with a timeout, Postgres does not offer a built-in “try for N ms”; common approaches are SET LOCAL lock_timeout = '2s' in the transaction before pg_advisory_xact_lock (which cancels the waiting statement with an error you must catch), or application-level timeouts on the query.
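As a sketch of the lock_timeout approach, reusing the keys from the SQL example above:

```sql
BEGIN;
SET LOCAL lock_timeout = '2s';  -- scoped to this transaction only
-- If the lock is not granted within 2s, this statement fails with
-- SQLSTATE 55P03 (lock_not_available) and the transaction must be rolled back.
SELECT pg_advisory_xact_lock(8739201, 1);
-- critical section
COMMIT;
```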
Common mistakes and pitfalls
- Session locks + PgBouncer transaction mode: as above, a recipe for cross-tenant blocking or deadlocks visible only under load.
- Lock ordering deadlocks: if different code paths acquire (a, b) and (b, a) advisory locks alongside row locks, you can deadlock like any other lock graph. Document a global lock ordering.
- 64-bit key collisions: hashing strings to bigint without a namespace bit invites collisions between features; use the two-int form or reserved high bits.
- Long transactions: holding pg_advisory_xact_lock while doing slow network I/O ties up a database connection and blocks other waiters. Keep critical sections short; offload slow work to after commit if exclusion is no longer needed.
- Assuming replicas participate: standbys do not share advisory lock state with the primary; coordination happens only on the server you connect to.
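One mitigation for the lock-ordering pitfall is to always acquire multiple keys in a single canonical order. A sketch, where `client` stands for any pg-compatible client already inside an open transaction (the acquisition helper is illustrative, not a library API):

```typescript
// Sketch: avoid lock-ordering deadlocks by acquiring advisory keys in one
// canonical (ascending) order, regardless of the order callers pass them in.

export function sortAdvisoryKeys(
  keys: Array<[number, number]>
): Array<[number, number]> {
  // Canonical global order: by domain id, then by resource id.
  // Copy first so the caller's array is left untouched.
  return [...keys].sort((a, b) => a[0] - b[0] || a[1] - b[1]);
}

// Hypothetical acquisition loop; `client` is assumed to be inside BEGIN..COMMIT.
export async function acquireAllXactLocks(
  client: { query: (text: string, values: unknown[]) => Promise<unknown> },
  keys: Array<[number, number]>
): Promise<void> {
  for (const [domain, resource] of sortAdvisoryKeys(keys)) {
    // Blocking variant: every code path waits on keys in the same order,
    // so two paths can never hold each other's next key.
    await client.query("SELECT pg_advisory_xact_lock($1, $2)", [domain, resource]);
  }
}
```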
Conclusion
Advisory locks let you express application-level mutual exclusion inside Postgres without sentinel rows, which keeps migration runners, singleton jobs, and some cross-table workflows simple and transactionally aligned with your data.
Key takeaways:
- Prefer transaction-level (pg_advisory_xact_*) locks in typical pooled HTTP services; treat session-level locks as a specialist tool for owned connections only.
- Never rely on advisory locks for cross-cluster exclusion; pair in-DB locks with clear key namespaces and short critical sections.
- Operational success depends on connection discipline—one transaction, one client, same wire for lock and writes.
When you are designing backends that must stay correct under retries, parallelism, and partial failures, small choices around where exclusion lives—rows, advisory keys, or an external lock—have long operational shadows. Getting that boundary right is part of building scalable, production-ready data layers teams can run without heroics. For architecture questions or collaboration, the contact page is the right place to reach out.