In my last blog post, Evolution of Logical Replication, I mentioned the future development of a feature to allow "upgrades of logical replication nodes." The upcoming release of PostgreSQL 17 includes this feature. Previously, after major version upgrades (via pg_upgrade), users couldn't immediately connect and write data to logical replication nodes. This was because the slots were lost during upgrades, preventing replication from continuing. As a result, new writes wouldn't get replicated, causing data on both nodes to become out of sync. As explained in this blog post, users had to block applications from writing until the replication setup was re-enabled after the upgrade.
With PostgreSQL 17, logical replication nodes can be upgraded without blocking writes or requiring users to manually detach/attach subscriptions or create slots. The migration of logical slots is supported only when the old cluster is version 17.0 or later, so users will benefit from this feature when upgrading from 17.0 to 18.0 (or later versions). Refer to the PostgreSQL documentation for the prerequisites for upgrading publisher and subscriber nodes.
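To give a rough idea of what those prerequisites look like (the values below are illustrative; the documentation is authoritative), the new publisher cluster needs settings along these lines in postgresql.conf before pg_upgrade is run:
wal_level = logical            # required so the logical slots can be re-created
max_replication_slots = 10     # must be >= the number of logical slots on the old cluster
Similarly, on a new subscriber cluster, max_replication_slots must be at least the number of subscriptions present in the old cluster.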
This blog will delve into the internals of this feature and provide an example of upgrading a two-node logical replication setup. The later part of this blog will talk about upgrading physical replication setups online.
Let's start with an example of upgrading a logical replica setup where both the old and new clusters are on PostgreSQL 17.
Publisher:
1. CREATE TABLE foo(c1 int);
2. CREATE PUBLICATION pub FOR TABLE foo;
Subscriber:
1. CREATE TABLE foo(c1 int);
2. CREATE SUBSCRIPTION sub CONNECTION 'dbname=postgres' PUBLICATION pub;
Publisher:
3. INSERT INTO foo VALUES(generate_series(1,5));
Subscriber:
3. SELECT * FROM foo;
c1
----
1
2
3
4
5
(5 rows)
This shows that logical replication is happening between the publisher and the subscriber. Now, we can upgrade either the publisher or the subscriber first and let the other node continue operations. In this example, I choose to upgrade the subscriber first, but note that the publisher needs to retain the WAL for all changes until the subscriber is upgraded.
Subscriber:
4. Stop the subscriber server.
5. Upgrade the subscriber:
pg_upgrade --old-datadir "/opt/PostgreSQL/postgres/17/data1_sub" --new-datadir "/opt/PostgreSQL/postgres/17/data1_sub_upgraded" --old-bindir "/opt/PostgreSQL/postgres/17/bin" --new-bindir "/opt/PostgreSQL/postgres/17/bin"
Publisher:
4. INSERT INTO foo VALUES(generate_series(10,15));
Subscriber:
6. Start the subscriber server.
7. SELECT * FROM foo;
c1
----
1
2
3
4
5
10
11
12
13
14
15
(11 rows)
8. ALTER SUBSCRIPTION sub DISABLE;
Publisher:
5. Stop the publisher server.
6. Upgrade the publisher:
pg_upgrade --old-datadir "/opt/PostgreSQL/postgres/17/data1_pub" --new-datadir "/opt/PostgreSQL/postgres/17/data1_pub_upgraded" --old-bindir "/opt/PostgreSQL/postgres/17/bin" --new-bindir "/opt/PostgreSQL/postgres/17/bin"
7. Start the publisher server.
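If you want to confirm that the logical slot survived the publisher's upgrade, a quick query on the new publisher should list it (with the defaults used above, the slot is named after the subscription, sub):
SELECT slot_name, plugin, slot_type FROM pg_replication_slots;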
Now enable the subscription on the subscriber and write some data on the publisher.
Subscriber:
9. ALTER SUBSCRIPTION sub ENABLE;
Publisher:
8. INSERT INTO foo VALUES(99);
Subscriber:
10. SELECT * FROM foo WHERE c1 = 99;
c1
----
99
(1 row)
Note that the subscriber has replicated the value written on the publisher after both nodes were upgraded.
To migrate logical slots during an upgrade, a list of logical slots is fetched while reading information from the old cluster. Later in the upgrade process, pg_upgrade revisits the list and restores slots by executing pg_create_logical_replication_slot() on the new cluster. If the old node has invalid slots or slots with unconsumed WAL records, pg_upgrade fails. These checks are necessary to prevent data loss. We also ensure that logical decoding output plugins required by slots in the old cluster are present in the new cluster.
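As a rough pre-flight check (run on the old publisher before shutting it down; pg_upgrade performs its own authoritative verification), a query like the following shows whether any logical slot has been invalidated or still has unconsumed WAL:
-- Invalidated slots report wal_status = 'lost'; a confirmed_flush_lsn far behind
-- pg_current_wal_insert_lsn() means the downstream hasn't consumed all changes yet.
SELECT slot_name, plugin, wal_status, confirmed_flush_lsn,
       pg_current_wal_insert_lsn() AS current_wal_lsn
FROM pg_replication_slots
WHERE slot_type = 'logical';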
This work also allows upgrades to preserve the full subscription state, enabling us to replicate changes on subscriber nodes after the upgrade. Previously, only the subscription's metadata information was preserved. Without the list of relations and their state, it wasn't possible to re-enable subscriptions without missing some records. Similarly, we weren't preserving the replication origin's information after the upgrade, which is needed to ensure that we don't replicate anything twice. Additionally, pg_upgrade will check that all subscription relations are in 'i' (init) or 'r' (ready) state and will error out if that's not the case, logging the reason for the failure. This helps avoid the risk of any dangling slot or origin after the upgrade.
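To see ahead of time whether any table would trip that check, you can run something like this on the old subscriber before pg_upgrade; an empty result means every subscribed relation is in the 'i' or 'r' state:
-- Subscription relations that are neither in init ('i') nor ready ('r') state
-- will cause pg_upgrade to error out.
SELECT s.subname, sr.srrelid::regclass AS relation, sr.srsubstate
FROM pg_subscription_rel sr
JOIN pg_subscription s ON s.oid = sr.srsubid
WHERE sr.srsubstate NOT IN ('i', 'r');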
Before PostgreSQL 17, upgrading a streaming (physical) replica was possible only by upgrading the primary with pg_upgrade in link mode and then using rsync to bring the standby's data directory in sync with the upgraded primary. Moreover, users couldn't perform any operations on either node (primary or standby) until the upgrade of the primary and the rsync to the standby had finished.
With the advent of pg_createsubscriber and the ability to retain the subscriber's full state in PostgreSQL 17, one can upgrade a streaming replica online (without stopping operations), and the upgrade doesn't even require rsync.
Say there is a physical replication setup between node-A (the primary) and node-B (a standby), and we want to upgrade both nodes to a newer major version:
1. Stop node-B.
2. Run pg_createsubscriber on node-B (an example invocation is shown after these steps).
3. Upgrade node-B.
4. Start node-B.
5. Create a physical replica from node-B, say node-C. So both node-B and node-C are on newer server versions.
6. Transition all writes from node-A to node-B.
7. Decommission node-A.
By the end, we have a physical replica setup (node-B → node-C) of the newer version without stopping operations.
Note: As pg_createsubscriber internally uses logical replication, any new DDLs or sequence changes won't be replicated. Refer to the PostgreSQL documentation for the usage of pg_createsubscriber.
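For illustration, step 2 above boils down to a single command run against node-B's data directory; the path, connection string, and database name here are placeholders:
pg_createsubscriber --pgdata "/opt/PostgreSQL/postgres/17/data_nodeB" --publisher-server "host=node-A port=5432 dbname=postgres" --database postgres
Running it first with the --dry-run option is a convenient way to confirm that the prerequisites are met without modifying anything.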
Thanks to Vignesh C. and Hayato Kuroda for verifying the above steps for upgrading the streaming replication setup.