BDR1 [0] came first and was, and is, open source. pgactive is based on BDR1. BDR2 was a closed-source rewrite of BDR1 that was later abandoned.
pglogical v1 and v2 (PGL1, PGL2) were, and are, open-source [1].
pglogical v1, after heavy modification, was eventually merged into Postgres 10.
Based on lessons learned from this logical replication work in Postgres 10, 2ndQuadrant started pglogical v2.
pgEdge is based on pglogical v2.
Later, 2ndQuadrant started pglogical v3 (closed source) and BDR v3 (closed source); the two were eventually merged into BDR v4. At some point the BDR product was renamed to Postgres Distributed (PGD) [2].
2ndQuadrant was acquired by EDB. We (EDB) just released PGD v6.
[0] https://github.com/2ndQuadrant/bdr/tree/bdr-plugin/REL1_0_ST...
> The replication mechanism is based on logical decoding and an earlier version of the pglogical extension provided for community by the 2ndQuadrant team.
"Last write wins" sounds like a recipe for disaster IMO.
This is still one of those things that keeps people on MySQL - there are not one, but two open-source solutions available that provide synchronous cluster replication, allowing for "safe" writes against multiple primaries.
But I could be totally wrong - (1) curious if someone could link to things / explain, and (2) fyi ('stephenr) last-write-wins based on timestamp is a thing in the MySQL world as well (though again, maybe the set of options / different conflict-resolution methods available is larger in MySQL?)
[1]: https://dev.mysql.com/doc/refman/8.4/en/mysql-cluster-replic...
[2]: https://dev.mysql.com/blog-archive/enhanced-conflict-resolut... (nice writeup, maybe outdated idk?)
The two "options" I was referring to are MySQL group replication and the Galera replication plugin for MySQL. Both provide synchronous replication, so the write either succeeds to a majority of the cluster or is rejected.
ACID + distributed == tradeoffs that will always keep this a horses-for-courses problem.
Same situation as, e.g., backups. You can just use pg_dump, but to be serious you need a 3rd party solution that does log shipping and so on.
[0] https://www.postgresql.org/download/products/3-clusteringrep...
As I understand it, this is a wrapper on top of Postgres' native logical replication features. Writes are committed locally and then published via a replication slot to subscriber nodes. You have ACID guarantees locally, but not across the entire distributed system.
https://www.postgresql.org/docs/current/logical-replication....
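For flavor, the native primitives it builds on look roughly like this (table and connection values are made up); note that each node commits locally first and the changes stream out afterwards:

    -- Publisher side: local commits flow out through a replication slot.
    CREATE PUBLICATION app_pub FOR TABLE orders, customers;

    -- Subscriber side: creates a slot on the publisher and applies changes
    -- as they arrive. There is no cross-node commit coordination.
    CREATE SUBSCRIPTION app_sub
        CONNECTION 'host=pub.example.com dbname=app user=replicator'
        PUBLICATION app_pub;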
It all feels like they expect developers to sift through the conflict log and resolve things manually, or something. If a transaction did not go through on some of the nodes, what are the others doing then? What if they cannot roll it back safely?
Such a rabbit hole.
Given this is targeted at replication of postgres nodes, perhaps the nodes are deployed across different regions of the globe.
By using active-active replication, all the participating nodes are capable of accepting writes, which simplifies the deployment and querying of postgres (you can read and write to your region-local postgres node).
Now that doesn't mean that all the reads and writes will be on conflicting data. Take the regional example: perhaps the majority of the writes affecting one region's data are made _in that region_. In that case, the region-local postgres would be performing all the conflict resolution locally and sharing the updates with the other nodes.
The reason this simplifies things is that you can treat all your postgres connections as if they were a single postgres. Writes are fast, because they are accepted in the local region, and reads are local too, without you having to run a dedicated read replica.
Ofc you're still going to have to design around the conflict resolution (i.e. writes for the same data issued against different instances), and the possibility of stale reads as the data is replicated cross-node. But for some applications, this design might be a significant benefit, even with the extra things you need to do.
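As a sketch of what designing around it can look like (schema and names are hypothetical): give every row a home region, and have each region's application write only the rows it owns, so incoming replicated writes can't collide with local ones.

    -- Hypothetical schema: each row carries its home region.
    CREATE TABLE orders (
        region  text   NOT NULL,   -- e.g. 'us-east', 'eu-west'
        id      bigint NOT NULL,
        payload jsonb,
        PRIMARY KEY (region, id)
    );

    -- The eu-west app only ever writes its own region's rows, so replicated
    -- writes arriving from other regions never conflict with them.
    INSERT INTO orders (region, id, payload)
    VALUES ('eu-west', 42, '{"item": "book"}');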
For someone who has these requirements out of the gate, another datastore might be better. But if someone is already deeply tied to Postgres, and perhaps doing their own half-assed version of this, this option could be great.
Behind the scenes, the way it works is by combining software tricks with special hardware. You rent a (part of a) database cluster. The cluster runs on high-end hardware with customized kernels, with a private Infiniband RDMA-capable interconnect between the nodes, separate from the front-side network that clients connect with. A lock manager coordinates ownership of data blocks, which can be read either from disk nodes or directly out of the RAM of other database nodes. So if one node reads a block then writes to it, the only thing written to disk immediately is the transaction log. If another node then needs to write to that block, it's transferred directly over the interconnect using RDMA to avoid waiting on the remote CPU; the disk is never touched. Dirty blocks are written back to disk asynchronously. The current transaction counter is also poked directly into remote nodes via RDMA.
In the latest versions the storage nodes can also do some parts of query processing using predicate push-down, so the amount of data transferred over the interconnect is lowered as well. The client drivers understand all the horizontal scalability stuff and can fail over between nodes transparently, so the whole setup is HA. A node can die and the cluster will continue, including open transactions.
If you need to accelerate performance further you can add read-through coherent cache nodes. These act as proxies and integrate with the block ownership system to do processing locally.
Other than financial reasons (I own some stock), I've started making this argument here on HN because it's unintuitive but correct, which is just enjoyable. A lot of people in the startup world don't realize any of the above, thinking that horizontally scalable fully coherent SQL databases either don't exist or have severe caveats. E.g. one reply to you suggests FoundationDB which is great, but it's a KV store and not a SQL database.
[1] https://news.ycombinator.com/item?id=44074506 (last paragraph)
But if you want to run the DB on your own VMs or bare metal, you can do that. It doesn't have any DRM, so from time to time you'll be expected to run some scripts that check your usage and report back, to ensure you've been paying for what you use. But otherwise it's operationally no different to an open source DB.
The open source aspect makes a difference in terms of who you pay for support (if anyone), what quality of support you get, things like that.
Ideal? Not entirely, but it should still give most of the query benefits of regular SQL and allow one to benefit from good indexes (the proper indexes of an SQL database will also help contain the costs of an updated data model).
I think this is more interesting for someone building something social-media-like, rather than anything involving accounting.
On the other hand, the increase in exploration costs should be more than offset by having most data changes logged, which makes it possible to track changes.
You would still get weird replication issues/conflicts when requests failed over in some conditions, but it worked fairly well the majority of the time.
These days I'd stick to single primary/writer as much as possible though tbh.
It doesn't look like you'd need multi-master replication in that case? You could simply partition tables by site and rely on logical replication.
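Roughly, with hypothetical table/site names:

    -- One partition per site; each site publishes only its own partition
    -- and subscribes to the others'.
    CREATE TABLE readings (
        site text NOT NULL,
        ts   timestamptz NOT NULL,
        val  double precision
    ) PARTITION BY LIST (site);

    CREATE TABLE readings_site_a PARTITION OF readings FOR VALUES IN ('a');
    CREATE TABLE readings_site_b PARTITION OF readings FOR VALUES IN ('b');

    -- On site A's node:
    CREATE PUBLICATION site_a_pub FOR TABLE readings_site_a;
    -- ...plus a subscription to site B's publication, and vice versa.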
There's a requirement that during outages each site continue operating independently and *might* need to make writes to data "outside" its normal partition. By having active-active replication, the hope is that the whole thing recovers "automatically" (famous last words) to a consistent state once the network comes back.
Do you consider that acceptable, or don't you?
We're expecting this to be a rare occurrence (during partition, user at site A needs to modify data sourced from B). It doesn't have to be trivially easy for us to recover from, only possible.
In principle you could use CRDTs to end up with a "not quite random" outcome that simply takes the conflict into account - it doesn't really attempt to "resolve" it. That's quite good for some cases.
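The textbook example is a grow-only counter, which can be modeled even in plain SQL (table/node names made up): each node increments only its own row, so concurrent writes on different nodes never collide, and reads merge by summing.

    -- Hypothetical G-counter: one row per replicating node.
    CREATE TABLE page_views (
        node_name text PRIMARY KEY,
        n         bigint NOT NULL DEFAULT 0
    );

    -- Run only on the node named 'eu-west':
    INSERT INTO page_views (node_name, n) VALUES ('eu-west', 1)
    ON CONFLICT (node_name) DO UPDATE SET n = page_views.n + 1;

    -- Any node can read the merged value:
    SELECT sum(n) AS total_views FROM page_views;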
Seems the same is playing out in Postgres with this extension; maybe it will take another 20 years.
You can scale surprisingly far on a single-master Postgres with read replicas.
One of the use cases is to have a development db that can get data from production or staging (and doesn't send local changes back)
What I've done usually is have some script/cron/worker run periodically to get the data, either via a dump or by running some queries, create a snapshot, and store it in S3; then a script on the local dev side fetches the snapshot and inserts/restores the data into the local db. This works for many cases, but index building can be a pain (it can take a long time), depending on the data.
Having said that, legal exposure and risk will highly depend on what you are working on. Probably for most projects this isn’t a big deal. IANAL, this is not legal advice
The goal is not necessarily having an easy way to reset, but rather an easy/quick way to load real data
This would be all the time, i.e. in my local dev I’d like to have an up-to-date (daily is ok) copy of the source db that I can modify anytime, but that every day will sync with the source, without having to fully rebuild indices.
How would you go about managing/coordinating that?
Thinking about the developer experience though, when loading a snapshot manually, the dev knows they are overwriting their local db. However, if replication happened automatically/continuously in the background, it could lead to some really confusing/annoying behaviors.
Either architect for no data overlap on writes across all the "actives" (in which case software like pgactive could be a good deal) or use a purely distributed database (like Yugabyte).
And I was wondering what other ways of dividing up 'writer responsibility', besides schemas, would also work? Partitions?
Once you have done that, for updates and deletes you need to keep the same rule (i.e. don't update "foreign" rows).
If you do this, no other technique is needed. Partitions, however, are potentially a good way to enforce some of these invariants, and they make it quick to see where data originates from given the table name. The same could apply to schemas.
RLS may also help enforce these invariants.
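A minimal sketch of that, assuming a hypothetical custom GUC app.node_region that each node sets for its application connections (whether this coexists cleanly with the replication apply workers is something to verify):

    -- Rows carry their owning region; RLS lets everyone read but blocks
    -- writes to "foreign" rows. 'app.node_region' is a made-up setting.
    ALTER TABLE orders ENABLE ROW LEVEL SECURITY;
    ALTER TABLE orders FORCE ROW LEVEL SECURITY;  -- apply even to the table owner

    CREATE POLICY read_all   ON orders FOR SELECT USING (true);
    CREATE POLICY insert_own ON orders FOR INSERT
        WITH CHECK (region = current_setting('app.node_region'));
    CREATE POLICY update_own ON orders FOR UPDATE
        USING (region = current_setting('app.node_region'));
    CREATE POLICY delete_own ON orders FOR DELETE
        USING (region = current_setting('app.node_region'));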
RDS uses block replication. Aurora uses its own SAN replication layer.
DMS maybe?
But only last month did they officially release it as open source to the community https://aws-news.com/article/2025-06-09-announcing-open-sour...
[1]: https://www.allthingsdistributed.com/2025/05/just-make-it-sc...
> We’re not using any of the storage or transaction processing parts of PostgreSQL, but are using the SQL engine, an adapted version of the planner and optimizer, and the client protocol implementation. [1]
Rather, DSQL seems to do its region replication using the distributed journal abstraction [2].
[1] https://brooker.co.za/blog/2024/12/04/inside-dsql.html [2] https://brooker.co.za/blog/2024/12/06/inside-dsql-cap.html
Useful for metric ingestion. Not useful for bank ledgers or whatever.
I don't think that is used for cross region replication
Pgactive: Active-Active Replication Extension for PostgreSQL on Amazon RDS - https://news.ycombinator.com/item?id=37838223 - Oct 2023 (1 comment)
I see a lot of patroni with etcd and haproxy being advised. It must work well for people to be so excited about it, but it feels a bit overwhelming to me when I look at the docker compose files.
At the same time there is pgpool, which looks like mostly a single thing to deploy in front of each postgres server.
Any tips from the pg-interested people here?
I’d like a docker-compose-like experience to set up a cluster that is highly available, with point-in-time recovery or at least no data loss.
If you are familiar, then it's about as turn-key as you can get.
There's also a different PG operator based on Patroni that's supposed to work pretty well iirc
This is not a way to get better performance or scalability in general.
Seems sort of like a CQRS implementation on top of PG (you're using PG replication as the change queue to loosely separate writes/reads, losing transaction guarantees in the process)