Well, this all depends on the definition of «function properly». Convergence ensures that everyone observed the same state, not that it's a useful state. For instance, The Imploding Hashmap is a very easy CRDT to implement. The rule is that when there are concurrent changes to the same key, the final value becomes null. This gives Strong Eventual Consistency, but it isn't a very useful data structure. All the data would just disappear!
So yes, being a CRDT is a massively useful property which we should strive for, but it's not going to magically solve all the end-user problems.
One simple answer to this problem that works almost all the time is to just have a “conflict” state. If two peers concurrently overwrite the same field with different values, they can converge by marking the field as having two conflicting values. The next time a read happens, that's what the application gets, and the user can decide how the conflict should be resolved.
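For a sense of what that looks like, here is a minimal multi-value register sketch in TypeScript (the names and shapes are mine, not from any particular library):

```
// Hypothetical multi-value register: concurrent writes are kept as a
// conflict set instead of silently picking a winner.
type VersionVector = Record<string, number>; // replicaId -> counter

interface Write<T> {
  value: T;
  vv: VersionVector;
}

// a dominates b if a >= b for every replica and > for at least one.
function dominates(a: VersionVector, b: VersionVector): boolean {
  const keys = new Set([...Object.keys(a), ...Object.keys(b)]);
  let strictly = false;
  for (const k of keys) {
    const av = a[k] ?? 0;
    const bv = b[k] ?? 0;
    if (av < bv) return false;
    if (av > bv) strictly = true;
  }
  return strictly;
}

// Merge keeps every write that no other write dominates. A result with
// more than one element is exactly the conflict shown to the user.
function merge<T>(writes: Write<T>[]): Write<T>[] {
  return writes.filter(w => !writes.some(o => o !== w && dominates(o.vv, w.vv)));
}
```

If `merge` leaves more than one write standing, the writes were concurrent, and that surviving set is the conflict the application surfaces.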
In live, realtime collaborative editing situations, I think the system just picking something is often fine. The users will see it and fix it if need be. It's really just when merging long-running branches that you can get into hot water. But again, I think punting to the user is a fine fallback for most applications.
Yet here we are, circling back to collaborative editing...
At this point I think the term "CRDT" has too much baggage and I should probably stop using it, or at least not put it in blog post titles.
With CRDT, you have local consistency and strong convergence, but no guarantee of semantic convergence (i.e. user intent). I would still hire OP, but I would definitely keep him in the backend and away from UX
In general the automatic merging works pretty well most of the time. Where things go wrong is - for example - when people think they can put JSON data into a text crdt and have the system behave well. Instead the automatic merging breaks the rules of JSON syntax and the system falls over.
If two users concurrently edit the same word in a text document, how does your system help?
* first update sets task cancelled_at and cancellation_reason
* second update wants the task to be in progress, so sets started_at
CRDTs operate only at the column/field level. In this situation you'd have a task with cancelled_at, cancellation_reason, status in_progress, and started_at all set. That makes no sense semantically: a task can't be both cancelled and in progress, and CRDTs do nothing to solve this. My solution is aimed at exactly this kind of thing. Since it replicates _intentions_ instead of just data, it would work like this:
action1: setCancelled(reason)
action2: setInProgress
When reconciling the total order of actions using logical clocks, the app logic for setCancelled runs first and setInProgress runs second on every client, once they have seen both actions. The app logic dictates what should happen, and that depends on the application: you could have it discard action2, or you could have it remove the cancellation status and set in_progress. Either way, the application's invariants and semantics are preserved, and user intentions are preserved maximally, in a way that plain CRDTs cannot do.
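To make that concrete, the handlers might look something like this (a TypeScript sketch of the idea, not the actual code; the discard policy shown is the first one described above):

```
type TaskStatus = "PENDING" | "IN_PROGRESS" | "CANCELLED" | "COMPLETE";

interface Task {
  status: TaskStatus;
  startedAt?: number;
  cancelledAt?: number;
  cancellationReason?: string;
}

// Each action is plain app logic. During reconciliation it runs against
// the state produced by every earlier action in the total order, so it
// can enforce whatever invariants the application needs.
const handlers = {
  setCancelled(task: Task, reason: string, at: number): Task {
    if (task.status === "COMPLETE") return task; // terminal state: no-op
    return { ...task, status: "CANCELLED", cancelledAt: at, cancellationReason: reason };
  },
  setInProgress(task: Task, at: number): Task {
    // Policy choice: discard the action if the task was already cancelled.
    if (task.status === "CANCELLED" || task.status === "COMPLETE") return task;
    return { ...task, status: "IN_PROGRESS", startedAt: at };
  },
};
```

The alternative policy (un-cancelling and moving to in progress) would just be a different branch in the same function.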
For example, let's say we have a state machine for a task. The task is currently in the IN_PROGRESS state, and from here it can transition to either CANCELLED or COMPLETE. Either of those states should be terminal. That is to say, once a task has been completed it can't be cancelled, and vice versa.
The problem I see with your system is this: let's say we have a task in the IN_PROGRESS state. One peer cancels the task and another tries to mark it complete. Say a peer sees the COMPLETE message first, so we have this:
IN_PROGRESS -> COMPLETE
But then the peer sees the CANCEL message, and decides (unambiguously) that it must be applied before the completion event. Now we have this:
IN_PROGRESS -> CANCELLED (-> COMPLETE ignored)
But this results in the state of the task visibly moving from COMPLETE to CANCELLED, which we said above the system should never do. If the task was complete, it can't be cancelled. There are other solutions to this problem, but it seems like the sort of thing your system cannot help with.

In general, CRDTs never had a problem arbitrarily picking a winner. One of the earliest documented CRDTs was the "last-writer-wins (LWW) register", which is a register (i.e. a variable) that stores a value. When concurrent changes happen, the register chooses a winner somewhat arbitrarily. But the criticism is that this is sometimes not the application behaviour we actually want.
You might be able to model a multi-value (MV) register using your system too. (Actually I'm not sure. Can you?) But I guess I don't understand why I would use it compared to just using an MV register directly. Specifically when it comes to conflicts.
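For reference, the LWW register just described is tiny; sketched in TypeScript (illustrative only):

```
// Last-writer-wins register: concurrent writes are resolved by
// (timestamp, replicaId), which is deterministic but arbitrary.
interface LWW<T> {
  value: T;
  ts: number;      // wall-clock or logical timestamp
  replica: string; // tiebreaker so all replicas pick the same winner
}

function mergeLWW<T>(a: LWW<T>, b: LWW<T>): LWW<T> {
  if (a.ts !== b.ts) return a.ts > b.ts ? a : b;
  return a.replica > b.replica ? a : b;
}
```

An MV register replaces the tiebreak with "keep both", as in the conflict-set sketch earlier in the thread.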
As for the specific scenario, if a client sets a task as COMPLETE and another sets it as CANCELLED before seeing the COMPLETE from the other client here's what would happen.
Client1: { id: 1, action: completeTask, taskId: 123, clock: ...}
Client1: SYNC -> No newer events, accepted by server
Client2: { id: 2, action: cancelTask, taskId: 123, clock: ...}
Client2: SYNC -> Newer events detected.
Client2: Fetch latest events
Client2: action id: 1 is older than most recent local action, reconcile
Client2: rollback to action just before id: 1 per total logical clock ordering
Client2: Replay action { id: 1, action: completeTask, taskId: 123, clock: ...}
Client2: Replay action { id: 2, action: cancelTask, taskId: 123, clock: ...} <-- This runs exactly the same application logic as when cancelTask originally ran. It can do whatever you want per app semantics. In this case we'll no-op, since the transition COMPLETE -> CANCELLED is not valid.
Client2: SYNC -> no newer actions in remote, accepted
Client1: SYNC -> newer actions in remote, none local, fetch newer actions, apply action { id: 2, action: cancelTask, ...}
At this point client1, client2, and the central DB all have the same consistent state. The task is COMPLETE. Data is consistent and application semantics are preserved.
There's a little more to it than that to handle corner cases and prevent data growth, but that's the gist of it. More details in the repo.
The great thing is that state is reconciled by actually running your business logic functions -- that means that your app always ends up in a valid state. It ends up in the same state it would have ended up in if the app was entirely online and centralized with traditional API calls. Same outcome but works totally offline.
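In code, the gist of that trace might be something like this (my paraphrase in TypeScript, assuming actions carry a totally ordered clock stamp; not the repo's actual code):

```
interface Action<S> {
  id: string;
  clock: string; // hybrid logical clock stamp; lexicographic order is total
  apply(state: S): S; // ordinary business-logic function
}

// On sync, if a remote action sorts before something already applied,
// roll back to a common base state and replay everything in clock order.
// Replaying reruns each handler, so invariants like "COMPLETE is
// terminal" are re-checked against the merged history.
function reconcile<S>(base: S, local: Action<S>[], remote: Action<S>[]): S {
  const byId = new Map<string, Action<S>>();
  for (const a of [...local, ...remote]) byId.set(a.id, a); // dedupe
  const all = [...byId.values()].sort((a, b) => a.clock.localeCompare(b.clock));
  return all.reduce((state, action) => action.apply(state), base);
}
```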
Does that clarify the idea?
You could argue that this would be confusing for Client2 since they set the task to cancelled but it ended up as complete. This isn't any different than a traditional backend api where two users take incompatible actions. The solution is the same, if necessary show an indicator in the UI that some action was not applied as expected because it was no longer valid.
edit: I think I should improve the readme with a written out example like this since it's a bit hard to explain the advantages of this system (or I'm just not thinking of a better way)
He very much leans toward them being hard to use in a sensible way. He has some interesting points about using threshold functions over a CRDT to get deterministic reads (i.e. once you observe the value it doesn't randomly change out from under you). It feels a bit theoretical though, I wish there were examples of using this approach in a practical application.
```
fn add(x: num, y: num) = x * y
```
The server has the authoritative state; users submit edits, which are then rejected or applied, and the changes are pushed to others. The user is always assumed to be online for multiplayer editing. No attempt is made to reconcile independent edits or long periods of offline behavior.
To prevent data loss, when the user is offline and desyncs, he gets to keep his changes and manually merge them back.
I'm sure this isn't a Google-genius-worthy implementation and fails in the incredibly realistic scenario where thousands of people are editing the same spreadsheet at the same time, but it's simple and fails in predictable ways.
But no, you don't need it.
And if this happens, your experience is going to be terrible anyway.
Just a basic example for a task tracker:
* first update sets task cancelled_at and cancellation_reason
* second update wants the task to be in progress, so sets started_at
If code just uses the timestamps to determine the task state, it would now consider the task cancelled, which is unexpected, since the later user update set it to in progress.
Easy fix, we just add a state field 'PENDING|INPROGRESS|CANCELLED|...'.
Okay, but now you have a task that is in progress, but also has a cancellation timestamp, which seems inconsistent.
The point is:
With CRDTs you have to consider how partial, out-of-order merges affect the state, and make sure your logic is always written in a way so these are handled properly. That is *not easy*!
I'd love it if someone came up with a framework that allows defining application semantics on top of CRDTs, and have the framework ensure types remain consistent.
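As a strawman for what such a framework might offer: a deterministic post-merge "repair" hook that re-establishes row-level invariants the field-level merge can't see (entirely hypothetical API, TypeScript):

```
interface TaskRow {
  status: "PENDING" | "IN_PROGRESS" | "CANCELLED";
  startedAt?: number;
  cancelledAt?: number;
  cancellationReason?: string;
}

// Runs after every CRDT merge. Field-level LWW may have produced a row
// that is valid per-column but invalid as a whole, so we repair it.
function repair(row: TaskRow): TaskRow {
  if (row.status === "IN_PROGRESS" && row.cancelledAt !== undefined) {
    // Pick one policy deterministically so all replicas converge:
    // here, cancellation wins over progress.
    return { ...row, status: "CANCELLED", startedAt: undefined };
  }
  return row;
}
```

As long as `repair` is a pure function of the merged state, every replica still converges, so you keep SEC while enforcing application semantics.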
Then each event is associated with zero or more "parent events".
- An event has 0 parents if it is the first change
- An event has 1 parent if it simply came after that event in sequence
- And if an event merges 2 or more branches in history, it says it comes after all of those events
You can also think about it like a set. If I know about events {A, B, C} and generate event D, then D happens-after {A, B, C}. (Written {A,B,C} -> D). But if A->B, then I only need to explicitly record that {B,C} -> D because the relationship is transitive. A -> B -> D implies A -> D.
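In code, the parent relationship and the derived happens-before check could look like this (a TypeScript sketch, not any particular library's format):

```
interface Event {
  id: string;
  parents: string[]; // direct causal predecessors; transitive ones are implied
}

// "a happened-before b" = a is reachable from b through parent links.
function happensBefore(a: string, b: string, byId: Map<string, Event>): boolean {
  const stack = [...(byId.get(b)?.parents ?? [])];
  const seen = new Set<string>();
  while (stack.length > 0) {
    const id = stack.pop()!;
    if (id === a) return true;
    if (seen.has(id)) continue;
    seen.add(id);
    stack.push(...(byId.get(id)?.parents ?? []));
  }
  return false;
}

// Two events are concurrent iff neither happens-before the other.
```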
There are techniques to make it less painful, but it will still be possible.
Einstein just had to come along and screw everything up.
Causality is the key.
The point is that you always have to think about merging behaviour for every piece of state.
The difference is that coming up with a correct CRDT solution for application specific consistency requirements can be a research project. In many cases, no CRDT solution can exist.
In my experience, 95% of applications are handled just fine by the sort of JSON types built into Yjs or Automerge. The problems I hear people complain about are things like performance, size on disk and library ergonomics. And the long tail of features - like ephemeral data support and binary assets.
But data mapping seems mostly fine?
I know of a couple of exceptions. Arbitrary nested tree reparenting can be a nightmare. And there aren’t many good rich text implementations out there.
What problems are you actually running into?
One large class of problems I'm thinking of is simply outside the scope of CRDTs. The whole idea of _eventual_ consistency doesn't really work for things like payment systems or booking systems. A lot of OLTP applications have to be consistent at all times (hence the O). Money must not be double spent. Rooms or seats must not be double booked.
The other class of problems is more debatable. CRDTs can guarantee that collaborative text editing results in the same sequence of letters on all nodes. They cannot guarantee that this sequence makes sense. Authors can step on each other's toes.
Whether or not this is a problem depends on the specific workflow and I think it could be mitigated by choosing better units of storage/work (such as paragraphs rather than letters).
Yes! I think of it as owned data and shared data. Owned data is data that is owned by one process or node. Eg my bank balance, the position of my mouse cursor, the temperature of my CPU. For this stuff, you don’t want a crdt. Use a database. Or a variable in memory or a file on disk. Broadcast updates if you want, but route all write requests through the data’s owner.
Then there’s shared data - like the source code for a project or an apple note. There, CRDTs might make sense - especially if you get branching and merging support along for the ride.
> Authors can step on each other's toes.
Yeah when merging long lived branches, the workflow most people want is what git provides - of humans manually resolving conflicts. There’s no reason a crdt couldn’t provide this. CRDTs have a superset of the information available to git. It’s weird nobody has coded a system like that up yet.
The _source of truth_ is these facts (like "the air is blue" or "the user inserted the letter A at position X" or "the CPU is 40 degrees"). The view of this source is what we see, and it can be seen through a CRDT or any other lens.
Normally we do that by storing something totally different under the hood. Eg, git actually stores a commit graph. But the system makes a determinism guarantee: we promise that all users who have the same version checked out will see exactly the same thing. At one level, we’re storing “a list of facts” (the commit graph). But at another level of abstraction, we’re just storing application data. It’s just also replicated between many peers. And editable locally without network access.
This is never true. You can prove that at some time now()-T where T > 0 you had the same view of the universe, but you cannot prove that you currently have the exact same view because even with the attempt of checking, T becomes greater than 0. Sometimes, this doesn't matter (T can be arbitrarily large and still effectively be zero -- like asking your friend if he is still married to that person. They can answer you days later, and it'll still be true), but sometimes even very small values of T cannot be assumed to be zero.
It works like you describe, with humans manually resolving conflicts. The conflicts are represented in the data model, so the data model itself converges without conflicts...if that makes sense.
Your system looks like it just enforces a global order on the actions. This will give you SEC - but how do you preserve the information that these edits were concurrent - and thus conflict with one another?
That's an interesting idea. I have to think about this.
Flight booking is often statistically consistent only. Overbooking, etc.
Absolutely. Bookkeeping is an offline activity (I'm only doing it once a year in my company, ha ha). You just have to make sure not to record the same transaction more than once, which could be non-trivial but shouldn't be impossible to do with CRDTs.
>Flight booking is often statistically consistent only. Overbooking, etc.
That may be acceptable in some cases but you still can't use CRDTs for it, because you need a way to limit the extent of overbooking. That requires a centralised count of bookings.
> You just have to make sure not to record the same transaction more than once
So this should be pretty easy. Have a grow-only set of transactions. Give each one a globally unique ID at the point of creation. Order by date and do bookkeeping. One thing you can't guarantee is that the balance is always positive. But otherwise - yeah.
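Sketched out, it's only a few lines (TypeScript, illustrative):

```
interface Txn {
  id: string;    // globally unique, assigned at creation (e.g. a UUID)
  date: string;  // ISO date, used for ordering when doing the books
  amountCents: number;
}

// Grow-only set keyed by transaction ID: adding the same transaction
// twice is a no-op, so re-syncing can never double-record it, and
// merge is just set union.
class TxnSet {
  private txns = new Map<string, Txn>();
  add(t: Txn): void { if (!this.txns.has(t.id)) this.txns.set(t.id, t); }
  merge(other: TxnSet): void { for (const t of other.all()) this.add(t); }
  all(): Txn[] { return [...this.txns.values()]; }
  ledger(): Txn[] {
    return this.all().sort((a, b) => a.date.localeCompare(b.date));
  }
}
```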
CRDTs can't eliminate the requirement to think about what the consistent states are.
The gist is:
* Replicating intentions (actions, immutable function call definitions that advance state) instead of just replicating state.
* Hybrid logical clocks for total ordering.
* Some client side db magic to make action functions deterministic.
This ensures application semantics are always preserved with no special conflict resolution considerations while still having strong eventual consistency. Check out the readme for more info. I haven’t gotten to take it much further beyond an experiment but the approach seems promising.
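For the clock ingredient, a minimal hybrid logical clock looks roughly like this (simplified from the standard HLC algorithm; the repo's version may differ):

```
// Hybrid logical clock: physical time when it advances, a logical
// counter to break ties, and a node id to make the order total.
class HLC {
  constructor(private node: string, private wall = 0, private counter = 0) {}

  // Stamp a local event.
  now(physicalMs: number): string {
    if (physicalMs > this.wall) {
      this.wall = physicalMs;
      this.counter = 0;
    } else {
      this.counter++;
    }
    return this.stamp();
  }

  // On receiving a remote stamp, advance past it.
  receive(remoteWall: number, remoteCounter: number, physicalMs: number): string {
    const wall = Math.max(this.wall, remoteWall, physicalMs);
    if (wall === this.wall && wall === remoteWall) {
      this.counter = Math.max(this.counter, remoteCounter) + 1;
    } else if (wall === remoteWall) {
      this.counter = remoteCounter + 1;
    } else if (wall === this.wall) {
      this.counter++;
    } else {
      this.counter = 0;
    }
    this.wall = wall;
    return this.stamp();
  }

  private stamp(): string {
    // Fixed-width so lexicographic order matches the clock's total order.
    return `${String(this.wall).padStart(15, "0")}:${String(this.counter).padStart(6, "0")}:${this.node}`;
  }
}
```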
I've had similar thoughts, but my concern was: if you have idempotent actions, then why not just encode them as actions in a log. Which just brings you to event sourcing, a quite well-known pattern.
If you go that route, then what do you need CRDTs for?
At least in my thinking/prototyping on the problem so far I think this solution offers some unique properties. It lets clients operate offline as long as they like. It delegates the heavy lifting of resolving state from actions/events to clients, requiring minimal server logic. It prevents unbounded growth of action logs by doing a sort of "rebase" for clients beyond a cutoff. It seems to me like it maximally preserves intentions without requiring specific conflict resolution logic. IMO worth exploring further.
Event Sourcing is not strictly designed to achieve eventual consistency in the face of concurrent writes though. But that doesn't mean it can't be!
I've also been considering an intent-based CRDT system for a while now (looking forward to checking out GP's link) and agree that it looks/sounds very much like Event Sourcing. It's worthwhile being clear on the definition/difference between the two though!
I know you can use unique persistent ids instead of names, but then you get into issues where two clients create two files with the same name: do you allow both or not? What if they initially create them equal? What if they do so but then modify them to be different?
And many CRDT implementations have already solved this for the styled-text domain (e.g. bold and italic can be additive but color cannot, etc.).
But something user-definable would be really useful.
The basic CRDT ideas are actually pretty easy to implement: add some metadata here, keep some history there. The difficulty, for the past 20 years or so, is making the overheads low, and the APIs understandable.
Many projects revolve around some JSON-ish data format that is also a CRDT:
- Automerge https://automerge.org (the most tested one, but feels like legacy at times, the design is ~10yrs old, there are more interesting new ways)
- JsonJoy https://jsonjoy.com/
- RDX (mine) https://replicated.wiki/ https://github.com/gritzko/go-rdx/
- Y.js https://yjs.dev/
Others are trying to retrofit CRDTs into SQLite or Postgres. IMO, those end up using last-write-wins in most cases. Relational logic steers you that way.
I may go into the technical details, assuming my hourly rate is respected.
- RDX has a spec, so it can have compatible implementations. The result of a merge is specified down to the bit. Automerge works the way Orion coded it (this time).
- There are equivalent text and binary formats, JDR and RDX.
- RDX's palette of types is richer. Automerge's is narrower than JSON's.
- RDX can work in any commodity LSM db, natively, in the core (and it does).
- and so on...
Conflict-free replicated data types (CRDTs) https://en.wikipedia.org/wiki/Conflict-free_replicated_data_...
Do people really distinguish "Strong Eventual Consistency" from "Eventual Consistency"? To me, when I say "Eventual Consistency" I always mean "Strong Eventual Consistency".
In an eventually consistent system, replicas can diverge. A "last write wins" system can be eventually consistent, but at a given point in time different replicas can read differently.
Eg: operations
1) Add "AA" to end of string
2) Split string in middle
Replicas R1 and R2 both have the string "ZZZZ"
If R1 sees operations (1) then (2) it will get "ZZZZAA", then "ZZZ", "ZAA"
If R2 sees (2) then (1) it will get:
"ZZ", "ZZ", then "ZZAA", "ZZ".
Strong Eventual Consistency doesn't have this problem, because the operations carry a version vector, so the replicas know what order to apply them in.
But the point is that there is a point in time where reading from different replicas gives different results.
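The example is small enough to run (TypeScript; I model the document as a list of strings, with both operations acting on the first string):

```
type Doc = string[];

// Op 1: append "AA" to the end of the (first) string.
const addAA = (d: Doc): Doc => [d[0] + "AA", ...d.slice(1)];

// Op 2: split the (first) string in the middle.
const split = (d: Doc): Doc => {
  const mid = Math.floor(d[0].length / 2);
  return [d[0].slice(0, mid), d[0].slice(mid), ...d.slice(1)];
};

const r1 = split(addAA(["ZZZZ"])); // ["ZZZ", "ZAA"]
const r2 = addAA(split(["ZZZZ"])); // ["ZZAA", "ZZ"]
console.log(r1, r2); // the operations don't commute, so the replicas diverge
```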
In the CAP theorem, consistency is compromised.
Any eventually consistent system has to have a strategy for ensuring that all nodes eventually agree on a final value. R1 and R2 need to communicate their respective states and agree on a single one of them: maybe using timestamps (if R2's value is newer, R1 will replace its own value when they communicate), maybe using a quorum (say there is also an R3 which agrees with R1; then R2 will change its value to match the other two), maybe using an explicit priority list (say, R1's value is assumed better than R2's).
If you ask your cache for a value, it could choose to reply now, with the information that it has - favouring A.
Or it could wait and hope for more accurate information to return to you later, favouring C.
'Cache' seems to imply that it's built for availability purposes.
P.S. I am the author of the project.
In a specific use case that might apply. For example, if two people edit the same document and fix the same typo, the visual outcome is the same, no matter who made the change first or last.
But that is very niche. Take program code: someone can change a line of code that someone else is changing as well, and those changes might be identical, but there are other lines of code that might not be, and then you end up with code that won't compile. In other words, if we focus on a single change in isolation, this makes sense. But that is essentially never the case in distributed environments in this context; we have to look at the broader picture, where multiple changes made by someone are related or tied to each other and do not live in isolation.
Either way, I see nothing useful here. You can "render" your local changes immediately versus waiting for them to be propagated through the system and returned back to you. There is very little difference here, and in the end it is mostly just about a proper diffing approach and has little to do with the distributed system itself.
PS: the problem here is not really the order of applied changes for the local consumer, as in editing a shared Word document. The problem is that we have a database and commit a change locally, but then someone else commits a different change elsewhere, like "update users set email = foo@bar where id = 5", and before we receive that other, later change we serve clients invalid data. That is the main issue of eventual consistency here.

As I am running a system like this, I have to use "waiters" to ensure I get the correct data. For example, when a user creates some content via the web UI and is redirected back to the list of all content, this happens so fast that the distributed system has not had enough time to propagate the changes, so the user will not see his new content in the list yet. For this scenario, I use a correlation id that I receive when the content is created and put it into the redirect. When the user moves to the page that lists all the content, the correlation id is detected and a network call is made to an appropriate server whose sole purpose is to keep the connection open until that server's state has caught up to the provided correlation id. Then I refresh the list of content to present the user the correct information, all of this while a loading indicator is shown on the page. There is simply no way around this in distributed systems, so I find this article of no value (at least to me).
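For what it's worth, that waiter boils down to something like this on the client (TypeScript; the endpoint names and shapes are invented for illustration):

```
// After a write, redirect with the correlation id and, before rendering
// the list, block until the read replica has caught up to it.
async function waitForCorrelation(correlationId: string): Promise<void> {
  // Hypothetical endpoint that holds the connection open until the
  // replica's applied-changes log includes this correlation id.
  const res = await fetch(`/api/wait?correlationId=${encodeURIComponent(correlationId)}`);
  if (!res.ok) throw new Error(`replica did not catch up: ${res.status}`);
}

async function showContentList(correlationId?: string): Promise<unknown> {
  if (correlationId) {
    await waitForCorrelation(correlationId); // show a loading indicator meanwhile
  }
  return (await fetch("/api/content")).json(); // read-your-writes now holds
}
```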