The conclusion should be much narrower. For the same-row double debit, true SNAPSHOT isolation (enabled and actually used, not just RCSI) turns the loser into an update conflict that the application retries. The idiomatic SQL fix is a conditional debit UPDATE, a row-count check, constraints, and a single transaction for the whole transfer.
The deadlock example is also overstated. SQL Server detects the cycle, rolls back a victim, returns error 1205 [1], and the application is expected to retry. Microsoft ships shitty defaults, and the author turned that into a much broader claim.
[1] - "Your transaction (process ID #...) was deadlocked on {lock | communication buffer | thread} resources with another process and has been chosen as the deadlock victim. Rerun your transaction"
"Deadlocks guide" - https://learn.microsoft.com/en-us/sql/relational-databases/s...
BEGIN TRANSACTION;

-- assumes account_balances is kept in sync with account_ledger
-- (e.g. by an indexed view or trigger)
IF EXISTS (
    SELECT 1
    FROM account_balances WITH (UPDLOCK, HOLDLOCK)
    WHERE owner = 'alice'
      AND balance >= 10
)
BEGIN
    INSERT INTO account_ledger (owner, amount, memo)
    VALUES
        ('alice', -10, 'transfer to bob'),
        ('bob', 10, 'transfer from alice');
END

COMMIT TRANSACTION;
Or, if I wanted to use the update pattern since I am taking the lock anyway:

BEGIN TRANSACTION;
UPDATE accounts WITH (UPDLOCK)
SET balance = balance - 10
WHERE owner = 'alice'
  AND balance >= 10;

IF @@ROWCOUNT = 1
BEGIN
    UPDATE accounts
    SET balance = balance + 10
    WHERE owner = 'bob';
END

COMMIT TRANSACTION;

Preventing these kinds of concurrency issues is exactly why I built https://socketcluster.io years ago. Though it solves the problem at the app layer rather than the storage layer.
But not many developers seem to care about these race conditions.
It's not just an issue with SQL but a more general issue with many programming languages and approaches.
This is a great example because it shows how concurrent executions can lead to significant issues.
There's nothing "incorrect by construction".
The author claims the original snippet "looks completely reasonable". It absolutely does not, if you know anything about client-server databases.
UPDATE accounts
SET balance = balance - 10
WHERE owner = 'alice' AND balance >= 10;
Another possible surprise. Say two xacts do this at the same time:

INSERT INTO foo(num) (
SELECT 1 WHERE NOT EXISTS (
SELECT * FROM foo WHERE num = 1
)
);
Without a UNIQUE constraint on num, you get num=1 twice. Of course adding UNIQUE would prevent this, but what you might not expect is that the unique index also acts as a lock: the second xact blocks on the index entry, and once the first commits it fails with a duplicate-key error instead of silently inserting a second row. The loser can detect that and retry (or, in Postgres, use INSERT ... ON CONFLICT DO NOTHING, in which case both xacts succeed), which in some situations is an important distinction.

Schools teach that databases are ACID, but in most cases they aren't by default, and enabling full ACID comes with other caveats and also a large performance hit.
It has since become a tool that even front-end engineers use.
??? This doesn't make sense. It's like saying "just implement it properly".
what about distributed clients? what about _different_ clients?
Also, idk what the Rust suggestion is. Nothing in Rust would guard against this kind of race condition if you replicated the example over there. You can use a Mutex or even an RwLock on a single struct, but that doesn't force you to hold struct A's read lock while you write to struct B.
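To make that concrete: the same check-then-act bug ports straight into safe Rust and compiles without complaint. A minimal sketch (the Bank/transfer_racy names and the two-Mutex layout are mine, purely illustrative):

```rust
use std::sync::Mutex;

// Two balances, each behind its own Mutex. The borrow checker has no
// objection to anything below.
struct Bank {
    alice: Mutex<i64>,
    bob: Mutex<i64>,
}

// Mirrors the SQL bug: the balance check and the debit are separate
// critical sections, so another thread can pass the same check in
// between and double-debit alice.
fn transfer_racy(bank: &Bank, amount: i64) -> bool {
    // 1. Check under alice's lock, then immediately release it.
    if *bank.alice.lock().unwrap() < amount {
        return false;
    }
    // 2. Re-acquire and debit. A thread interleaving between 1 and 2
    //    acts on a stale "sufficient funds" answer.
    *bank.alice.lock().unwrap() -= amount;
    *bank.bob.lock().unwrap() += amount;
    true
}
```

Serializing the whole transfer, e.g. one Mutex over both balances held for the entire check-debit-credit sequence, would fix it, but that is a discipline the programmer has to impose; neither Mutex nor RwLock forces it.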
https://en.wikipedia.org/wiki/Transact-SQL
A more universal industry standard is SQL/PSM, which originated from Oracle PL/SQL:
https://en.wikipedia.org/wiki/SQL/PSM
Demonstrating the flaws in question in the PSM standard would be more useful.
> Let the user manage locks themselves, and make sure the correct locks are acquired before mutating a database object.
As demonstrated by the extended (corrected) version of the transaction, the user is controlling which locks get used. So how does this make it into the conclusion as a want, when it already is how it works?
I don't think it's SQL itself. It's that DB vendors ship weak isolation by default so people aren't hit by deadlocks, isn't it?
> Make transactions atomic by default
Not the issue, right? It's the weak isolation.
Have a transactions table with the payer and receiver and calculate the current balance from the transactions.
Each transaction must have a unique id (primary key).
Edit: Well, another option is to add a "pending" column and do three separate db xacts: 1. insert a pending=true row; 2. select the balance with pending debits deducted (and age out pending rows older than 1 min); 3. update the row to pending=false if successful. This is a useful pattern if you're waiting on an external system too, but not good in this case, where you're just trying to update one DB.
Balance is calculated & stored after the fact from a known correct value.
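Sketched as a schema (table and column names invented for illustration), the ledger idea looks something like:

```sql
CREATE TABLE transfers (
    id       BIGINT PRIMARY KEY,            -- unique id per transaction
    payer    TEXT   NOT NULL,
    receiver TEXT   NOT NULL,
    amount   INT    NOT NULL CHECK (amount > 0)
);

-- alice's balance is derived from the ledger rather than stored as
-- mutable state, so there is no read-modify-write to race on
SELECT COALESCE(SUM(CASE WHEN receiver = 'alice' THEN amount
                         ELSE -amount END), 0) AS balance
FROM transfers
WHERE payer = 'alice' OR receiver = 'alice';
```

Inserts are append-only, so two concurrent transfers just become two rows; the only conflict left is on the unique id.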
Fair that things often grow beyond their original intent.
Which is to say, I am definitely indexing on the idea that the problem here is trying to get a query language to encode processes.
You can even insert them in an unvalidated state, then validate them later. That way if you have two transactions that come one after another, it doesn't matter because you can process them sequentially anyway.