P1 and P2 start executing. P1 sets temp=1, then P2 runs through the loop for iterations 1..9. Then P1 sets n=temp=1. P2 sets temp=2, then P1 resumes executing for iterations 2..10 and completes. P2 sets n=temp=2, then completes.
The intuition: P1 checkpoints, P2 runs through the loop up until the last iteration, and P1 resets n to the checkpoint + 1; then P2 checkpoints, P1 runs through the loop, and P2 resets n to the checkpoint + 1.
EXTENDS Naturals, Sequences

(*--algorithm StateStore {
variables
    n = 0; completions = <<>>;

define {
    \* TLC reporting this invariant violated is exactly a trace showing n = 2 is reachable at completion
    Invar == Len(completions) = 2 => n # 2
}

fair process (thing \in {"p", "q"})
variables i = 0, temp = 0;
{
Loop:
    while (i < 10) {
First:
        temp := n + 1;
Second:
        n := temp;
Last:
        i := i + 1;
    };
Complete:
    completions := Append(completions, 1)
}
}*)
(I acknowledge that this isn't the most elegant PlusCal, but I think it is roughly accurate?)

> If we run P and Q concurrently, with ‘n’ initialized to zero, what would the value of ‘n’ be when the two processes finish executing their statements?
It depends a lot on the observer of `n` and what relationship it has with P and Q, which isn't defined.

E.g. a trivial boundary-answer is that it can be 0, if nothing guarantees that P's and Q's writes ever become visible to the thread observing `n`. This is somewhat common if you try to synchronize via `sleep`, because that usually doesn't guarantee anything at all.

We also don't know the computer architecture. We can probably assume at least one byte moves between memory levels atomically, because basically every practical system does that, but that doesn't have to be true. If bits move between memory levels independently, the observer could see 31: the values written range from `00000` to `10100` (0 to 20), and any combination of those bits is possible, including `11111`. There can be a `1` in any position, including all positions at once, so you can observe a number that was never assigned. (IRL: multi-word values and partial initializations can do this, e.g. it's why double-checked locking is flawed without something to guarantee initialization completes before the pointer is exposed.)
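To make the double-checked-locking point concrete, here's a minimal C sketch of the flawed pattern (the `Thing` and `get_instance` names are made up for illustration). Nothing orders the initializing stores before the racing fast-path read, so that read can hand out a half-built object:

#include <pthread.h>
#include <stdlib.h>

typedef struct { int a; int b; } Thing;

static Thing *instance = NULL;  /* plain pointer: the fast-path read races */
static pthread_mutex_t mu = PTHREAD_MUTEX_INITIALIZER;

Thing *get_instance(void) {
    if (instance == NULL) {              /* unsynchronized check */
        pthread_mutex_lock(&mu);
        if (instance == NULL) {          /* second, locked check */
            Thing *t = malloc(sizeof *t);
            t->a = 1;
            t->b = 2;
            instance = t;  /* this store may become visible before
                              the writes to t->a and t->b do */
        }
        pthread_mutex_unlock(&mu);
    }
    return instance;  /* may point at a partially initialized Thing */
}

The usual fix is to publish the pointer with a release store and read it with an acquire load (C11 `atomic_store_explicit` / `atomic_load_explicit`).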
"If we run P and Q concurrently with ‘n’ initialized to zero, what extreme interleaving could result in the lowest value of 'n' when the two processes finish executing their statements on a model checker?"
I'll edit it to improve, thanks.
though there's a lot of fun in here if you allow for optimizing compilers and lack of synchronization. e.g. without explicit synchronization between P/Q and the observing thread (assuming it's the main thread), it's reasonable for a compiler to simply delete P and Q entirely and replace the whole program with `print(0)`
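A hedged C sketch of that scenario (the `worker` name and structure are illustrative): the final read of `n` races with the workers' writes, which is undefined behavior in C11, so a conforming compiler may legally compile the whole program down to printing 0.

#include <pthread.h>
#include <stdio.h>

int n = 0;  /* plain int: no atomics, no locks */

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 10; i++) {
        int temp = n + 1;  /* read the counter */
        n = temp;          /* write back what was read, plus one */
    }
    return NULL;
}

int main(void) {
    pthread_t p, q;
    pthread_create(&p, NULL, worker, NULL);
    pthread_create(&q, NULL, worker, NULL);
    /* No join, no synchronization: this read races with the workers'
       writes, so any output, including a constant 0, is permitted. */
    printf("%d\n", n);
    return 0;
}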
I think it is possible to implement locking with only atomic assignments.
But I think "language operations are atomic and everything else isn't" is a reasonable inference for a puzzle.
I'm also intrigued by locking being done through atomic assignments; I thought it required at least something like "write if equal to value" (compare-and-swap) or "write new value and return old" (exchange).
This locking is 'slow' and has 'unfair' behavior.
byte n = 0;  /* the shared counter being protected */
byte A = 0;  /* id of the client currently granted the lock (0 = free) */
byte B = 0;  /* id of the most recent client to request the lock */

proctype Client(byte id)
{
    do
    :: (A == 0) -> B = id  /* lock looks free: request it */
    :: (A == id) ->        /* the server granted us the lock */
        n = n + 1;
        assert (n <= 1);   /* at most one client may be in here */
        n = n - 1;
        A = 0              /* release the lock */
    od
}

proctype LockServer()
{
    do
    :: (A == 0) -> A = B   /* grant the lock to the last requester */
    od
}

init {
    atomic {
        run LockServer();
        run Client(1);
        run Client(2);
        run Client(3);
    }
}
It fails when 'Client(2)' is replaced by 'Client(1)', i.e. when two clients share the same id.

I suppose if you assume all global accesses are atomic, it's a good demonstration that atomic operations don't compose atomically (even in trivial cases) and aren't particularly useful as concurrency primitives.
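For what it's worth, the textbook demonstration that two-process mutual exclusion needs only atomic loads and stores is Peterson's algorithm; here's a sketch in C11 atomics (which default to sequentially consistent ordering, something the algorithm depends on):

#include <stdatomic.h>
#include <stdbool.h>

/* Peterson's algorithm: mutual exclusion for exactly two threads,
   0 and 1, built from nothing but atomic loads and stores. */
atomic_bool flag[2];  /* flag[i]: thread i wants the lock */
atomic_int  turn;     /* which thread yields when both want it */

void lock(int me) {
    int other = 1 - me;
    atomic_store(&flag[me], true);  /* announce interest */
    atomic_store(&turn, other);     /* defer to the other thread */
    while (atomic_load(&flag[other]) && atomic_load(&turn) == other)
        ;  /* spin until the other thread is done or defers to us */
}

void unlock(int me) {
    atomic_store(&flag[me], false);
}

It generalizes to n processes with plain loads and stores (e.g. Lamport's bakery algorithm), though real systems reach for compare-and-swap or exchange because they're far cheaper.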
You can read the code, figure out what primitives you have, translate the problem into something simple (I like card and board games), then map it back.
Problem: two processes each do this ten times:

* read the counter
* do any number of other things in other processes
* write back what you read, plus one.
Game: You have two piles of cards, one red and one blue. Each card costs 1 move to play, but lets you discard any number of cards from the other pile at the same time.
Solution: play one red card, discarding everything but one blue card. Play the single remaining blue card, discarding the leftover red cards.
Back: process 1 reads, process 2 runs 9 steps, process 1 writes, process 2 reads, process 1 finishes, process 2 writes.
This will help you determine or realize when and where synchronization is needed, even where you might not expect it (or where remodeling of the code is needed).

It's extremely hard to write concurrent code well. I think I've hardly ever managed lockless code that wasn't super buggy (yet! :D). It takes a lot of focus and patience, and tons of details about the platform(s) it might run on.
(Assumes atomic assignments, as noted by someone else.)
* Run procedure P into its first iteration, stopping after temp is assigned "n + 1", or the value 1
* Run procedure Q through nine iterations, stopping after the ninth has completed (n changes, but who cares?)
* Run the next line of procedure P's first iteration, so that it assigns temp's value of 1 to n; n is now 1
* Run procedure Q into its tenth iteration, stopping after temp is assigned the value 2
* Complete procedure P (n changes, but who cares?)
* Complete procedure Q, so that n is assigned the value of temp, or 2.
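A hedged way to check the arithmetic: this single-threaded C sketch replays exactly the schedule above by hand (each listed step happens in program order) and ends with n == 2.

#include <assert.h>
#include <stdio.h>

int n = 0;  /* the shared counter, replayed deterministically */

int main(void) {
    int p_temp, q_temp;

    p_temp = n + 1;                 /* P, iteration 1: temp := n + 1 -> 1 */

    for (int q_i = 0; q_i < 9; q_i++) {  /* Q runs iterations 1..9 */
        q_temp = n + 1;
        n = q_temp;
    }                               /* n is now 9 */

    n = p_temp;                     /* P: n := temp -> n = 1 */

    q_temp = n + 1;                 /* Q, iteration 10: temp := n + 1 -> 2 */

    for (int p_i = 1; p_i < 10; p_i++) {  /* P runs iterations 2..10 */
        p_temp = n + 1;
        n = p_temp;
    }                               /* n is now 10 */

    n = q_temp;                     /* Q: n := temp -> n = 2 */

    printf("n = %d\n", n);          /* prints n = 2 */
    assert(n == 2);
    return 0;
}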