Briefly, I'd have started with an array which, for each colour, holds an array of the coordinate pairs for that colour. I'd probably then have sorted these by length. The state also has an initially empty array for the coordinates of each placed queen.
To solve, I'd take the head array as my candidates and the remaining array of arrays as the next search space. For each candidate, I'd remove that coordinate, and anything a queen move away from it, from the remaining arrays, and recursively solve that. If filtering out a candidate coordinate leaves an empty list for any of the remaining arrays, you know you've generated an invalid partial solution and can backtrack.
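A minimal Python sketch of that search (illustrative names only; the clash test here uses the LinkedIn rule that queens may not share a row or column or touch diagonally, which is what the other solvers in this thread check):

def clashes(a, b):
    """Two squares clash if they share a row or column, or touch diagonally."""
    (r1, c1), (r2, c2) = a, b
    return r1 == r2 or c1 == c2 or (abs(r1 - r2) == 1 and abs(c1 - c2) == 1)

def solve(regions, placed=()):
    """regions: one list of (row, col) squares per colour; returns queen coordinates or None."""
    if not regions:
        return list(placed)                  # every colour has its queen
    head, *rest = sorted(regions, key=len)   # keep the most constrained colour at the head
    for cand in head:
        # remove the candidate's clashes from every remaining colour
        pruned = [[sq for sq in colour if not clashes(cand, sq)] for colour in rest]
        if any(not colour for colour in pruned):
            continue                         # some colour has no square left: backtrack
        result = solve(pruned, placed + (cand,))
        if result is not None:
            return result
    return None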
At no point would I actually have a representation of the board. That feels very imperative rather than functional to me.
To me, this solution immediately jumps out from the example: one of the queens is on a colour with only 1 square, so it HAS to be there. Placing it there immediately rules out one of the two choices in both colours with 2 squares, so their positions are known immediately. From that point, the other 2 large regions have also been reduced to a single candidate each.
The site author himself has blocked users from the UK because of that stupid law that you cite in your comment: "The UK's Online Safety Act requires operators of 'user to user services' to read through hundreds (if not thousands) of pages of documentation to attempt to craft "meaningful" risk assessments and 'child access assessments' or face £18,000,000 fines, even imprisonment."
- with SMT (11 days ago, 47 comments) https://news.ycombinator.com/item?id=44259476
- with APL (10 days ago, 1 comment) https://news.ycombinator.com/item?id=44273489 and (8 days ago, 20 comments) https://news.ycombinator.com/item?id=44275900
- with MiniZinc (1 day ago, 0 comments) https://news.ycombinator.com/item?id=44353731
Largely so that, from a programming perspective, it becomes a simplified version of the Einstein's Riddle problem that I showed the class, solved in a similar way.
https://theintelligentbook.com/willscala/#/decks/einsteinPro...
Where at each step, you're just eliminating one or more possibilities from a cell that starts out containing all of them.
Queens has fewer rules to code, making it more amenable for students.
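A tiny Python sketch of that representation (the categories and the clue here are made-up placeholders, not taken from the linked deck):

houses = range(5)
candidates = {
    "nationality": {h: {"Brit", "Swede", "Dane", "Norwegian", "German"} for h in houses},
    "drink": {h: {"tea", "coffee", "milk", "beer", "water"} for h in houses},
}

def assign(category, house, value):
    """Applying a clue: fix one cell, which eliminates that value from every other cell."""
    candidates[category][house] = {value}
    for other in houses:
        if other != house:
            candidates[category][other].discard(value)

assign("drink", 2, "milk")   # e.g. "the man in the middle house drinks milk"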
randomboard =: 3 : '? (y,y) $ y'   NB. y-by-y board of random colours 0 .. y-1
testsolution =: 4 : 0 NB. solution is a list of columns.
m =. x
n =. #x
solution =. y A. i. n
regions =. ({&m) <"1 (i. n) ,. solution   NB. colour of the cell under each queen
distinctregions =. n -: # ~. regions      NB. all n queens on distinct colours?
adjacentregions =. 1 e. |2-/\solution     NB. queens in consecutive rows touching diagonally?
distinctregions * -. adjacentregions      NB. 1 iff distinct colours and no touching queens
)
findsolution =:3 : 0
board =: y
ns =. 1 i.~ (board & testsolution)"0 i. !#y   NB. brute force: index of the first permutation that passes testsolution
if. (ns = !#y) do. 'No solution found'
else.
echo 'Solution index is ', ": ns
ns A. i. #y end.
)
regions =: 4 : 0
({&x) <"1 (i. #x) ,. y
)
number2solution =: 4 : 0
y A. i. #x
)
writesolution =: 4 : 0
board =. x
sol =.y
m1 =. x   NB. copy of the board to mark the queens in
n1 =. #x
count =. 0
for_a. sol do.
m1 =. n1 (< count , a) } m1   NB. mark the queen at (count, a) with the value n1 (the board size)
count =. count + 1
end.
m1
)
writewithsolution=: 4 : 0
m1 =: x writesolution y
(":"1 x) ,. '|' ,. ":"1 m1
)
m =: randomboard 9
echo m writewithsolution findsolution m
I won't say this reduces the "Haskell is imposing" feeling to zero, but a non-trivial amount of the initial impression of imposingness is just the very different syntax, such as the way functions are not called with parentheses after the function name. But the different syntax isn't really that big a deal; you just don't know it and aren't used to it. Under the hood it does have some differences, but the differences are magnified when you try to swallow the surface differences and the deep differences all in one shot. Nobody who knows Haskell did that; they learned it the same way you learn any other language, one bit at a time.
Will that produce challenging boards?
1) It's not too hard to make a problem with at least one solution (just put the queens down first, then draw boxes), but there isn't any good way of making levels with unique solutions (a brute-force uniqueness check is sketched after this list).
2) Once you've accomplished that, it's hard to predict how hard a level will be, and then it's hard to make levels easier / harder.
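For point 1, one blunt way to test uniqueness is to enumerate solutions but stop as soon as a second one turns up. A rough Python sketch, assuming board is a square matrix of colour labels like the boards elsewhere in this thread:

def count_solutions(board, limit=2):
    n = len(board)

    def place(row, cols, colours, prev_col):
        if row == n:
            return 1
        found = 0
        for col in range(n):
            if col in cols or board[row][col] in colours:
                continue                                  # column or colour already used
            if prev_col is not None and abs(prev_col - col) <= 1:
                continue                                  # touches the queen in the row above
            found += place(row + 1, cols | {col}, colours | {board[row][col]}, col)
            if found >= limit:
                break                                     # a second solution is enough to reject the level
        return found

    return place(0, set(), set(), None)

# a level is only kept if count_solutions(board) == 1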
I happen to be currently researching this topic (well, I'm doing all kinds of these grid-based puzzles, but this is an example). The algorithm tries to make "good" levels, but there is a good probability it will end up with something useless we need to throw away, and then try again.
It's easy to make levels which are trivial, and similarly easy to make levels which are far beyond human ability, but hitting things in the 'human tricky but solvable' sweet-spot is where most of the difficulty comes from.
I should probably try writing up a human-readable version of how I do it. It involves a bunch of Rust code, so I can hit a whole bunch of trendy topics!
Do you have a blog? I'm interested.
If the base solver you have is a system that can be run in various configurations, with different levels of reasoning and assumptions, and that reports how much search was needed (if any), it can be very useful as a way to measure hardness. In "Sudoku as a Constraint Problem" (https://citeseerx.ist.psu.edu/document?doi=4f069d85116ab6b4c...), Helmut Simonis tested lots of 9x9 Sudoku puzzles against various levels of propagation and pre-processing as a way to measure their hardness, categorizing them by the level of reasoning needed to solve them without search. The MiniZinc model for LinkedIn Queens (https://news.ycombinator.com/item?id=44353731) can be used with various solvers and levels of propagation as such a subroutine.
Now, for production-level puzzle making, such as what King does for Candy Crush, the problems and requirements are even harder. I've heard presentations where they talk about training neural networks to play like human testers (not optimal play, but the most human-like play) in order to test the hardness level of the puzzles.
A common opinion is that a good board is solvable without the use of backtracking. A set of known techniques should be enough to solve the board. To validate if a board is "fun" you need to have a program that can solve the board using these known techniques. Making that program is much harder than just making a general solver. And then you need to find the boards that can be validated as fun. Either you search through random boards, or you get clever...
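As a rough sketch of what such a validator could look like (the two rules here are only examples of "known techniques", and candidates maps each colour to the set of squares it could still use; a real technique set would be larger):

def solve_with_known_techniques(candidates, n):
    """candidates: dict colour -> set of (row, col) still possible; n: board size.
    Returns the queen positions if the rules alone finish the board, or None if stuck."""
    placed = {}
    while len(placed) < n:
        progress = False
        # Rule 1: a colour down to a single square is decided; prune every square
        # that queen attacks (same row, same column, or touching diagonally).
        for colour, squares in candidates.items():
            if colour not in placed and len(squares) == 1:
                (r, c), = squares
                placed[colour] = (r, c)
                for other, sqs in candidates.items():
                    if other != colour:
                        sqs -= {(sr, sc) for (sr, sc) in sqs
                                if sr == r or sc == c
                                or (abs(sr - r) <= 1 and abs(sc - c) <= 1)}
                progress = True
        # Rule 2: a colour confined to a single row claims that row for itself.
        for colour, squares in candidates.items():
            rows = {r for r, _ in squares}
            if len(rows) == 1:
                (row,) = rows
                for other, sqs in candidates.items():
                    if other != colour:
                        gone = {sq for sq in sqs if sq[0] == row}
                        if gone:
                            sqs -= gone
                            progress = True
        if not progress:
            return None   # no known technique applies: the board needs guessing
    return placed

A board this loop finishes counts as solvable without backtracking, under this particular rule set.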
#Variables: 121 (91 primary variables)
- 121 Booleans in [0,1]
#kLinear1: 200 (#enforced: 200)
#kLinear2: 1
#kLinear3: 2
#kLinearN: 30 (#terms: 355)
Presolve summary:
- 1 affine relations were detected.
- rule 'affine: new relation' was applied 1 time.
- rule 'at_most_one: empty or all false' was applied 148 times.
- rule 'at_most_one: removed literals' was applied 148 times.
- rule 'at_most_one: satisfied' was applied 36 times.
- rule 'deductions: 200 stored' was applied 1 time.
- rule 'exactly_one: removed literals' was applied 2 times.
- rule 'exactly_one: satisfied' was applied 31 times.
- rule 'linear: empty' was applied 1 time.
- rule 'linear: fixed or dup variables' was applied 12 times.
- rule 'linear: positive equal one' was applied 31 times.
- rule 'linear: reduced variable domains' was applied 1 time.
- rule 'linear: remapped using affine relations' was applied 4 times.
- rule 'presolve: 120 unused variables removed.' was applied 1 time.
- rule 'presolve: iteration' was applied 2 times.
Presolved satisfaction model '': (model_fingerprint: 0xa5b85c5e198ed849)
#Variables: 0 (0 primary variables)
The solution hint is complete and is feasible.
#1 0.00s main
a a a a a a a a a a *A*
a a a b b b b *B* a a a
a a *C* b d d d b b a a
a c c d d *E* d d b b a
a c d *D* d e d d d b a
a f d d d e e e d *G* a
a *F* d d d d d d d g a
a f f d d d d d *H* g a
*I* i f f d d d h h a a
i i i f *J* j j j a a a
i i i i i k *K* j a a a
Together with validating that there is only one solution, you could probably make the search for good boards more guided than random creation. I'm trying to use it during the generation process to evaluate the difficulty. A basic heuristic I'm trying to work with is counting the number of times a particular colour is eliminated: the higher the count, the harder the problem, since it takes more iterations of the rules to solve. (A counter-example would be a board with one colour covering everything except the cells where the queens of the other colours need to be placed.)
Also, I'm trying to evaluate the efficacy of performing colour swaps, but it's proving more challenging than I thought. The basic idea is that you can swap the colours of neighbouring cells to line up multiple colours, so there are fewer obvious "single cells" which contain the queen. The problem is that this can introduce other solutions, and it's difficult to tell whether a swap makes the puzzle harder or easier to solve.
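For what it's worth, a toy sketch of enumerating those swap candidates (illustrative only; each swap would still need the uniqueness, region-connectivity and difficulty checks re-run, since, as noted, a swap can introduce other solutions):

def neighbour_swaps(board):
    """Yield pairs of orthogonally adjacent cells with different colours;
    swapping the colours of such a pair is one candidate edit to the level."""
    n = len(board)
    for r in range(n):
        for c in range(n):
            for nr, nc in ((r, c + 1), (r + 1, c)):   # right and down neighbours, so each pair appears once
                if nr < n and nc < n and board[nr][nc] != board[r][c]:
                    yield (r, c), (nr, nc)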
https://en.wikipedia.org/wiki/Monad_(functional_programming)
https://en.wikipedia.org/wiki/Quantum_programming
Conceptually they are similar, but the math is way over my head. I have trouble grokking each one actually.
But it's pretty easy for a beginner to start with a list of true/false (or true/null) monads as inputs to a pure function. Imagine the monads occupying nodes in a tree structure like JSON, or merging through NAND/NOR gates to reduce to fewer outputs.
From the outside, we can toggle the inputs to feed them examples like 0101 and see how that affects the outputs. This is basically how a spreadsheet works.
Then we can extend the monads to contain a set of values. Or even a range of values, like a floating point number from 0 to 1 or 0 to pi/2, etc, more like imaginary numbers for use in quantum programming (not sure if this is still a monad).
Functional programming can lazily evaluate the inputs and eliminate don't-cares to calculate all possible outputs within the limits of their computing power and time. Quantum gates can do something similar using the interference patterns between the inputs and logic somehow (the hand wavy part nobody seems to be able to explain).
Maybe this approach could be used as a bridge to eliminate the hand wavy part and give us something tractable in layman's terms. This might be considered quantized or simulated quantum programming.
-
Note: monads are similar to futures/promises and async/await in imperative programming, like using the imaginary number i in algebra. Except that we are generally only concerned with a handful of expected results, so often miss the failure modes by not stress-testing the logic with fuzzing and similar techniques. Which tends to make async code nondeterministic and brittle. So I'm also interested in transpiling async/nonblocking <-> sync/blocking and state machine <-> coroutine.
I don't think expressiveness applies here.
randomboard =: 3 : '? (y,y) $ y'
testsolution =: 4 : 0
m =. x
n =. #x
n -: # ~. ({&m) <"1 (i. n) ,. y A. (i. n)   NB. columns of the y-th permutation; valid iff the queens sit on n distinct colours
)
findsolution =:3 : 0
board =: y
ns =. 1 i.~ (board & testsolution)"0 i. !#y
if. (ns = !#y) do. 'No solution found' else. ns A. i. #y end.
)
writesolution =: 4 : 0
board =. x
sol =.y
m1 =. x   NB. copy of the board to mark the queens in
n1 =. #x
count =. 0
for_a. sol do.
m1 =. n1 (< count , a) } m1
count =. count + 1
end.
m1
)
writewithsolution=: 4 : 0
m1 =: x writesolution y
(":"1 x) ,. '|' ,. ":"1 m1
)
m =: randomboard 9
echo m writewithsolution findsolution m
load 'queens.ijs'
5 2 8 0 3 3 0 5 2|9 2 8 0 3 3 0 5 2
8 2 3 6 7 7 4 5 1|8 9 3 6 7 7 4 5 1
6 1 5 8 3 5 8 7 6|6 1 5 9 3 5 8 7 6
8 4 8 8 7 5 1 1 1|8 4 8 8 9 5 1 1 1
2 6 7 6 5 4 7 3 1|2 6 7 6 5 4 7 9 1
6 8 1 4 1 4 3 2 7|6 8 1 4 1 9 3 2 7
6 0 5 6 5 5 8 5 0|6 0 5 6 5 5 8 5 9
1 7 5 5 8 1 1 0 1|1 7 5 5 8 1 9 0 1
8 4 6 2 2 4 6 4 1|8 4 9 2 2 4 6 4 1
As for symmetry: LinkedIn Queens boards are generally not symmetric, since that would imply more than one solution.
Here's a trivial and fast MIP solution using python/pulp, which would be essentially the same in any mathematical programming DSL:
from collections import defaultdict
import pulp
board = [
["P", "P", "P", "P", "P", "P", "P", "P", "P"],
["P", "P", "R", "S", "S", "S", "L", "L", "L"],
["P", "R", "R", "W", "S", "L", "L", "L", "L"],
["P", "R", "W", "W", "S", "O", "O", "L", "L"],
["P", "R", "W", "Y", "Y", "Y", "O", "O", "L"],
["P", "R", "W", "W", "Y", "O", "O", "L", "L"],
["P", "R", "R", "W", "Y", "O", "B", "L", "L"],
["P", "R", "R", "G", "G", "G", "B", "B", "L"],
["P", "P", "R", "R", "G", "B", "B", "L", "L"],
]
# group by color for color constraint
def board_to_dict(board):
    nr = len(board)
    res = defaultdict(list)
    for i, row in enumerate(board):
        if len(row) != nr:
            raise ValueError("Input must be a square matrix")
        for j, color in enumerate(row):
            res[color].append((i, j))
    return res

color_regions = board_to_dict(board)
N = len(color_regions)
prob = pulp.LpProblem("Colored_N_Queens", pulp.LpMinimize)
x = [[pulp.LpVariable(f"x_{i}_{j}", cat="Binary") for j in range(N)] for i in range(N)]
# Row constraints
for i in range(N):
    prob += pulp.lpSum(x[i][j] for j in range(N)) == 1
# Column constraints
for j in range(N):
    prob += pulp.lpSum(x[i][j] for i in range(N)) == 1
# Color region constraints
for positions in color_regions.values():
    prob += pulp.lpSum(x[i][j] for (i, j) in positions) == 1
# No diagonal adjacency
for i in range(N):
    for j in range(N):
        for di, dj in [(-1, -1), (-1, 1), (1, -1), (1, 1)]:
            ni, nj = i + di, j + dj
            if 0 <= ni < N and 0 <= nj < N:
                prob += x[i][j] + x[ni][nj] <= 1
# Trivial objective
prob += 0
res = prob.solve()
print(f"Solver status: {pulp.LpStatus[prob.status]}")
if pulp.LpStatus[prob.status] == "Optimal":
    for i in range(N):
        row = ""
        for j in range(N):
            row += ("#" if pulp.value(x[i][j]) > 0.5 else " ") + board[i][j] + " "
        print(row)
and its output:

#P P P P P P P P P
P P R S S #S L L L
P R R W S L L L #L
P R #W W S O O L L
P R W Y Y Y O #O L
P R W W #Y O O L L
P #R R W Y O B L L
P R R #G G G B B L
P P R R G B #B L L