The meat of the code was a one-liner. But opening a file, throwing away the first line, and parsing a CSV took a lot more head-scratching, especially if you've only ever done puzzles in Haskell.
Makes the hard things easy and the easy things hard
line = decimal `sepBy` char ','
file = line `sepBy` newline
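For the curious, here is roughly what the surrounding plumbing might look like, sketched with megaparsec. The file name, the Int cell type, and the header-skipping approach are assumptions for illustration, not from the original comment.

import Data.Text (Text)
import qualified Data.Text.IO as TIO
import Data.Void (Void)
import Text.Megaparsec
import Text.Megaparsec.Char (char, newline)
import qualified Text.Megaparsec.Char.Lexer as L

type Parser = Parsec Void Text

-- Same two-liner as above, with a concrete parser type.
line :: Parser [Int]
line = L.decimal `sepBy` char ','

-- Throw away the header line, then parse the remaining rows.
file :: Parser [[Int]]
file = skipHeader *> line `sepBy` newline
  where
    skipHeader = takeWhileP Nothing (/= '\n') *> newline

main :: IO ()
main = do
  contents <- TIO.readFile "input.csv"   -- hypothetical input file
  print (parse file "input.csv" contents)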
That's it: those two lines. It almost reads like prose.

It's fun because the interviewer often doesn't grok the paradigm. In one case, I was asked to make a password generator, and I went with QuickCheck's Gen, which lets me write a random generator of things without explicitly defining a single function (they're all monads).
So you go from somehow describing the thing you want to make to “I’m done.” :-D
I also refused to solve a part of the task because it would lead to bad password policy. Instead I property-tested the generator.
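For readers unfamiliar with Gen, the idea is roughly the following; a minimal sketch, not the commenter's actual generator, with an arbitrary alphabet and length:

import Test.QuickCheck

-- A random password generator built by composing Gen values,
-- with no hand-written randomness plumbing.
passwordGen :: Gen String
passwordGen = vectorOf 16 (elements chars)
  where
    chars = ['a'..'z'] ++ ['A'..'Z'] ++ ['0'..'9']

-- Property-test the generator itself rather than eyeballing passwords.
prop_length :: Property
prop_length = forAll passwordGen (\pw -> length pw == 16)

main :: IO ()
main = quickCheck prop_length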
That said: if I ever gave the palindrome question to a candidate (I haven't) and they pulled out the `reverse` library function [0], I'd encourage them to do more.
[0] https://hackage.haskell.org/package/base-4.21.0.0/docs/Prelu...
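For reference, the pragmatic one-liner in question is roughly this (a minimal sketch, not anyone's actual interview answer):

-- A list is a palindrome if it equals its own reverse.
isPalindrome :: Eq a => [a] -> Bool
isPalindrome xs = xs == reverse xs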
Yeah spot on - exactly what I was aiming at too...
Having the intuition to use reverse (and the pragmatism to keep it simple) is a great sign, and realistically more valuable [to me] for hiring than implementing rudimentary algorithms, but... I still want at least a bit of both abilities.
Also, knowledge of the standard library / language features is useful, but (for me) it doesn't equate to ability in complex problem solving.
But honestly could be bias :)
What is the order of sumNToTotal?
> While it's true that sort has an O(nlogn) performance profile, one interesting thing here is that sorting is lazy in Haskell! This means that if our two strings are unequal, they will only be sorted far enough to determine inequality!
I don't understand exactly how this works - don't all O(n log n) sorting algorithms still have to do some work on the other elements of the list before pulling out the smallest one? If you just scan and pull the smallest one to the front each time, that's O(n²). And there's an implementation detail here: differences at the start of the sorted string will be caught more quickly than differences at the end, so real-world behaviour depends on your choice of sort order.
Agreed that the performance analysis is quite superficial and that some of the solutions, like sumNToTotal, must be very sub-optimal.
Thus the whole thing is O(n²). It's not like this is particularly hard for Haskell. If anything, it's often easier because we can set up the recurrence by analogy with the definition.
(the final filter sums k elements nCr(n, k) times as well)
This is just a beginner's Haskell post. If an interviewer really meant to test optimization knowledge, I would hope they wouldn't ask me to optimize one O(n) utility function into another.
I think not every piece of code needs to be hyper-optimised, and in those cases higher-level (declarative) languages, where you don't specify every detail of the algorithm, can give rise to very elegant code. But writing very performant Haskell code is a bit trickier.
In the anagram case, there's a straightforward O(n) solution where you count the occurrence of each letter in a hash map for both words and compare those. That can be written in Haskell too, of course, but it's not going to be as short.
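For comparison, a sketch of that counting approach. It uses Data.Map here, which is a balanced tree rather than a true hash map, so it is technically O(n log n), but the shape of the solution is the same:

import qualified Data.Map.Strict as M

-- Count each character's occurrences, then compare the two count tables.
counts :: String -> M.Map Char Int
counts = M.fromListWith (+) . map (\c -> (c, 1))

isAnagram :: String -> String -> Bool
isAnagram a b = counts a == counts b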
Well, it's not clear to me why we're terming these silly. To me they mostly seem simple, rather than silly.
fizzle :: Int -> String
fizzle n
  | fizz && baz  = "Fizz Baz!"   -- if this branch is taken, buzz is never evaluated
  | fizz && buzz = "Fizz Buzz!"
  | fizz         = "Fizz!"
  | buzz         = "Buzz!"
  | otherwise    = show n
  where
    fizz = n `mod` 3 == 0
    buzz = n `mod` 5 == 0
    baz  = n `mod` 7 == 0
Probably not Haskell developers, but then again, I don't have enough money for my opinions to be worth anything.
> One I'm glad I don't have.
you should probably put less thought into it if you want to be happy & not grapple with such difficult thoughts.
> At what point do you draw the line and begin to help others altruistically?
At the point you have money, IMO. Otherwise what is the point of having money?
It is also an operator, meaning it can be used with infix notation, as in (x : xs). Haskell has something called operator sections, where if one supplies only one of the arguments to an operator it will return a function expecting the other argument. In other words
(x:)  == \xs -> (x:xs)
(:xs) == \x  -> (x:xs)
This can be used, as in this article, to create a function that prepends x to any list. Another common example is (1+), which increments any number it is given, or (:[]), which turns any value into a one-element list.

It can also be used much more cleverly -- especially considering that any two-argument function can be turned into an operator with backticks -- but then (in my opinion) readability starts to suffer.
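A few throwaway examples of the above, in case the notation is new (standard Haskell, nothing specific to the article):

main :: IO ()
main = do
  print (map (1+) [1, 2, 3])        -- section of (+): [2,3,4]
  print (map (:[]) "abc")           -- wrap each element in a list: ["a","b","c"]
  print (map (0:) [[1], [2]])       -- prepend 0 to each list: [[0,1],[0,2]]
  print (map (`div` 2) [4, 6, 9])   -- backticks make div an operator: [2,3,4]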
The article also features examples of point-free style, another unfortunate trend for readability.
As long as you use operators sparingly, don’t abuse partial application and prefer explicit lambdas to composition, Haskell is fairly readable. The issue is that approximately no Haskeller writes Haskell this way.
(For example, (\x -> x ++ y) and (\y -> x ++ y) look pretty similar to me at first glance, but (++y) and (x++) are immediately distinguishable.)
Of course, this is reliant on knowing the operators but that seems like a mostly orthogonal issue to me: You still need to know the operator in the lambda expression. That said, the niceness of sections gives people yet another reason to introduce operators for their stuff when arguably they already are too prevalent.
bool m3 = !(i % 3);
bool m5 = !(i % 5);
if (m3) printf("Fizz");
if (m5) printf("Buzz");
if (m3 || m5) printf("\n");
You can turn in your visitor badge at the front desk, and they'll call you an Uber.

> n `mod` 3 == 0 && n `mod` 5 == 0

And you have if (m3 || m5)
I really don't see what point you're trying to make here... they're saying it's unnecessary because if you go in order, you first print "Fizz" and then print "Buzz", which always produces "FizzBuzz" for the `mod 15` case, so you don't need a special string for that.
the "if (m3 || m5)" is just printing a newline because under that condition you printed something earlier.
I've always liked this solution, which avoids that: https://archive.is/KJ39B
import Control.Monad (guard)
import Data.Foldable (for_)
import Data.Maybe (fromMaybe)

fizzbuzz :: Int -> String
fizzbuzz i =
  fromMaybe (show i) . mconcat $
    [ "fizz" <$ guard (i `rem` 3 == 0)
    , "buzz" <$ guard (i `rem` 5 == 0)
    ]

main :: IO ()
main =
  for_ [1..100] $
    putStrLn . fizzbuzz
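For instance, a hypothetical extra rule (say, "baz" on multiples of 7, invented here purely for illustration) is a single new list entry, reusing the imports above:

fizzbuzzbaz :: Int -> String
fizzbuzzbaz i =
  fromMaybe (show i) . mconcat $
    [ "fizz" <$ guard (i `rem` 3 == 0)
    , "buzz" <$ guard (i `rem` 5 == 0)
    , "baz"  <$ guard (i `rem` 7 == 0)   -- the one added line
    ]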
This allows you to add special prints by adding just the one line of code, changing nothing else.

With C, any claim one makes about repeatability is always wrong at some point, depending on the version compliance.
I like C, but Haskell is a happy, optimistic syntax... Julia is probably the language I'd wager on becoming more relevant as Moore's law's corpse begins to stink. =3
Phrasing the question in a way that doesn't leave room for that insight is also a pretty big goof.
As for AI, yes, FizzBuzz is trivial for any model because it's so well-represented in the training data. The common benchmarks involve things like "Render a physically-correct bouncing ball inside a rotating hexagon," or something else that is too complex to simply regurgitate.
Is there a way to think of proofs as being lazy? Yes, but it's not what you think. It's an idea in proof theory called polarization. Some parts of a proof can be called positive or negative. Positive parts roughly correspond to strict, and negative roughly correspond to lazy.
To explain a bit more: Suppose you want to prove all chess games terminate. You start by proving "There is no move in chess that increases the number of pieces on the board." This is a lemma with type `forall m: ChessMove, forall b: BoardState, numPieces b >= numPieces (applyMove m b)`. Suppose you now want to prove that, throughout a game of chess, the amount of material never increases. You would do this by inducting over the first lemma, which is essentially the same as using it in a recursive function that takes in a board state and a series of moves, and outputs a proof that the final state does not have more material than the initial state. This is compact, but intrinsically computational. But now you can imagine unrolling that recursive function and getting a different proof that material never increases: simply write out every possible chess game and check. This is called "cut elimination."
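To make the "inducting over the lemma" step concrete, here is a rough Lean sketch. Every name in it (ChessMove, BoardState, numPieces, applyMove) is a hypothetical placeholder, not a real chess formalization:

-- The single-move lemma is taken as a hypothesis; the theorem extends it,
-- by induction over a list of moves, to a whole game.
theorem material_never_increases
    (ChessMove BoardState : Type)
    (numPieces : BoardState → Nat)
    (applyMove : ChessMove → BoardState → BoardState)
    (step : ∀ (m : ChessMove) (b : BoardState),
      numPieces (applyMove m b) ≤ numPieces b) :
    ∀ (ms : List ChessMove) (b : BoardState),
      numPieces (ms.foldl (fun s m => applyMove m s) b) ≤ numPieces b := by
  intro ms
  induction ms with
  | nil => intro b; exact Nat.le_refl _
  | cons m rest ih =>
      intro b
      exact Nat.le_trans (ih (applyMove m b)) (step m b)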
So you can see there's a sense in which every component of a proof is "executable," and you can see whether it executes in a strict or lazy manner. Implications ("If A, then B") are lazy. Conjunctions ("A and B") can be either strict or lazy, depending on how they're used. I'm at the edge of my depth here and can't explain more -- in honesty, I never truly grokked proof polarization.
Conversely, in programming languages, it's not strictly accurate to say that the C program is strict and the Haskell program is lazy. In C, function definitions and macro expansions are lazy. You can have the BAR() macro create a #error, and yet FOO(BAR()) need not create a compile error. In Haskell, bang patterns, primitives like Int#, and the `seq` operator are all strict.
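(As an aside, the Haskell strictness escape hatches just mentioned look roughly like this; a tiny illustrative sketch, not from the thread.)

{-# LANGUAGE BangPatterns #-}

-- A bang pattern forces the accumulator at every step instead of
-- building up a chain of unevaluated additions.
sumStrict :: [Int] -> Int
sumStrict = go 0
  where
    go !acc []       = acc
    go !acc (x : xs) = go (acc + x) xs

-- seq evaluates its first argument to weak head normal form
-- before returning its second.
forceFirst :: Int -> Int -> Int
forceFirst a b = a `seq` b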
So it's not that proofs are lazy, C is strict, and Haskell is lazy, and therefore Haskell is more like a proof. It's not even accurate to say that C is strict and Haskell is lazy. Within a proof, and within a C or a Haskell program, you can find both lazy parts and strict parts.