80 points by behnamoh 20 hours ago | 10 comments
  • djtango 16 hours ago
    I remember I did an Advent of Code day 1 in Haskell a while back.

    The meat of the code was a one-liner. But opening a file, throwing away the first line and parsing a CSV took a lot more head-scratching, especially if you've only ever done puzzles in Haskell.

    Makes the hard things easy and the easy things hard

    • ngruhn 16 hours ago
      Same. Until I discovered those parser combinator libraries which are THE way to do parsing in Haskell. It makes parsing so pleasant you'll never need a regex again. A parser for that CSV file probably looks like this:

          line = decimal `sepBy` char ','
          
          file = line `sepBy` newline
      
      That's it. It almost reads like prose.
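For anyone curious what's under the hood: those combinators usually come from a library such as megaparsec or attoparsec, but the core idea fits in a few lines of plain Haskell. A minimal base-only sketch (simplified; no error reporting, no quoting rules):

```haskell
import Data.Char (isDigit)

-- A parser consumes a prefix of the input and returns a value
-- plus the remaining input, or fails.
newtype Parser a = Parser { runParser :: String -> Maybe (a, String) }

char :: Char -> Parser Char
char c = Parser $ \s -> case s of
  (x:rest) | x == c -> Just (c, rest)
  _                 -> Nothing

decimal :: Parser Int
decimal = Parser $ \s -> case span isDigit s of
  ("", _)    -> Nothing
  (ds, rest) -> Just (read ds, rest)

-- p `sepBy` sep: zero or more p's separated by sep (simplified).
sepBy :: Parser a -> Parser b -> Parser [a]
sepBy p sep = Parser go
  where
    go s = case runParser p s of
      Nothing      -> Just ([], s)
      Just (a, s') -> case runParser sep s' of
        Nothing       -> Just ([a], s')
        Just (_, s'') -> case go s'' of
          Just (as, rest) -> Just (a : as, rest)
          Nothing         -> Just ([a], s'')

line :: Parser [Int]
line = decimal `sepBy` char ','
```

With this, `runParser line "1,2,3"` yields `Just ([1,2,3], "")`; real libraries add error messages, backtracking control, and many more primitives on top of the same shape.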
      • djtango 16 hours ago
        Thanks for sharing! This was in a pre-LLM era and I couldn't be bothered to research the right way to do this. Those parser combinators are excellent, so clean
      • rafram 10 hours ago
        That wouldn’t be a very good CSV parser.
        • Tainnor 9 hours ago
          It doesn't have to be a general purpose CSV parser, it just has to be able to parse the specific AoC input
    • nh23423fefe 6 hours ago
      <input ghc -e 'interact (oneliner . parse . drop 1 . lines)'
  • sshine 16 hours ago
    I live-coded Haskell a couple of times at job interviews for non-Haskell positions where you were free to choose.

    It’s fun because the interviewer often doesn’t grok the paradigm. In one case, I was asked to make a password generator, and I went with QuickCheck’s Gen, which lets me write a random generator of things without explicitly defining a single function (they’re all monads).

    So you go from somehow describing the thing you want to make to “I’m done.” :-D
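As a rough illustration of the shape of that approach, here is a base-only toy stand-in for QuickCheck's Gen (the real library provides `elements`, `choose`, `vectorOf` and much more; the LCG and all names below are just for the sketch):

```haskell
-- Toy generator monad: threads an Int seed through each step,
-- mimicking the shape of QuickCheck's Gen without the library.
newtype Gen a = Gen { runGen :: Int -> (a, Int) }

instance Functor Gen where
  fmap f (Gen g) = Gen $ \s -> let (a, s') = g s in (f a, s')

instance Applicative Gen where
  pure a = Gen $ \s -> (a, s)
  Gen gf <*> Gen ga = Gen $ \s ->
    let (f, s')  = gf s
        (a, s'') = ga s'
    in (f a, s'')

instance Monad Gen where
  Gen g >>= k = Gen $ \s -> let (a, s') = g s in runGen (k a) s'

-- A simple linear congruential generator step (illustrative only).
next :: Gen Int
next = Gen $ \s ->
  let s' = (1103515245 * s + 12345) `mod` 2147483648 in (s', s')

elements :: [a] -> Gen a
elements xs = (\n -> xs !! (n `mod` length xs)) <$> next

vectorOf :: Int -> Gen a -> Gen [a]
vectorOf n g = sequence (replicate n g)

-- Describe the password; no explicit generation loop anywhere.
password :: Gen String
password = vectorOf 12 (elements (['a'..'z'] ++ ['A'..'Z'] ++ ['0'..'9']))
```

The point of the interview anecdote survives even in the toy: you compose a description of the value you want, and running it is a one-liner like `runGen password seed`.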

    • 18172828286177 16 hours ago
      Did you get the job?
      • sshine 14 hours ago
        Yeah.

        I also refused to solve a part of the task because it would lead to bad password policy. Instead I property-tested the generator.

  • pentaphobe 15 hours ago
    Always fun to see this kind of Rosetta Code thing; I feel like no matter the topic there's something to glean from reading!

    That said: if I ever gave the palindrome question to a candidate (I haven't) and they pulled out a `reverse` library func/keyword we'd be encouraging them to do more.

    • chriswarbo 9 hours ago
      Haskell's default `String` type is a singly-linked list of `Char` values[0] so it takes O(n) time to access the end of the string. Hence I think the solution using `reverse` is probably the best there is, since any attempt to be "smarter" would just be more complicated for no real performance gain. Though I agree that we could ask someone to also write their own reverse function.

      [0] https://hackage.haskell.org/package/base-4.21.0.0/docs/Prelu...
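For reference, the reverse-based check is essentially a one-liner; the looser variant below is a guess at what an interviewer might ask as a follow-up, not something from the article:

```haskell
import Data.Char (isAlpha, toLower)

-- The O(n) reverse-based check: fine for Haskell's linked-list
-- String, since reaching the last Char is O(n) anyway.
isPalindrome :: String -> Bool
isPalindrome s = s == reverse s

-- A more forgiving variant (hypothetical follow-up): ignore case
-- and non-letters, so "A man, a plan..." counts.
isPalindromeLoose :: String -> Bool
isPalindromeLoose s = cleaned == reverse cleaned
  where cleaned = map toLower (filter isAlpha s)
```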

      • pentaphobe 8 hours ago
        "... write their own reverse.."

        Yeah spot on - exactly what I was aiming at too...

        Having the intuition to use reverse (& pragmatism to keep it simple) is a great sign, and realistically more valuable [to me] for hiring than implementing rudimentary algorithms, but... still want at least a bit of both abilities

        Also knowledge of standard library / language features is useful but (for me) doesn't equate to abilities in complex problem solving.

        But honestly could be bias :)

  • thdhhghgbhy 16 hours ago
    Typical Haskell coding post with no mention of the order or efficiency of the algorithm solutions. The thing is, even in basic coding quizzes, if the interviewer is worth their salt, they will ask about how to improve the efficiency of the naive solutions first put forward, like the palindrome answer in this post.

    What is the order of sumNToTotal?

    • dmurray 15 hours ago
      It does discuss performance and points out some places where the Haskell implementation makes the natural naive solutions performant.

      > While it's true that sort has an O(nlogn) performance profile, one interesting thing here is that sorting is lazy in Haskell! This means that if our two strings are unequal, they will only be sorted far enough to determine inequality!

      I don't understand exactly how this works - don't all O(n log n) sorting algorithms still have to do some work on the other elements of the list before pulling out the smallest one? If you just scanned and pulled the smallest one to the front each time that is O(n^2). And there's an implementation detail here that differences at the start of the sorted string will be caught more quickly than at the end, so real-world behaviour is dependent on your choice of sort order.

      Agreed that the performance analysis is quite superficial and that some of the solutions like sumNToTotal must be very sub-optimal.

    • kqr 14 hours ago
      The matchesSum check is constant in the length of the input list. The filter adds an iteration over all results of (combinations n xs). If the complexity of (combinations n) is T(n) in xs, then by analogy of its recursive definition we have two recursive calls that are T(n-1), we have an fmap that is O(1) and a concatenation that is O(1). That sets up the recurrence T(n) = 2T(n-1) making the (combinations n) function O(n) in xs.

      Thus the whole thing is O(n²). It's not like this is particularly hard for Haskell. If anything, it's often easier because we can set up the recurrence by analogy with the definition.

      • iamevn 14 hours ago
        can you not narrow it down further to O(k * nCr(n, k)) (n=size of input, k=combination size) since it does k conses for each of the nCr(n, k) combinations?

        (the final filter sums k elements nCr(n, k) times as well)

      • benmmurphy 14 hours ago
        how is combinations O(n)? for n choose 3 generating combinations is going to approach n^3 since the formula is basically (n * n * n) / constant and for sumToAny generating all the combinations is 2^n.
    • Paracompact 15 hours ago
      The execution order, or the order of the output list? The latter should be self-evident from the function body (first elements get collected first), and the former holds no tricks either since filters are evaluated from the head first, and concat (<>) allows lazy access to the first list's elements.

      This is just a beginner's Haskell post. If an interviewer really meant to test optimization knowledge, I would hope they wouldn't ask me to optimize one O(n) utility function into another.

    • iamevn 14 hours ago
      with N as the length of the subsets and X as the length of input, `combinations n xs` basically walks through list xs and at each element, conses that element onto each combination of size n-1 of later elements in the list then appends those resulting lists together. this is on the order of N * nCr(X, N) aka O(NX!/(N!(X-N)!)) which dominates the overall runtime. the final filter walks through each combination and does O(N) work at each element to find the ones that sum to the desired amount, again O(NX!/(N!(X-N)!))
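For context, the recursive definition under discussion presumably looks something like this (a plausible reconstruction, not necessarily the article's exact code): for each element, either cons it onto every (n-1)-combination of the rest, or skip it.

```haskell
-- Plausible reconstruction of the functions being analysed.
combinations :: Int -> [a] -> [[a]]
combinations 0 _      = [[]]
combinations _ []     = []
combinations n (x:xs) =
  map (x:) (combinations (n - 1) xs)  -- combinations that use x
    ++ combinations n xs              -- combinations that skip x

sumNToTotal :: Int -> Int -> [Int] -> [[Int]]
sumNToTotal n total = filter ((== total) . sum) . combinations n
```

Here `combinations 2 [1,2,3]` is `[[1,2],[1,3],[2,3]]`, i.e. nCr(3, 2) = 3 lists, which is where the nCr(n, k) factor in the counting above comes from.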
    • Tainnor 9 hours ago
      As much as I find Haskell elegant for many things, I have to agree. It reminds me of the infamous Haskell "quicksort" which isn't actually quicksort because it's not in place.

      I think not every piece of code needs to be hyper-optimised and in those cases, higher-level (declarative) languages where you don't specify every detail of the algorithm can give rise to very elegant code. But writing very performant Haskell code is a bit trickier.

      In the anagram case, there's a straightforward O(n) solution where you count the occurrence of each letter in a hash map for both words and compare those. That can be written in Haskell too, of course, but it's not going to be as short.
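A sketch of that counting approach, using Data.Map from containers (which ships with GHC) rather than an actual hash map, so it's O(n log k) with k the alphabet size rather than strictly O(n):

```haskell
import qualified Data.Map.Strict as M

-- Count occurrences of each character.
counts :: String -> M.Map Char Int
counts = foldr (\c m -> M.insertWith (+) c 1 m) M.empty

-- Two words are anagrams iff their letter counts agree.
isAnagramOf :: String -> String -> Bool
isAnagramOf a b = counts a == counts b
```

Still short by most languages' standards, though indeed longer than the sort-and-compare one-liner.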

      • jose_zap 4 hours ago
        Interestingly, the naive solution presented in this article is O(n) thanks to laziness.
  • ReDress 14 hours ago
    > Silly job interview questions in Haskell.

    Well, it's not clear to me why we're calling these silly. They mostly seem simple to me, rather than silly.

  • lynx97 17 hours ago
    Well, that fizzle implementation is not a good example of a great answer. It checks each of the two conditions twice. At least I would use a where clause to define fizz and buzz.
    • internet_points 13 hours ago
      which is nice with the laziness because you only eval what you need, e.g.

          fizzle :: Int -> String
          fizzle n
            | fizz && baz = "Fizz Baz!" -- if this branch is taken, buzz is never evaluated
            | fizz && buzz = "Fizz Buzz!"
            | fizz = "Fizz!"
            | buzz = "Buzz!"
            | otherwise = show n
            where
              fizz = n `mod` 3 == 0
              buzz = n `mod` 5 == 0
              baz = n `mod` 7 == 0
  • Sourabhsss1 17 hours ago
    The words chosen for the functions are easy to understand.
  • rhdjsjebshjffn 19 hours ago
    [flagged]
    • 90s_dev 18 hours ago
      For one thing, it's hard to know what to spend it on. What should take priority? Obviously your own basic needs, for starters. And then of course the needs of your immediate dependents. But then what? Do you try to increase your quality of life? Theirs? At what point do you draw the line and begin to help others altruistically? Are two coins worth more than billions in practice? Besides, what even is quality of life? It's a very difficult and heavy problem. One I'm glad I don't have.
      • zwnow 16 hours ago
        Altruism? In rich folk? Usually they convince themselves on thinking they need stuff they don't actually need. Employees can't afford everyday life? Not their issue.
      • fkfyshroglk 16 hours ago
        > it's hard to know what to spend it on.

        Probably not Haskell developers, but then again, I don't have enough money for my opinions to be worth anything.

        > One I'm glad I don't have.

        you should probably put less thought into it if you want to be happy & not grapple with such difficult thoughts.

        > At what point do you draw the line and begin to help others altruistically?

        At the point you have money, IMO. Otherwise what is the point of having money?

  • pensatoio 19 hours ago
    Fifteen years into my career, and I'm finally realizing that "expressive" languages are practically unreadable.
    • tikhonj 19 hours ago
      unfamiliar languages are practically unreadable
      • rhdjsjebshjffn 18 hours ago
        I don't see how that's any better for a haskell shop. i got some empathy but you chose a rough life.
    • rrradical 19 hours ago
      As a haskell programmer, all of that was easily readable.
      • sgarland 18 hours ago
        As a non-Haskell programmer, all of that was easily readable. Straightforward words for functions make that pretty easy.
        • RHSeeger 18 hours ago
          There were a couple of places that took me a couple reads to figure out, like the fact that `(x:)` was "prepend". But overall, I followed the code pretty well. From the context of someone that wrote a small amount of Haskell a decade ago.
          • kqr 18 hours ago
            The : operator is the linked list data constructor. It takes an element and a list and creates a new linked list by linking the element to the existing list. It does the opposite when used in a pattern match: separates out the first element in a linked list.

            It is also an operator, meaning it can be used with infix notation, as in (x : xs). Haskell has something called operator sections, where if one supplies only one of the arguments to an operator it will return a function expecting the other argument. In other words

                (x:) == \xs -> (x:xs)
            
            and

                (:xs) == \x -> (x:xs)
            
            This can be used as in this article, to create a function that prepends x to any list. Another common example is (1+) which increments any number it is given, or (:[]) which turns any value into a one-element list.

            It can also be used much more cleverly -- especially considering that any two-argument function can be turned into an operator with backticks -- but then (in my opinion) readability starts to suffer.
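A few sections in action (illustrative examples, not from the article):

```haskell
-- Operator sections are ordinary functions; each comment shows
-- the value the expression evaluates to.
incremented :: [Int]
incremented = map (1+) [1, 2, 3]      -- [2,3,4]

prefixed :: [String]
prefixed = map ('x':) ["s", "ys"]     -- ["xs","xys"]

singletons :: [[Int]]
singletons = map (:[]) [1, 2, 3]      -- [[1],[2],[3]]

halved :: [Double]
halved = map (/ 2) [1, 3]             -- [0.5,1.5]
```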

            • shakna 17 hours ago
              So it's the equivalent of Lisp's cons then?
              • djtango 16 hours ago
                Yeah so prepend is just currying cons with the head
              • sshine 14 hours ago
                Yes, : is pronounced cons.
          • StopDisinfo910 18 hours ago
            It’s partial application of cons via the operator, admittedly a poor choice from Haskell, a language which likes operators a bit too much. I think eta-expansion makes the whole thing clearer: (\xs -> x:xs) but most Haskellers would disagree.

            The article also features examples of point-free style, another unfortunate trend for readability.

            As long as you use operators sparingly, don’t abuse partial application and prefer explicit lambdas to composition, Haskell is fairly readable. The issue is that approximately no Haskeller writes Haskell this way.

            • Skeime 14 hours ago
              I don't use Haskell nearly enough to call myself a Haskeller but I will still disagree. Yes, operator sections are yet another thing to learn but I find them very intuitive, and actually easier to read than the equivalent lambda expression because I don't have to match up the bound variable.

              (For example, (\x -> x ++ y) and (\y -> x ++ y) look pretty similar to me at first glance, but (++y) and (x++) are immediately distinguishable.)

              Of course, this is reliant on knowing the operators but that seems like a mostly orthogonal issue to me: You still need to know the operator in the lambda expression. That said, the niceness of sections gives people yet another reason to introduce operators for their stuff when arguably they already are too prevalent.

              • sshine 14 hours ago
                It’s not that those language features are hard to understand. They’re all syntactic and don’t bring a ton of theory with them. It’s just that the tower of understanding for basic programs is very tall, and the tendency to introduce abstraction essentially never ends. I spent ten years with Haskell as my go-to language and there are still things I don’t understand and haven’t explored. It’s not like that with Python, or Go, or even Rust.
        • 90s_dev 18 hours ago
          I honestly just skimmed the code and assumed it probably does what I would guess it does. It seemed to make sense and be straightforward... assuming my guesses were right, I guess?
        • CamperBob2 18 hours ago
          As a C programmer, that's the worst FizzBuzz implementation ever. You're not supposed to special-case % 15, just

              bool m3 = !(i % 3);
              bool m5 = !(i % 5);
              if (m3) printf("Fizz");
              if (m5) printf("Buzz");
              if (m3 || m5) printf("\n");
          
          You can turn in your visitor badge at the front desk, and they'll call you an Uber.
          • unsnap_biceps 18 hours ago
            They have

               n `mod` 3 == 0 && n `mod` 5 == 0
            
            And you have

               if (m3 || m5)
            
            I really don't see what point you're trying to make here...
            • tayo42 17 hours ago
              The article has a special string that says "fizz buzz".

              They're saying it's unnecessary: if you check the conditions in order, you first print "Fizz" and then "Buzz", which gives you the equivalent of the `mod 15` case for free, so you don't need a special string like that.

              The "if (m3 || m5)" just prints a newline, because under that condition you printed something earlier.

              • hnlmorg 16 hours ago
                It’s still an extra condition. So it really doesn’t matter if you’re printing “Fizz Buzz” string or not, it’s the same amount of branching.
                • airstrike 9 hours ago
                  add another bool that stores the information to write a newline whenever either fizz or buzz happen and you don't need the third branch
          • kqr 18 hours ago
            I agree. But you're also not supposed to have separate booleans for each special print, because when you have many special prints it gets annoying to extend.

            I've always liked this solution, which avoids that: https://archive.is/KJ39B

                fizzbuzz i =
                  fromMaybe (show i) . mconcat $
                    [ "fizz" <$ guard (i `rem` 3 == 0)
                    , "buzz" <$ guard (i `rem` 5 == 0)
                    ]
            
                main =
                  for_ [1..100] $
                    putStrLn . fizzbuzz
            
            This allows you to add special prints by adding just the one line of code, changing nothing else.
          • coolcase 18 hours ago
            Nice. C looks like such a safe language, with the casual use of an int as a bool.
            • Joel_Mckay 17 hours ago
              Depends which of the hundreds of C compilers you used, as some represent "bool" as uint8_t, others as unsigned char, and others bit-pack it with an optimizer.

              With C, any claim one makes about repeatability is always wrong at some point, depending on the version compliance.

              I like C, but Haskell is a happy, optimistic syntax... Julia is probably the language I'd wager on becoming more relevant as Moore's law's corpse begins to stink. =3

          • Glyptodon 17 hours ago
            I can't tell if you're being serious or making a joke.
          • chpatrick 18 hours ago
            That's an extra system call for printing the newline.
            • CamperBob2 18 hours ago
              Well, duh, yeah, if you call setbuf(stdio, NULL) first.
              • CamperBob2 5 hours ago
                Uh, setbuf(stdout, NULL)

                I'll call my own Uber, thanks

          • vostok 18 hours ago
            I think this person wants a space between Fizz and Buzz.
            • CamperBob2 18 hours ago
              Exactly, which means the interviewer didn't even state the problem correctly. The train had already jumped the rails by the time the candidate started writing. Hopefully HR will agree that they deserve each other.
          • YZF 17 hours ago
            And yet it's funny how many times you see the supposed "correct" solution missing that 3x5=15. I wonder how AI will answer fizzbuzz, is that part of any standard benchmark?
            • CamperBob2 5 hours ago
              I mean, all trolling aside, that's kind of the idea behind FizzBuzz. If you don't notice that 15 is divisible by 3 and 5 and take advantage of that somehow in your logic, or at least acknowledge it, you really cannot be said to have aced the problem, even if your program's output is technically correct.

              Phrasing the question in a way that doesn't leave room for that insight is also a pretty big goof.

              As for AI, yes, FizzBuzz is trivial for any model because it's so well-represented in the training data. The common benchmarks involve things like "Render a physically-correct bouncing ball inside a rotating hexagon," or something else that is too complex to simply regurgitate.

          • internet_points 13 hours ago
            and people say haskell is hard to get
    • sponnath 18 hours ago
      Not all programming languages are obvious derivatives of C. Haskell is pretty readable once you spend some time getting to know the syntax.
      • 90s_dev 17 hours ago
        The semantics are very different. It's much closer to a mathematical proof than a C program with different syntax, in terms of its lazy evaluation.
        • Darmani 17 hours ago
          I prefer to think of Haskell-like lazy evaluation as constructing a dataflow graph. The expression `map f (sort xs)` constructs a dataflow graph that streams each output of the sort function to `f`, and then printing the result begins running that job. Through that lens, the Haskell program is more like constructing a Spark pipeline. But you can also think of it as just sorting a list then transforming each element with a function. It only makes a difference in resource costs or when there's potential nontermination involved, unless you use unsafe effects (e.g.: unsafePerformIO).

          Is there a way to think of proofs as being lazy? Yes, but it's not what you think. It's an idea in proof theory called polarization. Some parts of a proof can be called positive or negative. Positive parts roughly correspond to strict, and negative roughly correspond to lazy.

          To explain a bit more: Suppose you want to prove all chess games terminate. You start by proving "There is no move in chess that increases the number of pieces on the board." This is a lemma with type `forall m: ChessMove, forall b: BoardState, numPieces b >= numPieces (applyMove m b)`. Suppose you now want to prove that, throughout a game of chess, the amount of material is decreasing. You would do this by inducting over the first lemma, which is essentially the same as using it in a recursive function that takes in a board state and a series of moves, and outputs a proof that the final state does not have more material than the initial state. This is compact, but intrinsically computational. But now you can imagine unrolling that recursive function and getting a different proof that the amount of material is always decreasing: simply write out every possible chess game and check. This is called "cut elimination."

          So you can see there's a sense in which every component of a proof is "executable," and you can see whether it executes in a strict or lazy manner. Implications ("If A, then B") are lazy. Conjunctions ("A and B") can be either strict or lazy, depending on how they're used. I'm at the edge of my depth here and can't explain more -- in honesty, I never truly grokked proof polarization.

          Conversely, in programming languages, it's not strictly accurate to say that the C program is strict and the Haskell program is lazy. In C, function definitions and macro expansions are lazy. You can have the BAR() macro create a #error, and yet FOO(BAR()) need not create a compile error. In Haskell, bang patterns, primitives like Int#, and the `seq` operator are all strict.

          So it's not the case that proofs are lazy and C is strict and Haskell is lazy so it's more like a proof. It's not even accurate to say that C is strict and Haskell is lazy. Within a proof, and within a C and a Haskell program, you can find lazy parts and strict parts.
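Those lazy and strict corners of Haskell are easy to see directly (a small illustration, using undefined to make evaluation order observable):

```haskell
-- `undefined` throws if evaluated, so it reveals what gets forced.

lazyConst :: Int
lazyConst = const 42 undefined
  -- fine: const never forces its second argument

takeFromInfinite :: [Int]
takeFromInfinite = take 3 (map (* 2) [1 ..])
  -- fine: only the first 3 elements of the infinite list are forced

-- seq forces its first argument before returning the second;
-- `undefined `seq` x` would throw, but forcing a defined value is fine.
strictPair :: Int
strictPair = let x = 5 in x `seq` x + 1
```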

    • bcrosby95 17 hours ago
      How much time have you spent with functional programming languages?
      • djtango 16 hours ago
        We once hired a F# developer to write Clojure and within a week, he was writing some of the clearest Clojure I've read.

        Learn paradigms not languages...