The reason was that about 70% of candidates couldn't write a simple loop -- the question was there to filter those out. The actual solution didn't matter much; I gave a binary decision. The conversation itself matters more.
Because of this, I've just started rejecting leetcode/AI interview steps outright... I'll do homework, shared screens, 1:1s, etc., but not the above. I tend to fail them about half the time. It feels even worse in some instances; I wouldn't even mind studying on leetcode-type sites if they actually had decent explainers for the questions and working answers to go through. I know this kind of defeats the challenge aspect, but learning is about 10x harder without it.
It's not a matter of skill; it's just that my ability to take in certain types of problems doesn't work well. Without any chance for additional info or questions, it's literally a setup to fail.
edit: I'm mostly referring to the use of AI/Automated leetcode type questions as a pre-interview screening. If you haven't seen this type of thing, good for you. I've seen too much of it. I'm fine with relatively hard questions in an actual interview with a real, live person you can talk to and ask clarifying questions.
But yeah that's the game you have to play now if you want the top $$$ at one of the SMEGMA companies.
I wrote (for example) my 2D game engine from scratch (3rd party libs excluded)
https://github.com/ensisoft/detonator
but would not be able to pass a LC type interview that requires multiple LC hard solutions and a couple of backflips on top. But that's fine, I've accepted that.
Ah, but, the road to becoming good at Leetcode/100m sprint is:
>a slow arduous never ending jog with multiple detours and stops along the way
Hence Leetcode is a reasonably good test for the job. If it didn't actually work, it would've been discarded by companies long ago.
Barring a few core library teams, companies don't really care if you're any good at algorithms. They care if you can learn something well enough to become world-class competitive. If you can show that you can become excellent at one thing, there's a good chance you can become excellent at another thing.
That's basically also the reason that many Law and Med programs don't care what your major in undergrad was, just that you had a very high GPA in whatever you studied. A decent number of Music majors become MDs, for example.
Memorizing the Top 100 list from Leetcode only works for a few companies (notably and perplexingly, Meta) but doesn't for the vast majority.
Also, just solving the problem isn't enough to perform well on the interview. Getting the optimal solution is just the table stakes. There's communication, tradeoffs between alternative solutions, coding style, follow-up questions, opportunities to show off language trivia etc.
Memorizing problems is wholly not the point of Leetcode grinding at all.
In terms of memorizing "patterns", in mathematics and computer science all new discovery is just a recombination of what was already known. There's virtually no information coming from outside the system like in, say, biology or physics. The whole field is just memorized patterns being recombined in different ways to solve different problems.
I guess it's a matter of opinion but my point is, this is probably the right metric. Arguably, the kind of people who shut up and play along with these stupid games because that's where the money is make better team players in large for-profit organizations than those who take a principled stance against ever touching Leetcode because their efforts wouldn't contribute anything to the art.
That's literally what CS teaches you too. Which is what "leetcode" questions are: fundamental CS problems that you'd learn about in a computer science curriculum.
It's called "reducing" one problem to another. We had an entire mandatory semester-long class spend a lot of time on reducing problems: figuring out how you can solve a new type of question/problem with an algorithm or two that you already know.
Like showing that "this is just bin packing". And there are algorithms for that, which "suck" in the CS kind of sense but there are real world algorithms that are "good enough" to be usable to get shit done.
Or showing that something "doesn't work, period" by showing that the halting problem reduces to it (assuming nobody has solved that one yet - oh, and good luck btw. if you want to try ;) )
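To make the "good enough" point concrete, here's a sketch (names and numbers made up) of the first-fit decreasing heuristic for bin packing: not optimal (the exact problem is NP-hard), but known to stay within a small constant factor of optimal, which is often plenty to get things done.

```python
def first_fit_decreasing(items, capacity):
    """Pack item sizes into as few bins as possible (heuristic, not optimal)."""
    bins = []  # each bin is a list of item sizes
    for size in sorted(items, reverse=True):
        # place the item in the first bin with enough room
        for b in bins:
            if sum(b) + size <= capacity:
                b.append(size)
                break
        else:
            bins.append([size])  # no bin fit: open a new one
    return bins

packed = first_fit_decreasing([4, 8, 1, 4, 2, 1], capacity=10)
print(len(packed))  # 2 bins, e.g. [8, 2] and [4, 4, 1, 1]
```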
Then comes the ability (or memorization) to actually code it; e.g. if I knew it needed coding a red-black tree, I wouldn't even start.
Math is like that as well though. It's about learning all the prior axioms, laws, knowing allowed simplifications, and so on.
or that writing a new book is the same.
I.e. it's not about that. Like sure it helps to have a base set of shared language, knowledge, and symbols, but math is so much more than just that.
You're assuming that something else works better. Imagine if we were in a world where all interviewing techniques had a ton of false positives and negatives without a clear best choice. Do you expect that companies would just give up, and not hire at all, or would they pick based on other factors (e.g. minimizing the amount of effort needed on the company side to do the interviews)? Assuming you accept the premise that companies would still be trying to hire in that situation, how can you tell the difference between the world we're in now and that (maybe not-so) hypothetical one?
If it didn't work, these companies wouldn't be able to function at all.
It must be the case that it works better than running a RNG on everyone who applied.
Does it mean some genius software engineer who wrote a fundamental part of the Linux kernel but never learned about Minimum Spanning Trees got filtered out? Probably. But it's okay. That guy would've been a pain in the ass anyway.
The statement I've singled out above is a very confident one, considering that inertia in large companies is a byword at this point. Further, "work" could conceivably mean many things in this context, from "merely narrows our massive applicant pool" to "selects for factor X", X being clear only to certain management in certain sectors. Regardless, I agree with those who find it obvious that LC does not ensure fitness for almost any real-world job.
I see it differently. I wouldn't say it's reasonably good, I'd say it's a terrible metric that's very tenuously correlated with on the job success, but most of the other metrics for evaluating fresh grads are even worse. In the land of the blind the one eyed man is king.
> If you can show that you can become excellent at one thing, there's a good chance you can become excellent at another thing.
Eh. As someone who did tech and then medicine, a lot great doctors would make terrible software engineers and vice versa. Some things, like work ethic and organization, are going to increase your odds of success at nearly any task, but there's plenty other skills that are not nearly as transferable. For example, being good at memorizing long lists of obscure facts is a great skill for a doctor, not so much for a software engineer. Strong spatial reasoning is helpful for a software developer specializing in algorithms, but largely useless for, say, an oncologist.
This is an appeal to tradition and a form of survivorship bias. Many successful companies have ditched LeetCode and have found other ways to effectively hire.
> If you can show that you can become excellent at one thing, there's a good chance you can become excellent at another thing.
My company uses LeetCode. All I want is sane interfaces and good documentation. It is far more likely to get something clever, broken and poorly documented than something "excellent", so something is missing for this correlation.
I actually have the feeling it's not as hardcore as it used to be on average. E.g. OpenAI doesn't have a straight-up LC interview even though they're probably the most sought-after company. Google and MS and others still do it, but it feels like it has less weight in the final feedback than it did before. Most en-vogue startups have also ditched it for real-world coding exercises.
Probably due to the fact that LC has been thoroughly gamed and is even less useful a signal than it was before.
Of course some still do, like Anthropic, where you have to get a perfect score on 4 leetcode questions, automatically judged with no human contact - the worst kind of interview.
>Of course some still do, like Anthropic, where you have to get a perfect score on 4 leetcode questions, automatically judged with no human contact - the worst kind of interview.
Should be illegal honestly.
That wouldn't be hard to do. Given the disparate impact standard, everything is biased against a protected class.
I can't imagine this kind of entitlement. If you don't want to work for them, don't study leetcode. If you want to work for them (and get paid tons of money), study leetcode. This isn't a difficult Aristotelian ethics/morals question.
Ten years ago it was more based on Cracking the Coding Interview.
So I'd guess what you're referring to is even older than that.
Apart from those companies where social capital counts for more ...
Based on my own experiences, that was true 25 years ago. 20 years ago, coding puzzles had become a standard part of interviewing, but it was pretty lightweight. 5 years ago (covid!) everything was leetcode just to get to the interview stage.
The faangs jump and then the rest of the industry does some dogshit imitation of their process
It was humbling having to explain to fellow adult humans that when your test question is based on an algorithm solving a real business problem that we work on every day, a random person is not going to implement a solution in one hour as well as we can.
I’ve seen how the faangs' interview process accounts for those types of bias and mental blindness and is actually effective, but their solutions require time and/or money, so everywhere I’ve been implements the first 80% that’s cheap and then skips the rest that makes it work.
Any way to reach out? :)
I think it boils down to companies not wanting to burn money and time on training, and trying to come up with all sorts of optimized (but ultimately contrived) interview processes. Now both parties are screwed.
>It was humbling having to explain to fellow adult humans that when your test question is based on an algorithm solving a real business problem that we work on every day, a random person is not going to implement a solution in one hour as well as we can.
Tell me about it! Who were you explaining this to?
I've always explained it as demonstrating your ping pong skills to get on the basketball team.
Microsoft, Google, Meta, Amazon, I'm guessing... but, what are the other two?
This was the demo/take-home (for https://monumental.co): https://github.com/rublev/monumental
Sorry about the job interview. That sucks.
I'm not saying that using AI for take-home assignments is bad/unethical overall, but you need to be honest about it. If he lied to them about not using any AI assistance, then with all those emojis and the folder-structure map in the repo, the CTO had a good nose and rightfully caught him.
No emojis and any effort to be comprehensive? Everyone complains "what is this wall of text", or "this is industry not grad school so cut it out with the fancy stuff" or "no one spends that much time on anything and it must be AI generated". (Frequently just a way of saying that they hate to read, and naively believe that even irreducibly complex stuff is actually simple).
Stuff that's got emojis, a friendly casual tone and isn't information dense? Well that's very chatty and cute, it also has to be AI and can't be valuable.
Since you can't win with docs, the best approach is to produce high-quality diagrams that are simultaneously useful for a wide audience, from novice to expert. The only problem is that even a ratio of 1 diagram per 1k lines of code is very time-consuming if you're putting lots of thought into it, doubly so if you're fighting the diagramming tools, or if you want something that's easy to take in for multiple stakeholders with potentially very different job descriptions. Everyone will call it inadequate, ask why it took so long, and ask for the missing docs that they will hate anyway!
On the bright side, LLMs are pretty great at generating mermaid, either from code or from natural language descriptions of data flows. Diagrams-as-code without needing a whole application UI, or one of a limited number of your org's lucid-chart licenses, makes "Don't like it? Submit a PR" a pretty small ask. Skin in the game helps to curb endless bike-shedding criticism.
> Stuff that's got emojis, a friendly casual tone and isn't information dense? Well that's very chatty and cute, it also has to be AI and can't be valuable.
As a counterpoint, I can confidently say that I've never once had anyone give any feedback to me on the presence or absence of emojis in code I've written, whether for an interview, work, or personal projects, and I've never had anyone accuse my documentation of being AI generated or gotten feedback in an interview that my code didn't have enough documentation. There's a pretty wide spectrum between "indistinguishable from what I get when I give an LLM the same assignment as my interviewee" and "lacking any sort of human-readable documentation whatsoever".
Should I also be "honest" about tab-completion? Where do you draw the line? Maybe I should be punished for having an internet connection too. Using AI for docker/readme's/simple scaffolding I would have done anyways? Oh the horror!
There was no lying because there was no discussion or mention of AI at all. Had they asked me, I'd have happily told them yes I obviously use AI to help me save time on grunt-work, I've been doing this stuff for like 15 years.
It's an unpaid take-home assignment. You'd have to be smoking crack to think that I would be rawdogging this. Imagine if I had a family or a wife or an existing job? I'd dump them after getting linked their assignment document.
Honestly at this point in the AI winter if you are a guy who has AI-inspired paranoia then I don't want to work for you because you are not "in the know".
It's not defensible in any case.
That being said, I think the CTO's "vibe coding paranoia" after seeing this repo is 100% justified.
Given what you’ve said in your other comments, it seems like you used AI in a way that I wouldn’t have a problem with but just briefly looking through I can see how it would look suspicious.
I'd probably draw it somewhere in the miles-long gap between tab completion and generating code with an LLM. It sounds like that's where the company drew it too.
I'm not a math/university educated guy so this was truly "from the ground up" for me despite the math being simple. I was quite proud of that.
>Had you disclosed to them that you used LLMs for coding "basic" features outside the math and whatnot?
No it seems completely immaterial. I'll happily talk about it if asked but it's just another tool in the shed. Great for scaffolding but makes me want to rip my hair out more often than not. If it doesn't one-shot something simple for me it has no use because it's infuriating to use. I didn't get into programming because I liked writing English.
I spent around 40 hours of time and during my second interview, the manager didn't like my answer about how I would design the UI so he quickly wished me luck and ended the call. The first interview went really well.
For a couple of months, I kept asking the recruiter if anyone successfully solved the coding challenge and he said nobody did except me.
Out of respect, I posted the challenge and the solution on my github after waiting one year.
Part 2 is the challenging part; it's mostly a problem solving thing and less of a coding problem: https://github.com/jonnycoder1/merck_coding_challenge
That doesn't look too challenging for anyone who has experience in low-level programming, embedded systems, and reverse engineering. In fact for me it'd be far easier than part 1, as I've done plenty of work similar to the latter, but not the former.
You have also not attempted to hide that, which is interesting.
The fucking CTO thought I vibe-coded it and dismissed me. Shout-out to the hiring manager though, he was real.
The job market is brutal right now, and you have my sympathy. I hope you can find a good fit soon.
Luckily the hiring manager called me back and levelled with me, nobody kept him in the loop and he felt terrible about it.
Some stupid contrived dumbed down version of this crane demo was used for the live test where I had to build some telemetry crap. Nerves took over, mind blanked.
Here's the take-home assignment requirements btw: https://i.imgur.com/HGL5g8t.png.
Here's the live assignment requirements: [1] https://i.imgur.com/aaiy7QR.png & [2] https://i.imgur.com/aaiy7QR.png.
At this rate I'm probably going to starve to death before I get a job. Should I write a blog post about my last 2 years of experiences? They are comically bad.
This was for monumental.co - found them in the HN who's hiring threads.
This never happened to me in a job interview before I turned 40. But once I knew I was too old to look the part, and therefore had to knock it out of the park, mind blank came roaring in. I have so much empathy now for anyone it ever happened to when I was the one giving the job interview. Performing under that kind of pressure has nothing to do with actual ability to do the job.
These are the same link
I hope you can at least leverage this demo. Maybe remove the identifications of it and shove it into your CV as a "hobby project"? It looks pretty good for that.
Best!
It's a learnable skill, and better to pick it up now. Personally I've solved Leetcode-style problems in interviews which I hadn't seen before, and some of them were dynamic programming problems.
These days it's a highly learnable skill, since GPT can solve many of the problems while also coming up with very good explanations of the solutions. Better to pick it up than not.
And some people might say: well, you should know that anyway. The problem for me is, and I'm not speaking for every company of course, that you never really use a lot of this stuff in most run-of-the-mill jobs. So of course you forget it, then have to study again pre-interview.
Problem solving is the best way to think of it, but it's awkward for me (and probably others) to spend minutes thinking, feeling pressured as someone just stares at you. And that's where memorizing the hows of typical problems helps.
That said, I just stopped doing them altogether. I'd passed a few doing the 'memorizing' described above, only to start the job and realize none of it was relevant to the work we were actually doing. In that way I guess it's a bit of a two-way filter now.
I actually love LC and have been doing a problem a week for years. Basically I give myself 30 minutes and see what I can do. It’s my equivalent to the Sunday crossword. After a while the signals and patterns became obvious, to me anyway.
I also love puzzlerush at chess.com. In chess puzzles there are patterns and themes. I can easily solve a 1600 rated problem in under 3 seconds for a chess position I’ve never seen before not because I solve the position by searching some move tree in my mind, I just recognize and apply the pattern. (It also makes it easier to trick the player when rushing but even the tricks have patterns :)
That said, in our group we will definitely have one person ask the candidate a LC-style question. It will probably be me asking, and I usually just make it up on the spot based on the resume. I think it’s more fun when neither one of us knows the answer. Algorithm development, especially on graphs, is a critical part of the job, so it’s important to demonstrate competency there.
Software engineering is a hugely diverse field now. Saying you’re a programmer is kinda like saying you’re an artist. It does give some information but you still don’t really know what skill set that person uses day to day.
Even if you are an exception, either you are writing the library, meaning you write that algorithm once for the hundreds of other users, or the algorithm was written once (long ago) and you are just spending months with a profiler trying to squeeze out a few more CPU cycles of optimization.
There are more algorithms than anyone can memorize that are not in your library, but either it is good enough to use a similar one that is already in your library, or you will build it once and, once again, it works so you never go back to it.
Which is to say memorizing how to implement an algorithm is a negative: it means you don't know how to write/use generic reusable code. This lack is costing your company hundreds of thousands of dollars.
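To illustrate the reuse point (a minimal sketch): in most languages the generic, battle-tested versions already ship in the standard library, so "knowing the algorithm" mostly means knowing where it lives.

```python
import bisect
import heapq

# Instead of hand-rolling binary search or a partial sort,
# reach for the stdlib's generic implementations.
xs = [1, 3, 5, 7, 9]
idx = bisect.bisect_left(xs, 7)              # binary search: index 3
smallest = heapq.nsmallest(2, [5, 1, 8, 3])  # two smallest: [1, 3]
print(idx, smallest)
```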
I don't think most LC problems require you to do that. Actually most of them I've seen only require basic concepts taught in Introduction to Algorithms like shortest path, dynamic programming, binary search, etc. I think the only reason LC problems stress people out is time limit.
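For a sense of what that "basic concepts" level looks like, here's a toy sketch of a BFS shortest path over an unweighted graph (the graph and names are made up), roughly the depth most such problems assume:

```python
from collections import deque

def shortest_path_len(graph, start, goal):
    """BFS shortest-path length in an unweighted graph (adjacency dict)."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == goal:
            return dist
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return -1  # goal unreachable from start

g = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(shortest_path_len(g, "a", "d"))  # 2
```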
I've never seen a leetcode problem that requires you to know how to hand code an ever so slightly exotic algorithm / data structure like Fibonacci heap or Strassen matrix multiplication. The benefit of these "fastest algos" is too small to be measured by LC's automatic system anyway. Has that changed?
My personal issue with LC is that it has a very narrow view of what "fast" programs look like, like most competitive programming problem sets. In the real world, fast programs are usually fast because we distribute the workload across machines, across GPU and CPU, use cache-friendly memory layouts, or sometimes just design clever UI tricks that make the slow parts less noticeable.
I'm wondering how software devs explain this to themselves. What they train for vs what they actually do at their jobs differ more and more with time. And this constant cycle of forgetting and re-learning sounds like a nightmare. Perhaps people burn out not because of their jobs but the system they ended up in.
I've been at this for 30+ years now, I've built systems that handle millions of users and have a pretty good grasp at a lot of problem domains. I spent about a decade in aerospace/elearning and had to pick up new stuff and reason with it all the time. My issue is specifically with automated leetcode pre-interview screening, as well as the gamified sites themselves.
Of course you may need to pass a LeetCode test in an interview, in which case you may want to hold your nose and put in the grind to get good at them. But IMO it doesn't say anything good about the kind of company that thinks this is a good candidate filter (especially for more experienced candidates), since you'd have to be stupid not to use AI if actually tasked with solving something like this on the job.
If you’re hiring software engineers by asking them questions that are best answered by AI, you’re living in the past.
You would need "photographic" memory
Most people memorize and cargo cult practices with no deeper understanding of what they are doing.
And when someone uses "leet" when talking about computing, I know that they aren't "elite" at all and it's generally a red flag for me.
The problem is that it is too amenable to prep
You can move your score by something like 2 standard deviations with practice, which makes the test almost useless in many cases
On good tests, your score doesn't change much with practice, so the system is less vulnerable to Goodharting and people don't waste/spend a bunch of time gaming it
This framing of the problem is deeply troubling to me. A good test is one that evaluates candidates on the tasks that they will do at the workplace and preferably connects those tasks to positive business outcomes.
If a candidate's performance improves with practice, then so what? The only thing we should care about is that the interview performance reflects well on how the candidate will do within the company.
Skill is not a univariate quantity that doesn't change with time. Also it's susceptible to other confounding variables which negatively impact performance. It doesn't matter if you hire the smartest devs. If the social environment and quality of management is poor, then the work performance will be poor as well.
Until companies can focus on things like problem solving, brainstorming, working as a team, etc. the situation won't improve. If I am wrong, why is it that the vast majority of my senior dev and dev management career involved the things I just mentioned?
(I had to leave the field, sadly, due to disability)
Oh and HR needs to stop using software to filter. Maybe ask for ID or something, however, the filters are flagging everyone and the software is sinking the ship, with you all with it.
What is there to clarify? Leetcode-type questions are usually clear, much clearer than in real life projects. You know the exact format of the input, the output, the range for each value, and there are often examples in addition to the question. What is expected is clear: given the provided example inputs, give the provided example outputs, but generalized to cover all cases of the problem statement. The boilerplate is usually provided.
One may argue that this is one of the reasons why leetcode-style questions are unrealistic: they are too well specified compared to real-life problems, which are often incomplete or even wrong and require you to fill in the gaps. Also, in real life you may not always get to ask for clarification: "here, implement this", "but what about this part?", "I don't know, and the guy who knows won't be back before the deadline, do your best"
The "coin" example is a simplification; the actual problem statement is likely more complete, but the author of the article probably felt these details were not relevant to the article, though they would be for someone taking the test.
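For what it's worth, the fully specified version of a "coin" question usually reads something like "given coin denominations and a target amount, return the minimum number of coins, or -1 if impossible". A sketch of the standard DP answer (exact statement assumed, not taken from the article):

```python
def min_coins(coins, amount):
    """Fewest coins summing to amount, or -1 if impossible (classic DP)."""
    INF = float("inf")
    dp = [0] + [INF] * amount  # dp[a] = fewest coins making amount a
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and dp[a - c] + 1 < dp[a]:
                dp[a] = dp[a - c] + 1
    return dp[amount] if dp[amount] != INF else -1

print(min_coins([1, 5, 10, 25], 63))  # 6  (25+25+10+1+1+1)
```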
> you can't ask clarifying questions
But isn't that the main skill actually being tested: how the candidate goes about solving problems? I mean, if all we did was measure people's skill at making sweeping assumptions, we'd likely end up with people who oversimplify problems, and all of software would go to shit and get insanely complex... Is the hard part writing the lines of code, or solving the problem? The issue is that leetcode is something you end up with after discovery + scientific method + time, but there's no space in the interview process for any of that.
Your mind slides off leetcode problems because it reverses the actual on-the-job process and loses any context that'd give you a handle on the issue.
This solves one problem but it does add performance anxiety to the mix having to live code.
1. People can be hired to take the test for you - surprise surprise.
2. It is akin to deciding if someone can write a novel from reading a single sentence.
> It is akin to deciding if someone can write a novel from reading a single sentence.
For most decent companies, the hiring process involves multiple rounds of these challenges along with system design. So it's like judging writing ability by having candidates actually write and come up with sample plots. Not a bad test.
https://www.reddit.com/r/leetcode/comments/1mu3qjt/breaking_...
There are funded companies set up just to help you get past this stuff.
https://www.reddit.com/r/leetcode/comments/1iz6xcy/cheating_...
Personally I feel software development has become more or less like assembly line work. If I was starting out today I would seriously consider other options.
Huh? Of course you can. If you're practicing on leetcode, there's a discussion thread for every question where you can ask questions till the cows come home. If you're in a job interview, ask the interviewer. It's supposed to be a conversation.
> I wouldn't even mind the studying on leetcode types sites if they actually had decent explainers
If you don't find the hundreds of free explanations for each question to be good enough, you can pay for Leetcode Pro and get access to editorial answers which explain everything. Or use ChatGPT for free.
> It's not a matter of skill, it's just my ability to take in certain types of problems doesn't work well.
I don't mean to be rude, but it is 100% a matter of skill. That's good news! It means if you put in the effort, you'll learn and improve, just like I did and just like thousands and thousands of other humans have.
> Without any chance of additional info/questions it's literally a setup to fail.
Well with that attitude you're guaranteed to fail! Put in the work and don't give up, and you'll succeed.
Yeah, this one confused me. Not asking clarifying questions is one of the surest ways to fail an interview. Kudos if the candidate asks something the interviewers haven't thought of, although that's rare, as most problems go through a vetting process (along with leak detection).
I'm fine with hard questions in an actual interview.
Also, the reviewer gets an AI report telling them whether you copied the solution from somewhere (expressed as a % probability).
You have a few minutes and you're on your own.
If you pass that abomination, then maybe you get the in-person ones.
It's ridiculous what software engineers impose on their peers when hiring; ffs, lawyers, surgeons, and civil engineers get NO practical or theoretical test, none.
That could exist for software too, but I'm not sure HN folks would like that alternative any better. Like if you thought memorizing leetcode questions for 2 weeks before an interview was bad, well I have some bad news.
Maybe in 50-100 years software will have that, but things will look very different.
Plumber is probably the closest to what you're getting at. They are state licensed typically, with varying levels of requirement. But the requirement is often just like "have worked for 2-4 years as a trainee underneath a certified plumber" or whatever. That would be closest to what I'm guessing you would be recommending?
Also relevantly: the accountant and plumber jobs that are paying $300k-$500k+ are very rare. There exist programming jobs that pay what a typical plumber makes, but don't have as many arcane interview hoops to jump through.
So .. my approach would be to just open dev tools and deactivate that event.
Show of practical skill or cheating?
But probably a general solution exists ... and there are actually extensions that will do that in general.
The worst ones I've had, though, had extra problems:
One I was only told about when I joined the interview, and only then that they would be watching live.
One where they wanted me streaming my face the whole time (maybe some people are fine with that).
And one that would count it against me if I tabbed to another page. So no documentation, because they assume I'm just googling it.
Still, it's mostly on me to prepare for and expect this stuff now.
But I have also been to places that demand actual working code, which is compiled and tested against cases.
Usually there the problem is simpler, so there's that
In any real engineering situation I can solve 100% of these problems. That's because I can get a cup of coffee, read some papers, look in a textbook, go for a walk somewhere green and think hard about it... and yes, use tooling like a constraint solver. Or an LLM, which knows all these algorithms off by heart!
In an interview, I could solve 0% of these problems, because my brain just doesn't work that way. Or at least, that's my expectation: I've never actually considered working somewhere that does leetcode interviews.
As someone just starting out, the general feeling among my peers is that I must bend to the interviewer's whims, any resistance or pushback will get you rejected. If this is dodging a bullet, then the entire junior field is a WW1 trench, at least where I am. Why would a company hire someone who gets 9/10 on the behavioral portion when they have a dozen other 10/10 candidates? Of course when the interviewer asks me to use "any language", I'll assume they want Python or Java or C++ or Rust, not Bash or ALGOL 68. Stepping out of line would just be performatively asking them to reject me.
If the candidate reads that this may be the case, asks (for that very reason, obviously), and the interviewer confirms that they mean "any", then it's a red flag for that interviewer, at least as a co-worker, if they go on to get upset over your choice. Unless it's something where you're obviously taking the piss, like Brainfuck (the later suggestion of assembly probably counts as this, but at that point the interviewer[s] had already failed the interviewee's test of them, so, whatever).
But yes, if you're desperate for a job you should indeed just ignore any red flags and do your best to fit the perfect-cog mold and do whatever emotional labor is required to seem the way you think they want you to be, and take whatever abuse they offer with a smile. That's true.
Also, we can't know what exactly was said, so maybe miscommunication could be partly to blame. Like, "Are you sure I can use any language? (Are you really so gracious as to give me this option?)" vs. "Are you sure I can use any language? (Can I use something you definitely don't know?)"
When I did interviews, I used to ask for “any imperative language”. Most people chose C or Java, some chose e.g. Python and the best solutions looked very different from the C/Java ones. I did not deduct points for either; a good solution is a good solution.
I once had a candidate that chose Oberon, because it was the only language they felt comfortable with (by their own account). They fell through on the interview for other reasons, but this seriously made me consider to what degree they had any programming experience at all outside a few select school assignments.
Independent of that, if someone came with a solution in a constraint solver, my next question would be (as it usually was, regardless of approach) “and what is the runtime complexity of your solution?” and I'd be impressed if they had any nonobvious thoughts about that!
This is just conflict avoidance and naivety. After a while you start to realize that there's a whole world of people just like on HN and *we hire people too*. No matter what you do, you'll end up in the place you deserve. If you try to be sneaky, you will end up working for people who are either easily fooled or see right through how to exploit you. If you let your nerd shine, you'll end up with people who love your nerdiness.
I mean, I'm hoping for that too. But it also feels like this only applies as long as there's a balance of likeminded people who are already in the industry vs. the people looking to get a job. For someone like me, without a real network, meeting a person like the kind you mention is extremely unlikely. Even then, most of these people are looking for more qualified candidates, since there's an overabundance of juniors and seniority is a good predictor for being really passionate about their field. So, maybe I'll figure that out someday, but right now I just need a job, and what people in my cohort do is a way to try and get a job at all costs.
Of course, my team also writes SDKs in a bunch of different languages, so it makes sense. Even if that weren't the case though, I'd be stoked. To your point though, early in your career, I get your viewpoint. It's hard out there to get a foot in the door and you have to seize opportunities.
But interviews are bidirectional. The company is deciding if they want me, and I'm deciding if I want them. If I chose to use Self or Forth as the whiteboard context for the conversation we're having, it's deliberately to make the interviewer think, and hopefully learn. If the experience of thinking differently about a problem (that they chose!) and learning something new is a negative signal to them, that's fine: it being a negative signal to them is a negative signal to me, and I don't want to be there anyway! If they're excited, and intrigued, and give "12 o'clock" feedback, well, that's the team I want to work with. So I've helped us both accomplish our goals (making accurate assessments about fit), and aligned our metrics along the way.
This is not what you see in practice. Trying to hire, the view is very much different, in my experience. Every candidate has strengths and flaws, it's much more of a... constraint problem!
The idea that there even exists a perfect candidate is one of the biggest issues with hiring practices in tech these days.
I, for one, would be extremely impressed by a candidate breaking out J or Prolog for a constraint problem. But I'm also not a typical hiring manager for sure.
So make sure you use that "do you have any questions" time to ask questions! What is it really like to work there? How much notice do you need to give before taking vacation? Do they really give pay raises? How often do they lay people off? What is the dress code? Do they let you take time for your kids' school activities? And so on. These questions should be things that are important to you: find out.
In the best cases the interview is only about convincing you to take the offer - generally because someone who you worked with at a previous job said "hire this person" and they trust that person enough to not need any other interview. So keep your network open.
I feel like in practice, unless you're an established, senior professional in a high-paying, in-demand field with a network to rely on, this would go something like:
> What is it really like to work there. How much notice do you need to give before taking vacation? Do they really give pay raises? How often do they lay people off? What is the dress code? Do they let you take time for your kids school activities?
"Candidate ABC seems too demanding and picky, constantly inquiring about irrelevant specifics. They would be a bad fit for our company culture. I advise going with candidate XYZ instead."
I know applicants need the job more than they need you. However you still have options if you don't get this one - you should always be following several leads until you finally get a job. Odds are your other leads are not anywhere close to as advanced as this, but if you can wait a couple more months you have a chance.
I started giving interviews again and I'm surprised how many people don't ask anything. I'm an IC, not a hiring manager, only evaluating a specific thing (technical assessment), and still nothing really.
When I interview people I encourage them to ask any question they want and I make damned sure it doesn't reflect in my report to the higher-ups! Just imagine being in their shoes, you could be in the same position tomorrow!
Instead you insist we should solve a niche problem with an ill-suited tool, while inventing a custom solution when a standard solution exists.
A company's interview process tells you a lot about how the company thinks and operates. This one was surely a dumpster fire.
Because you're unemployed and need to work to get some money.
Do you think you're a super intelligent person when you couldn't even figure that out?
What if the interviewers decided to ask the candidate about their language choice and trade-offs between different languages? Wouldn't that actually give them more signals into the skill of the engineer, rather than just blindly following their script?
Weird experience. Didn't get that job (probably for the best tbf).
That said, I interview in silicon valley and I'm a mixed race American. (extremely rare here) I think a lot of people just don't want me to pass the interview and will put up the highest bar they can. Mind you, I often still give optimal solutions to everything within good time constraints. But I've practiced 1000+ problems and done several hundred interviews.
Source: I am a hiring manager.
They can also be dreadfully slow (and typically are) compared to just a simple dynamic program.
I'm generally against using leetcode in interviews, but wherever I've seen it used it's usually for one reason & one reason alone: known dysfunctional hiring processes. These are processes where the participants in the hiring process are aware of the dysfunction in their process but are either powerless or - more often - too disorganised to properly reform the process.
Sometimes this is semi-technical director level staff leveraging HR to "standardise" interview techniques by asking the same questions across a wide range of teams within a large corp. Other times this is a small underresourced team cobbling together interview questions from online resources in a hurry, not having the cycles to write a tailored process for themselves.
In these cases, you're very likely to be dealing with a technical interviewer who is not an advocate of leetcode interviewing & is attempting to "look around" the standardised interview scoring approach to identify innovative stand out candidates. In a lot of cases I'd hazard even displaying an interest in / some knowledge of solvers would count significantly in your favour.
Do you know how few people in this world even know what a constraint solver is, let alone how to correctly define the problem into one?
I used a constraint solver to solve a homework problem once in my CS degree 3rd year. My god just writing the damn constraints was a huge cognitive load!
I do hope you're exaggerating here, but in case you aren't: this is an extremely simplistic view of what (software) engineers have to do, and thus what hiring managers should optimize for. I'd put "ability to work in a team" above "raw academic/reasoning ability" for the vast majority of engineering roles, any day.
Not that the latter doesn't matter, of course, but it's by no means the one and only measure.
In this hypothetical, why do you do leetcode hard interviews?
I don't. I do easy code interviews because there are people who work great on a team and know enough buzzwords to sound like they know how to write code, but cannot. Something that isn't hard to solve in about 20 minutes (I can solve in 5 - but I've seen a solution several times and so don't have to think about the solution), but is different enough that you haven't memorized the solution. If you can't solve an easy problem then you can't code.
I thought I already answered that:
>> Not that the latter doesn't matter, of course, but it's by no means the one and only measure.
I should have said "if you deemed this a fail on the code interview, you are an idiot".
> If someone solves a leetcode hard with a constraint solver and you don't hire them, you are an idiot
Sometimes you just don't want someone that takes these shortcuts. I think being able to solve the problem without a constraint solver is much more impressive
Now, if they did answer with a constraint solver, I'd probably ask some followup whiteboard questions to make sure they do actually know how to code. But just giving a constraint solver as an answer definitely wouldn't be bad.
Otherwise penalizing interviewees for suggesting quick-and-dirty solutions reinforces bad habits. "Premature optimization is the root of all evil," after all.
There is some debate about what premature optimization is, but I consider it about micro optimizations that often are doing things a modern compiler will do for you better than you can. All too often such attempts result in unreadable code that is slower because the optimizer would have done something different but now it cannot. Premature optimization is done without a profiler - if you have a profile of your code and can show a change really makes a difference then it isn't premature.
On the other hand, job interviews imply time pressure. If someone isn't 100% sure how to implement the optimized algorithm without looking it up, brute force is faster and should be chosen. In the real world, if I'm asked to do something, I can spend days researching algorithms at times (though the vast majority of the time what I need is already in my language's standard library).
1. Any optimization in a typical web development file where the process is not expected to be particularly complex. Usually a good developer will not write something very inefficient and usually bottlenecks come from other areas
2. Doing stuff like replacing a forEach with a for loop to be 0.5% faster
A trick, if you can't come up with a custom algorithm and using a library is not allowed during the interview, could be to be ready to roll your own DPLL-based solver (it can be done in about 30 LOC).
Less elegant, but it's a one-size-fits-all solution.
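To make the "30 LOC" claim concrete, here is a rough sketch of such a mini DPLL solver. This is my own toy formulation, not any particular library's API: variables are positive ints, a literal is +v (true) or -v (false), and a CNF formula is a list of clauses, each a list of literals.

```python
def dpll(clauses, assignment=None):
    # Toy DPLL SAT solver: unit propagation plus chronological backtracking.
    assignment = dict(assignment or {})
    changed = True
    while changed:  # run unit propagation to a fixed point
        changed = False
        remaining = []
        for clause in clauses:
            unassigned = []
            satisfied = False
            for lit in clause:
                value = assignment.get(abs(lit))
                if value is None:
                    unassigned.append(lit)
                elif value == (lit > 0):
                    satisfied = True
                    break
            if satisfied:
                continue
            if not unassigned:
                return None  # clause fully falsified: conflict
            if len(unassigned) == 1:  # unit clause forces an assignment
                assignment[abs(unassigned[0])] = unassigned[0] > 0
                changed = True
            remaining.append(unassigned)
        clauses = remaining
    if not clauses:
        return assignment  # every clause satisfied: model found
    var = abs(clauses[0][0])  # branch on some unassigned variable
    for value in (True, False):
        solution = dpll(clauses, {**assignment, var: value})
        if solution is not None:
            return solution
    return None
```

Reduce your problem to CNF, call dpll, and read the model off the returned dict (None means UNSAT). Real solvers add watched literals and clause learning, but for interview-sized instances something like this is plenty.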
At this point, job interviews are so far removed from actual relevance. Experience and aptitude still matter a lot, but too much experience at one employer can ground people in rigid and limiting ways of thinking and solving problems.
> It's easy to do in O(n^2) time, or if you are clever, you can do it in O(n). Or you could be not clever at all and just write it as a constraint problem
This nails it. The point of these problems is to test your cleverness. That's it. Presenting a not-clever solution of using constraint solvers shows that you have experience and your breadth of knowledge is great. It doesn't show any cleverness.
In my experience, interviewers love going to the Leetcode "Top Interview 150" list and using problems in the "Array String" category. I'm not a fan of these problems for the kind of jobs I've interviewed for (backend Python mostly), as they are almost always a "give me a O(n) runtime O(1) memory algorithm over this array" type challenge that really doesn't resemble my day to day work at all. I do not regularly do in-place array algorithms in Python because those problems are almost always handled by other languages (C, Rust, etc.) where performance is critical.
I wish interviewers would go to the "Hashmap" section for interviews in Python, JavaScript, etc., type of languages. They are much less about cleverness and more about whether you can demonstrate using the appropriate tools in your language to solve problems that actually do resemble ones I encounter regularly.
There's also the problem of difficulty tuning on some of these. Problem 169 (Majority Element) being rated "Easy" for getting an O(n) runtime, O(1) memory solution is hilarious to me. The algorithm that does it (the Boyer–Moore majority vote algorithm, first described in 1981) has its own Wikipedia page. It's not a difficult algorithm to implement or understand, but its correctness is not obvious until you think about it a bit, at which point you're at sufficient "cleverness" to get a Wikipedia page about an algorithm named after you. Seems excessive for an "Easy" problem.
You need to make sure a candidate can program, so asking programming questions makes sense. However, the candidate should not be judged on whether they finish or get an optimal or even correct answer. You need to know if they write good code that you can understand, and are on a path where, given a reasonable amount of time on a realistic story, they would finish it and get it correct. If someone has seen the problem before they may get the correct answer, but if they have not seen it they won't know it and shouldn't be expected to get the right answer in an hour.
I will say, IME, it's pretty obvious when people have seen a problem before, and unless you work at a big company that has a small question pool, most people are not regurgitating answers to these questions but actually grappling with them in realtime. I say this as someone who has been on both ends of this, these problems are all solvable de novo in an hour by a reasonable set of people.
Leetcode ability isn't everything, but I have generally found a strong correlation between Leetcode and the coding aspects of on the job performance. It doesn't test everything, but nothing in my experience of hiring has led me to wanting to lower the bar here as much as raise the bar on all other factors that influence job performance.
return Counter(nums).most_common(1)[0][0]
And that's 50th percentile for runtime and memory usage. Doing it with another one liner that's 87% percentile for time because it uses builtin Python sorting but is 20th percentile for memory:
return sorted(nums)[len(nums) // 2]
But the interviewer might be looking for the best approach, which beats "100%" of other solutions in runtime per Leetcode's analysis:
m, c = -1, 0
for x in nums:
    if not c:
        m = x
        c = 1
    elif m == x:
        c += 1
    else:
        c -= 1
return m
If I were interviewing, I'd be happy with any of these except maybe the sorted() one, as it's only faster because of the native code doing the sort, which doesn't change that it's O(n log n) time and O(n) space. But I've had interviews where I gave answers that were "correct" to the assumptions and constraints I outlined, but they didn't like them because they weren't the one from their rubric. I still remember a Google interview, in which we're supposed to "design to scale to big data", where they wanted some fiddly array manipulation algorithm like this. I gave one that was O(n log n) but could be done in place with O(1) memory, and the interviewer said it was "incorrect" in favor of a much simpler O(n)-time one using dicts in Python that was O(n) memory. Had the interviewer specified that O(n) memory was fine (not great for "big data", but ok) I would have given him the one-liner that did it with dicts lol.

I guess my point is that interviewers should be flexible and view it as a dialogue rather than asking for the "right answer". I personally much prefer "identify the bug in this self-contained code snippet and fix it" type problems that can be completed in 15-30 minutes, but Leetcode ones can be fine if you choose the right problems for the job.
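For what it's worth, here is the flavor of "find the bug" snippet I mean. This is a made-up example of mine, not from any real question bank: a lower-bound binary search where the planted bug was initializing hi to len(xs) - 1, which makes the "every element is smaller, insert at the end" answer unreachable. The fixed version:

```python
def first_geq(xs, target):
    # Return the index of the first element >= target in sorted xs.
    # The buggy version set hi = len(xs) - 1, so it could never
    # return len(xs) when target is larger than everything in xs.
    lo, hi = 0, len(xs)  # hi is one past the end, so len(xs) is reachable
    while lo < hi:
        mid = (lo + hi) // 2
        if xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    return lo
```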
I would rather work with a flexible data type with suboptimal performance than a brittle data type that maybe squeezes out some extra performance.
Your example of in-place array mutation feels like a good example of such a thing. I feel like there should be a category of interviewing questions for "code-safety" not just performance.
Last round I did at Meta it was clearly to test that you grinded their specific set of problems, over and over again, until you could reproduce them without thinking. It's clear because the interviewers are always a bit surprised when you answer with whatever is not the text-book approach on both leetcode and on the interview guide they studied.
Cleverness is definitely not high on the list of things they're looking for.
All of the ones listed can be solved with a top-down dynamic programming algorithm. Which just means "write recursive solution, add caching to memoize it".
For some of these, you can get cleverer. For example the coin change problem is better solved with an A* search.
Still, very few programmers will actually need these algorithms. The top thing we need is to recognize when we accidentally wrote a quadratic algorithm. A quick scan of https://accidentallyquadratic.tumblr.com/ shows that even good people on prominent projects make that mistake on a constant basis. So apparently being able to produce an algorithm on the test, doesn't translate to catching an algorithmic mistake in the wild.
Interviewers learn nothing from an instant epiphany, and they learn next to nothing from someone being stumped.
Unfortunately, this is why we can't have nice things. Problem solving questions in interviews can be immensely useful tools that, sadly, are rarely usefully used.
100% and it's a shame that over time this has become completely lost knowledge, on both sides of the interview table, and "leetcode" is now seen as an arbitrary rote memorization hurdle/hazing ritual that software engineers have to pass to enter a lucrative FAANG career. Interviewees grind problems until they've memorized every question in the FAANG interview bank, and FAANG interviewers will watch a candidate spit out regurgitated code on a whiteboard in silence, shrug, and say "yep, they used the optimal dynamic programming solution, they pass."
I've probably implemented first-order Markov-chain text generation more than a dozen times in different languages, and earlier this week I implemented Newton–Cotes adaptive quadrature just because it sounded awesome (although I missed a standard trick because I didn't know about Richardson extrapolation). I've also recently implemented the Fast Hadamard Transform, roman numerals, Wellons–NRK hash tries, a few different variants of Quicksort (which I was super excited to get down to 17 ARM instructions for the integer case), an arena allocator with an inlined fast path, etc. Recently I wrote a dumb constrained-search optimizer to see if I could get a simpler expression of a word-wrap problem. I learned about the range-minimum-query algorithm during a job interview many years ago and ad-libbed a logarithmic-time solution, and since then I've found a lot of fascinating variants on the problem.
I've never had a job doing this kind of thing, and I don't expect to get one, just like I don't expect to get a job playing go, rendering fractals, reading science fiction, or playing video games. But I think there's a certain amount of transferable skill there. Even if what I need to do this week is figure out how to configure Apache to reverse proxy to the MediaWiki Docker container.
(I know there are people who have jobs hacking out clever algorithms on different platforms. I even know some of them personally. But there are also people who play video games for a living.)
I guess I'd fail your interview process?
But also, interviews are fuzzy and not at all objective, false negatives happen as well as false positives.
If you want people to know about these things you should put them in your resume though. People can't read your mind.
Absolutely agree. When I interview, I start with a simple problem and add complexity as they go. Can they write X? Can they combine it with Y? Do they understand how Z is related?
Interviewers always say this, but consider: would you endorse a candidate who ultimately is unable to solve the problem you've presented them, even if they think, communicate, and decompose problems well? No interview in this industry prizes those things over getting the answer right.
My first boss (a CTO at a start-up) drilled this into us. What you know is far less valuable than how you learn/think and how you function on a team.
Now I give you problems I expect to take 20 minutes if you have never seen them before so you should at least solve 1. I have more than once realized someone was stuck on the wrong track and redirection efforts were not getting them to a good track so I switched to a different problem which they were then able to solve. I've also stopped people when they have 6 of 10 tests passing because it is clear they could get the rest passing but I wouldn't learn anything more so it wasn't worth wasting their time.
In the real world I'm going to give people complex problems that will take days to solve.
One way to think about this is:
Is a fresh graduate more likely to provide a solid answer to this than a strategic-thinking seasoned engineer? If so, just be conscious of what your question is actually probing.
And, yes, interview candidates are often shocked when I tell them that I’m fine with them using standard libraries or tools that fit the problem. It’s clear that the valley has turned interviewing into a dominance/superiority model, when it really should be a two-way street.
We have to remember that the candidate is interviewing us, too. I’ve had a couple of interviews as the interviewee where the way the interview was conducted was why I said “no” to an offer (no interest in a counter, just a flat “no longer interested” to the recruiter, and, yes, that surprises recruiters, too).
Many formulations scale in a way that is completely unusable in practice.
Knowing how to get tools like Z3 or Gurobi to solve your problems is its own skill, and one that some companies will hire for, but it's not a general-purpose technology you can throw at everything.
This post is the unironic version of "FizzBuzz in TensorFlow", where just because you have a big hammer doesn't mean everything is a nail. And I say that as an enjoyer of big hammers, including SMT solvers.
No it's just memorization of 12 or so specific patterns. The stakes are too high that virtually everyone going in will not be staking passing on their own inherent problem solving ability. LeetCode has been so thoroughly gamified that it has lost all utility of differentiability beyond willingness to prepare.
If somebody grinds LeetCode while hating it, it signals they are really desperate for a job and willing to jump through hoops for you.
If somebody actually enjoys this kind of stuff, that is probably a signal that they are a rare premium nerd and you should hire them. But they probably play Project Euler as well (is that still up?).
If somebody figures out a one-trick to minmax their LeetCode score… I dunno, I guess it means they are aware of the game and want to solve it efficiently. That seems clever to me…
Like in race? Like in wealth? Like in defection willingness? Like in corruption?
Asking for a friend who is regularly identified as among the most skilled but feels their career has been significantly derailed by this social phenomenon.
In this case the group is people good at leetcode - the people I know of in that group are perfectly fine with any race so long as they can solve leetcode. There are people who care about race, but I've never had much to do with them so I can't guess how they think.
You are right, this definition does come from some person with some set of motivations, but that person is some mid/high-level manager who probably hasn't ever written a line of code in their life.
Will you put up with very long hours of insane grindy nonsense in the spirit of being a team player for a team that doesn't really remember what game they're playing?
Are you sufficiently in need of income to be fighting through this interview dance in preference to other things, such that once you join you'll be desperate to stay?
Those are extremely important questions, and a willingness to have spent a thousand hours memorising leetcode correlates strongly with the attributes sought.
In no case is it a useful signal of whether I can do my job better than someone else. Some people like this type of problem and are good at it anyway, which is a good signal compared to average, but there are also above-average people who don't enjoy this type of problem and so don't practice it. Note that in both cases the people I'm talking about did not memorize the problem and solution.
In my notes I have roughly 30 patterns to leetcode questions btw.
If my wife's blood sugar is high, she takes insulin. If you need to solve a constraint problem, use a constraint solver.
If your company doesn't make and sell constraint solving software, why do you need me to presume that software doesn't exist and invent it from scratch?
At least that's been my experience. I'm sure there are exceptions.
This completely undermines the author's main point. Constraint solvers don't solve hard leetcode problems if they can't solve large instances quickly enough.
Many hard leetcode problems can be solved fairly simply with more lax runtime requirements -- coming up with an efficient solution is a large part of the challenge.
More of my work tends to be "rapidly adapting solutions to additional and changing requirements" than "come up with the most efficient solution", so why are we interviewing for something where in practice we just throw a couple extra instances at it? (Your specific job role may vary, of course, but I usually just increase the scaling factor.)
Author's point is that coming up with the most efficient solution might not actually be a good measure of your real-world performance.
And that's been a longrunning critique of leetcode, of course. However, this is a neat framing where you can still use the same problems but give solutions that perform better when measured by "how adaptable is this to new requirements?"
He wanted to make an app to help sports club owners schedule players for the day based on a couple simple rules. I thought this was going to be easy, and failed after not realizing what I was up against. At the time I didn't even know what I didn't know.
I often look back on that as a lesson of my own hubris. And it's helped me a lot when discussing estimates and timelines and expectations.
But CP (and CP-SAT) solvers tend to do very well on scheduling problems
This reminds me of high school ~25 years ago when I just started learning TI-Basic on my calculator and was dabbling in VB6 on my PC, and I was flipping burgers at Steak n Shake as my part time job. The manager moaned about how hard it was to write the employee schedules out each week (taking into account requested days off, etc) and I thought “ooh, I know how to write software now, I’ll make a scheduling program!” I told the manager I bet I could do it.
… it took a very short time for 16 year old me to realize writing scheduling software to solve for various constraints is pretty damned hard. I never brought it up after that.
I played it for a while when interest rates were really low and used the thing for my own rainy-day savings (I did get tired of changing accounts all the time).
(I think it's almost impossible to convince your interviewer into constraint solvers, while the concept itself is great)
> Most constraint solving examples online are puzzles, like Sudoku or "SEND + MORE = MONEY". Solving leetcode problems would be a more interesting demonstration.
He's exactly right about what tutorials are out there for constraint programming (I've messed around with it before, and it was pretty much Sudoku). Having a large body of existing problems to practice against is great.
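For reference, the "SEND + MORE = MONEY" puzzle mentioned above doesn't even need a real CP solver to demonstrate the modeling idea. A stdlib brute force over digit assignments already works; this is my own encoding (each letter gets a distinct digit, and the coefficients express the single linear constraint SEND + MORE - MONEY = 0), and it runs in a second or two:

```python
from itertools import permutations

def send_more_money():
    letters = "SENDMORY"
    # Per-letter coefficients of SEND + MORE - MONEY, in `letters` order.
    coeffs = (1000, 91, -90, 1, -9000, -900, 10, -1)
    for p in permutations(range(10), len(letters)):
        # S can't be 0; M is the carry out of a 4-digit sum, so M = 1
        # (that one bit of cleverness cuts the search 10x).
        if p[0] == 0 or p[4] != 1:
            continue
        if sum(c * d for c, d in zip(coeffs, p)) == 0:
            return dict(zip(letters, p))
    return None
```

A proper constraint solver would prune the search far more aggressively, but the point stands: once you can state the constraints, the machine does the rest.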
Well said. One of the big benefits of general constraint solvers is their adaptability to requirements changes. Something I learned well when doing datacenter optimization for Google.
Of course, NP-hard problems blow up exponentially, but that doesn't change if you use another exact solving technique.
Local search is very useful for scaling, but at the cost of proven optimality.
But I think if you have constraint problem, that has an efficient algorithm, but chokes a general constraint solver, that should be treated as a bug in the solver. It means that the solver uses bad heuristics, somewhere.
Like it might even be the case that certain types of pretty powerful DSLs just never generate "bad structures". I don't know, I've not done research on circuits, but this kind of analysis shows up all the time in other adjacent fields.
Your best bet using them is when you have a large collection of smaller unstructured problems, most of which align with the heuristics.
Agreed. An algorithm right now in our company turns a directed graph problem, which to most people would seem crazy, into roughly ~m - n (m edges, n nodes) SAT checks that are relatively small. Stuffing all the constraints into an ILP solver would be super inefficient (and honestly undefined). Instead, by defining the problem statement properly and carving out the right invariants, you can decompose the problem to smaller NP-complete problems.
Definitely a balancing act of design.
Then the final 10% is either NP hard, or we want to add some DSL flexibility which introduces halting problem issues. Once you lower it enough, then comes the SMT solvers.
The conventional wisdom is that the larger you make an NP-hard problem, the slower it is going to get, regardless of algorithm.
coins = [100, 50, 25, 10, 5, 1];
change = 1234;
result = [0, 0, 0, 0, 0, 0];
for (i = 0; i < coins.length; i++) {
    while (change >= coins[i]) { // >= so exact denominations are counted
        result[i]++;
        change -= coins[i];
    }
}
// [12, 0, 1, 0, 1, 4]
Couldn't help myself, sorry.

The mature, state-of-the-art software companies do not give me leetcode problems to solve. They give me interesting & challenging problems that force me both to a) apply best practices of varying kinds and b) be creative in some aspects of the solution. And these problems are very amenable to "talking through" what I'm doing, how I'm approaching the solution, etc. Overall, I feel like they are effective and give the company a good sense of how I develop software as an engineer. I have yet to "fail" one of these.
It is the smaller, less mature companies that give me stupid leetcode problems. These companies usually bluntly tell me their monolithic codebase (always in a not-statically-typed language), is a total mess and they are “working on domain boundaries”.
I fail about 50% of these leetcode things because I don’t know the one “trick” to yield the right answer. As a seasoned developer, I often push back on the framing and tell them how I would do a better solution by changing one of the constraints, where the change would actually better match the real world problem they’re modeling.
And they don’t seem to care at all. I wonder if they realize that their bullshit interviewing process has both a false positive and a false negative problem.
The false negatives exclude folks like myself who could actually help to improve their codebase with proper, incremental refactorings.
The false positives are the people who have memorized all the leetcode problems. They are hired and write more shitty monolithic hairball code.
Their interviewing process reinforces the shittiness of their codebase. It’s a spiral they might never get out of.
The next time I get one of these, I think I’m going to YOLO it, pull the ripcord early and politely tell them why they’re fucked.
That being said, from a stoicism point of view, the interview ends up becoming a meta-challenge on how you approach a problem that is not necessarily appropriately framed, and how you'd go about doing and/or gently correcting things as well.
And if they're not able to appreciate it, then success! You have found that it is not the right organization for you. No need to burn the door down on the way out, just feel relief in that you dodged a bullet (hopefully).
So when I say I’d politely tell them why they’re fucked, it’s actually out of a genuine desire to help the company.
But you’re right, I’m also thankful that they showed their red flag so visibly, early enough, and I’m happy to not move forward!
The solution is typically not just to fix their code. They got in over their heads by charging ahead and building something they'll regret, but their culture (and likely the interviewer personal self-regard) depends on believing their (current) tech leaders.
So yes, the interviewer is most comfortable if you chase and find the ball they're hiding.
But the leadership question is whether you can relieve them of their ignorance without also stripping their dignity and future prospects.
I've found (mostly with luck) that they often have a sneaking suspicion that something isn't right, but didn't have the tools or pull to isolate and address it. As a leader if you can elicit that, and then show some strategies for doing so, you'll improve them and the code in a way that encourages them that what was hard to them is solvable with you, which helps them rely on you for other knotty problems.
It's not really that you only live once; it's that this opportunity is here now and should have your full attention, and to be a leader you have to address it directly but from everyone's perspective.
Even if you find you'd never want to work with them, you'd still want to leave them feeling clearer about their code and situation.
Clarifying my "YOLO" usage: I was being a little flippant, in the sense that when ending an interview early with direct critical feedback, the most likely outcome is a "burned bridge" with that company (you're never coming back).
Which reminds me one of my favorite twisted idioms: We'll burn that bridge when we get to it!
I guess I've finally found an acceptable real-world use case for this phrase :)
% Given a set of coin denominations,
% find the minimum number of coins
% required to make change.
% IE for USA coinage and 37 cents,
% the minimum number is four
% (quarter, dime, 2 pennies).
num(0). num(1). num(2).
num(3). num(4). num(5).
?- num(Q), num(D), num(P),
37 is Q * 25 + D * 10 + P.
You can just paste it into [1] to execute in the browser. Using 60 as the target sum is more interesting, as you can enumerate two solutions.
(Posting again what I already posted two days ago [2] here)
[1]: https://quantumprolog.sgml.net/browser-demo/browser-demo.htm...
> We can solve this with a constraint solver
Ok, using your favorite constraint solver, please write a solution for this.
> [half an hour later]
Ok, now how would you solve it if there was more than 100 data points? E.g. 10^12?
https://pierre-flener.github.io/research/NordConsNet/NordCon...
Actually people perform worse in an interview using AI because they spend time trying to understand what the tool is proposing and then time to figure out why that doesn’t work.
Maybe it's my graphics programmer brain firing on all cylinders, but isn't this just a linear scan, maintaining a list of open rectangles?
If not, how can you claim you have solved the problem?
The interviewers were clueless so after 10 minutes of trying to explain to them I quit and fell back to just writing the freaking algo they were expecting to see.
Interviewer: You can't use a constraint solver
Greedy algorithms tell you nearly nothing about the candidate's ability to code. What are you going to see? A single loop, some comparison, and an equality. Nearly every problem that can be solved with a greedy algorithm is largely a math problem disguised as programming. The entire question hinges on the candidate finding the right comparison to conduct.
The author himself finds that these are largely math problems:
> Lots of similar interview questions are this kind of mathematical optimization problem
So we're not optimizing to find good coders, we're optimizing to find mathematicians who have 5 minutes of coding experience.
At the risk of self-promotion, I'm fairly opinionated on this subject. I have a podcast episode where I discuss exactly this problem (including discuss greedy algorithms), and make some suggestions where we could go as an industry to avoid these kind of bad-signal interviews:
https://socialengineering.fm/episodes/the-problem-with-techn...
-what tech you worked with and some questions about decisions
-debugging an issue they encountered before
-talking about interests and cultural fit
Instant green flag for me. Too bad that after receiving my offer covid happened and they had a hiring freeze.
You can try to sus out smooth talking faker or just tell them to write a thing and then talk only after they demonstrate basic comprehension.
(if you have enough time)
Populate a 2D lookup array: $7.50 becomes arr[750] = [7,1,0,0,0,0], which represents [7x100, 1x50, 0x25, 0x10, 0x5, 0x1].
With each loop, check if the array entry already exists; if so, check whether the stored entry uses more coins. [7,1,0,... is better than [7,0,2,... because 8 coins is a better solution than 9!
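That table-based approach can be sketched roughly as follows (a Python sketch with my own variable names; the `min_coins` helper and its iteration order are assumptions, not the parent's exact code):

```python
def min_coins(target, coins=(100, 50, 25, 10, 5, 1)):
    # best[amount] = per-denomination counts of the cheapest way found so far
    best = {0: [0] * len(coins)}
    for amount in range(1, target + 1):
        for i, c in enumerate(coins):
            prev = best.get(amount - c)
            if prev is None:
                continue
            candidate = prev.copy()
            candidate[i] += 1  # extend the smaller solution with one more coin
            # keep this entry only if it beats (uses fewer coins than) the stored one
            if amount not in best or sum(candidate) < sum(best[amount]):
                best[amount] = candidate
    return best.get(target)

print(min_coins(750))  # [7, 1, 0, 0, 0, 0] -> 7x100 + 1x50, eight coins
```

Because every amount from 1 up to the target is filled in before it is used, each entry always holds a minimal coin-count vector, which is exactly the bottom-up version of the dynamic-programming solutions discussed elsewhere in the thread.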
> Given a list of stock prices through the day, find maximum profit you can get by buying one stock and selling one stock later.
It was funny to see this, because I give that question in our interviews. If someone suggested a constraint solver... I don't know what I'd have done before reading this post (since I had only vaguely even heard of a constraint solver), but after reading it...
Yeah, I would still expect them to be able to produce a basic algorithm, but even if their solution was O(n^2) I would take it as a strong sign we should hire them, since I know there are several different use cases for our product that require generalized constraint solving (though I know it by other names) and having a diverse toolset on hand is more important in our domain than writing always-optimal code.
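For reference, the usual single-pass answer to that stock question looks something like this (a minimal Python sketch; function name is mine):

```python
def max_profit(prices):
    # Track the lowest price seen so far and the best spread achievable
    # by selling at the current price. One pass, O(n) time, O(1) space.
    best, lowest = 0, float("inf")
    for p in prices:
        lowest = min(lowest, p)
        best = max(best, p - lowest)
    return best

print(max_profit([7, 1, 5, 3, 6, 4]))  # 5: buy at 1, sell at 6
```

The O(n^2) version mentioned above would instead compare every buy/sell pair; both give the same answer, the difference is purely efficiency.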
Update... refactor... update... break off... etc. A lot of times, I'm looking at something where the tooling is 8+ years old, and the first order of business should be to get it working on a current and fully patched version of whatever is in place... replacing libraries that are no longer supported, etc. From there, refactor what you can, break off what makes sense into something new, refactor again. This process, in my experience, has been far more successful than ground up, new versions.
I say this while actively working on a "new" version of a piece of software. The new version is web based; the "old" version is a winforms VB.Net app from over a decade ago. The old version has bespoke auth, the new version will rely on Azure Entra... Sometimes starting over is the answer, but definitely not always.
If you just implement it recursively without tabling then you end up re-doing work and it's often an exponential runtime instead of polynomial.
To clarify on overlapping, consider Fibonacci:
F(n) = F(n-1) + F(n-2) # and the base cases
F(n-1) includes F(n-2) in its definition, and both F(n-2) and F(n-1) include F(n-3). If you implement this naively it produces an exponential runtime. Once you add the table, the single initial recursive call to F(n-1) will end up, through its chain of calls, storing the result of F(n-2), and now the implementation is linear instead of exponential.
Yes, that's one (common) approach to dynamic programming. The recursive function calls are memoized so that previous calculations are remembered for future function calls. Overlapping subproblems become trivial if you can reuse previously computed values. Recursion with memoization is top-down dynamic programming.
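As a concrete illustration of the top-down version, here is a minimal memoized Fibonacci in Python, using the standard library's `lru_cache` as the table:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Base cases, then the recurrence. The cache collapses the
    # exponential call tree into at most n distinct calls.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(90))  # returns instantly; the uncached version would never finish
```

Removing the decorator turns it back into the naive exponential implementation, which is the whole point of the overlapping-subproblems discussion above.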
They aren't testing if you can write a solver. They are testing if you can use bricks that solvers are built out of because other software when it gets interesting is built out of the same stuff.
Really? This kind of interview needs to go away.
However, coding interviews are useful. It's just that "knowing the trick" shouldn't be the point. The point is whether the candidate knows how to code (without AI), can explain themselves and walk through the problem, explain their thought processes, etc. If they do a good enough reasoning job but fail to solve the problem (they run out of time, or they go on an interesting tangent that ultimately proves fruitless) it's still a "passed the test" situation for me.
Failure would mean: "cannot code anything at all, not even a suboptimal solution. Cannot reason about the problem at all. Cannot describe a single pitfall. When told about a pitfall, doesn't understand it nor its implications. Cannot communicate their thoughts."
An interview shouldn't be an university exam.
Even getting an efficient algorithm basically right is no guarantee.
In some cases there might be alternative solutions with different tradeoffs, and you might have to come up with those as well.
Miss a counterexample? Even if you get it after a single hint? Fuck you, you're out. I can find someone who doesn't need the hint.
All I can say is that I do conduct interviews, and that I follow the above philosophy (at least for my round).
It's about signaling. That's all it is. At least it's not finance where it's all dictated by if you were born into the right family that got you into the elite boarding schools for high school, etc. I would've never made it into finance unless I did a math phd and became a quant.
Second, it's a covert test for culture fit. Are you young (and thus still OK with grinding for tests)? Are you following industry trends? Are you in tune with the Silicon Valley culture? For the most part, a terrible thing to test, but also something that a lot of "young and dynamic" companies want to select for without saying so publicly. An AI startup doesn't want people who have family life and want to take 6 weeks off in the summer. You can't put that in a job req, but you can come up with a test regime that drives such people away.
It has very little to do with testing the skills you need for the job, because quite frankly, probably fewer than 1% of the SWE workforce is solving theoretical CS problems for a living. Even if that's you, that task is more about knowing where to look for answers or what experiments to try, rather than being able to rattle off some obscure algorithm.
It's interesting how powerful constraint solvers are (I've never used one).
But actually all of these problems are fairly simple if we allow brute force solutions. They just become stacked loops.
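For instance, the 37-cents question from elsewhere in the thread becomes three literal stacked loops (a brute-force sketch in Python; the loop bounds are hand-picked assumptions, just large enough to cover 37 cents):

```python
# Quarters, dimes, pennies only, as in the Prolog example in this thread.
best = None
for q in range(2):          # at most one quarter fits in 37
    for d in range(4):      # at most three dimes
        for p in range(38): # at most 37 pennies
            if q * 25 + d * 10 + p == 37:
                # keep the combination with the fewest total coins
                if best is None or q + d + p < sum(best):
                    best = (q, d, p)
print(best)  # (1, 1, 2): a quarter, a dime, two pennies
```

Obviously this doesn't scale, which is where the solvers and the dynamic-programming formulations come back in, but it makes the structure of the problem completely transparent.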
That's a bad algorithm, then, not a greedy algorithm. Wouldn't a properly-implemented greedy algorithm use as many coins as possible of a given large denomination before dropping back to the next-lower denomination?
If a candidate's only options are to either use a constraint solver or to implement a naïver-than-usual greedy algorithm, well, sorry, but that's a no-hire.
Yes, and it won't work on the problem described. The greedy algorithm only works on certain sets of coins (US coin denominations are one of those sets), and fails in at least some cases with other coin sets (as illustrated in the bit you quoted).
That fits your definition of "use as many coins as possible of a given large denomination before dropping back to the next-lower denomination" but will find 10-10-10-1-1-1-1-1-1-1 and stop before it even tries 10-9-anything.
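A quick sketch of that failure mode in Python, comparing the greedy strategy against an exhaustive dynamic-programming minimum on the coin set {10, 9, 1} (helper names are mine, not from the thread):

```python
def greedy(amount, coins):
    # Use as many of each large denomination as possible, then drop down.
    used = []
    for c in sorted(coins, reverse=True):
        while amount >= c:
            used.append(c)
            amount -= c
    return used

def optimal(amount, coins):
    # Classic DP: fewest coins needed for every value 0..amount.
    best = [0] + [None] * amount
    for a in range(1, amount + 1):
        options = [best[a - c] for c in coins
                   if c <= a and best[a - c] is not None]
        if options:
            best[a] = min(options) + 1
    return best[amount]

print(len(greedy(37, [10, 9, 1])))  # 10 coins: 10+10+10 and seven 1s
print(optimal(37, [10, 9, 1]))      # 4 coins: 10+9+9+9
```

With US denominations the two agree, which is exactly why the greedy answer feels right until the interviewer swaps in a pathological coin set.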
This doesn't mean they can't provide a constraint solver solution, but if they do, they'd better be prepared to address the obvious follow-ups. If they're prepared to give an efficient solution afterward in the time left, then more power to them.
What the heck are you talking about? I didn't even visit ChatGPT today.