What seems to be working for companies: the ability to identify whether a candidate used LLMs unfairly during interviews, making sure all interviewers follow the planned agenda and make decisions consistently, and having the right evaluation metrics.
We're trying to solve these problems at fairground.work. Would be glad to demo it if you want to take it for a spin.
The best candidates usually don't want to do them.
And if they fix it? Pay them. Simple.
Stop fooling around with abstract puzzles that have zero relevance to a battle-tested production environment.
I don't disagree that it has serious problems, but this doesn't seem like a workable solution, in my experience from the other side.
If you want the good jobs, you'll have to be more flexible. Ask the deal-breaker questions in the first meeting.
Ultimately, hiring is a transaction: Can this candidate fix your issue?
Sure, you need to filter for red flags, but come on—the current interview meta is broken/dumb.
It really boils down to this: What value does A bring to B, and vice versa?
I mean during the interview: I'm paying someone and they're using an AI. What am I getting out of them besides "burning tokens" (by proxy) to see if they can get an AI to solve some interview task? I'm not going to learn much, and it seems rather wasteful.
> Can this candidate fix your issue?
It does matter to me if they aren't a decent person, since they'll be working with others. I'd prefer lower-quality output from someone who isn't abrasive.
If you think the code or output is the thing that matters, expect to keep having issues. Soft skills matter just as much; that's what is actually being tested. I don't care whether you get the answer, I care about seeing the process and the personality. It's also why the take-home isn't the default and is reserved only for the final technical round.