For example, a friend of mine recently had an interview where the interviewer seemed disappointed that he didn't have experience solving a problem in one particular way, as if that were the only way to solve it. In my opinion, the interviewer's approach to that problem is inefficient, but they didn't seem to see any other way.
(Yes, a candidate can communicate their abilities better. But in my experience, this only goes so far, and the people hiring need to make more effort.)
A better process would be more open-minded and would test itself by interviewing candidates the interviewer thinks are bad. In science there's an idea called negative testing. If a test is supposed to separate good from bad, you can't just check what the test says is good; you also need to check what the test says is bad. If good things are marked as bad, something's wrong with the test. If I were hiring, I'd probably start by filtering out people who don't meet very basic requirements, then hold some fairly open-ended interviews early on with randomly selected people (who pass the initial screening) to refine the hiring process and help me spot gaps in my understanding.
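To make the negative-testing idea concrete, here's a minimal sketch in Python. Everything in it is made up for illustration: the quality scores, the noise level, and both thresholds are assumptions, not data from any real hiring process. It audits a random sample of a screening filter's rejections, not just its acceptances:

    import random

    random.seed(0)

    # Hypothetical candidates: each has a true quality score, but the
    # screening filter only sees a noisy proxy of it.
    candidates = [{"quality": random.random()} for _ in range(10_000)]
    for c in candidates:
        c["screen_score"] = c["quality"] + random.gauss(0, 0.3)

    THRESHOLD = 0.7  # the filter's cutoff (assumed)
    GOOD = 0.8       # what actually counts as a strong candidate (assumed)

    accepted = [c for c in candidates if c["screen_score"] >= THRESHOLD]
    rejected = [c for c in candidates if c["screen_score"] < THRESHOLD]

    # The usual check: how strong are the people the filter lets through?
    precision = sum(c["quality"] >= GOOD for c in accepted) / len(accepted)

    # The negative test: audit a random sample of rejections and ask how
    # many strong candidates the filter threw away.
    audit = random.sample(rejected, 500)
    missed = sum(c["quality"] >= GOOD for c in audit) / len(audit)

    print(f"strong candidates among accepted: {precision:.1%}")
    print(f"strong candidates found in rejection audit: {missed:.1%}")

If the audit keeps turning up strong candidates in the reject pile, the filter is broken, no matter how good the accepted pool looks.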
The example you gave about solving the same problem differently is common; different approaches get mistaken for lack of competence.
I like the negative testing idea a lot. If a hiring process never examines who it’s rejecting, it has no way to know whether it’s filtering quality or just filtering familiarity.
Have you seen teams actually test or evolve their hiring criteria this way, or does it usually stay fixed once defined?
I'm sure many people who hire do iteratively improve their criteria, though I'm skeptical of how rigorous that process is. For all I know, they could be making their criteria worse over time! I've never been involved in a hiring decision, so everything I write is from the perspective of a job candidate.
From the candidate side, it’s almost impossible to tell whether criteria are being refined thoughtfully or just drifting based on recent hires or strong opinions in the room.
What strikes me is that without explicit feedback loops, iteration can easily turn into reinforcement: people conclude “this worked” without ever seeing the counterfactual of who was filtered out.
From the outside, it often looks less like a calibrated process and more like accumulated intuition. I’m curious whether that matches what others here have seen from the inside.
Macro forces, internal incentives, and human bias all stack on top of each other, and the candidate only sees the outcome, not the cause. What feels particularly hard is that all of these factors collapse into a single signal for the job seeker: a rejection with no explanation.
From your perspective, which of these has the biggest impact in practice, and which ones do you think are most invisible to candidates going through the process?
1. Poor signaling. There is a bunch of noise in both job requirements and resumes.
2. Unclear goals. Many technical job postings are not clear about what they want. This is not really the employer's fault but more an industry-wide failure to identify qualifications.
As a result, you get super talented people who cannot find work, and simultaneously grossly unqualified people who easily find work that is substantially overpaid for the expected level of delivery and responsibility.
The unclear goals point is important too. When a role isn’t well-defined, hiring ends up optimizing for proxies rather than outcomes. Do you think this is mostly a language problem (how roles and experience are described), or a structural one where teams don’t actually agree internally on what success in the role looks like?
Most employers need a person in the seat doing the work and will lower their requirements until they have enough candidates to choose from. Government does not do that. If candidates fail to meet the requirements for a government contract, the seat just remains empty.
Consider how engineering works. An engineer's resume will just list employment history, education, and awards. There is no need to fluff things up, because engineers are required to hold a license, and that license demonstrates qualification. Software has nothing like that, so people have to explain their capabilities over and over.
Do you think clear baselines are something the industry could realistically converge on, or is software work too varied for that to work the way it does for licensed engineering?
Then there could be additional specialized qualifications above the base qualification, for example: security/certificates/cryptography, experimentation, execution performance, transmission/API management.
What you described (building something end to end, making real tradeoffs, and caring about the problem) is exactly the kind of signal people say they want, but it doesn't always map cleanly onto how hiring filters operate.
Being early in your career makes that mismatch louder, not quieter. Without context, depth can read as “small” and polish can read as “impact”. One thing that might help is making the reasoning behind your choices visible, not just the output.
When reviewers can see why you built things the way you did, it becomes easier to compare substance to surface. It’s normal to feel unsure at this stage, but from the outside, what you’re describing sounds like a real foundation, not a disadvantage. I wish you all the best!