4 points by Signatura 14 hours ago | 5 comments
  • btrettel 13 hours ago
    I think a big part of the problem is an overly narrow view of what a qualified candidate looks like from the hiring side. Tons of qualified people are rejected because they don't look qualified to the people hiring.

    For example, recently a friend had an interview, and the guy interviewing him seemed disappointed that my friend didn't have experience solving a problem in a particular way, as if that were the only way to solve it. In my opinion, the way the interviewer solves that problem is inefficient, but they didn't seem to see any other way.

    (Yes, a candidate can communicate their abilities better. But in my experience, this only goes so far, and the people hiring need to make more effort.)

    A better process would be more open-minded and would test itself by interviewing candidates whom the interviewer thinks are bad. In science there's an idea called negative testing: if a test is supposed to separate good from bad, you can't just check what the test says is good; you also need to check what the test says is bad. If good things are marked as bad by the test, something's wrong with the test. If I were hiring, I'd probably start by filtering out people who don't meet very basic requirements, then hold some fairly open-ended interviews early with randomly selected people who pass that initial screening, to refine the hiring process and help me see gaps in my understanding.
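
    Concretely, a rough sketch of that check (hypothetical Python; treat the screen as a classifier and spot-check a random sample of what it rejects) might look like this:

      import random

      def audit_screen(candidates, screen, deep_evaluate, sample_size=20):
          # screen: cheap pass/fail filter (e.g. a resume keyword check)
          # deep_evaluate: expensive ground-truth check (e.g. a full interview)
          rejected = [c for c in candidates if not screen(c)]
          # Negative testing: audit a random sample of rejections instead of ignoring them.
          sample = random.sample(rejected, min(sample_size, len(rejected)))
          false_negatives = sum(1 for c in sample if deep_evaluate(c))
          # A high rate means the screen is discarding good candidates,
          # i.e. the test is broken, not the candidates.
          return false_negatives / len(sample) if sample else 0.0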

    • Signatura 11 hours ago
      I agree with this. What stands out to me is that the hiring process often treats one internal mental model as “correct”, and anything outside of it as a flaw in the candidate.

      The example you gave about solving the same problem differently is common; different approaches get mistaken for lack of competence.

      I like the negative testing idea a lot. If a hiring process never examines who it’s rejecting, it has no way to know whether it’s filtering quality or just filtering familiarity.

      Have you seen teams actually test or evolve their hiring criteria this way, or does it usually stay fixed once defined?

      • btrettel 11 hours ago
        > Have you seen teams actually test or evolve their hiring criteria this way, or does it usually stay fixed once defined?

        I'm sure many folks hiring do iteratively improve their hiring criteria, though I'm skeptical of how rigorous their process is. For all I know they could make their hiring criteria worse over time! I have never been involved in a hiring decision, so what I write is from the perspective of a job candidate.

        • Signatura 10 hours ago
          That makes sense, and I think your skepticism is reasonable.

          From the candidate side, it’s almost impossible to tell whether criteria are being refined thoughtfully or just drifting based on recent hires or strong opinions in the room.

          What strikes me is that without explicit feedback loops, iteration can easily turn into reinforcement: people conclude “this worked” without ever seeing the counterfactual of who was filtered out.

          From the outside, it often looks less like a calibrated process and more like accumulated intuition. I’m curious whether that matches what others here have seen from the inside.

  • sinenomine 14 hours ago
    Monetary policy, the software tax, the post-COVID hiring glut, pervasive mental health issues among HR professionals. For older pros there is also age discrimination. There is also the underestimated factor of hiring by committee, which more and more commonly disguises ethnic nepotism in hiring decisions.
    • Signatura 11 hours ago
      I think that’s a fair list, and it highlights how much of the process sits outside the candidate’s control.

      Macro forces, internal incentives, and human bias all stack on top of each other, and the candidate only sees the outcome, not the cause. What feels particularly hard is that all of these factors collapse into a single signal for the job seeker: a rejection with no explanation.

      From your perspective, which of these has the biggest impact in practice, and which ones do you think are most invisible to candidates going through the process?

  • austin-cheney 13 hours ago
    2 reasons

    1. Poor signaling. There is a bunch of noise in both job requirements and resumes.

    2. Unclear goals. Many technical job postings are not clear in what they want. This is not really the fault of the employer but more of an industry failure to identify qualifications.

    As a result you get super talented people who cannot find work and, simultaneously, grossly unqualified people who easily find work that is substantially overpaid for the expected level of delivery and responsibilities.

    • Signatura 11 hours ago
      Austin, that makes sense. The signaling problem cuts both ways: Resumes try to compress complex ability into keywords, and job descriptions try to describe real work with abstract labels. A lot gets lost in between.

      The unclear goals point is important too. When a role isn’t well-defined, hiring ends up optimizing for proxies rather than outcomes. Do you think this is mostly a language problem (how roles and experience are described), or a structural one where teams don’t actually agree internally on what success in the role looks like?

      • austin-cheney 11 hours ago
        My experience tells me it is an expectation problem coupled with missing standards/baselines.

        Most employers need a person in the seat doing the work and will lower their preferences to find enough candidates for a selection. Government does not do that. If candidates fail to meet the requirements for a government contract, the seat just remains empty.

        Consider how engineering works. An engineer's resume will just list employment history, education, and awards. There is no need to fluff things up because engineers are required to hold a license, and that demonstrates qualification. Software does not have that, so people have to explain their capabilities over and over.

        • Signatura 10 hours ago
          That’s an interesting comparison... The licensing point highlights how much of the burden in software hiring sits on explanation rather than verification. Without shared baselines, candidates end up narrating their competence instead of pointing to an accepted signal.

          The expectation gap you describe also explains why requirements feel flexible in practice but rigid on paper. When the real goal is “get someone productive soon,” standards tend to bend quietly rather than evolve explicitly.

          Do you think the absence of clear baselines is something the industry could realistically converge on, or is software work too varied for that to work in the way it does for licensed engineering?

          • austin-cheney 7 hours ago
            Programming is writing logic, which is a universal quality. So the way I would do it is to create a fictional programming language, provide some familiarity and training time immediately before a licensing exam (at the testing location), and then have the candidate solve real problems in the fictional language for the exam. That tests the ability to deliver solutions more than the ability to memorize patterns or reproduce familiar conventions. Too many developers cannot write original logic.

            Then there could be additional specialized qualifications above the base qualification, for example: security/certificates/cryptography, experimentation, execution performance, transmission/API management.

  • winshaurya9 14 hours ago
    Searching for a job myself. How can I stand out from the ultra-showy candidates launching B2B vertical SaaS when I have a simpler project, built from scratch, that solved a problem for a smaller group of people: every controller, every API, every fallback, everything serverless for free deployment and lower server load. I have a real interest in solving problems and put hours into my art, but without internal reach it seems hard to break into the industry. Mind you, I am still in college and confused about whether I even stand a chance.
    • Signatura 11 hours ago
      I don’t think the gap you’re describing is about quality of work as much as how it gets interpreted.

      What you described, building something end to end, making real tradeoffs, and caring about the problem, is exactly the kind of signal people say they want, but it doesn’t always map cleanly to how hiring filters operate.

      Being early in your career makes that mismatch louder, not smaller. Without context, depth can look “small” and polish can look like “impact”. One thing that might help is making the reasoning behind your choices visible, not just the output.

      When reviewers can see why you built things the way you did, it becomes easier to compare substance to surface. It’s normal to feel unsure at this stage, but from the outside, what you’re describing sounds like a real foundation, not a disadvantage. I wish you all the best!

  • Signatura 14 hours ago
    I’m one of the co-founders and went through this process myself. Not promoting anything here - genuinely interested in how others experience this and what helped create clarity.