This felt like a humble brag to help make their point about hiring good talent and how many people want to be a hogger (or whatever they call people who work there), but it really highlights how brutal the job market is. Yes, the market is also flooded with unqualified applicants and/or bots that will apply to any job listing that's posted, but this is still ridiculous.
I really feel bad for the 6 people who had to endure the technical interview AND THEN were given the honor of attending the "SuperDay", which sounds like a full day of at least 5 interviews (2-3 of them technical), and still got rejected. I'm not sure what the technical interview is like at PostHog, but assuming it's just an hour-long phone screen, those 6 people still devoted probably more than 7 hours just to interviewing at this place, only to get rejected. That's not counting any time spent preparing for the interviews, either.
There must be a better way to do interviews. PostHog is not Google; PostHog (or any other startup) does not need to hire to the same standard that Google does.
Let me know when you're on par with Google in terms of revenue, benefits, prestige, or anything else Google offers; then, sure, I'll jump through as many hoops as you want for the interview. Until then, hard pass.
First, in several countries, working at Google won’t make you rich—they don’t always offer the highest salaries in the region. You’ll have a comfortable life, but you’ll still need to work for the rest of your career. Second, Google is not a remote-first company, which is a dealbreaker for some.
My (perhaps flawed) reasoning was that, in its early days, PostHog was a very small company with a great product that people genuinely enjoyed using. If you received stock options, the potential for a big financial upside seemed high. Plus, working at a small company is simply more exciting—your contributions actually make a difference.
As you alluded to, it's very rare for even founders to make a life-changing amount of money from a company they start; it's exceedingly rare for early employees, and it should not be a reason to consider working at a small company.
The right reasons to work at a small company are the other ones you mentioned: high impact, small teams, interesting work, a cool product, etc. But my point is that the interview processes at the small company and the big company are often very similar, even though the risk, scale, future career opportunities, and potential financial gain are worlds apart. That isn't right.
The effort I have to put into an interview should be proportional to what I stand to gain by getting the job. This is roughly how it already works naturally, because more desirable jobs attract more applicants, which makes them more competitive and requires more preparation. I stand to gain much more working at Google than at PostHog, so why am I spending around the same amount of time interviewing at each place? Is working on a smaller team and having more impact on a smaller product worth that to me? Personally, the answer is no, which is why I don't understand the interview similarities (mainly time spent interviewing and acceptance rate, in this case).
Levels.fyi puts the total compensation of an L5 at ~$140k/year.
https://www.levels.fyi/companies/google/salaries/software-en...
Remote-first also changes the dynamic of who gets promoted.
To each his own, but personally I would only bet big if I wanted to quit my current job. I think the PostHog ship has long since sailed in terms of risk/reward ratio.
And that’s one million before tax, before the preferred stock gets paid out to the big investors, and after a lock-up period that only lets you sell your stock a few months after the deal goes through. That’s not a $1B valuation either; that’s someone buying the company for $1B in cash. Not impossible, but definitely unlikely.
If you work at Google for 5 years, you will almost certainly make more than you would working at PostHog and getting acquired in the same amount of time. But yes, if lightning strikes twice in the same place and PostHog did an IPO and the stock 20x'd, you would miss out on that money.
You only apply for one job a year? Twice a year?
A company will miss out on a great many good candidates by asking them for a full unpaid day.
In short, it's a very well designed "build an app" take-home test. There isn't a solution so to speak, but it's designed to test your product-engineer aptitude as well as your execution speed; there are things to get done, and it requires some thoughtfulness. One can go in a lot of different directions with it. The code can look great, but did you build the right thing, something actually relevant? That's what's important.
It's the sort of test that would instantly filter out 99% of applicants, because of the product emphasis vs the code emphasis, and good product engineers are rare. One could be the best coder around but not have a clue what to build. They want people who know what to build.
I had a lot of fun working on it and loved the challenge, but ultimately didn't know what to build and was fishing a bit, being more of a platform engineer than a product engineer.
Interviews are a game of asymmetric information. The job seeker has much more knowledge of what they can and cannot effectively do than the job offerer. And the job offerer has much more knowledge of what is and is not required for true success than the job seeker.
Given that, no, there really doesn't have to be a better way than just "interview a lot of people and take your best guess". If you stop taking the time to do that you will eventually be outmaneuvered by someone who does.
This is where interviews can and should be done differently. In my career, some questions I’ve been asked in interviews are: serialize and deserialize a binary tree, create an in-memory cache from scratch, design an elevator system for a building, sequence DNA strands together using dynamic programming, build a flight control system for an airport, recreate the atoi function, etc.
Sure enough, none of these interview questions had much of anything in common with the work I would end up doing at the company, so this was an inefficient way to hire that wasted a lot of my time.
This would be like trying to find a plumber to fix my sink by having them come over, showing them the sink, then sitting them down to grill them on the theory behind some thermodynamics, Bernoulli’s principle, maybe throw in some design questions about how to redo my sink. This is surely how you find the best plumber because only the best will take the time to really understand what they are doing when they fix a sink right?
Like it or not the vast majority of work in the software industry is e-plumbing where you fix sinks and connect pipes together to start the flow of CRUD from one end to the other, which is why our way of interviewing people is insane.
As an exercise for the reader, see if you can figure out which of the interview questions I listed above were asked at a FAANG company vs at small startups that are all bankrupt now. Pretty hard, isn’t it?
It's best framed not as a burden but as a tradeoff. Companies who understand better what skills they need to hire for quite reliably hire higher-quality candidates, or are able to get away with paying them less, or both.
Take McDonald's as an extreme example. I am certain McDonald's has paid good money to figure out exactly what someone needs to be able to do in order to be an able burger flipper. That's a big part of why they're able to hire such folk quickly, at scale, at $10 an hour.
Most software companies face economic conditions that incentivize them to take the other end of the tradeoff. The work is often perceived to span such a vast possibility space that it's nearly impossible to precisely specify the requirements of a software engineering job in anything like the way the McDonald's position can be specified.
One of the mechanisms they choose to employ to minimize false positives, despite this huge uncertainty in their own requirements, is "Offer a lot of money and let the cream rise to the top via pre-hiring competition". Hiring someone at $100/hr who is obviously and clearly very smart and hardworking is a much safer bet than hiring someone at $85/hr who you think could probably handle the job with a lot of effort, assuming nothing in their personal life derails them over the next 6 months.
Final aside:
>This is surely how you find the best plumber because only the best will take the time to really understand what they are doing when they fix a sink right?
Nobody's ever looking for the best. They're looking for good enough, within tolerances.
I do have an uncle who did become a plumber after a long career as a chemical engineer. He is extremely sought after, and charges to match. He doesn't usually take "change my sink" jobs.
Also, the GP seems to wonder about better ways to do interviews, not about stopping them entirely.
Yeah, at first I thought it was some kind of parody, then I realized it's a serious article and was astonished.
I doubt the rank and file ICs feel this way at all. It's analytics plumbing, and it's all for the sake of the paycheck.
What this really translates to is the founders saying “we think posthog is our golden ticket to becoming rich in an exit event someday, so don’t mess it up for us”. It’s just not politically correct to say that, so it’s expressed as being “passionate about the problems the company solves” or “working on something that feels yours”.
And if you’re not someone who wants to dance and clap along with the founders as they sing “I’ve got a golden ticket!” on the way to the chocolate factory, only to be left standing behind the gate as they enter, then ya go ahead and pivot because you’re killing the vibe here…
When I’ve worked at shops that made products I, personally, didn’t care about, it was always satisfying to see a customer be excited about a feature or be thankful for the tool I’m building.
The first time that happened was when an admin thanked me in a support ticket for speeding up the generation of expense report spreadsheets way back.
Never underestimate the benefit of having the finance department on the side of IT's innovation.
As someone who has gone through the roughly 9 hour interview process in the past, was it the docs and open source product that made you want to work there?
As an engineer wanting to build a successful product, I hate the fact that this is how it is. And then there's PostHog, where each of these tools is right there, connected to one another, ready to make my job (and my company's success) that much easier. Being able to work on something that simplifies all of this for others is very enticing.
Combine that with their open-company ethos (check out their handbook), and high-trust/high-performance product-engineer mindset, and yah... sign me up. This is a company that legitimately makes other people's lives easier, and thus makes for better products. Something to feel proud about.
You’re almost 10 times more likely to be accepted to Stanford’s undergraduate program than to ever work as a hogger.
You have to pay for Stanford whereas Hog pays you. Wrong direction?
I've worked at many startups over the years and have always been very involved in hiring. I call this the MDE (Most Desperate Engineer) effect, and it's something I always try to make founders, other engineers, etc. understand when the company starts discussing hiring processes.
The premise is simple: the difficulty of your hiring process needs to be directly correlated to the perceived future value of the company. If you are OpenAI right now, you can have the most difficult, convoluted, time-intensive hiring process in the world and the best engineers will still sit through it, because you score very high on potential future value. Other companies cannot / should not have as difficult a hiring process, because you will end up selecting from a pool of people who are willing to endure anything just to get a job, get experience, get a job that pays in USD, renew their H-1B visa, etc.
This doesn't mean these people are bad engineers or shouldn't be considered. But if I had a dollar for every time a founder or CTO at a startup said their company hires only the best, most passionate people bar none, I would be a rich man; a lot of the people in this category just need a job to put food on the table (which, funnily enough, seems to be exactly the kind of person the company wants to avoid at all costs).
We had enormous success at the startups I worked at by talking to great engineers at FAANG companies and explaining the problems we wanted to work on, how we were thinking of approaching them, why they were interesting to us, where we saw the company going, and how they could help us get there, as opposed to trying to squeeze them through a long, tedious interview cycle. Granted, these people did have previous work experience to help vet them, but again, the process can be adjusted depending on where the company is and what you are looking for.
A general rant.
Do you really choose your employer based on their revenue? or prestige??
I’m not going to work my rear end off for 4 years to get 0.5% of potentially nothing and go through your dog-and-pony show of an interview cycle.
Delusional web slop companies strike again
After some googling, it turns out it’s an analytics company, but they behave as if they were doing nuclear fission.
I've seen a nontrivial number of smart engineers get so bogged down in wanting to A/B test everything that they spend more time building and maintaining the experiment framework than actually shipping product, and then realize the A/B testing was useless because they only had a few hundred data points. Data-driven decisions are definitely valuable, but you also have to recognize when you have no data to drive the decisions in the first place.
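To put numbers on "a few hundred data points": a back-of-the-envelope sample-size calculation (a rough sketch using the standard normal approximation; the baseline and lift figures are invented) shows how many users a typical conversion test actually needs.

```python
import math

def sample_size_per_arm(p_base, rel_lift, alpha=0.05, power=0.8):
    """Rough per-arm sample size for a two-proportion A/B test,
    using the normal approximation."""
    z_alpha, z_beta = 1.96, 0.84  # alpha/2 = 0.025, power = 0.8
    p_new = p_base * (1 + rel_lift)
    p_bar = (p_base + p_new) / 2
    delta = p_new - p_base
    n = ((z_alpha + z_beta) ** 2) * 2 * p_bar * (1 - p_bar) / delta ** 2
    return math.ceil(n)

# Detecting a 10% relative lift on a 5% baseline conversion rate:
n = sample_size_per_arm(0.05, 0.10)
print(n)  # roughly 31,000 users per arm, not "a few hundred"
```

With a few hundred data points per arm, only implausibly huge effects would ever reach significance.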
Overall, I agree with a lot of the list but I've seen that trap one too many times when people take the advice too superficially.
- You have to make good decisions about what you're going to test
- You have to build the feature twice
- You have to establish a statistically robust tracking mechanism. Using a vendor helps here, but you still need to correctly integrate with them.
- You have to test both versions of the feature AND the tracking and test selection mechanisms really well, because bugs in any of those invalidate the test
- You have to run it in production for several weeks (or you won't get statistically significant results) - and ensure it doesn't overlap with other tests in a way that could bias the results
- You'd better be good at statistics. I've seen plenty of A/B test results presented in ways that did not feel statistically sound to me.
... and after all of that, my experience is that a LOT of the tests you run don't show a statistically significant result one way or the other - so all of that effort really didn't teach you much that was useful.
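On the statistics bullet: even an apparently healthy lift can fail to reach significance. A minimal two-proportion z-test sketch, stdlib only (the conversion numbers are invented):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test: returns (z, p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal CDF via the error function, doubled for a two-sided test.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# 10% vs ~13.3% conversion on 300 users per arm: a one-third relative lift...
z, p_value = two_proportion_z(30, 300, 40, 300)
print(round(p_value, 2))  # ...yet p is around 0.20, nowhere near significant
```

A result like this is exactly the "didn't teach you much that was useful" outcome: a lift that looks great on a dashboard but is entirely compatible with noise.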
The problem is that talking people out of running an A/B test is really hard! No-one ever got fired for suggesting an A/B test - it feels like the "safe" option.
Want to do something much cheaper than that which results in a much higher level of information? Run usability tests. Recruit 3-5 testers and watch them use your new feature over screen sharing and talk through what they're doing. This is an order of magnitude cheaper than A/B testing and will probably teach you a whole lot more.
Steve Blank's quote about validating assumptions: "Lean was designed to inform the founders’ vision while they operated frugally at speed. It was not built as a focus group for consensus for those without deep convictions"
Is the Lean Startup Dead? (2018) https://medium.com/@sgblank/is-the-lean-startup-dead-71e0517...
Discussed on HN at the time: https://news.ycombinator.com/item?id=17917479
"We were just running an experiment; we do lots of those. We'll stop that particular experiment. No harm no foul" is much more palatable than "We thought we'd make that change. We will revert it. Sorry about that".
With the former people think: "Those guys are always experimenting with new stuff. With experimentations comes hiccups, but experimentation is generally good"
With the latter, people now want to know more about your decision-making process. How and why was that decision made? What were the underlying reasons? What was your end goal with such a change? Do you actually have a plan, or are you just stumbling in the dark?
In my career, I've had to explain that this is not the case more times than seems appropriate.
A/B testing is possibly the most misunderstood tool in our business, and people underestimate even the effort it takes to do it wrong... let alone to do it right.
Erm, isn't it three times, or am I missing something?
You have what you are currently doing (feature Null), feature A, and feature B.
Otherwise, you can't distinguish that the feature is what is causing the change as opposed to something else like "novelty" (favoring a new feature) or "familiarity" (favoring an old feature).
If all you have is "what you are currently doing" as "feature A" and "new thing" as "feature B", you're going to have a murderous time getting enough statistical power to get any real information.
Why though? Can't you have it dynamically look up whether the experiment is active for the current request and if so behave a certain way? And the place it looks up from can be updated however?
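That is indeed how most flag systems avoid building twice. A hypothetical sketch of such a per-request lookup (FLAGS stands in for a config service, database table, or vendor SDK; all names and numbers here are invented):

```python
import hashlib

# In-memory stand-in for whatever store serves flag state; updating it
# flips the experiment on or off with no redeploy.
FLAGS = {"new-checkout": {"enabled": True, "rollout_pct": 50}}

def _bucket(key: str) -> int:
    # Deterministic 0-99 bucket so a given user always sees the same variant.
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % 100

def is_enabled(flag_name: str, user_id: int) -> bool:
    flag = FLAGS.get(flag_name)
    if not flag or not flag["enabled"]:
        return False
    return _bucket(f"{flag_name}:{user_id}") < flag["rollout_pct"]

def checkout(user_id: int) -> str:
    # One code path, branched at request time; both variants live in
    # the same build.
    return "variant B" if is_enabled("new-checkout", user_id) else "variant A"
```

The catch the parent comments point at still applies: both branches must be built, tested, and maintained until the flag is removed, so "build the feature twice" is only partly avoided.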
I do know that Google strategically used A/B testing to determine the precise shade of cornflower blue for some UI widgets.
> Run usability tests.
Maybe the point of A/B testing is to avoid all that battle tested and validated '90s era process and methodological nonsense.
(As though a design could stumble thru a solution space to some local maximum. Like an optimization problem.)
I also kept thinking of that SQA cliché "You can't test your way to quality."
Meaning TEAM A has a want, and TEAM B has a want, and it's not about the best feature, it's about "I told you so".
A/B testing is great for sussing out use-flow patterns, but it's often used for ego (design team) testing...
For a lot of what you want to know the above will give better information than a 100% polished A/B test in production. When people see a polished product they won't give you the same type of feedback as they will for an obvious quick sketch. The UI industry has gone wrong by making A/B in production too easy, even though the above has been known for many decades.
(A/B in production isn't all bad, but it is the last step, and often not worth it)
Design-time user testing has been a thing for much longer than A/B tests. They are a different thing.
I mean, your point stands. But you can't do A/B tests on anything that is not production; those you are recommending are a different kind of test.
Most orgs should just be shipping features. Before starting an Experiment Program, teams should brainstorm a portfolio of experiments. Just create a spreadsheet where the first column is a one-line hypothesis for each experiment, e.g. "Removing step X from the funnel will increase metric Y while reducing metric Z". Then RICE (Reach-Impact-Confidence-Effort) score your portfolio.
If the team can't come up with a portfolio of 10s to 100s of experiments then the team should just be shipping stuff.
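That portfolio spreadsheet can be as simple as a few lines of code. A hypothetical sketch using the usual RICE formula (reach × impact × confidence ÷ effort); all hypotheses and numbers are invented:

```python
# Experiment portfolio: (hypothesis, reach/quarter, impact 0.25-3,
# confidence 0-1, effort in person-weeks).
experiments = [
    ("Remove step X from the funnel", 5000, 2.0, 0.8, 2),
    ("Rewrite onboarding email copy", 1200, 1.0, 0.5, 1),
    ("Add dark mode", 8000, 0.25, 0.9, 6),
]

def rice(reach, impact, confidence, effort):
    # Higher score = more expected value per unit of work.
    return reach * impact * confidence / effort

# Rank the portfolio, highest-leverage experiment first.
ranked = sorted(experiments, key=lambda e: rice(*e[1:]), reverse=True)
for name, *params in ranked:
    print(f"{rice(*params):8.0f}  {name}")
```

Even this crude ranking forces the useful conversation: writing down reach and confidence numbers exposes which "experiments" are really just pet features.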
And then Experiment Buildout should be standardized. Have standardized XRD (Experiment Requirements Doc). Standardize Eligibility and Enrollment criteria. Which population sees this experiment? When do they see it? How do you test that bucketing is happening correctly? What events do analysts need? When do we do readouts?
That's just off the top of my head. Most orgs should just be shipping features.
Capturing analytics is a no-brainer. However, most data in most products at most companies starting out just fundamentally does not matter. It's dangerous to get into the habit of "being data driven", because it becomes a false sense of security, and paradoxically data is extremely easy to bias. Even with more rigor, you get trapped too easily in local optima. Lastly, all things decay, but experimentation treats a win as if it lasts forever, until some other test beats it. It becomes exhausting to touch anything, and that's seen as a virtue. It's not.
Products need vision and taste.
22. I've seen design systems fail at many companies. It is very hard to get the right people and budget for this to succeed. Most startups are better off picking an existing UI toolkit and doing some theming/tweaking.
27. I disagree. If you put the product manager as gatekeeper to users, you will transform the organization into a feature factory. Engineers should be engaged with users as much as possible.
So a disempowered founder-lite? What's their incentive?
> Trust is also built with transparency. Work in public, have discussions in the open, and document what you’re working on. This gives everyone the context they need, and eliminates the political squabbles that plague many companies.
This seems prone to feedback loops; it can go both directions. If there are political squabbles, discussion may be driven private, to avoid it getting shut down or derailed by certain people.
It takes a lot less energy to throw shit than it does to clean shit, and there are infinite signals. Big egos take a lot of energy to repel and redirect. I think it's absolutely worth it when it's possible, but yeah.
You wouldn't think so until you've tried it, but it's really hard to get 6+ adults together where everyone's primary goal is to make a thing. There always seems to be one or more people who want to peck at others, build fiefdoms, and hold court.
Process is important when work is handed off from one team to another team. Any company with a non-trivial product will have a non-trivial team size; and thus it'll need to standardize how people hand off work.
It doesn't have to be onerous: The best processes are simply establishing boundaries so lazy people don't throw their work over to the next person. (IE, "I won't work on a bug without steps to reproduce and an unedited tracelog that captures the failure" is a good boundary.)
Do not start with an idea. Start with a problem, and then construct a solution. The solution is the product. The problem implies someone who has that problem: this is the customer. How much of a problem it is tells you how much they want it and how much they will pay for it. Because an idea doesn't necessarily have a problem, it results in a product that doesn't necessarily have a customer.
> As 37Signal’s Jason Fried says “You cannot validate an idea. It doesn’t exist, you have to build the thing. The market will validate it.”
Similarly, don't set out to change the world. Put your product into the world, and the world will decide whether to change as a consequence.
Regarding the superday controversy: my best interviewing experience was a conversational technical interview followed by a 2-week project that was paid either way the decision went. I did quite a bit of competitive programming, so leetcode interviews are not a dealbreaker for me, but I feel there's just too much at stake in a 1-2h coding exercise, and projects let you showcase all of your skills, not just speedcoding.
This is a great point. I've seen teams apply Lean Startup by testing -> changing something -> testing -> changing something -> testing ...
The problem is that the changes are so small that statistically you end up testing the same thing over and over and expecting different results. You need to make a significant change in your Persona, Problem, Promise, or Product to (hopefully) see promising results.
Is this like a trial day where you're invited to do a day of work for free?
Story time. I interviewed for a job at posthog. I knew that I really loved their communication style. But I hadn't used their product and didn't know a ton about them except that their writing is fantastic.
Their 'product for engineers' focus is cool, but when I had an interview, it was clear that I wasn't a 'product for engineers' person.
When they proposed the Super Day, I was like, I'm not sure, because it's awesome to get paid for a day, but it's also not an unstressful event. And I sort of said I had some more questions before we moved on to the Super Day.
And they basically just said: we don't think it's going to work out. It was actually a pretty positive experience. I think that they correctly assessed the situation pretty quickly and my hesitation was a real signal to them.
(This is my recollection - I could have the details wrong.)
But yeah, the Super Day is a day of very structured work in the role that they set up. And it's paid.
Nice to get paid, but even getting paid is stressful if you're used to working PAYE and not having to think about how to do taxes on foreign income. The process works out very well for Linear, but as a candidate... not so much. It was probably the most stressful week of my professional career. You can't really simulate a typical work day or week with the stress of a sword of Damocles over your head.
The thing I was tasked to do was pretty simple on the surface, but it had some unexpected dead-end avenues that ate up quite a bit of time. So I ended the week with "you're not fast enough". If I'd hit on the correct approach from the start, I'd probably have finished early easily, so it really felt like a coin toss to me.
> And they basically just said: we don't think it's going to work out.
Ouch, so their tight-knit, no-shortcuts hiring process is only thorough for them, not for the engineer applying.
I’d be surprised if any startup failures were due to a dev team not being absolutely cracked. It’s always something like poor sales, PMF, refusal to pivot, lack of focus, etc.
Sam Altman argued a startup's chance of success is “something like Idea times Product times Execution times Team times Luck, where Luck is a random number between zero and ten thousand”
A lot of the success in startups is not due to getting the initial idea right, but what founders do once they realize that their initial idea is wrong.
If you don't have salespeople, then you need to make a product that works and fulfills user needs. And it has to be good enough for word of mouth... which is where PostHog's experience comes in.
It's not precisely so. They do and have had partners that did sales / consulting work.
In my limited experience partners are usually small integrators that do maybe a small urban area. Did they get a big consulting partner/implementor or something?
Those do exist. "Partner" was a gray area for them; some services, like certain support, were also outsourced to partners. In a way, the partners grew with the company, so at some point (when it peaked) there were a lot of "large" partners.
That's why I said it is confusing. There was a time when, if you contacted them on the website looking for a solution, someone would redirect you to a partner. It was essentially an outsourced sales / solutions team.
Now if you’ve built something big that grew in the past few years organically, there’s more to learn from that success.
We get serious leads only via our C-levels' networking and via sales. No customer cares about our landing page, and back when we did care about the landing page, the leads it generated were not serious and were a waste of time.
And I have to say that "Technical Content Marketer" is one of the most dubious job titles I have ever seen.
> The single platform to analyze, test, observe, and deploy new features
my reaction is "Wha?"
But what the hell - I do think the job title is bad, but then I've had a few of those myself.
I've used PostHog and it's pretty good. I don't know if I'd classify all of those as different products, you rarely want one of those without the other.