The question is whether the key value of your product can benefit from the strengths of AI. If not, don't go there. If so, you then need to determine whether your team can actually deliver an AI-driven vision that enhances the existing value prop. Again, if not, don't go there. If so, do it.
But from your description, your CEO is not asking those questions - they are asking, "How do we get more funding?" Which tells me that your CEO doesn't give a crap about building a product, they are just trying to make money and get some nice bullet points on their resume about the size of a company they led.
That puts you in the position of choosing whether you want to go on a VC-driven startup ride just to have the experience, or whether you want a product-driven role. People have their reasons for both directions, but if you want a product-driven role, you are out of alignment with your CEO and probably shouldn't work for them.
Folks thrive in mission-driven organizations. In organizations that are optimizing for dollars, you see quality fall by the wayside.
Mission-driven organizations that understand the levers of capital have been the most enjoyable workplaces of my career.
You want a CEO who is customer obsessed, not a CEO who is maximizing for dollars in the bank at all costs.
It has to be mission first for a startup to be great, really for any critical endeavor.
So it doesn't always have to be a "you owe me the money, so I own you" arrangement.
Savings, bank loans, lines of credit, private loans between people, local angel investors, and so on are how most businesses bootstrap.
I co-own a technology business that has never taken outside money (well, we do have a credit card) and I’m not really interested in doing that. I think VCs are more trouble than they’re worth because they’re so obsessed with growth that little else matters to them, and we’ve seen where that gets us in terms of customer relationships and product quality. And they always want just enough control to eventually oust executives they don’t agree with.
We’ve built our business up the old-fashioned way: from a personal capital contribution from each co-founder, with pricing modeled on what the business needs. Clients who see the value in that approach from their critical technology vendors are not impossible to find at all.
Without the runway of functionally blank checks, you do have to continually monitor your business model though.
Can you give any hints as far as industry or product?
Succeed based on what criteria? I struggle to think of a single product with "AI" that I'd call successful (excluding the already existing and established niches of recommendation algorithms and computer vision which have been rebranded AI and maybe are what you're referring to).
Buy him some Viagra. And tell him that professional help is available.
I've seen folks wanting big changes and we sit down and draw things up and surprisingly it looks like the old roadmap. Everyone is happy and I don't say anything about the similarities, and most importantly we're on the same page ;)
Sometimes when leadership asks for big changes, even when they say big changes, you might find if you get into the details it's not THAT big a change.
In the meantime don't let your mind run too wild assuming and worrying about what they might be asking for. I do that too, it's a developer thing I think, we start considering possibilities or even misused terminology and we get into a loop ;)
> I've seen folks wanting big changes and we sit down and draw things up and surprisingly it looks like the old roadmap.
I've found myself doing this also. It also helps to reason through new nuances found in the same evidence, because we're wiser looking at the same data in the future, or in this case, in the present (looking back from the past).
Include as much non-AI work as you can in the AI teams' projects, pitch an "AI efficiency initiative" that minimizes new spend on AI with the justification of other teams picking up the slack, talk up whatever ML you're already doing, etc.
(Also, huh - I wonder if that’s actually directly doable; “assembled by AI out of programmed outputs”?)
E.g. If you don't work on AI now, and AI models keep improving, how likely is it that a competitor who integrates AI well will eat your lunch? If it's >50%, it seems worth it to shift some focus to AI regardless of the series C round.
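That ">50%" intuition is really a rough expected-value comparison, which you can sketch explicitly. All of the probabilities and dollar figures below are hypothetical illustrations, not numbers from the thread:

```python
# Toy expected-value comparison for "do we shift focus to AI?"
# Every number here is a made-up illustration; plug in your own estimates.

def expected_value(p_disruption: float, value_if_safe: float,
                   value_if_disrupted: float) -> float:
    """Expected company value, given the chance a competitor disrupts you."""
    return (1 - p_disruption) * value_if_safe + p_disruption * value_if_disrupted

# Stay the course: keep the full product value, but accept a high risk
# that an AI-integrating competitor eats your lunch.
stay = expected_value(p_disruption=0.6, value_if_safe=100.0, value_if_disrupted=10.0)

# Shift some focus to AI: pay an opportunity cost on the core product now,
# but cut the disruption risk substantially.
shift = expected_value(p_disruption=0.2, value_if_safe=85.0, value_if_disrupted=10.0)

print(f"stay={stay:.1f}, shift={shift:.1f}")  # stay=46.0, shift=70.0
```

Under these (invented) assumptions the shift wins despite the opportunity cost; the point is that the answer hinges on your estimate of the disruption probability, not on what the Series C pitch needs.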
This post from a few days ago has some great tips on how to integrate AI _well_: https://koomen.dev/essays/horseless-carriages/
I like the post and what it goes over. I agree with huge swaths of it, especially the misuse of AI tools right now. I don't agree that anyone doing it that way (like the Gmail team called out over and over here) is either stupid or naive. I don't agree every consumer tool should become an agent-building playground. I don't agree that building said playgrounds is easy (I think it's much, much harder to design a good agent-building version of a product than a good product). I don't agree consumers want everything they use to work this way. I also think it ignores the very real problem of AI bullshitting, and the handcuffs that puts on people using it for mission-critical things like paying the mortgage.
Ironically, I think this post falls into its own trap of not thinking about the next step. Yeah, a really good email agent product as demoed sounds great in a world where nothing works this way yet. However, a world where every product I use has to be re-engineered from scratch, with various unknown and non-customizable (and enshittifying) LLMs under it, with various training and fine-tuning, unknown access to data, and no interop, is the wood-frame horseless carriage the author is mocking. That would be a terrible situation, worse than the current one.
Rethinking for a world of AI agents, it would be better if products empowered consumer agents instead of trying to supplant them. In THAT world, products stay simple and just expose a bunch of tools and patterns that each consumer's custom agents can use effectively. Making that work well for your own product is actually viable AND helpful AND doesn't force your users to change their behavior if they don't want to. Anything I, the consumer, do to handle or prompt my own agent pays off across my entire ecosystem, and investment can be focused on the right things by the right people (i.e. Gmail should make email tools, not agent tools).
If investors only want AI, then you will either be forced to do that (whatever it means), do something else but make it look to them like AI, convince them they're wrong, or quit.
Things are a little more nuanced if your existing investors side with you but all new investors are AI-driven. Then you don't have to quit, but you do have to run the business without new outside capital. This may or may not be possible. If not, you still have to quit.
Well, then you're bound to the amount of money customers can give you.
It's usually less than a VC partner with a god complex can give you.
You’re _always_ bound by what the customer is willing to pay.
Speaking more expansively, it’s just a proxy for whether the market will bear your prices/wants your product, and whether you are in the right place and/or positioning your product correctly.
VCs just add a temporary runway so you don’t have to be as concerned about that.
Note that starting a business with your own capital is not a way to escape it either, since you still have to answer to customers.
I think you’re going a little far with absolutes on your reply but it’s not really relevant anyway.
Couldn't be more excited for this kind of short-sighted, AI-instead-of-people thinking to become shameful. How many people need to aim the gun at their foot and pull the trigger before they're not bragging as they do it?
I am sorry I have to burst your bubble, but that is not going to happen, ever.
This is the new normal now. We have to accept that reality and adjust accordingly. Maybe with new hiring filters or similar. Hopefully these filters won't be just a LLM hallucinating random things, but realistically speaking... Yeah, it's gonna be that, isn't it?
It's not going away. It's an elevator of low ability.
Those with low skills or ability can make great use of it to elevate their skills and abilities.
Those with high skills and abilities will not use it for much, but will notice that the skill gap between themselves and those with lower skills is getting smaller. The less skilled are unlikely to match them, but it will level the playing field very quickly and with little effort exerted.
I don't understand what you mean by this. Do you think LLMs are going away? Or is that you think AI-related technologies won't continue to improve?
What is driving the market to assign such a huge value to models is that they can sell themselves as the solution to any and every problem, even if they aren't.
Investors are already pulling back on the over-investment in AI because it’s so hard to realize an ROI. Satya Nadella was recently on the record saying he doesn’t think AI has proven a sustainable ROI yet, and that it needs to in order to sustain the current levels of investment. As a matter of cause and effect he is of course right. But no one can predict the timelines and outcomes with exactitude.
Tomorrow Nvidia could be seized by a regulator and frozen, decimating the GPU market. SoftBank could decide they’re going all in on some new thing instead. (Far-fetched examples, of course.)
Personally I don’t think AI/LLMs/et al are going away. But the way that it’s getting bolted into -everything- is likely to eventually subside.
It’s been interesting to see companies rebrand around AI even when they don’t actually produce or integrate any of those technologies. That’s driving part of the overvaluation of AI at large.
AI and the collection of technologies that get called AI aren’t new. They’re just suddenly in focus and all the rage.
We’ve seen this cycle before in technology and we’ll see it again with something else.
https://www.datacenterdynamics.com/en/news/wells-fargo-amazo...
If I had to, I guess I'd pay $100/month for ChatGPT.
But I also don't see how these things don't keep getting cheaper. What recent technology has gotten more expensive over time?
As the engineer you need to support all three, while being the voice of realism (internally). But don't get confused when they conflict -- they always conflict to some degree.
This is the argument for virtually every company going AI-first: it's just a way to scam VCs.
It seems crazy but it really works.
Where could it have gone?
Your product is not really the product. So the only thing that really matters is how your company is perceived to those who could potentially buy it and give your C-suite an exit.
I once was at a company during the "everyone is going to replace their laptops with tablets next year" era. The CEO pushed heavily for mobile and underinvested in the actual product.
A year later, all of our customers were still using their laptops. We were sold to a private equity firm, the CEO was pushed out, and (almost) everyone who worked on the failed mobile product was out of a job.
In your case, I'd try to decide if your CEO is just following a fad. If so, next year they will follow a different fad. Are you critical to your business? If so, you'll be critical to your business with or without AI. If not, figure out how to make yourself critical to your business without being vulnerable to this year's fad...
(...And perhaps ask ChatGPT for advice. If you can't beat it, join it.)
So many companies invested vast engineering resources in building out iOS/Android apps only to later find that their users only interact with their tool once every few weeks, which wasn't often enough for them to earn a place in that relatively tiny list of a few dozen apps that any one individual actually uses.
For many users with cheaper phones asking them to install your app is asking them to pick which of their photos they're going to delete to make space for it!
Seemed kinda obvious at the time, but that probably explains why the leadership was forced out.
If it's not an AI story for your business, you need exceptional metrics. Like growing from $8M to $24M in a year for a C round.
The CEO needs to tell the investors that the company is all-in on AI. At the same time, the CEO needs to tell the engineers to use AI judiciously to support real customer needs, but not go hog-wild with it where it doesn’t make sense. This sort of situation, where you need to tell one group one set of facts to get them to act but another group a different set of facts, is fairly common in business (and politics).
The way big corporations handle this is to have many layers of management and a very diffuse set of responsibilities so that nobody is outright “lying”, they’re just interpreting what they’ve been told in a way that makes sense relative to their job responsibilities. The way startups handle it is to lie and hope that by the time they get caught, their results speak for themselves.
"If the average CIO is committing 10 or 15% of their budget to AI, and you're not in that, you're getting shrunk."
This will crash in 1-2 years as CIOs realize most of this is useless, because at that point all software will be "AI Software" according to the marketing department, and we'll reset to CIOs making decisions on some other metric, such as a guy they met at a conference.
You have two outcomes from this: either you find a disruptive AI angle and move a sufficiently large part of your team to it, or you don't, but figure out a minimal effort that satisfies the "investor positioning" angle. The third option, doing nothing or aggressively pushing back against AI and the CEO's desire, would potentially yield no Series C or a down round, which is something that you, your CEO, and your customers would not like.
Most AI "deployments" I have seen are glorified knowledge search engines or query engines. There are some content/code assistants as well. That is also the least intrusive way to be "AI-first". There is too much noise and too little value, yet significant pressure to be an "AI company". Most startups are feeling this (including the one I work at).
Unfortunately, VCs are always blinded by the next cool thing to invest in. So I think of it as a marketing feature that you have to build, rather than any real roadmap thing.
> Great team, great technology, great PMF and good progress on revenue targets.
Good progress on revenue targets from _your_ perspective sounds like not great growth, but communicated softly.
> The argument seems to be that they've realized the only way to achieve the next round of funding is to be "AI-first". There is no product roadmap for what this looks like, or what features might be involved, or why we'd want to do it from a product point of view. Instead the reason is that this is the only way to attract a big series C round.
This is not a thing a company does when they have good enough growth. Adequate revenue growth when you need a Series C must be very good growth. Adequate growth when you're profitable is a very different thing.
> Instead of working on useful, in-demand product features, it feels like we're spending a lot of time looking at a distant future that we'll struggle to reach if we take our eye off of the ball. Is this normal?
Yes, very normal. If I'm you, I'm wondering if there's really product/market fit. Useful, in demand product features almost don't matter. Building more does not make more money happen.
But also, the AI stuff is disruptive. In a way that the internet was. I think what you should probably judge most strongly here is if there is ANYONE with influence that has a take on how that looks for your company. That take probably includes "how this makes a lot of what we do irrelevant".
Absent that, I'd just write the company off. If you like the job great. If you don't like the work enough to be happy while the company atrophies, I think you're going to have a rough time.
Turns out someone already made a version with AI as the desirable technology du jour.
https://www.reddit.com/r/ProgrammerHumor/comments/1dckq74/so...
If not, I'd say make a data-backed case for another course of action, or find another job.
Most CEOs I've worked with would be open to hearing a well-reasoned argument for another course of action.
Also presenting a good case can be a nice career move even if it's not adopted.
"Useful, in-demand product features" have to generate revenue greater than their cost. If the return is too low, or execs think there are better ways to grow the firm, then decisions like this happen all the time.
IMHO, I would like to think tough times are ahead of us; that, in my mind, neutralizes my 'tech optimism'.
2. It's more important that you describe a good story for your CEO to use, than actually implementing it.
3. Find/hire someone who can help you write the story. Just make sure that person has a weakness that ensures they can't completely replace you (see item 1).
4. Manage development speed, team expectations and priorities to the actual best of your abilities.
Sounds like the CEO is more interested in selling the company than building the product. IMHO that's a bad sign. It's also notable that you're aware of AI and use it to some extent. That means you have understanding of what it can and can't do. So why then is the CEO telling you how to do your job?
Sounds like a win-win.
You say there is PMF but this statement contradicts that. It seems that your customers are the investors and you need to sell to them in the next round.
That's really the only important question at this point.
What I can deduce is that the company (as PG would put it) probably is "default dead" if it needs so many investment rounds.
Since the first seed round, the company has been subject to investors' whim. Exit early, go very fast and see what sticks.
> The argument seems to be that they've realized the only way to achieve the next round of funding is to be "AI-first".
And what investors want is to jump on the wagon, regardless.
I don't despise AI; in fact, I love it. The idea of MCPs got me hooked hard on it; now I'm the solutions guy looking for a problem.
The issue here is: irresponsible AI adoption (or any emergent technology for that matter, e.g., blockchain, VR, etc) will break companies.
I'm all in for AI, but implement it where it adds the most value, then cascade to the next area where it adds value, until implementation hits diminishing returns on ROI.
> I'm not well-informed enough to know if this is the correct approach to scaling. Instead of working on useful, in-demand product features, it feels like we're spending a lot of time looking at a distant future that we'll struggle to reach if we take our eye off of the ball.
Your internal compass is pointing at the right north star...
> Is this normal? Are other organizations going through the same struggle? For the first time in five years I feel completely out of my depth.
But unfortunately, this is becoming more of the norm, we can't help ourselves. This is what market meltdowns and market corrections look like months before they happen.
Think about it like a research project. The company or its culture may not be prepared for that, but that's what it is.
The lack of a roadmap, etc.: welcome to the wall between technical lead and executive. They have a roadmap; you do not get to see it. I mean, you might survive, but they're probably going to reduce their dev headcount significantly and offshore most of it too.
You want to start with at least an AI Roadmap that outlines whatever you want to do, and sprinkle in some AI love.
You just need to have the conversation with the CEO, but in the right order.
Come to him/her with objective data, but without your conclusions or concerns: just your observations about where this is headed, and whether he/she sees at what rate this AI focus is eating resources. Ask the CEO what it means for the company. Let him/her do the talking.
From this point, the CEO has the data and you have the company policy. Now you can talk on equal ground on the matter with him or her. Good luck.
This may or may not be bad. If done right, the majority of the new features will be AI related while the core team will do non-AI enhancements and bug fixes. If they aren't worked to death, the main product will remain stable, just with shifting priority to AI related features over regular features. Maybe 60/40 AI features, shifting to 70/30 over time, etc. Eventually you'll run out of AI ideas or tech.
Having said that, might as well get on the train, AI is a desired skill now so like anything, it'll keep you relevant for a while.
Reminds me of not too long ago when lots of companies were tacking NFTs and metaverses onto totally-unrelated things.
If he/she fires you, just say "OK" and walk away. Let him or her burn.
As it stands, you’re stuck. If there’s no product roadmap showing how AI will help your product, being “AI-first” is meaningless bullshit.
But maybe if you reframe your question you’ll get a different answer. What does your company do? (You left out all the relevant information that might lead to a positive answer so you’re not even making room for it to be possible. If this is going to work out, you need to make room for the possibility, so start there. “What would it take for this to work? And what can I do to help”)
As a technical person you might be best suited to explore how AI could make your product better. A less technical person can dream things up but without knowing what is technically possible it might not be useful.
The absolutely essential next step in the conversation with your CEO is: how do you envision AI being useful for our product? You might need to assist in this ideation (if not you, then the most senior technical person at the company; even if you’re not the most senior, sometimes fresh eyes see something key).
Imagine you make a smart thermostat, like a product built around the recent post about the bloke who reverse-engineered his landlord’s thermostat to enable smart features and mobile control.
Is there room in that scheme for AI? Absolutely. If he rebuilt his project to be AI-first, he’d probably build it around voice control, or integration so it can predict the user’s needs. He might grant the user the ability to ask conversationally for added features (“I’ll be away for 3 weeks; let’s turn the heat down and save some money. Turn it back on when you get an alert that my phone is 2 subway stops away.”) Or he could build it around silently predicting what the user needs. But honestly, it might not be worth recreating that project to be AI-first. If it works and does the 5 things he wants, what more is there to do? Saying “this is not a place that benefits from AI” is perfectly reasonable.
One place I’ve found AI to be useful is interacting with the infinite variability of the real world. AI can look at data and transform it into the proper shape in a way that structured programming would need an endless pile of corner cases to match.
Another area is offering conversational interface with large data sources or complex systems.
Another way is in rapidly building one-off integrations. (Staying with the smart thermostat example: maybe there’s a new site the system needs an integration with; you might use Cursor to rapidly prototype it.)
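The first of those, taming messy real-world data, can be made concrete. Here is a minimal sketch of the structured-programming side of the contrast: parsing free-form dates means enumerating every format you have ever seen, and every new data source tends to add another corner case. The format list and sample inputs are hypothetical:

```python
from datetime import datetime

# The structured-programming approach: enumerate every format you've seen.
# Each new data source tends to add another entry to this list.
KNOWN_FORMATS = ["%Y-%m-%d", "%d/%m/%Y", "%B %d, %Y", "%d %b %Y"]

def parse_date_structured(text: str) -> str:
    """Normalize a free-form date string to ISO-8601, format by format."""
    for fmt in KNOWN_FORMATS:
        try:
            return datetime.strptime(text.strip(), fmt).date().isoformat()
        except ValueError:
            continue
    raise ValueError(f"unrecognized date format: {text!r}")

print(parse_date_structured("March 5, 2024"))  # 2024-03-05
print(parse_date_structured("05/03/2024"))     # 2024-03-05

# An LLM-based version would instead send the raw string with a prompt like
# "return this date as YYYY-MM-DD" and skip the format list entirely, at the
# cost of latency, money, and the occasional confident wrong answer.
```

The trade-off is exactly the one discussed above: the structured version is cheap, fast, and deterministic but brittle to novelty; the model version absorbs the variability but brings back the bullshitting problem for anything mission-critical.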
In the same way, there are places that are not a good fit for AI. Enumerate those too. Then run the good-for-AI and bad-for-AI lists against the features and technology you currently employ, and against your known or imagined customer needs.
AI is hugely powerful, but just shoehorning it in everywhere is stupid and wrong. You should only use it if there’s something you can do better by using it. If you can’t see what that is, you won’t ever be able to do it.
What the CEO is likely looking for are 'PR' points, often not a real strategy. If they can announce and pretend they're going all in on AI, that's what's needed.
From your side, having AI mentioned in everything you do will help the conversation. If your code's docs are improved with an AI IDE, you're going hard on AI. Ignore the time you spend on fixing AI's errors.
Doing things for 'funding' and doing things that get the work done are not always the same. One is a marketing/PR act; the other is a product development act.
If funding is a real concern, the CEO's approach might be valid, because without funding, you won't have a job, and there won't be a product. So split your time in helping the CEO to achieve what s/he wants in getting the right message out.
As you're saying, if the CEO has built a great team and great technology, we can't assume the CEO is completely ignorant of what's going on.
Your CTO/CIO (if any) will know more about what's realistically possible and what's not. If you have an 'AI team', then there should be a CTO/CIO, so why are you talking directly to the CEO about strategy?