- before 2012 there were no smartphones
- before 2001 there was no Wikipedia
- before 1995, fewer than 10 percent of home users in rich countries had internet
- before 2023 there was no AI available to home users
Hardware has been getting faster by a factor of ~100 every 10 years, and ~10'000 every 20 years. AI currently develops even faster because of a combination of software and hardware improvements. Even if the best current system is right only 1 time in 100 right now, it will likely be accurate nearly always in 10 years.
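The compounding claim above is internally consistent, and a couple of lines check it (a sketch assuming smooth exponential growth, which real hardware progress only approximates):

```python
# Sanity check: 100x per decade compounds to 10,000x over two decades.
decade_factor = 100
annual_rate = decade_factor ** (1 / 10)  # implied per-year speedup factor

print(round(annual_rate, 3))   # ~1.585, i.e. ~58.5% faster every year
print(decade_factor ** 2)      # 20-year factor: 10000
```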
I also like to remind people that the phone I am writing this on (an iPhone 12) has about the same computing power as the Earth Simulator in 2003, which was the fastest computer on Earth back then.
Extrapolate this development and imagine what changes might come.
That's still almost three orders of magnitude away from the iPhone 12 (0.02 Linpack TFLOPS, 4 GB RAM, 256 GB storage).
edit: you are right, this source is wrong, but we are getting closer fast.
The A19 seems to get 2.3 TFLOPS (still only ~10%, but a whole floor of computers vs. a smartphone is still crazy!).
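For what it's worth, the ratios check out roughly as stated if we take the Earth Simulator's Linpack Rmax of about 35.86 TFLOPS (its TOP500 figure) against the numbers quoted in this thread:

```python
earth_simulator_tflops = 35.86   # Linpack Rmax, TOP500 #1 from 2002 to 2004
iphone12_tflops = 0.02           # figure quoted upthread
a19_tflops = 2.3                 # figure quoted upthread

# iPhone 12: ~0.06% of the Earth Simulator, i.e. ~3 orders of magnitude short
print(f"{iphone12_tflops / earth_simulator_tflops:.5f}")

# A19: ~6.4%, which rounds loosely to the "only 10%" above
print(f"{a19_tflops / earth_simulator_tflops:.1%}")
```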
It only makes sense to compare specific, well-calibrated benchmarks, such as Linpack, which is what I did.
Through a sufficiently narrow lens, any technological advancement can be perceived as a threat. If your job was to perform calculations for your company using a microscope and calculator ("computer" was once a job title), then the invention of the computer (the machine) was absolutely a threat to your job security. That's not to say that there aren't challenges to adapting, or considerations for how to do it well, but it has always been the case that the old way is a casualty of the new way.
I am neither anti-AI nor an AI evangelist but I think a more productive viewpoint is to think about how these advancements could open the door to new opportunity. For example, democratization of learning. It has never been easier for anyone in the world with an Internet connection and a computing device to have access to a personal math tutor or nutrition coach.
You, on the other hand, are not bred to be consumed. And in fact, the fatter you are, the more expensive and less useful you become.
So what you get is more likely starvation, if you aren’t culled to free resources.
Every morning the dog rejoiced and said to himself, "Oh joy, I'm such a lucky dog, I don't have to do anything, the food is plentiful, I just eat and shoot the breeze the whole day long, what an awesome life!" And the dog went on to have an awesome life of plentiful food, breeze-shooting and leisure. The end.
But now, you can hire 1 customer service person, who could then use AI agents to provide the top quality customer service. Previously, you needed to hire 5 people, which wasn't worth it.
So you went from no customer service employee to 1.
I suspect that this is what will happen. Many companies will hire their first customer service person, or more. Many big companies will lay off most of their customer service people. The net effect might actually be an increase in total customer service employment.
I suspect that job openings for customer service employees will actually be higher than now but companies won't be able to find enough AI-skilled people to fill the job. We're going to read about how there are more job openings than ever but companies can't find the AI skillset they need. This is why I think people who adopt AI now, learn it, understand it, get good at it, will be in high demand.
Disclaimer: I'm an AI compute investor.
LLM performance is already plateauing; models will mostly get more efficient. Good-enough models will be deployed on chips, the same way H.264 is merely a good-enough video codec yet is used ubiquitously.
Edit: maybe the model efficiency you mentioned is the key, we'll see.
My assumption is that OpenAI, Anthropic, etc. will go bankrupt and eventually be subsumed into Microsoft/Google/ByteDance & friends. New entrants will take their pioneering work and sell inference for pennies on the dollar without investing in massive R&D spend.
People thought AI being better than a human at reading medical images would put radiologists out of a job. But instead, radiologists had more demand than ever, because AI made getting a scan more affordable and more accurate, which led to more customer demand.
Same can happen for customer service. AI makes customer service cheaper, better, faster. More companies offer good customer service in order to stay competitive. More customers demand customer service because it's better now and they expect it since all companies big or small can afford quality customer service.
If you remove the human from the loop in customer service, you won't gain a thing.
I do all the sales and customer service myself, because it's a genuine selling point for my customers that they can talk to the owner if they have issues, and because these customers are the lifeblood of my company, and I want to stay as close to them and their needs as I can.
But it's still time-consuming.
On the customer service side, my next crack at automation will be having an agent triage inbound requests, queue up the actions that need to be taken in response (cancel account, upgrade, split team, whatever), and then hand me the whole thing for approval before replying to the customer. That alone should easily cut my time spent on CS by 80%, while keeping a more personal touch. I should also note that some of the customer support burden will be lifted by having more self-serve options, better docs, etc. But given that my customers are non-technical, there will always be some who just want to dash off a text or email because they hate tech and don't want to hassle with it.
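The triage-then-approve flow described above could be sketched roughly like this. Everything here is hypothetical: `classify_request` is a trivial keyword stub standing in for an actual LLM call, and the action names are made up for illustration.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    request: str
    action: str        # e.g. "cancel_account", "upgrade", "escalate"
    approved: bool = False

def classify_request(text: str) -> str:
    """Keyword stub in place of an LLM: map a request to a proposed action."""
    text = text.lower()
    if "cancel" in text:
        return "cancel_account"
    if "upgrade" in text:
        return "upgrade"
    return "escalate"  # anything unclear goes straight to the human

def triage(inbox: list[str]) -> list[ProposedAction]:
    """Queue one proposed action per request; nothing runs until approved."""
    return [ProposedAction(r, classify_request(r)) for r in inbox]

queue = triage(["Please cancel my account", "Can we upgrade to the team plan?"])
for item in queue:
    item.approved = True   # the owner reviews and approves each item here
print([i.action for i in queue])   # ['cancel_account', 'upgrade']
```

The key design point is that the agent only ever *proposes* actions; the human approval step stays in the loop before anything executes.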
On the sales side, I've thus far been 100% sales driven, but I'd like to introduce a self-serve signup flow that targets the 80% of customers who have simpler needs and could probably sign up on their own, and save the sales calls for bigger or more complicated deals.
* Customer wants the human touch
* The company's systems were broken, and the customer wouldn't have called at all if they could quickly and easily do what they wanted online.
* Customers are routinely furious and want to complain and/or understand and the company wants to brush them off.
AI doesn't help with the first two; it only helps with the last one, which they call "deflection".
I’m actually ROFL. Are you brain damaged? Or have you simply been in a coma over the past decade while businesses have outsourced their support to automation?
Hint: the result has nearly universally been closer to bottom-quality support, if it even exists.
Been in this space over a decade and this time really is different. It’s hard for humans to perceive the exponential, it will be slow then sudden.
Bear in mind this is a B2B enterprise company with a mix of legacy and greenfield. And management has invested heavily into designing a robust spec/context-based workflow for using agents. Might be different elsewhere.
Personally I don't think scrum, planning, retros etc. were better than kanban even before AI, at least if you have switched-on, motivated and smart people on your team. They actually made things less agile, and story points give a false sense of predictability. Imo the crucial factor may be that AI agents are smart and switched-on (with the right context).
True, but also there are perception biases that lead us to believe progress is exponential, even though it might as well be an S-curve.
I'm having a hard time finding the right terms, but I'm sure there is some bias to think that "the line goes up".
(Let’s not talk about my blockchain startup and my VR startup and my NFT startup). My house is nice though.
What exactly will these agents be able to do with enough consistency, accuracy, and reliability that people will want to hire them over humans?
In my experience with even the most basic implementation of agents, i.e. customer service chat bots, I literally cannot stand interacting with them even once. They are extremely unhelpful and I will hang up or immediately ask to speak to a human.
I had the same opinion until a few months ago; now I would prefer the [redacted company so as to not give free marketing] AI agent. You’ll start seeing this wave in around 3-6 months, as most are still in trials.
That's sort of the whole point of talking to customer service though. Getting something done that you want that involves them having to do work for you. AKA you taking value from the company.
So yeah they're basically always going to be useless garbage if put together according to business requirements.
Other services should just be automated already.
I don't want Codex dammit! I'm a Claude Code man.
The anecdote in there is about complex B2B enterprise software. That's not the majority of customer support, and is very heavy on escalating to actual experts.
You don't have to remove 100% of the jobs to have huge effects. Automating large parts of a few sectors would already create significant disruptions.
I think this mentality must have its own imminent apocalypse. Gifted with an enormous increase in potential productivity, the decision is to do the same but cheaper? Who allocates capital to such spiritless commodification? It all feels like using a printing press to make one Bible a month.
There must be a role that can be more productive. It might not necessarily be our skillsets that fit those roles - and the roles might be more stratified - but someone is going to be able to do more, and be paid more.
"Triaging by LLM before sending a task to any human" can work for almost anything, not just support calls. On another story I saw someone mention that they'd like something like an ad-blocker, but for content - a "content-blocker". It wouldn't be too hard to run even a local model that, via a browser extension, scans the current page and places it into one of several bins: read verbatim, summarise with ChatAI, ignore completely, read and mark for re-reading.
Software dev? Bin a ticket into "complex", "simple", "talk to lead dev".
Software proposal? Bin the proposal into "CotS available", "FOSS available", "Quick dev", "Too costly to proceed".
Bookkeeping? Accounting? They all have tasks that can be binned.
What does this all mean, I hear you ask? Well, you no longer need as many employees if some of the bins are "ChatAI and/or agent can complete this" with human review.
So, yeah, a lot of people are going to be out of work if this works like they say it does.
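A minimal sketch of the binning idea above, with a trivial keyword scorer standing in for the local model; the bin names and keyword rules are invented for illustration:

```python
# Map page text to one of several handling bins. In a real content-blocker
# a local LLM would do the classification; keywords are a stand-in here.
BINS = {
    "read_verbatim": ["contract", "invoice", "security advisory"],
    "summarise": ["newsletter", "changelog", "meeting notes"],
    "ignore": ["sponsored", "advertisement"],
}

def bin_page(text: str) -> str:
    text = text.lower()
    for bin_name, keywords in BINS.items():
        if any(k in text for k in keywords):
            return bin_name
    return "human_review"  # default: don't auto-handle what we can't place

print(bin_page("Sponsored post: 10 gadgets you need"))  # ignore
print(bin_page("Q3 security advisory for libfoo"))      # read_verbatim
```

The same shape applies to the ticket and proposal examples: the bins change, but the default bin should always route to a human rather than guess.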
If a dev produces value for the company, and then the company can automate away the least valuable part of the dev's job, the dev is now more valuable. Why would the company get rid of them just at that moment?
Well, some will, because some companies are badly-run. Others will take advantage of the opportunity.
You're assuming unbounded demand for whatever product the company is producing. If demand for their product is bounded, having 1 dev produce the output of 5 devs means that the company is going to have devs simply sitting around doing nothing for most of the day.
> If a dev produces value for the company, and then the company can automate away the least valuable part of the dev's job, the dev is now more valuable.
I don't follow this argument - there is a practical limit to how much development a company requires. In the past they may have had a team of 10 to satisfy that limit. If the limit is satisfied by a team of 2 the company... does what exactly?
After all, a limit is a limit.
Where are these businesses that only ever want to sell the same amount of the same stuff forever?
So has every company I've ever been in, but at the same time, their problem was never production; it was always sales.
No company I have ever been in had the problem of "demand is so large that even if we double output we still cannot satisfy it".
Both things are true at the same time - companies want to produce more, but their rate of production is not the limiting factor, the rate of sales is.
> Where are these businesses that only ever want to sell the same amount of the same stuff forever?
Where did I make that claim? What companies want is to sell more stuff, but production is not what is preventing them from selling more stuff.
Doubling production in a company does not lead to doubling sales - an increase in one does not automatically cause an increase in the other.
* I couldn't sell our product because our competitor's has a certain feature. How soon can we have that feature?
* I can't make any new sales, but prospective customers keep telling me they need a solution for a similar problem. Could we expand our product line?
* Some customers could be using a certain feature of our product, but they find it too confusing. What could we do about this?
* A big customer told me they have a problem our current product doesn't solve, so I told them we would be able to solve it by the beginning of next month.
As you say, the sales department is the driver of development work, not vice versa.
1. When a manager at some client says "How much will it cost us for you to add $FOO to the product?", I don't even bother updating the sales forecast with the quote I send them.
2. When they say "How soon can we have this?", that's when I actually update the sales forecasts.
So if your sales guys are saying "Look, the customer said they'd go with us if only we had $FOO", they're failing the Mom Test[1] - the person they spoke to just didn't want to be too negative, didn't know how to say "No" to a charming and likeable person[2], etc.
Sales is a function of the demand in the market. When the demand is (for example) 200 units/m of something, doubling your output does not let you sell 400 units/m.
Also, it sounds like your argument is for software products only, which is a tiny part of the economy. I was really talking about companies that sell non-software products/services - their sales are not limited by software development, they're limited by their market reach.
Even if those companies doubled their developer headcount, it'll have pretty much zero impact on revenue.
I mean, look, I can see you're arguing in good faith here, so I'm trying to do the same, but IME productivity simply doesn't have any effect on revenue, all it can do is lower costs.
----------------
[1] This is such a short and valuable read, that I recommend it to everyone I meet who is trying to do sales.
[2] If you're not charming or likeable, then you shouldn't be in sales in the first place.
These days it's hard to get people to read an email longer than 5 lines - yet people are super excited about the abundant masses of text generated by LLMs. It does not compute...
Get prepared. Something is coming *soon*
And note how any even slightly skeptical comment gets downvoted to hell. One may start thinking there are bots promoting the narrative.
Or maybe you're choosing to perceive bots when actually a lot of people disagree with you?
On the other side, the doom posters tend to be awfully mediocre professionals (or, again, conmen leveraging FOMO). Skeptics like the one in the article tend to be dismissed. I'm also a skeptic, and someone you would probably define as a 10x, I think, except a few years ago I would have just been, you know, good at my job?
Please let me know when I'm going to be automated so I can start becoming good at something else. The future may not be bright for a number of reasons, but I still have not submitted to doom.
For things where the end customer doesn’t care if they’re interacting with an AI, reading content by an AI, etc. – or if the company doesn’t care what the customer thinks (see: automated phone customer support lines for the last twenty years) – the work will be replaced by AI work. Examples are any kind of rote documentation, generic digital asset creation like blog images, low level customer support, and most things where the company doesn’t really care about the customer, because the company is getting paid regardless.
If it does matter what the end customer thinks, the role will become increasingly humanistic in nature. Examples are high-end enterprise sales, personality and expertise-driven media and content, and anything where being “revealed” as an AI is perceived negatively.
In fact, I go and implement dumb AI models in many companies, and executives immediately ask "how many people can we fire with this advancement?"
So while there may be lots of consultants losing their jobs, that’s not because AI tools do the work better. It’s because management thinks investors will accept the story that AI tools will do the work better and save money. Management, and investors, don’t know, can’t judge, and honestly don’t actually care if it’s better or worse. And they run things so poorly it would be impossible to tell anyway.
AI will enable significantly faster economic growth, which is something the EU has been making impossible with legislation designed to destroy Europe's economic advantage.
(actually, MEGA would be a great acronym, but Trump's friends in the EU are more focused on dismantling it rather than making it great)
That's something lawmakers need to address ASAP. There needs to be a right to speak to a human, or (the perhaps overly tech-optimistic route) a prohibition on AI support agents that lack adequate decision-making power.