What people who know nothing about creation/production think matters only in the short term; over a long enough time frame they will be proven wrong.
I've used LLMs via agents and chat for what I do, and I have zero confidence that they could be a productive part of a team without a very knowledgeable handler who knows exactly what they want and how to correct errant output. Meaning you'll still have to hire an engine programmer in order to get a game engine; then you can pretend that they'll use an LLM to get their work done. But given that the "you" in this scenario is completely out of the loop when it comes to production, you wouldn't be able to tell if they did all their work manually, except perhaps if you noticed that velocity went up, bug count went down, and estimates came with more confidence.
Additionally, it's being used as a pretext to fire lots of workers, as Amazon and others seem to have been doing. Some friends mentioned their companies using it as a way to offshore to cheaper locales while getting less bad press.
I'm not even sure they actually think that. It can be a placeholder for any other reason.
In Italian banking and insurance companies it's all about writing Gemini "gems" (essentially custom agents) and leveraging NotebookLM, occasionally Microsoft Copilot. Every innovation department out there is all about promoting and giving bonuses to employees who can show the biggest savings in time and efficiency through LLMs.
So far I'm not seeing much success, because the people pushing these initiatives are mostly clueless about what LLMs are good at. They are desperately looking to show that "anything" went from X hours of effort to X/2 or better, and this pressure more often than not alienates employees, not because they don't appreciate AI, but because at the moment it's mostly an _additional_ task on top of their existing work.
As an independent consultant, I'm tasked by all my clients to automate and automate, and to bring the tools as close as possible to stakeholders, effectively making myself redundant, at least on the software side (albeit I like to think not on the engineering and process side, which is why I've kept the same clients since 2022...).
It's like forcing someone who has never driven a car to figure out how to make it go faster
I don't know where the ceiling is. And how much of the improvement was due to better context engineering, and how much to better models. I would expect the context engineering to plateau very soon. Not sure about the models.
An even more dramatic change for the whole economy will come when non-IT, non-creative office clerks are replaced. This is mostly a matter of redesigning the interfaces around them. AI could probably already do most of the work, but getting the tasks to the AI, using its output, and communicating with third parties are still major challenges. Take someone processing insurance claims: the AI needs a way to receive the claim, to contact third parties (write emails to humans, communicate with other AI agents, maybe even call humans), and then to initiate the payout. It's already doable with today's technology, but it's still a lot of work.
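The point above is that the model call is the small part; the surrounding plumbing is the work. A minimal sketch of that shape, where every name (`Claim`, `model_assess`, `contact_third_party`, `initiate_payout`) is hypothetical and the model call is stubbed out:

```python
# Hypothetical claims-processing pipeline. The "AI" step is a stub; the
# point is how much glue (intake, third-party contact, payout) surrounds it.
from dataclasses import dataclass, field

@dataclass
class Claim:
    claim_id: str
    raw_text: str
    status: str = "received"
    log: list = field(default_factory=list)

def model_assess(claim: Claim) -> dict:
    """Stand-in for an LLM call that reads the claim and proposes a decision."""
    approved = "water damage" in claim.raw_text.lower()  # toy rule, not a model
    return {"approved": approved, "payout": 1200 if approved else 0}

def contact_third_party(claim: Claim, party: str) -> None:
    # Placeholder for the hard part: emails, agent-to-agent messages, calls.
    claim.log.append(f"contacted {party}")

def initiate_payout(claim: Claim, amount: int) -> None:
    # Placeholder for integration with an actual payment system.
    claim.log.append(f"payout {amount}")
    claim.status = "paid"

def process(claim: Claim) -> Claim:
    decision = model_assess(claim)                    # the "easy" AI step
    contact_third_party(claim, "repair contractor")   # the interface work
    if decision["approved"]:
        initiate_payout(claim, decision["payout"])
    else:
        claim.status = "rejected"
    return claim

claim = process(Claim("C-001", "Basement water damage after storm"))
print(claim.status, claim.log)
```

Three of the four functions here are interface glue, which matches the comment's claim: the redesign of inputs, outputs, and third-party communication is where the effort goes, not the assessment itself.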
I mean to say that a year ago there was talk on forums of "fear of AI replacing developers," but companies were not losing 20-30% of their value in a single day because of it.
Now, beyond nerds talking about it among themselves, the situation is having a real impact on the economic/financial world.
True, but junior developers used to provide a lot of value while doing this. Now their value, while they are still figuring it out, has gone down immensely. For a company, there is no value in letting a junior dev write code anymore. And for reviewing the AI output, you need someone more experienced.
Save your old machines that run old software. Use them to debug virtual machines that will let you continue. Or reduce the software overhang of your business as much as possible to minimize damage.
Source: https://arxiv.org/abs/2507.09089
> But I know that using AI in software engineering reduces productivity by almost 20%.
So why are these companies losing billions in a few months?!
Are the big hedge funds stupid or is a pre-print not considered reliable?
Something a bit irrational that I have noticed: the same people claim that AI is a bubble, but also fear job losses from AI, and also think that the billionaires are getting rich off it. How all of these things can happen together, I don't know.
It's just likely that people can't deal with uncertainty and fear change; they resort to opposing change with arguments from all directions, even if those arguments contradict each other.