> the number of tellers per branch fell by more than a third between 1988 and 2004, but the number of urban bank branches (also encouraged by a wave of bank deregulation allowing more branches) rose by more than 40 percent
So, ATMs did impact bank teller jobs by a significant amount. A third of them were made redundant. It's just that the decrease at individual bank branches was offset by the increase in the total number of branches, because of deregulation and a booming economy and whatever else.
A lot of AI predictions rest on the same premise: that AI will disrupt certain sectors of the economy, but the productivity gains will create new jobs, grow the size of the pie, and benefit us all.
But will it?
My prediction is no, because productivity gains must reach the lower classes before the economy sees a multiplier effect.
For example, ATMs did cause a drop in teller jobs, but fast access to money at any time does increase the velocity of money in the economy. It decreases savings rate and encourages spending among the class of people whose money imparts the highest multiplier.
AI does not. All the spending on AI goes to a very small minority, who have a high savings rate. Junior employees who would have productively joined the labor force at good wages must now compete to join it at lower wages, depressing their purchasing power and reducing the flow of money.
Look at the most common uses for AI: cutting out menial work such as customer service. There are no "productivity" gains for the economy here. Each person in the US hired to do that job would spend their entire paycheck. Now instead, that money goes to a mega-corp and the savings are passed on to execs. The price of the service provided is not dropping (yet), so no technology savings are occurring, either.
In my mind, the outcomes are:
* Lower quality services
* Higher savings rate
* K-shaped economy catering to the high earners
* Sticky prices
* Concentration of compute in AI companies
* Increased price of compute prevents new entrants from utilizing AI without paying rent-seekers, the AI companies
* The cycle repeats, reinforcing all the previous steps
We may reach a point where the only ones able to afford compute are AI companies and those that can pay AI companies. Where is the innovation then? It is a unique failure outcome I have yet to see anyone talk about, even though the supply and demand issues are present right now.
Baumol's cost disease hurts the lower classes by restricting their access to services like health care and education, and LLMs/agents make it possible to increase productivity in these areas in ways which were once unimaginable. The problem with services is that they're typically resistant to productivity growth, and that's finally changing.
If you can get high quality medical advice for effectively nothing, if you can get high quality individualized tutoring for free, that's a pretty big game changer for a lot of people. Prices on these services have been rising to the stratosphere over the past few decades because it's so difficult to increase the productivity of individual medical practitioners and educators. We're entering an era that could finally break this logjam.
You've expressed very clearly what LLMs would have to do in order to be economically transformative.
"If you can get high quality medical advice for effectively nothing, if you can get high quality individualized tutoring for free, that's a pretty big game changer for a lot of people. Prices on these services have been rising to the stratosphere over the past few decades because it's so difficult to increase the productivity of individual medical practitioners and educators. We're entering an era that could finally break this logjam."
It's not that process innovations are lacking; it's that product innovations are perceived as an indignity by most people. Why should one child get an LLM teacher or doctor while others get individualized attention from a skilled human being?
Is the value in the outcome of receiving medical advice and care, and becoming educated, or is the value just in the co-opting of another human being's attention?
If the value is in the outcome, the means to achieving that aren't of much consequence.
How many of us have a reminiscence that starts “looking back, the most life-changing part of my primary or secondary education was ________,” where the blank is a person, not a curriculum module? How many doctors operate, at least in part, on hunches—on totalities of perception-filtered-through-experience that they can’t fully put into words?
I’m reminded of the recent account of homebound elderly Japanese people relying on the Yakult delivery lady partly for tiny yoghurt drinks, but mainly for a glimmer of human contact [0]. Although I guess that cuts to your point: the value in that example really is just co-opting another human’s attention.
In most of these caring professions, some of the value is in the measurable outcome (bacterial infection? Antibiotic!), but different means really do create different collections of value that don’t fully overlap (fine, I’ll actually lay off the wine because the doctor put the fear of the lord in me).
I guess the optimistic case is, with the rote mechanical aspects automated away, maybe humans have more time to give each other the residual human element…
For me it was a website with tutorials on how to make Flash games. It literally launched my career and improved the quality of life for my whole family by an order of magnitude.
I am usually an AI naysayer, but I admit that current LLMs could have easily replicated that whole website.
If AI displaces human educators, yes, their supply shrinks -- but we can't assume which direction demand for them will go.
We've seen this pattern before: as recorded music became free, live performance got more expensive, and therefore much less accessible than it used to be.
What's likely to happen is that "worse" (read: AI) education will become much cheaper, while "better" (read: in-person) education that involves human connection-driven benefits will become much less accessible compared to what it is today.
Most people may consider it a win. It's certainly not a world I'm looking forward to.
Fields need a large base of participants to produce great ones. This is exactly why software has been so extraordinary over the past 30 years: an unusual concentration of gifted minds from across all of humankind committed themselves to it.
In my view, the Bach, Rachmaninoff, and Cole Porter equivalents of today probably aren't writing symphonies. They've decided to write code for a living. Which is why any Great American Songbook made today won't hold a candle to the one from the 1950s.
We're in the greatest era of symphonies IMO, it's just that they're hiding in surprising places: movies, TV shows, games, etc.
I would also point out that composing for a medium like a game or a movie places a great deal of constraints upon the composer, in terms of theme, cost of instrumentation, duration and most importantly: what is safe and palatable for an executive to approve of.
The future is going to suck.
You have just discovered the fully enshittified version of the business model ai companies hope to reach.
Absorbing information doesn't make you "educated". Learning how to employ knowledge with accountability and trust with beings in the real world is what's important, and a machine can't teach you how to do that.
> or is the value just in the co-opting of another human being's attention?
Why is it "co-opting" if it involves a mutually consenting exchange?
The value comes from applying an expert's wisdom and skill to the problem at hand.
You get neither from LLMs.
The world you're describing is one where the entire economic value of humanity is in reminding the AI to put out the food bowl and refill the water dish at the appropriate time.
It would be funny if the sleepwalkers weren't trying so hard to drag humanity along.
For education, if you know as much as the average Harvard grad, can you give yourself a Harvard degree that will be as readily accepted in a job application or raising funds for a new business?
But Baumol's argument, which you introduced to the conversation, is that outcome and process cannot actually be distinguished, even if a distinction in thought is possible among economic theorists.
How is that Baumol's argument? How is 'outcome' vs 'process' relevant to his argument at all?
'Cost disease' is just the foundational truth that the cost of output from industries with stagnant productivity will increase, because the workers in that industry can be more valuable in other industries, reducing the relative number of workers in the stagnant one.
If you want to make the output from a stagnant industry available to a broader spectrum of the population then you have to improve the productivity of that industry.
There is no way to separate this process from the product of the process.
You're not buying the sound of the music. You can just stream that. As far as that is the product, it has already been automated and scaled so millions of people can hear it at once, whenever they feel like it.
You're buying the sound AND the people sitting in their formal clothes manually drawing bows across violin strings, with painstaking accuracy developed through years of manual practice.
You couldn't make a robot do it, for example. You could maybe make a robot play a violin, but that again isn't what the product is.
The product is tied to an expectation of what it is that does not allow for it to be done more effectively.
By contrast manufacturing processes are not tied to this expectation. If I buy a loaf of bread, I don't care whether the wheat was manually harvested or harvested by a huge machine.
That's a weird way of describing it.
A machine telling me to exercise and eat right will be ignored, even if the advice is correct. A person I trust taking me aside, looking me in the eye and asking me the same would be taken far more seriously.
OTOH, if you don't need to be persuaded and just want information on how best to go about doing it, then I think it makes little difference where the information comes from as long as it's of reasonable quality.
There's also the deeper philosophical question of what is the meaning of life, and if there's inherent value in learning outside of what remunerative advantages you reap from it.
This is an area where confident but wrong information is extremely costly. It's like saying an LLM can give you high quality directions on how to tap into a high voltage transformer. Sure, but when it's wrong, it's very very wrong, with disastrous consequences. That's why professions like doctors and engineers are more regulated than others.
You already can get good-quality medical advice "for nothing", unless it requires e.g. a blood test. The question is how actionable such advice is going to be, and how even the quality is going to be.
Same for self-driving. Just hold each car liable like a normal driver, with the owning AI company bearing the liability. So after ~20 tickets and accidents in a week, and a few ambulances being blocked, the only option is to revoke the driver's license (which all the cars share, as they have the same brain).
This would make AI companies more cautious and only advertise capabilities they actually have and can verify. They would be held to the standard of a human. I think that's reasonable (why replace humans if the outcome is worse, and why reduce protections for individuals).
To make the analogy more clear: even if a telemedicine doc sees 10,000 patients a day all over the world, they would be held liable for any medical malpractice. Bad enough, and their license would be revoked, regardless of the fact that they see many patients all over the world. Same deal with AI / LLMs -- if ChatGPT is giving medical advice and it hurts someone, that's the same as a human doing so: it's malpractice, and lawsuits can happen.
If they are somehow licensed, well then that license can be revoked. We would revoke a human's license for a single offense in some cases, the same should occur with AI.
By selling those services at a cost of “free”, hyperscalers eliminate competition by forcing market entrants to compete against a unit price of 0. They have to have a secondary business to subsidize the losses from servicing the “free” users, which of course is usually targeted advertising to capitalize on the resources paid by users for access. Or simply selling to data brokers.
With the importance of training data and network effects, “free” services even further concentrate market power. Everyone talks about how AI is going to take away jobs, but no one wants to confront how badly the anticompetitive practices in big tech are hurting the economy. Less competition means less opportunity for everyone else, regardless of consumer benefit.
The only way it works if the “free” service for tutoring or healthcare is through government subsidies or an actual non-profit. Otherwise it’s just going to concentrate market power with the megacorps.
Look at all the deprecated Google products. What happens when Gemini-SaaS makes billions from licensing to other companies, and Gemini-Charity-for-the-poors starts losing money?
Sadly, the bigger the $$ in the tech pie, the more we have attracted robber barons, etc.
In aggregate, this is true, but there are many ways to game the system to one's advantage and get a true "free lunch." For example, people watching YouTube with an adblocker while logged out don't provide Google with any income or useful telemetry. Likewise, you can get practically unlimited GPT/Claude/etc. by using multiple accounts.
TINSTAAFL has two main implications. First, that nothing is free; someone has to pay for it. Second, that money is not the only thing you pay with; every choice has an opportunity cost. Gaming the system costs someone something.
I'll replace my doctor with AI immediately after the tech bros do
lol
This is cited so often. We tried it at a large scale with some of the best engineering talent but unfortunately the humans on the other side preferred speaking to and interacting with a human by a wide margin.
We are still trying with the latest AI models but humans are still doing better at serving other humans.
In one of our studies, we observed by a large margin that our customers would hang up immediately upon realizing they were interacting with an AI system.
I have heard this from others as well.
We contact support services to fix material problems. 'This booking is wrong.' 'I want a refund for that.' AI systems aren't empowered to solve these problems. At best they can provide information. If the answer is information - the user can likely already find it online themselves (often from a better AI model than they're going to find running your support line). If they're calling, they most often want something done.
So customer support needs to know how the system works and needs to understand what the data means, but also has to know when the system is factually incorrect. Customer support has to know when the other party is telling the truth.
As we argue on the orange site, companies are paying Sierra AI to integrate voice and text agents into their systems to look up account information and process refunds. Fallbacks to human agents are built in to these systems.
We all hate phone trees because they never have the capability to handle exceptions to the most basic functions. We shout "speak to an agent!" into the phone because their website and phone trees only handle the happy path.
By this logic, the invention of mechanized farm equipment, which displaced farm labor, didn't increase productivity.
Productivity gains in the case of mechanized labor got everyone out of subsistence farming and into factories.
AI gets everyone out of every job and into nothing.
Why is mechanized thinking going to do that? When mechanized labor didn't?
You're right. There is technically a category of work that relies on neither our ability to do physical labor nor excessive thinking. It just relies on being a human.
The conclusion is thus obvious: AI is going to push us all into careers as photo models, OF-creators, and social media influencers! /s
AI will bring about a de-sequestering of talent and resources from some sectors of the economy. It's very difficult to predict where these people and resources will go after that, and what effect that will have upon the world.
This person can no longer get a customer service job, but why can't they get another job? Customer service is hardly a career with a huge sunk cost in training or a non-fungible skill set.
If they go get another job, compared to the base case of economy = customer service, we now have economy = customer service (AI) + new job.
But it is not infinite; eventually, we reach a point where we no longer need additional ditch diggers.
The future is bleak. If this is the sort of dystopia I can look forward to, then I would rather have AI simply wipe out humanity as a whole.
The supply of jobs exceeds the supply of workers, so yes, you should be able to go and get another job.
> It decreases savings rate and encourages spending among the class of people whose money imparts the highest multiplier.
Huh, what? What kind of multiplier stuff are you talking about here?
The central bank looks at the overall spending in the economy (well, including forecasts), and compares that with its targets. They adjust their policy stance accordingly to try and hit their targets.
If people become more or less likely to spend their money ('multipliers') the central bank can and will adjust the amount of money available.
AI will allow higher production of goods and services. If producing goods and services becomes cheap enough (and it's looking like it will become dirt cheap), then it will not take much redistribution for it to reach the masses.
I think the true crisis will be one of purpose. That we live meaningless lives of leisurely abundance.
Why do you say this? How does AI help us lower the prices of goods?
It seems likely to me that we will reach a violent, bloody revolt before we possibly reach this point. That may be why no one is talking about this failure mode.
Nah. I think "good enough AI for 95% of people" will be able to run locally within 3-5 years on consumer-accessible devices. There will be concentration of the best compute in AI companies for training, but inference will always become cheaper over time. Decommissioned training chips will also become inference chips, adding even more compute capacity to inference.
This is like computing once again. In 1990 only the upper class could afford computers, as of 2000 only the upper class owned mobile phones, as of now more or less everyone and their kid has these things.
My family was on the border of upper-lower and lower-middle and we bought a computer once and used it for 10+ years. I dumpster dove later to scavenge parts for upgrading until the mid 2000s when cheap computers became available.
That depends very much on the computer.
https://christmas.musetechnical.com/ShowCatalogPage/1990-Sea...
Commodore 64C, 1990, $159.99
My parents were working class in the 80s and we got a used Tandy that plugged into the TV and ran BASIC.
That's not true of many "objectively" poor people in the world, who, even if they could buy the computer, might not have had access to electricity to run it.
What, like a yearly vacation? Maybe they stayed home for Christmas one year instead of flying to visit family
Our childhood vacations were single-day (so we didn't have to pay for a hotel) road trips to a nearby state to go to an amusement park, or multi-day trips (also within driving distance) where my dad had to go somewhere for work and the hotel was paid for by his employer. It was a huge huge deal for us when, in the late 90s, we drove down to Disney World (a 13-hour drive) for a several-day trip.
And we never traveled around Christmas; that was one of the most expensive times of the year to travel!
Not sure when or where you grew up, but most middle-class folks in the US in the 80s didn't have a lot of discretionary income, and flights were (inflation adjusted) quite a bit more expensive than they are today.
I'm not saying that middle class families flew all the time in the 80s, but they absolutely could afford to if they wanted to make it a priority
A cursory Google search seems to bear this out. Cheap flights in North America started in 1978 with some air travel deregulation.
We did have a computer, but it was really a one-time expense. At the time computers were improving quickly, so I scavenged parts from wealthy areas that threw last-gen hardware away but were better than what we had (and I was a kid with a lot of time on my hands). Giving a computer to a kid for Christmas in '83 is a very different value proposition than even a family vacation, because a vacation is something the whole family does.
Even in the 90s, we kept relying on cast-offs from my dad's employer, and when I was preparing to go to college in '99, my parents scrounged to buy me the parts for a computer to build and take to college. But even then, my dad bought the parts at a discount through a former co-worker's consulting company, and vetoed a couple of my more expensive component choices.
And now that I think about it, my first laptop in 2003 was my dad's old work laptop that had been decommissioned.
US median household income was $24k in 1985, and a C64 was $150.
More likely your parents decided to spend the money on something else, like a $400 19" TV.
A lot of people recognize this pattern even if they can't articulate it, and that's why they hate AI so much. To them, it doesn't matter if AI lives up to the hype or not. Either it does and we're staring down a future of 20%+ unemployment, or it doesn't and the economy crashes because we put all our eggs in this basket.
No matter what happens, the middle class is likely fucked, and anyone pushing AI as "the future" will be despised for it whether or not they're right.
Personally, I think the solution here might be to artificially constrain the supply of productivity. If AI makes the average middle-class worker twice as productive, then maybe we should cut the number of work hours expected from them in a given week.
The complete unwillingness of people in power to even acknowledge this problem is disheartening, and is highly reminiscent of the rampant corruption and wealth inequality of the Gilded Age.
Technological progress that hurts more people than it helps isn't progress, it's class warfare.
This suicide-pact of "either AI goes crazy and 100 people rule the world with 99% of the world's wealth" or "AI fails badly and everyone's standard of living drops 3 levels, except for the 100 people that rule the world with 99% of the world's wealth" is not what I signed up for. Nor is it in any way sustainable or wise.
Too much class distinction / wealth between lower/upper classes, and a surplus of unemployed lower-class men is how many revolts/revolutions/wars have started.
Consumer electronics are cheaper; this is the trend for substitutable goods.
Love me the right 20-30 year old car, but the dramatic cost rise around COVID times means the savings are only relative to new. A 3x increase in old car prices hasn't been matched by 3-fold wage increases for most.
And of course we’re discussing this in a larger conversation about automating away 1980s jobs.
We've never seen such a thing before, so I don't know how you can draw such sweeping conclusions about it.
I think this is right. The historical analogue I keep drifting toward is Enclosure. LLM tech is like Enclosure for knowledge work. A small class of capital-holding winners will benefit. Everyone else will mostly get more desperate and dependent on those few winners for the means of subsistence. Productivity may eventually rise, but almost nobody alive today will benefit from it, since either our livelihood will be decimated (knowledge workers, for now) or we will be forced into AI slop hell-world where our children are taught by right-wing robo-propagandists, we are surveilled to within an inch of our lives, and our doctor is replaced by an iPad (everyone who isn't fabulously wealthy). Maybe we can eke out a living being the meat arms of the World Mind, or maybe we'll be turned into hamburger by robotic concentration camp guards.
And the execs invest that money back into the economy.
...which didn't work so well during the Reagan administration, but I guess we're on course to try it again.
No one is “eviscerated.”
And it’s disingenuous to use that term for any proposal that has even the slightest public traction in the US. The most extreme proposals require single digit taxes on hyperwealth which might not have impact beyond stabilizing it and certainly wouldn’t make anyone not-wealthy.
No one is talking about eviscerating the wealthy. Yet. But if we pretend the only options are (a) unencumbered hyperwealth with attendant hyper income inequality and (b) eviscerating the wealthy for long enough, it’s more likely some people will eventually embrace the latter.
And this is particularly relevant for the age of LLMs. None of them approach intelligence without relying on a huge data commons (and likely even data that isn't intended for the commons); they're an enterprise with a natural arrow from the commons to the common wealth, if we can remember a culture that sustains it.
> No one is talking about eviscerating the wealthy.
See Bernie Sanders!
This is not so helpful if AI is boosting productivity while a sector is slowing down, because companies will cut in an overabundant market where deflationary pressure exists.
We are already at the point where training one LLM seems more cost-effective than training a million people to do the same thing over and over (after all, people's knowledge is lost when they get replaced).
LLMs don't even need to become AGI to continue this trend. They just need to be good enough 'executors' of the tasks we expected people to do.
Which also means that every new job that needs any form of training may never be created, because we will train ONE LLM (or three, it doesn't matter) to do it right, and again the new people are optimized away.
If I'm reading this correctly, the interpretation should be that a third of them were transferred to new branches.
0.66 (two thirds retention) * 1.4 (40% more branches) = 0.92, so we only expect ~8% were made redundant.
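A quick sanity check of that compounding, as a sketch in Python (using the thread's rough numbers, not real employment data):

```python
# Net change in total tellers when per-branch headcount falls
# but the number of branches grows. The inputs are the rough
# figures quoted in this thread, not real BLS data.

per_branch_retention = 0.66  # "fell by more than a third" -> ~two-thirds remain
branch_growth = 1.40         # "rose by more than 40 percent"

net = per_branch_retention * branch_growth
print(f"total tellers vs. before: {net:.1%}")   # 92.4%
print(f"implied net reduction: {1 - net:.1%}")  # 7.6%, i.e. ~8%
```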
However, the number of software companies being started is booming, which should result in a net-neutral or net-positive effect on software developer employment.
Today: 100 software companies employ 1,000 developers each[0]
Tomorrow: 10,000 software companies employ 10 developers each[1]
The net is the same.
[0] https://x.com/jack/status/2027129697092731343
[1] https://www.linkedin.com/news/story/entrepreneurial-spirit-s...
Right now, software is really expensive; so 1) economics tends to favor large pieces of software which solve many different kinds of problems, and 2) loads of things that should be automatable simply aren't being automated with software.
With the cost of software dropping, it makes more sense to have software targeted towards specific niches. Companies will do more in-house development, more things will be automated than were being automated before.
Of course nobody knows what will happen; but it's entirely possible that the demand for people capable of driving Claude Code to produce useful software will explode.
Plenty of businesses need very custom software but couldn't realistically build it before.
A recent example: Mitchell Hashimoto was pointing out that he wasn't "first to market" with his product(s); he was (at least) SEVENTH.
If this were seven government-funded teams solving the same problem, people would lose their minds over the 'waste'. But when private companies do it, we call it efficient market competition. The duplication is the same - we just frame it differently.
Edit: fixed some typos caused by fat fingers on a phone keyboard
>If this were seven government funded teams solving the same problem
The problem here is "government funded" - the trials are not rationalized by free-market economics. That is, a 5% better product in the end would not be worth seven competing developments initially.
This assumes that the duplicated effort arrives at a solution that is better than if it were done by a single team.
> >If this were seven government funded teams solving the same problem
> The problem here is "government funded" - the trials are not rationalized by free-market economics. That is, a 5% better product in the end would not be worth seven competing developments initially.
I think you're saying that 5% is worth it when the free market does it, but 5% gain isn't when the government does it?
I'm hoping you're not because that's impossible - the end result is precisely the same
It is not. Seven teams all working under one leadership is quite different to seven leaderships each working with one team.
When different governments (e.g. USA and USSR), and thus different leaderships, are both trying to solve the same problem (e.g. travel to the moon), that too is considered efficient competition.
If a government did this (e.g., seven independent agencies competing for a moon landing), people would call it "fragmented," "uncoordinated," and "bureaucratic infighting."
When complete organizational separation is introduced, the concerns you speak of go away. In the USA, the ARPA (you might recognize that name from the thing you're using right now) program regularly enables "seven" independent leaders to tackle a problem and this is widely considered a resounding success.
I'm sure the retort of the AI optimist will be that AI will make the things that person buys cheaper, and there may be truth to that when it comes to things that people buy with disposable income...
But how likely is AI to make actual essentials like housing and food cheaper?
I.e., if a top-tier dev makes $1m today, they'll make $5m in the future. If the average dev makes $100k today, they'll maybe make $60k.
AI likely enables the best of the best to be much more productive while your average dev will see more productivity but less overall.
Previously, software devs were just way too expensive for small businesses to employ. You couldn't do much with just one dev anyway, so there was no point in hiring one. Better to go with an agency or use off-the-shelf software that probably doesn't fill all your needs.
How silly of me to rely on reality when it’s so obvious that AI is benefiting us all.
Anyways, this is the start. Companies are adjusting. You hear a lot about layoffs, but not about unemployment. We're in a high-interest environment with disruptions left and right. Companies are trying to figure out what their strategy is going forward.
I don't expect to see a boom in software developer hiring. I think it'll just be flat or small growth.
We are in negative growth, and the current leadership class keeps talking about all the people they can get rid of.
Look at the Atlassian layoff notice yesterday, for example, where they lied to our faces by saying they were laying off people to invest more in AI, but that they totally aren't replacing people with AI.
Long-term, they will need none. I believe that software will be made obsolete by AI.
Why use AI to build software for automating specific tasks, when you can just have the AI automate those tasks directly?
Why have AI build a Microsoft Excel clone, when you can just wave your receipts at the AI and say "manage my expenses"?
Enjoy your "AI-boosted productivity" while it lasts.
I think this is a bit hyperbolic. Someone still needs to review and test the code, and if the code is for embedded systems I find it unlikely.
For SaaS platforms you’ll see a dramatic reduction, maybe like 80% but it’ll still have a handful of devs.
Factories didn't completely eliminate assembly line workers; you just need far fewer of them to make sure the cogs turn the way they should.
I feel like you didn't understand my comment. I am predicting that there is no code to review. You simply ask the AI to do stuff and it does it.
Today, for example, you can ask ChatGPT to play chess with you, and it will. You don't need a "chess program"; all the rules are built into the LLM.
Same goes for SaaS. You don't need HR software; you just need an LLM that remembers who is working for the company. Like what a "secretary" used to be.
I didn’t, and thanks for clarifying for me.
This doesn't pass the sniff test for me though - someone needs to train the models, which requires code. If AI can do everything for you, then what's the differentiator as a business? Everything can be in ChatGPT, but that's not the only business in existence. If something goes wrong, who is gonna debug it? Instead of API requests, you would debug prompt requests maybe.
We already hate talking to a robot for waiting on calls, automated support agents, etc. I don’t think a paying customer would accept that - they want a direct line to a person.
I can buy the argument that the backend will be entirely AI and you won’t need to be managing instances of servers and databases but the front end will absolutely need to be coded. That will need some software engineering - we might get a role that is a weird blend of product + design + coding but that transformation is already happening.
Honestly the biggest change I see is that the chat interface will be on equal footing with the browser. You might have some app that can connect to a bunch of chat interfaces that is good at something, and specializations are going to matter even more.
It was a bit of a word vomit so thanks for coming to my TED Talk.
What the customer wants only matters insofar as they are willing to pay for it. Sure, I'd rather talk to a person... But I'm not willing to pay 100x as much for a service that's only marginally better. Same reason I don't fly first class, as miserable as coach is.
Someone may want to pay for a boutique human lawyer/banker/coder/professor, maybe as a status symbol, the same way people pay $20k for an ugly handbag. But I think most people will take the cheaper and almost as good option, when the difference in quality is far overshadowed by the difference in price.
> someone needs to train the models, which requires code.
I'm not sure that training LLMs is a coding problem, but it doesn't much matter: LLMs can train each other.
> If AI can do everything for you, then what’s the differentiator as a business?
Good question. My gut says there isn't: all money flows to the model providers, everyone else is a serf at best parasiting on someone else's model.
How does TurboTax implement the latest tax changes? My guess is that before the decade is over, the answer is "an LLM does it."
Anyways, formulas are a lot better than one-shot LLM answers.
Same goes for chess, there will always be a chance that it makes an illegal move. Same goes for code, there will always be a chance that it produces the wrong code.
Maybe a new AI technology will be developed that doesn't have the innate non-determinism, but we don't have that now.
Speed, cost, security, job/task management
Next question
All of that will inevitably be solved.
50 years ago, using a personal computer was an extravagant luxury. Until it wasn't.
30 years ago, carrying a powerful computer in your pocket was unthinkable. Until it wasn't.
Right now, it's cheaper to run your accounting math on dedicated adder hardware. But LLMs will only get cheaper. When you can run massive LLMs locally on your phone, it's hard to justify not using them for everything.
If I can run 50,000 fixed tasks at $0.834/hr while OpenAI costs $37/hr, takes 40x as long, and can make TERRIBLE errors, why the fuck would I not move to the deterministic system?
Also, battery life of mobile devices.
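To make the cost comparison concrete, here's the back-of-envelope as a sketch (all figures are the hypothetical ones from the comment above, not benchmarks):

```python
# Effective cost of the same fixed workload on a deterministic
# pipeline vs. a hosted LLM. All figures are the hypothetical
# ones from the comment above, not measured benchmarks.

deterministic_rate = 0.834  # $/hr to run the fixed-task system
llm_rate = 37.0             # $/hr for the hosted LLM
slowdown = 40               # the LLM takes 40x as long per task

llm_cost_per_equiv_hour = llm_rate * slowdown  # $ per "deterministic hour" of work

print(f"deterministic: ${deterministic_rate:.3f} per equivalent hour")
print(f"LLM: ${llm_cost_per_equiv_hour:,.2f} per equivalent hour")     # $1,480.00
print(f"ratio: {llm_cost_per_equiv_hour / deterministic_rate:,.0f}x")  # ~1,775x
```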
But now, we not only have laptops, we run horribly inefficient GUIs in horribly inefficient VMs on them.
The dollar-per-compute trend goes ever downward.
Yes. That's precisely why my company runs dBase 7 on a fleet of old 286DX machines from Compaq. /s
Running obsolete software will be cheaper, but the value provided by the newer technology will make the difference insignificant.
Why do 50,000 tasks with an LLM when, for the same cost, I can do 64,467,235 of them without an LLM, using a program the LLM created, probably on far lower-cost hardware?
Because you'll be outcompeted by people who make the best of the nondeterministic system.
I used the Perspective tool in an image editor to get a rough idea of what the first graph would look like adjusted for population change.
Did it? This sounds like describing a company opening a new campus as laying off a third of their employees, partly offset by most of them still having the same job in the same company but at a new desk.
Net result: ATMs likely cost ~30-40% of bank teller jobs.
Population is really important to adjust for in employment statistics. Compare farmers in the USA in 2025 vs 1800, and yes the absolute number is up but the percentage is way down.
I can see AI making things more productive, but it requires humans to be very expert and do more work. That might mean fewer developers, but they are all more skilled. It will take a while for people to level up, so to speak. It's hard to predict, but I think there could be a rough transition period, because people haven't caught on that they can't rely on AI; they will either have to get a new career or, ironically, study harder.
My subjective assessment is that agents like Copilot got better because of better harnesses and fine tuning of models to use those harnesses. But they are not improving in the direction of labor substitution, but rather in the direction of significant, but not earth-shaking, complementarity. That complementarity is stronger for more experienced developers.
Of course, it could also be argued that some day we may decide that it's no longer necessary at all for code to be written for a human mind to understand. It's the optimistic scenario where you simply explain the misbehavior of the software and trust the AI to automatically fix everything, without breaking new stuff in the process. For some reason, I'm not that optimistic.
For as long as a human remains the customer.
Once humans become the proverbial horse supplanted by the automobile... I don't suppose glue really cares.
We have a massively distorted economy driven by debt financialization and legalised banking cartels. It leads to weird inversions. For example, as long as housing gets more expensive at a predictable rate, housing becomes more affordable instead of less, because banks are more able to lend money. The inverse is also true: if housing were to drop at a predictable rate, fewer people would be able to get a mortgage, so fewer people could afford to buy a house. Housing won't drop below the cost of materials and labor (ignoring people dumping housing to get rid of tax debts, as I would include such obligations in the cost of acquisition). Long term it's not sustainable, but long term is multi-generational.
Many low-cost areas have bad crime problems. There is another little phenomenon where the wealthy, by doing a poor job of governance, can increase the price of their assets by making alternative assets (lower-cost housing) less desirable due to the increase in crime.
Only if every person born needs to have a brand new house constructed for them.
Not if - you know - people die and don't need a house to live in anymore.
But considering how it's been the past 20 years, I'm starting to expect that a lot of the current elder generation will opt to have their houses burnt to the ground when they die. Or maybe the banker-owned politicians will make that decision for them with a new policy to burn all property at death to "combat injustice". Who knows what great ideas they have?
The only solution here is to stop tying people's value to their productivity. That made a lot of sense in the 1900s, but it makes a lot less sense when the primary faucet of productivity is automation. If you insist on tying a person's fundamental right to a decent and secure life to their productivity, and then take away their ability to be productive, you're left with a permanent and growing underclass of undesirables and an increasingly slim pantheon of demigods at the top.
We have written, like, an ocean of sci-fi about this very subject, and somehow we still fail to properly consider this as a likely outcome.
This is extremely hand-wavy.
Can you be more concrete in what you think this looks like?
The way I see it, we're only 5-10 years away from having general purpose robots and AI that can basically do anything. If the prices for that automation is low enough, there will be massive layoffs as workers are replaced.
There's no way to "naturally" solve the problem of skyrocketing unemployment without government involvement.
Disconnecting value from productivity sounds good if you don't examine any of the consequences.
Can you build a society from scratch using that principle? If you can't then why would it work on an already built society?
Like, if we're in an airplane flying, what you're saying is the equivalent of getting rid of the wings because they're blocking your view. We're so high in the sky we'd have a lot of altitude to work with, right?
In this society there is literally nothing for anyone else to do. Do you think they deserve to be cut out of sharing the value generated by The Engineer and the machine, leaving them to starve? Do you think starving people tend to obey rules or are desperate people likely to smash the evil machine and kill The Engineer if The Engineer cuts them off? Or do you think in a society where work hours mean nothing for an average person a different economic system is required?
To derive an alternate system you need alternate axioms. The axioms of our liberal society are moral equality and peaceful coexistence. Among such equals, no one person, group, or majority has the right to dictate to another. What axioms do you propose that would constrain The Engineer? How would you prevent enslaving him?
Eeeeeerrrr, wrong! This is garbage hypercapitalist/libertarian ideology.
Did you earn your public school education? Did you earn your use of the sidewalk or the public parks and playgrounds? Did you earn your library card? Did you earn your citizenship or right to vote? Did you earn the state benefits you get when you are born disabled? Did you earn your mother’s love?
No, these are what we call public services, unalienable rights, and/or unconditional humanity. We don’t revolve the entire world and our entire selves solely around profit because it’s not practical and it’s empty at its core.
Arguably we still do too much profit-based society stuff in the US where things like healthcare and higher education should be guaranteed entitlements that have no need to be earned. Many other countries see these aspects of society as non-negotiable communal benefits that all should enjoy.
In this hypothetical society with The Engineer, it’s likely that The Engineer would want or need to win over the minds of their society in some way to prevent their own demise and ensure they weren’t overthrown, enslaved, or even just thought of as an evil person.
Many of my examples above, like public libraries, came about because Gilded Age titans didn't want to die with the reputation of robber barons. Instead, they did something anti-profit and created institutions like libraries and museums to boost the reputation of their name.
It’s the same reason why your local university has family names on its buildings. The wealthiest people in society often want to leave a positive legacy where the alternative without philanthropy and, essentially, wealth redistribution, is that they are seen as horrible people or not remembered at all.
Go on then, how do you decide what people deserve? How do you negotiate with others who disagree with you?
> examples above like public libraries
I agree! The nice part about all these mechanisms is that they’re voluntary.
If you’re suggesting that The Engineer’s actions should be constrained entirely by his own conscience and social pressure, then we agree. No laws or compulsion required.
Everything else, all the 'isms' and ideologies, are abstractions.
These examples aren’t generally voluntary once implemented. I can’t get a refund from my public library or parks department if I decide not to use it.
The social pressure placed on The Engineer is the manifestation of law. That’s all law is: a set of agreed-upon social contracts, enforced by various means.
Obviously, many dictators and governments get away with badly mistreating their subjects, and that’s unfortunate, shouldn’t happen, and shouldn’t be praised as a good system.
I think you may be splitting hairs a little bit here and trying really hard to manufacture…something.
What if you are in the minority? Do you just accept the hypercapitalist dictates of the majority? Why not?
Law is more than convention. What distinguishes legitimate from illegitimate law?
The only way for people who disagree axiomatically to get along is to impose on each other minimally.
You figure out your own economic security, I’ll manage mine.
You are, in short, a tiny little microcosm of why humanity is doomed as a species.
We have a K-shaped economy. Top earners take the majority: the top 20% make up 63% of all spending, and the top 10% account for more than 49%, the highest on record. Businesses adapt to reality and target the best market, in this case the top 10 to 20%, and the rest just get ignored, like in many countries around the world.
All that unlocked money? In a K shaped economy it mostly goes to those at the top, who look to new places to park/invest it, raising housing prices, moving the squeeze of excess capital looking for gains to places like nursing homes and veterinary offices. That doesn't result in prices going down, but in them going up.
The benefit to the average American will be more capital in the top earners' hands looking for more ways to do VC-style squeezes in markets previously not as ruthless but worth moving into now, as there are fewer and fewer 'untapped' areas to squeeze (because the top 10-20% need more places to park more capital). The US now has more VC funds than McDonald's.
If goods aren't being sold, then the price will increase.
In many past cases where new technology eliminated jobs it was accompanied by new jobs related to the new technology that the people whose jobs were eliminated could do, or could reasonably learn to do, and with good enough pay to maintain their standard of living.
Lose your job working in a horse drawn wagon factory because companies are switching to motorized trucks for deliveries? Those trucks are way more complicated to build than wagons so there should be plenty of new jobs in the truck factories.
With AI it seems much less likely for that to generate new jobs for people replaced by AI in as direct a way as trucks did for wagon makers.
We've spent over 200 years doing the Luddite song and dance. To be clear, I have no problem with Luddites and do not view them negatively, but to imply that this productivity enhancer is magically special in a way no other one was needs some kind of incredibly solid explanation.
edit: as an aside, I do wonder how, if ever, we'll make the transition over to a world where people don't need to work. It seems like every time we think we might be getting closer, the first response is fear.
There's nothing magic about it. My point is that in the past it was often the case that building the machines that replaced jobs often created enough new jobs to greatly reduce the net job loss. The number of machines needed was proportional to the number of jobs the machines replaced so it scales.
When it is not new physical machines replacing jobs but rather software, often running on machines the employer already had, you won't get that kind of balancing job creation.
Maybe I’m wrong, and I certainly have no studies backing up my feelings, but not having to work seems like it would be a massive psychological disaster.
Having external reasons to get up in the morning (providing for your family, being a part of some organization, etc.) feels really important.
With ATMs, tellers wouldn't hand-count money for withdrawals and deposits as much. They'd be doing more interesting and challenging things.
The same thing will happen with AI automation -- the easy parts disappear, and you're left with undiluted 'hard parts' in your job. Some people might like the change, but we'll probably learn that you need a good mix of deep/hard problems and light/breezy problems to stay mentally engaged and prevent burnout.
I don't think the race to shove an LLM into everything is going to grow the pie.
But I also don't think it is impossible that a use case will present itself that will create further jobs.
The issue is that it's largely unpredictable.
It's a bit like we are sitting around in the 1950s trying to predict how computers will affect the economy.
It is going to take more than 1 successful deductive leap to get us from 1950s computing -> miniaturisation -> computer in every home -> internet communications.
Every deductive leap we take is extremely prone to being wrong.
We simply cannot lie back and imagine every productive relationship in the economy and then extrapolate every centaur and anti-centaur possible for it.
What we do know is that there's a bit of a gold rush to effectively brute-force every possible AI variant into every productive relationship in the economy. The fastest way to get the answer to your question is to do it. Possibly the only way to get the answer is to do it.
For instance, someone might imagine LLMs simply eating a whole bunch of service-industry jobs. At the same time, there's a mid state where they eat some, but the remaining staff are employed to monitor the LLMs to prevent them handing out free shit to smart shoppers. It's also easy enough to imagine that LLMs never quite get there and the risk of foul play is too large, so they just don't gain that kind of traction. It's also possible to imagine an end state where LLMs can get to 0% risk if they are constantly trained on data from humans doing the same job, and those humans are gainfully employed in parallel with the LLMs. It's possible that LLMs are great at business as usual, but the risk emerges when company policies change, and the cost of retraining LLMs makes it impractical for move-fast-and-break-things companies to do anything but hire humans. My favourite scenario is one where humans are largely AI-assisted, the models are trained on particular people, and there's a massive cybercrime industry built around exfiltrating LLM weights trained on high-functioning humans and deploying them, without the humans, to the third world to help it get 80% of the quality of first-world businesses, making them heavily competitive.
We dont know what we dont know.
So newer bank branches look like car dealership offices. There are many little glass rooms where you sit down with a bank employee and discuss loans and other financial products. That's where the money is made.
There's a small area in back with traditional tellers. It's not where the money is made.
No, because if you think about Star Trek, the endgame is replicators. Well, the concept that 100% of basic needs are met.
At some point work becomes unnecessary for a society to function.
The future is anyone's guess, but it is certain that 100% of your needs being able to be met theoretically is not equivalent to actually having 100% of your needs met.
Greed/Change Avoidance:
If someone invented replicators right now, even if they gave them completely away to the world, what would happen? I can't imagine the finance and military grind just coming to an end to make sure everyone has a working replicator and enough power to run it so nobody has to work anymore. Who gives up their slice of society to make that change, and who risks losing their social status? This is like OpenAI pretending "your investment should be considered a gift because money will have no value soon". That mask came off really quickly.
Status/Hate:
There are huge swaths of the US population that would detest the idea that people they see as "below" them don't have to work. I can imagine political movements doing well on the back of "don't let the lazy outgroup ruin society by having replicators".
Fuck the Poor:
We don't do the easy things to eliminate or reduce suffering now, even when it has real world positive effects. Malaria, tuberculosis, even boring old hunger are rampant and causing horrible, unnecessary suffering all over the world.
Dont tread on me:
I shudder when I think of the damage someone could do with a chip on their shoulder and a replicator.
The road to hell is paved with good intentions:
What happens when everyone can try their own version of bio engineering or climate engineering or building a nuclear power plant or anything else. Invasive species are a problem now and I worry already when companies like Google decide to just release bioengineered mosquitos and see what happens. I -really- worry when the average person decides a big complicated problem is actually really simple and they can just replicate their particular idea and see what happens. Whoops, ivermectin in the water supply didn't cure autism!
Someone give me some hope for a more positive version here because I bummed myself out.
Even replicators need feedstock - people who own the rocks or sand or whatever feeds them will start charging an arm and a leg. Sure, I could feed it dirt and rocks from my own property, but only for so long before I'm undermining the foundation of my own house. To say nothing of people who live in apartments.
And then, if everyone has equal $$, how do you decide who gets to live in the better locations / nicer housing?
People, when they mature, have an innate desire to work. It is good for body and mind. If you're curious about the world, you'll have to do some work one way or another to achieve your goals and satisfy your curiosity.
If "society" is just a function of basic needs, then there's plenty of places in the world to visit where people live like that and use any excess energy in endless fighting against each other instead of work.
If you go in with the attitude that work is hell and humiliation, that's what life is going to give you.
And right now, due to having to work, maintenance on my house is a bit behind. I would also prefer to catch up on that - but again, no one is paying me to do that.
Your misunderstanding is separating this in your mind.
That doesn't mean it has to be wage labor though.
But it is usually only people who enjoy work who manage to do something different with their life than wage labour.
More like something closer to 100%. The ATM was notable for enabling a complete change in mission. The historical job of teller largely disappeared, but a brand new job never done before was created in its wake. That is why there was little change in the number of people employed.
> because of deregulation and a booming economy and whatever else.
The deregulation largely happened in the 1970s, while you're talking about 1988 onward. The reality is that the ATM actually was the primary catalyst for the specific branch expansion you are talking about. As above, the ATM made the job of teller redundant, but it introduced a brand new job, a job that was most effective when the workers were closer to the customer, hence why workers were relocated.
I think it would be a mistake to look at this solely through the lens of history. Yes, the historical record is unbroken, but if you compare the broad characteristics of the new jobs created to the old jobs displaced by technology, they are the same every time: they required higher-level (a) cognitive (b) technical or (c) social skills.
That's it. There is no other dimension to upskill along.
And LLMs are good at all three, probably better than most people already by many metrics. (Yes even social; their infinite patience is the ultimate advantage. Prompt injection is an unsolved hurdle though, so some relief there.)
Plus AI is improving extremely rapidly. Which means it is probably advancing faster than most people can upskill.
An increasingly accepted premise is that AI can displace junior employees but will need senior employees to steer it. Consider the ratio of junior to senior employees, and how long it takes for the former to grow into the latter. That is the volume of displacement and timeframe we're looking at.
Never in history have we had a technology that was so versatile and rapidly advancing that it could displace a large portion of existing jobs, as well as many new jobs that would be created.
However, what few people are talking about is the disintermediating effect of AI on the power of capital. If individuals can now do the work of entire teams, companies don't need many of them. But by the same token(s) (heheh) individuals don't need money, and hence companies, to start something and keep it going either! I think that gives the bottom side of the K-shaped economy a fighting chance to equalize.
That's not quite my read - the original says per branch there was a 1/3 reduction, but your comment appears to say 1/3 total redundancy.
There was, according to the original, a 40% increase in number of branches, meaning a net increase in tellers (my math might be off though)
edit:
100 branches → 140 branches = +40%
100 tellers/branch → 67 tellers/branch = -33%
140 × 67 = 9,380 tellers
100 × 100 = 10,000 tellers
net difference: -620, or just over a 6% loss
There's an important point here that you're glossing over. The increase in the total number of branches doesn't have to be unrelated to the decrease in the number of tellers each branch requires to operate. The sharp drop in the cost of operating one branch directly means that you can have more branches. This means it isn't true that "a third of bank tellers were made redundant" - some of them were reallocated from existing branches to new ones.
Is it? Maybe, with survivorship bias, but what about all the laid-off tellers? Did their situation improve? Walmart grew a lot over this time period; maybe most of them had to downgrade and become cashiers for a generally bad employer.
Also, and this might be a different analysis and topic, but tellers in the 80s had a pretty good job. It often paid a decent wage with a pension and good benefits - maybe on par with a teacher or government employee. Granted, not the highest pay, but good, and it was considered a “profession”. Compare that to how it’s changed: a low hourly rate on par with, or only slightly above, retail and fast-food work, with heavy part-time status so as to avoid paying benefits.
I wouldn’t say that was a great outcome, and it’s likely what may happen elsewhere once the routine work is sufficiently devalued.
First: Most people believe it was Netflix that killed Blockbuster, but that's not strictly correct. It was the combination of Netflix and Redbox that really sealed the deal for Blockbuster (and video rental generally). It normally takes not one, but at least two things to fill the full functionality of an old paradigm. It's also human nature to focus heavily on one thing (Blockbuster was aware of Netflix) while losing sight of getting flanked by something else.
Second: Not listed here is how banks themselves have changed to be almost entirely online, which in many cases is more of an outsourcing play than a labor destruction play. My favorite example of this is Capital One, where the vast majority of their credit card operations literally cannot be handled in a branch. You must call them to, say, resolve a fraud dispute. Note that this still requires staffing and is not (yet) fully automated - just not branch staffing. It doesn't make sense to staff branches to do that.
If Blockbuster had kept pouring money into the new service, maybe it would have lost it all - I see no reason to think Blockbuster's movie rental franchise business would have had 'transferable skills' to allow it to succeed at streaming.
If it had been trying to pivot into a pizza delivery business (perhaps more transferable, in terms of locating franchises etc) would Icahn still have been 'killing' it?
My point is, maybe it was already dead and Icahn just prevented it from wasting a lot of money on the way down the drain.
Sorry what? Was this not the central theme of the article? (albeit with a title that used the word "iPhone" to be catchier)
Instead of chastising people with another guess, you could find the source. The founders of Blockbuster knew it would eventually fail. Short version: they knew that once people had watched the huge initial backlog, revenues would plummet. The plan was to build everywhere and capture that initial high income. Afterwards, well, whatever.
Built to Fail: The Inside Story of Blockbuster's Inevitable Bust
Is an app really that much easier to use?
Just like with a lot of things: sure, you could do a thing better, faster, more efficiently on a PC, but some people just don't care when 80% is good enough.
You can choose to not allow location tracking on those apps if that's your concern.
BTW newer mobile phones offer "desktop mode" (the Samsung Dex, and what came to AOSP), so you can attach them to a TV.
I log in to transfer money, to take a photo of a check to deposit it, to check my balance.
All of that is fine on a phone screen. Actually, it's a lot easier to take the check photo.
And a banking app is a whole lot more secure than a browser tab running extensions that might get hijacked, on a desktop OS whose architecture allows things like widespread disk access, keyloggers, etc.
It’s free, it’s transparent, you can read the profile… And it takes two minutes.
It doesn't matter what used to be; we're discussing what is now. We now have mobile devices that are much cheaper for people to obtain than a computer. For most, that device is more powerful than any computer they could afford. Arguing against the fact that a vast number of people's only computing device is their mobile is just arguing with a fence post. It serves no purpose.
Even now, the mobile deposit limit seems sufficiently low that I still go to the bank more often than I’d like. Luckily, the ATM at the bank has a check scanner now that doesn’t have a limit, so that’s usually easier and faster. It’s the daily $5,000 limit I hit the most; a single check can put me over it and require a trip to the bank. I think the monthly limit is $30,000, and that doesn’t get in my way often. I think $5,000 is too low for a daily limit. It’s common enough that I have to make a $5k+ settlement with friends or family, which usually has to be done by check. (For the curious: this is usually travel that I pay for and we settle up later.)
Less common, but sometimes I need to get a bank check (guaranteed funds) or a money order. Way less frequent is the need to get or give cash; usually I can use the ATM for this unless it’s a larger withdrawal or I need a particular denomination. Everything in this paragraph accounts for maybe 1-4 trips in any given year, though.
Paying bills is easier on the phone in the sense that bills in Denmark have a three-part number, e.g. +71 1234567890 1234678, where the first part is a type number, the second is the receiver, and the last is a customer number with the receiver. The phone lets you just use the camera to scan the number.
Transferring money is terrible on both platforms, because it's designed to be doable on the phone, meaning three or four screens, and it gives you no overview. There's plenty of space on a computer for a proper overview giving you a feeling of safety, but it's not used. Same for the account overview: designed for the phone, it doesn't adapt to the bigger screen and provide you with more detail, so you need to click every single expense to see what it is exactly.
1) Because of regulations, I need to use my phone to log in to internet banking and to confirm every transaction (including online card payments) anyway. If I already have to find my phone, I might as well use it all the way.
2) Invoices have QR codes on them nowadays, that you can scan with your phone and it will prefill all the account numbers, amounts, etc. That's easier than copy-pasting or rewriting it.
Now this is all actually terrible, because to live in a society you need a bank account, and to have a bank account you need a Google- or Apple-controlled smartphone. (There are some legacy banks that allow you to use SMS as the second factor, but that's less and less common.)
I actually switched to a credit union last year from Chase partly for this reason. Chase used to have m.chase.com, which was PERFECT for most of the banking I did while being extremely fast, even back in the 2G days. They Web 2.0'ed it in 2017 and deprecated m.chase.com in 2018 or so.
The provider that maintains my bank's online banking platform made it fast and lightweight, much like m.chase.com of yesteryear, while also adding more modern authentication security (2FA vs SMS).
I think Android and iOS are safer platforms than PCs and that's why banks want you to use your phone.
How? Across multiple browsers?
> I think Android and iOS are safer platforms than PCs and that's why banks want you to use your phone.
This statement fills me with revulsion and rage lol. The only real "safety" involved here is the removal of user agency. I have a lot more trust in a machine I can actually control, secure, and monitor than the black box walled-garden of phoneland.
- Push notifications within seconds of swiping your card
- Frictionless checking of your balance/budget/cards with bio auth
- Mobile check deposit (as others here have stated)
- Instantly locking/unlocking your cards
- Budgeting built-in
If, to you, "doing online banking" means "sitting down at my computer and scrolling through the PDF statements on Chase's website" (I don't blame you, I've been there), then yes, doing that on a desktop is much easier. I'd encourage you to take a look at how far banking apps have come recently.
Many countries have functioning giro systems. The U.S. is just an outlier.
What about manufacturer rebates?
The only time I really saw checks used was when I was a child ~30-35 years ago and my parents used them. I did once cash a check from an elderly relative, but that was very unusual and only happened once. I didn't even know it was still possible to do that, my reaction was more like if someone had handed me a stack of punch cards to run on my computer.
There hasn't been anything an average person used checks for in the last few decades in Germany. Except for a few elderly people, nobody uses checks, and there are no rebates via checks at all.
Receiving a check however is even rarer.
Granny can always give you cash or just send it directly to your account in the same way.
As it turned out, my bank rejected both because they were made out to [middle name] [surname] rather than [firstname] [surname]. Ironically the former is unique (probably) whereas they had another customer with the latter.
On a more serious note, the last time I saw a cheque in the UK was my grandfather balancing his cheque book in the mid 80s. It really has been that long since they were in general use in the UK, at least.
Just like with the prevalence of Apple/iPhones, the US banking system is a global outlier.
Things you can't do with my banking app that you can do with the website:
- Extract your transactions to excel/csv
- Use OpenBanking
- See all my accounts on screen at once
- Sharedealing
- International transfers
But people are right: banks trust the mobile app more, and rely on it as an MFA device, so even if you use the website you still need the app.
On the premium end of banking, where users generally aren't stressed about money, offering an app is more about catering to however the user prefers to interact.
Versus
Drive to the bank, wait in line, talk to someone who misunderstands me, fill out a deposit/withdrawal slip, and also if it’s not 9AM - 5PM I just can’t do this at all.
I use both. In the beginning I used to prefer the web version. I can use my large monitor to see more data and use a full keyboard and mouse. But I have started to use the mobile version more. For Wells Fargo at least, the mobile version is faster to log into because of face ID support. The website requires a lot more clicks and keystrokes. Also, the mobile app makes it easy and possible to deposit checks if and when I get them.
It's the Internet that killed bank tellers.
Generally yes the apps tend to be easier to use for most things, especially with a high-speed internet connection. Customers prefer them, banks build them since customers prefer them.
If you don't have a scanner, nearly all laptops have a webcam built in, and many people have one for their desktop as well.
On top of all that, there's no reason you can't use your smartphone camera to upload an image into a website through the mobile browser. I've done it many times for things. Just this morning I "scanned" a receipt into Ramp by taking a picture with my smartphone in the mobile browser.
You can't invade the user's privacy nearly as well in a browser as in an app (and that data is great for analytics/marketing), so there's a lot of incentive for the app creator to force a mobile app. But I think we should be honest that it's not for the user, it's for the company.
You're basically the only person in America doing this. Tens of millions of folks are just scanning it with the app on their phone and it's objectively a much better experience lol. The resolution of the photo taken on your smartphone is beyond good enough, there's no need to over-engineer something here.
> You can't invade the user's privacy nearly as well in a browser as in an app (and that data is great for analytics/marketing), so there's a lot of incentive for the app creator to force a mobile app. But I think we should be honest that it's not for the user, it's for the company.
I agree with your first sentence, but not your second one.
Banking applications can certainly get more/different data on you from using the app, but the job of the bank is to protect money and to know their customer. Privacy is secondary - outside, of course, of things like other people knowing your account balance, unauthorized access, &c. That's for the bank, because they don't want to lose your money, but it's also for you, because you don't want other people getting access to your money.
The quality of the check images is not as big of a deal as you might think. No one is actually inspecting these unless the amount of deposit is near a limit or the account is flagged for suspicious activity. You definitely do not want to throw away the physical copy until the bank confirms the deposit.
(I'm guessing you are because in the USA they spell it check, not cheque.)
I asked because the USA still seems to be stubbornly check-focused.
Everything else allowed either credit card or direct debit on top of allowing checks.
credit card:
- often extra fees or minimums for nontrivial expenses
- privacy, of course

direct debit:
- the payee gains the ability to debit any amount, and while a dispute plays out, you are stuck with the consequences
- limited ability to cease payments

check:
- fixed payment amount; violating this would be clear fraud, not attributable to the "mistakes" that can happen with DD
Landlords, IME, insist on a physical check for the first payment. I think they're performing some sort of blood ritual with it in the back of the office. After the sacrifice is complete, though, they'll switch to ACH.
The only other place I've ever had to use checks is for large purchases, where the amount exceeds that which cards are capable of. Even these would be pretty rare for most people, since there's a likelihood you would finance a large purchase with a loan instead.
I do find the money transfer options where I am in Europe much easier, though, and they do make checks and PayPal/Zelle/Venmo pretty obsolete too, IMO.
But in the US, there's probably a general expectation that you can send or receive checks at least now and then. There are often other options, but a check is probably the lowest-friction one; my bank can even send checks on my behalf if needed, albeit with some delay.
I have refused to install the bank app on my phone because I see no point in it, and only downsides in case I get mugged (bad experience in my teenage years).
The 1 check I get a year takes about a minute to deposit at the ATM on my way to work.
It seems like a natural evolution of the technology and adoption rates to me. There was rudimentary online banking in the 2000s, then we saw banks shift to fully online presences in the 2010s. Maybe it wasn’t “the iphone” but just the fact that by the 2010s, everybody had a device in their pocket.
Native apps can provide a bit more streamlined UX (e.g. Face ID), while also being able to provide more robust features (mobile deposit).
The downsides are arguably higher development costs / OS compatibility, and having to install a separate app.
Personally, I don't think this is about banking apps. I'm kinda surprised an article talking about ATMs and teller jobs barely mentions cash, checks & cards and doesn't mention paypal or venmo at all. I used ATMs less when it became less of a necessity to carry cash.
You don't use cash to buy things online. Even in person, outside of brick & mortars, PayPal/Venmo came into vogue at some point. Those are banking apps in their own way.
Also, here in the UK we don't really use Venmo or anything like that, so normally transferring cash to and from friends and family happens by bank transfer as well.
Also since you are already using 2FA, you are already on the phone so might as well do basic operations there.
I can also look at transactions in bed before going to sleep, so that is nice.
If I need to look at a support ticket or look at transactions more deeply, I still use the desktop approach.
- Remembering that you need to do banking, but waiting to do it until you're at home in front of your computer. This is impossible now, and if I don't follow the impulse the moment it occurs, the impulse will forever escape into the ether.
- Even the mere mention of needing to observe a URL is often far too scary. Typing one in, or using a browser bookmark, is, of course, impossible.
- Using a keyboard and mouse. It's just too onerous to use tools that are efficient and accurate. Modern users would much rather try to build a mental map of the curvature of their thumb, so that when they touch their touchscreen and obscure the button they're hitting, they can reference that 3D mental map to guess at what portion of the screen they've actually pressed. Getting this wrong 30% of the time does not detract from the allure of touch screens.
- Using a normal-sized screen that allows you to actually see a lot of data at once, or even use multiple tabs. Again, this is really unthinkable. Of course it would be completely unacceptable to have to wait to do your banking until you're in front of a computer. It's 2026, and I cannot be bothered to remember to do a task later. But, in needing to always follow every impulse immediately, it doesn't matter that my phone screen only displays a small amount of information at once, or that tabbed browsing is impossible in a banking app. Those inconveniences are acceptable, or even welcome!
So they decided to reduce the number of offices. The ATMs were very specifically placed in the same location where the closed offices were, often renting just a fraction of the former space (usually a small cubbyhole attached to an outer wall). From 140 branches over a really small area they went to a small fraction of that, and ATMs took up the slack. Many people even preferred dealing with the ATMs rather than with the tellers because the ATMs were (at least initially) open 24x7.
Bank offices have all but disappeared. I think there are still two regional centers here and that's it. All deposits and all withdrawals of cash - as long as we still have cash - are handled by the ATMs. The iPhone came decades later.
You can’t state with any certainty that the ATM’s increased efficiency had anything to do with the expansion of bank branches. That could have simply been due to the strong population and economic growth. It’s quite possible (and I’d assume it to be true) that if the ATM had never been invented, there would have been far more bank tellers in 2005 than there were.
You also can’t assume the iPhone had that much to do with it. With the exception of depositing checks, there was nothing I couldn’t do on my computer in 2005 that I could on my phone in 2025. And you could always deposit a check at an ATM. It wasn’t like in 2006 we were all like “well I can only check my bank balance on my laptop so I’m going to drive there instead.”
It seems quite likely that other trends caused all of this.
What exactly is your competing theory?
First, ATMs increased the demand for bank branches, which more than made up for the decrease in tellers per branch.
Second, mobile banking decreased the demand for physical branches.
They are the only way to get cash in denominations other than $20 in many areas; ATMs that can dispense other bills are quite rare. And if you want $100 in ones, you're going inside.
They are the first line of human-to-human contact with customers. They are able to sell new services or upsell existing services to customers, especially with the customer's data right in front of them. A new pleasant conversation plus "Oh by the way, did you know that you could get service ABC that would help you?" is something that an LLM or ATM can't do reliably.
There's a tremendous amount of opportunity available with well-trained tellers.
I started by trying to think about ways of running a vending machine company autonomously using a finite state machine + agents. It turns out most of "automating" a vending machine company doesn't need LLM agents at all, and simply buying machines with reliable telemetry + a database + automated inventory could get you much further than replacing every or even some components with an LLM. The LLM could replace the person on the phone texting the laborers who refill and service the machines, perhaps autonomously order refills (but hey so can a cronjob).
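To make that concrete, here's a minimal sketch of the non-LLM version under those assumptions - a scheduled job that reads machine telemetry out of a database and queues refills. The table, columns, and threshold are hypothetical, purely to illustrate how little of this needs an agent:

    import sqlite3

    REORDER_THRESHOLD = 5  # units left in a slot before we queue a refill

    def queue_refill(machine_id, slot, product):
        # Placeholder: text the service tech, or append to a work-order table.
        print(f"Refill {product} in machine {machine_id}, slot {slot}")

    def check_restocks(db_path="vending.db"):
        # Telemetry from the machines lands in a plain inventory table;
        # deciding when a slot runs low is a SQL query, not a prompt.
        conn = sqlite3.connect(db_path)
        rows = conn.execute(
            "SELECT machine_id, slot, product, units_left FROM inventory"
        )
        for machine_id, slot, product, units_left in rows:
            if units_left <= REORDER_THRESHOLD:
                queue_refill(machine_id, slot, product)
        conn.close()

    if __name__ == "__main__":
        check_restocks()  # run from cron, e.g. hourly

Scheduled from cron, that covers the routine "manager" loop; the LLM would only earn its keep on the messy human-communication edges, like the phone texting mentioned above.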
The troubling thought I had is that AI does not displace the technicians, or the vending machines. It replaces the manager. The human manager is the component that is unnecessary. The entire global economy can eventually reflect this reality, where most of the wealth is technically owned by humans but the majority of financial transactions and decision-making is done by machines (at a level not yet seen).
Macroeconomic metrics will go up along with wealth and standard of living, but for actual flesh and blood humans, much of this will be irrelevant.
But like, as a manager I do try to delegate the coordination role, yes. Unlike an IC, loosely speaking, the more ‘tasks’ I’m doing as a manager, the more I consider myself to be failing at the job.
This is really why AI will have a more profound impact on society: it is fundamentally changing the hierarchy of competence we have gotten so accustomed to.
What I noticed, however, is a noticeable decrease in service quality in bank branches while the online (desktop browser) options became better. Banks pushed customers out of their branches progressively. In the early 2010s, tellers couldn't do anything you couldn't do online by yourself. For services like dealing with large quantities of cash, or coins, they made it so you couldn't do more than what the ATMs allowed, limiting the amount of cash the branch had access to while increasing how much you could withdraw from ATMs.
They didn't get the idea to fire all their tellers when Steve Jobs announced the iPhone. It was a decision at least a decade in the making. It is just that people tend to resist change so it happens slowly, especially for big, serious business like banking. And I don't think it is a bad thing.
Humans would attend a gas station or fetch items in a store. Why? They're completely unneeded, I can do (and WANT to do) that myself.
I always feel sad about these people, trapped in an economic system that forces them into useless labour when they could spend their time learning actually useful skills.
> I always feel sad about these people, trapped in an economic system that forces them into useless labour when they could spend their time learning actually useful skills.
It's useful labor. Yes, you could do it yourself, but it gives them a job, which they can ultimately use to afford food and housing.
I mostly only feel bad for kids doing that sort of labor, as it means they aren't getting an education. But for an adult? It speaks to something a bit right about their economic situation that they can stay afloat by merely fetching items in a store.
I wish it were possible in the US for someone to make a living doing DoorDash or Instacart.
Because the presence of a human likely prevents shoplifting and / or vandalism. It must make economic sense for the gas station owner to employ a human, and I suppose this is the sense.
What actual useful skill do you think the gas station keeper could learn? Is their employment the thing that prevents them from learning these skills?
I mean, it's possible there are useful skills they could learn but there's not the interest or desire to learn those skills. It's completely possible that person is perfectly content doing that work.
1. You can fill your own car with gas, but some people can't, or prefer someone more knowledgeable to do it for them. Some people like the comfort of having someone bag their groceries for them, or have disabilities that necessitate it. Some people are old. Today you learned.
2. Your economic system is not different than theirs. Everybody NEEDS a job to support themselves, their families and to be functioning members of society. That means jobs that can easily be automated won't be automated. Also, you may make a lot more money than that kid bagging groceries to make a few bucks for himself, but at least what he does actually helps someone. What we here on Hacker News do is mostly build imaginary products that will be gone and forgotten quicker than you can say "Al Bundy".
3. Not only that, all of us here have basically written our own replacements and made ourselves obsolete. Something tells me your job isn't really needed either.
Tying this back to your first point, the revealed preference is that people would rather fill their own gas tank, rather than be forced to wait for someone to come and fill it for them.
Bagging groceries is different, however the revealed preference is that people would prefer the lower price/lower service supermarket, and those that need the help have to ask for it.
You are correct that everyone needs to earn a living. I think most people would prefer that others can earn that living doing a somewhat meaningful job, in a somewhat safe manner.
The reason that much of this isn't automated has nothing to do with ensuring that jobs exist, but rather that the cost of automation is higher than the cost of labour. This is what op is talking about.
Also I think it is preposterous to claim that these people are trapped.
Do you WANT to do that?
I've tried ringing up my own items at the corner store via the self-checkout. Whenever I buy lightweight items or items that lose weight during the day (fresh bread), the anti-fraud weighing system lights up. And I like my fresh bread.
So I've gone back to the one manned checkout. Judging by the lines I get sometimes, so have most other customers.
Helping someone fill their car with gas or sell them an item is useful as well, not everyone should be a software developer. Before feeling sad for other people, think about yourself as well.
we all need to do something
We've pretty much locked ourselves into an economic system that requires everyone to work, even though our productivity has skyrocketed many orders of magnitude. The end result is most people are doing meaningless work just because they have to in order to survive, and most jobs do not need to exist. This is true even in office work. It usually manifests as moving stuff from A to B and then maybe back to A. Basically, not creating, just moving. And not physically moving either.
If you look at the graph, the number of bank tellers from 1980 to 2010 went from roughly 500k to 550k (a 10% increase). However, the U.S. population grew from 220M to 305M in the same period (a ~40% increase). To me, that indicates that proportionally fewer and fewer people were becoming bank tellers after the invention of the ATM. Although, from the graph again, you can see that the correlation is quite poor anyway.
HTTP 402 "Payment Required" has been a reserved status code since 1997, unused for nearly 30 years. Now protocols like x402 and L402 are finally implementing it: a server returns a 402 with a payment instruction, the client pays (stablecoins or Lightning), and gets access. No signup, no API key, no billing relationship.
This isn't replacing Stripe any more than ATMs replaced tellers. Most API providers will keep using traditional billing. But there's a new category of consumer that can't use the old model at all: autonomous software agents. An AI agent can't fill out a signup form, pass KYC, or manage a credit card. Per-request micropayments over 402 let agents acquire API access without any human in the loop.
The parallel to the article is exact. ATMs automated a task within the existing branch paradigm. Mobile banking eliminated the need for the branch. Similarly, better developer portals automated API key management within the existing billing paradigm. Machine-to-machine micropayments eliminate the need for the billing account entirely.
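For flavor, here's a minimal client-side sketch of that loop; the header name and the payment-instruction fields are illustrative assumptions, not the exact x402/L402 wire format:

    import requests

    def fetch_with_payment(url, pay_fn):
        # First attempt, with no payment attached.
        resp = requests.get(url)
        if resp.status_code != 402:
            return resp

        # The 402 body carries machine-readable payment instructions.
        # (Field names here are hypothetical, not a real schema.)
        terms = resp.json()
        amount, pay_to = terms["amount"], terms["pay_to"]

        # pay_fn settles out of band (stablecoin transfer, Lightning
        # invoice, ...) and returns a proof-of-payment token.
        proof = pay_fn(amount, pay_to)

        # Retry with the proof attached; the server verifies and serves.
        return requests.get(url, headers={"X-Payment": proof})

There is no signup form anywhere in that loop, which is exactly why it suits an autonomous agent.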
Checks could be deposited in the deposit drop, or later at an ATM. My payroll went to direct deposit as soon as that was possible.
But to get cash, before ATMs, you went into the bank, unless you had check-cashing privileges somewhere else (supermarkets used to offer this). To deposit cash, you went into the bank so the teller could count it in front of you and agree on the amount. It was riskier to deposit cash in a deposit drop or ATM.
The move to cashless transactions for almost everything, and the resultant rare need to carry cash, is IMO the main reason why we don't need very many bank tellers anymore.
It's also easier to scan payments via an app than to go to the bank, something that is only possible with native(-like) apps.
Banking apps came later, long after banks had moved most interaction online.
Nowadays, I must visit a bank once or twice a year tops. My manager frequently sends me messages, but invariably he is trying to sell me something.
I've noticed that branches have really cut down on tellers and in my latest visit the branch didn't even have a teller, just someone helping people use the ATM and lots of desks (most were empty) for you to handle more complicated business with your account manager.
That technology doesn't exist yet.
But I constantly had issues with debit cards being rejected, wire transfers having to be done on a branch, etc. I doubt there is a modern bill payment system yet.
Whereas in Denmark, I've bought a house, gotten a mortgage, wired >100k, bought stonks, and none of it required me going to a branch.
I pay a manual bill maybe once or twice per year. I do it online or in an app, I hate the process. But automatic bill payment takes care of 99% of my bills!
Archived docs:
https://bitsavers.org/pdf/ibm/4700/
Article on the history of ATMs:
https://computer.rip/2026-02-27-ibm-atm.html
(ChatGPT was of no use figuring this out)
The behavior of companies has changed dramatically. Checks have almost vanished, you can often set up automatic payments, and you can get bank balance notification emails/messages. A large portion of banking interactions are fully automated.
Any time I needed anything advanced, I got shuffled to someone else.
Getting rid of them isn't a good thing.
Entry-level jobs are important.
Lies, damn lies...
AI is more iPhone than ATM IMO.
Why? Seems like basically the same paradigm to me, I can just do it without going anywhere.
I think the idea raised about "Automated Firms" is a bit off in the picture that linked article paints. I think David Oks's intention is to paint a picture of a fully automated company, but the linked article gives this impression:
> Future AI firms won’t be constrained by what's scarce or abundant in human skill distributions – they can optimize for whatever abilities are most valuable. Want Jeff Dean-level engineering talent? Cool: once you’ve got one, the marginal copy costs pennies. Need a thousand world-class researchers? Just spin them up. The limiting factor isn't finding or training rare talent – it's just compute.
In that above paragraph the author is saying to the reader that a human will be able to spin up and get these armies of intelligent workers, but at the end of the day their output is given to a human who presumably needs to take ownership of the result. Intelligent workers make bad choices or bad bets, but those AI machines cannot "own" an outcome. The responsibility must fall on a person.
To this end, I think the fully autonomous firm is kind of a fallacy. There needs to be someone who can be sued if anything goes wrong. You're not suing the AI.
This idea of an automated firm relies on the premise that AI will become more capable and reliable than people.
And yes, presumably there would be a person who set the firm up, or else our legal system would need to change quite fundamentally.
It’s strictly an attempt to shoehorn the new tech into an existing paradigm, just because right now the system prompt makes an “agent” behave differently than the one with a different prompt.
It’s unimaginative to say the least.
There is no clear link to the iPhone causing lower teller employment.
This article does have a glaring omission: the effects of the 2008 financial crisis on the banking industry in general. When there are fewer local banks, there are naturally fewer tellers employed. Bank failures peaked in 2010 in the aftershocks of the crisis, which lines up nicely with the article's timeline.
Since I refuse to implement their "security" "feature," I just walk into their office every time I need a simple balance inquiry/transfer. They probably hate that I have just enough money deposited to consider my inconveniencing them profitable.
Worth the $1.00 monthly "in-person banking fee"
That’s not a bank teller’s job, at least not in the U.S. You’re confusing that job with something else.
By the end that bank only dealt with mortgages, other loans, and saving accounts.
Online banking and the rise of card use were a huge reason for that. It is almost 20 years since I last went to a physical bank to withdraw or deposit money, or to pay a bill. Probably even longer for paying bills.
But the $15 bank has a call center that is dreamy - reliably connected to a competent, focused individual in under 3 seconds.
It doesn't matter how good the tech & automation is; I place an economic value on that ability to pick up the phone and talk to a human. LLMs are crushing it, but I'm not fuckin paying $15 for an LLM.
I mean, there is definitely a downturn in the labour force when a new tech is introduced, but it will definitely produce more jobs tho, as it has throughout human history. <3
That huge job loss also means no hiring. If you were a bank teller, you would seriously need to consider a job switch.
why do so many writers claim this as a matter of fact? are we losing (or did we never have) a shared definition of the word "think"? can an LLM, at this time, function with zero human input whatsoever?
edit to add: these are genuine questions, not meant to be rhetorical :)
it's hard for me to gauge a broader understanding of AI/LLMs since most of the conversations i experience around them are here, or in negative contexts with people i know. and i'll admit i'm one of those negative people, but my general aversion to AI mostly has to do with my own anxiety around my mental health and cognitive ability in a use-it-or-lose-it sense, along with a disdain for its use in traditionally-creative fields.
People have been saying, “the computer is thinking,” while webpages load or software runs for as long as I’ve been consciously aware. I agree there’s something new about describing AI as “literally a machine that can think,” but language has always had fuzzy borders.
though i'm not by any means an AI booster, my question wasn't really meant to be taken as a gotcha - more a general taking stock of where we're at in terms of broader understanding of these technologies outside of the professional AI/hobbyist world.
Pretty funny how this is being twisted into what feels like AI booster shillery. Smart people are talking about AI as being similar to ATMs (I prefer the analogy of a spelling and grammar checker in a word processor) or other marginal increasers of human productivity/efficiency. They absolutely will increase productivity. They mean fewer people can do more. But the roles don't go away completely, because the tools have clear technological limitations. They spout statistically likely text, they straight up lie, and you can't trust 'em. That's a limitation of what they are, just like an ATM needs to be in a big metal box and only dispenses cash.
AI can't do the automated firm linked to (to be fair, I didn't read that linked Substack, as it looked as ridiculous as that other sci-fi fanfic by Citroni Research or whatever it was). Not AI as it is now known, namely an LLM chatbot. /A completely different technology/ might - a technology that might be informed by AI, sure. Just like I'm sure mobile banking was informed by the technology in ATMs. But we're not calling smartphones with mobile banking apps "mobile ATMs"; if we were, you could get away with calling them that. And the future technology that could remove "labor shaped holes" (or however the author phrased it) could likewise be twisted into AI nomenclature, just like Machine Learning (ML) got twisted into AI nomenclature. But the iPhone probably didn't need the ATM to come first; it needed things the ATM uses. The next thing could very well use ML - but not enough to be called "AI", except by booster shills.
Overall, this sounds like the usual AI boosterism that Ed Zitron complains about often. And I agree with his critiques. This article says nothing about how a /new/ technology needs to come about from AI. If it did, it would also have to comment on whether we need to spend insane amounts on data centers and circular deals to get to it. Because my guess is the answer is, no, it takes R&D and a truthful "we don't know what it looks like yet and we can't promise you shareholders when it will come" to get to it.
Ironically, the author says the ATM story was used to produce two incorrect interpretations, and then provides what feels to me like another. Still interesting, if possibly irresponsible in how it frames AI as the iPhone - and not the ATM it still feels like. [EDIT: a word.]