"A computer cannot be held accountable. Therefore a computer must never make a business decision." —IBM document from 1970s
The slope between insignificant and significant actions is so long and shallow that it isn't going to impede machine decision-making unless some widely accepted red line is defined and institutionalized. Quickly.
If we can't agree that super-scaled predatory business models (unpermissioned or dark-pattern-"permissioned" surveillance, corporate sharing or selling of our information, algorithmic feed/ad manipulation based on such surveillance or other conflicts of interest, knowledge appropriation without permission or compensation, predatory financial practices, etc.) are unacceptable, and can't apply oversight with practical means for making violations reliably and deeply unprofitable on a risk-adjusted basis, or criminally prosecuted, then the decision making of machines isn't going to be impeded even when it is obviously causing great but not-yet-illegal harm.
After all, the umbrella problem is scalable harm with unchecked incentives. Ethics and accountability overall, not machines in particular.
Scaling of harm (even when the negative externalities of individual incidents seem small) has to be the red line, i.e., the definition of unethical behavior.
I think most of us in this community are aware that the big automated bureaucracies that pass for "customer service" at the tech-giant aggregators are already making life-changing decisions, too often capriciously, and often with little recourse for those unfairly harmed.
I have personally been afflicted by that problem.
We are going to need both effective brakes and a reverse gear to keep this from becoming an uncontrolled descent.
(Not being cynical. But if something is to be done, we need to address the actual scale and state of the problem. There isn't time left in human history for more slow, incremental whack-a-mole efforts or unrewarded attempts at corporate shaming. Those have failed us.)
In the hyper-scaled world, ethics mean nothing if not backed up by economics.
Key points from "The Human in the Loop":
- The author pushes back on the idea that AI has made software developers obsolete, arguing instead that it has shifted where human effort matters.
- AI is increasingly good at producing code quickly, but that doesn’t remove the need for human oversight—especially for correctness, security, edge cases, and architectural fit.
- The “human in the loop” is not a temporary bottleneck but the accountable party who must understand, review, and take responsibility for what ships.
- Senior engineers’ most valuable skill has always been judgment, not typing speed—and AI makes that judgment even more critical.
- The author warns against blaming AI for bugs or bad outcomes; responsibility still lies with the human who approved the result.
- Software practices, team structures, and workflows need to evolve to emphasize review, verification, and intent over raw code production (see the sketch after this list).
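To make the verification point concrete: below is a minimal sketch of what probing AI-generated code for edge cases might look like. Everything here is my own illustration, not from the article; `parseDuration` is a hypothetical generated helper, and the checks use Node's built-in assert.

```typescript
import assert from "node:assert";

// Hypothetical AI-generated helper: parses "1h30m"-style duration
// strings into seconds. Plausible-looking, fast to produce.
function parseDuration(input: string): number {
  const match = input.match(/^(?:(\d+)h)?(?:(\d+)m)?(?:(\d+)s)?$/);
  if (!match || match[0] === "") {
    throw new Error(`invalid duration: ${input}`);
  }
  const [, h = "0", m = "0", s = "0"] = match;
  return Number(h) * 3600 + Number(m) * 60 + Number(s);
}

// The reviewer's job: probe the edges the generator was never asked
// about, not just retype the happy path.
assert.strictEqual(parseDuration("1h30m"), 5400);
assert.strictEqual(parseDuration("90s"), 90);
assert.throws(() => parseDuration(""));    // empty input
assert.throws(() => parseDuration("1x"));  // unknown unit
assert.throws(() => parseDuration("-5m")); // negative values
```

The point isn't this particular function; it's that the human's real output is the edge-case list and the sign-off, not the code.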
Here, there are a handful of specific phrases or patterns, but mostly it's just that the writing feels very AI-written (or at least AI-edited). It's all just slightly too perfect, like someone's trying to write the perfect LinkedIn post but is slightly too good at it? It's purely gut feeling, but I don't think that means it's wrong (although equally it doesn't mean it's proven beyond reasonable doubt either, so I'm not going to start any witch hunts about it).
So that’s the baseline intellectual rigor we’re dealing with here.
I tend to agree, but I do think we'll get there in the next 5-10 years.
The structures of our culture, combined with what generative AI necessarily is, mean that expertise will fade generationally. I don't see a way around that, and I see almost no discussion of ameliorating the issue.
Self-directed, individual use of LLMs for generating code is not the way forward for industrial software production.
Academically, this is a non-factor as well. You still learned your multiplication tables even though calculators existed, right?
Socrates (in Plato's Phaedrus) blamed writing for intellectual laziness among the youth compared to the old methods of memorization.
When you look at technical people who grew up with the imperfect user interfaces/computers of the 80s, 90s, and 00s, before the rise of smartphones and tablets, you see people with a naturally acquired knack for troubleshooting and for organically building an understanding of computers, despite (in most cases) never being grounded in the low-level mathematical underpinnings of computer science.
IMO, the imperfections of modern AI are likely going to lead to a new generation of troubleshooters who will organically be forced to accumulate real understanding from a top-down perspective in much the same vein. It's just going to cost us all an absurd amount of electricity.
Smart and industrious people will focus energy on economically important problems. That has always been the case.
Everything will work out just fine.
And in my experience they don't really do that. They trust that it'll be good enough.
If you have to ask, then you'd be better off putting that effort into fixing the test coverage.
I try to review 100% of my dependencies. My criticism of the npm ecosystem is that people say "I didn't review it, someone else wrote it" and everyone thinks that is an acceptable excuse.
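FWIW, the first step is mechanical: get a concrete list of what you'd actually be vouching for. A minimal sketch below (it assumes an npm lockfileVersion 2 or 3 package-lock.json, and it's obviously no substitute for reading the code itself):

```typescript
import { readFileSync } from "node:fs";

// Enumerate every resolved package in an npm v2/v3 lockfile so there is
// a concrete, finite list to walk through before signing off.
interface LockEntry {
  version?: string;
  resolved?: string;
  integrity?: string;
}

const lock = JSON.parse(readFileSync("package-lock.json", "utf8"));
const packages: Record<string, LockEntry> = lock.packages ?? {};

for (const [path, entry] of Object.entries(packages)) {
  if (path === "") continue; // the root project entry, not a dependency
  // Strip the node_modules prefix to recover the package name.
  const name = path.replace(/^.*node_modules\//, "");
  console.log(`${name}@${entry.version ?? "?"}  ${entry.resolved ?? "(local)"}`);
}
```

Even just seeing the length of that list, transitive dependencies included, makes the "someone else reviewed it" assumption harder to hold onto.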