If we're nearing the top of a sigmoid curve and are given 10-ish years at least to adapt, we probably can. Advancements in applying AI will continue, but we'll also develop a clearer understanding of what current AI can't do.
If we're still at the bottom of the curve and it doesn't slow down, then we're looking at the singularity. I would remind people that in its original, and generally better, formulation, the singularity is simply the observation that there comes a point beyond which you can't predict at all. ("Rapture of the Nerds" is one very particular possible instance of the unpredictable future; it is not the concept of the "singularity" itself.) Who knows what will happen.
However, if we throw enough money and smart people at the problems and get enough value from the early sigmoid curves, the effective impact of a large number of stacked sigmoids could theoretically average out to a linear impact. But if the sigmoids stay of similar magnitude (on average) and appear at a higher velocity over time, you end up with an exponential made up of sigmoids.*
* To be fair, it has been so long since I have done math that this may be completely incorrect mathematically - I'm not sure how to model it. However I think in practice more and more sigmoids coming faster and faster with a similar median amplitude is gonna feel very fast to humans very soon - whether or not it's a true exponential.
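A quick numerical sketch of what I mean (every parameter here - the logistic form, the constant amplitude, the geometrically shrinking gaps between waves - is an assumption for illustration, not a derivation):

    import math

    def sigmoid(t, midpoint, width=1.0, amplitude=1.0):
        """One logistic 'technology wave' centered at `midpoint`."""
        return amplitude / (1.0 + math.exp(-(t - midpoint) / width))

    # Waves of similar amplitude arrive faster and faster: the gap
    # between successive midpoints shrinks geometrically.
    midpoints, gap, t0 = [], 4.0, 0.0
    for _ in range(30):
        t0 += gap
        midpoints.append(t0)
        gap *= 0.85

    for t in range(0, 41, 5):
        total = sum(sigmoid(t, m) for m in midpoints)
        print(f"t={t:2d}  stacked impact = {total:6.2f}")

Under these assumptions the cumulative impact accelerates for as long as new waves keep arriving, and only flattens once the supply of waves runs out - consistent with the "exponential made of sigmoids" intuition, though hardly a proof.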
I'm honestly having a very hard time thinking through the likely implications of what's currently happening over the next 2-10 years. Anyone who has the answers, please do share. I'm assuming from Cynefin that it's a perturbed complex adaptive system, so I can just OODA, or experiment, sense, and respond to what happens - not what I think might happen.
I really wish the term hadn't been mangled so much. Though the originator of the term bears a non-trivial amount of the responsibility for it, having written some rather good science fiction on the topic himself. The original meaning from the paper is quite useful and nothing has stepped up to replace it.
All the singularity means as I explicitly used it here is you entirely lose the ability to predict the future. It is relative to who is using it... we are all well past the Caveman Singularity, where no (metaphorical) caveman could possibly predict anything about our world. If we stabilize where we are now I feel like I have at least a grasp on the next ten years. If we continue at this pace I don't. That doesn't mean I believe AI will inevitably do this or that... it means I can't predict anymore, which is really the exact opposite. AI doesn't have to get to "superintelligence" to wreck up predictions.
It's worth noting that after ~50 years [edit: to preempt nitpicking, yes I know we've been using computers productively quite a bit longer than that, but that's roughly the time when the computerized office started to really gain traction across the whole economy in developed countries], we've only extracted a tiny proportion of the hypothetical value of computers, period, as far as benefits to the economy and potential for automation.
I actually think a lot of the real value of LLMs is "just" going to be making accessing a little (only a little!) more of that existing unrealized benefit feasible for the median worker.
My expectation is that we'll also harness only a tiny proportion of the hypothetical value of LLMs. We're just not good enough at organizing work to approach the level of benefit folks think of when they speculate about how transformational these things will be. A big deal? Yes. As big a deal as some suppose? Probably not.
[edit: in positive ways, I mean. I think we're going to see huge boosts in productivity for anti-social enterprises. I'd not want to bet on whether the development of LLMs is going to be net-positive or net-harmful to humanity, not due to the "singularity" or "alignment" or whatever, but because of the sorts of things they're most useful for]
Furthermore, regardless of how smart one thing is, it cannot win infinite games of poker against 7 billion humans, who as a species are cognitively extremely diverse and adaptive.
AI isn't one thing though. Really it's kind of a natural evolution of 'higher order life'. I think that something like an 'organization' (corps, governments, etc.), once large enough, is at least as alive as a tardigrade. And for the people who are its cells, it is as comprehensible as the tardigrade is to any of its individual cells. So why wouldn't organizations, over all of human history, eventually 'evolve' a better information processing system than humans making mouth sounds at each other? (Writing was really the first step in this.) Really, if you look at the last 12,000 years of human society as actually being the first 12,000 years of the evolutionary history of 'organizations', it kinda makes a lot of sense. And so much of it was exploring the environment, trying replication strategies, etc. And we have a lot of different organizations now, like an evolutionary explosion, where life finds various niches to exploit.
/schizoposting
Not that the singularity has any relevance here, either - except maybe that the robots take over, and the billionaires have missed the boat? I don't know.
I think we're at a similar point with LLMs. The technical stuff is largely "done": LLMs have closer to 10% than 10x headroom in how much they will technologically improve. We'll find ways to make them more efficient and burn fewer GPU cycles, and the cost will come down as more entrants mature.
But the social changes are going to be vast. Expect huge amounts of AI slop and propaganda. Expect white-collar unemployment as execs realize that all their expensive employees can be replaced by an LLM, followed by white-collar business formation as customers realize that product quality went to shit when all the people were laid off. Expect the Internet as we loved it to disappear, if it hasn't already. Expect new products or networks to arise that are less open and so less vulnerable to the propagation of AI slop. Expect changes in the structure of governments: mass media was a key element in the formation of the modern nation state, and mass cheap fake media will likely lead to its fragmentation, as any old Joe with a ChatGPT account can put out mass quantities of bullshit. Probably expect war as people compete to own the discourse.
Literally who wondered that? It drives me nuts when people start off an argument with an obvious strawman. I remember the time period of 2005-2007 very well, and I don't remember a single person, at least in tech, thinking the Internet was done. I don't know, maybe some ragebait articles were written about it, but being knee-deep in web tech at that time, I remember the general feeling was that it was pretty obvious there was tons to do. E.g. we didn't necessarily know what form mobile would take, but it was obvious to most folks that the tech was extremely immature and that it would have a huge impact on the Internet as it progressed. That's just one example - social media was still in its nascent stages then, so it was obvious there would be a ton of work around that as well.
There is, of course, the Paul Krugman quote from 1998 that by 2005 the Internet would be no more important than a fax machine. [1]
Here's Wired in 2007 saying, in reference to Facebook, "no company in its right mind would give it a $15 billion valuation". [2]
I remember, being at Google in ~2011, we used to laugh at the Wall Street analysts because they would focus on CPC numbers to forecast a valuation, which is important only if the number of clicks is remaining constant. We knew, of course, that total Internet usage was still growing quite rapidly; queries increased by roughly 4x over the 2009-2013 timeframe.
And a lot of people will say "If you're so smart, why aren't you rich?", and I'll point out that many people who assumed the Internet had lots of room to grow in 2005-2007 did end up very rich. Google stock has increased roughly 20x since 2007 (and 40x from its 2009 lows). Meta is now worth $1.6T, a 100x increase over the $15B valuation that everyone thought was insane in 2007. Amazon is also up about 100x. It would not be possible to take the other side of the trade and make these kinds of profits if the majority of people did not think the Internet was largely over.
[1] https://www.snopes.com/fact-check/paul-krugman-internets-eff...
Didn't we only pass 50% of households having a home PC in like... '00 or '01 or something? And I mean just in the US, which was way ahead of the curve.
> Here's Wired in 2007 saying, in reference to Facebook, "no company in its right mind would give it a $15 billion valuation". [2]
I actually think that's correct... if the smartphone hadn't taken off right after that. The "consumer" Internet and computing, the attention economy, et c., functionally is the smartphone. A desktop computer and even a laptop aren't in use when driving, at the store, at the park, every moment on vacation, et c. It'd still only be nerds lugging computers everywhere if nobody'd managed to make a smartphone that's capable-enough and pleasant-enough-to-use to expand the market beyond the set of folks who might have had a beeper in earlier years (the part of the market Blackberry was addressing). A gigantic proportion of the "GDP of the Internet", if you will, exists because smartphones exist.
Almost definitely professional ragebaiters in Wired or Time or whatever, yeah.
You also had numerous telecommunications companies going bust in one of the largest sector collapses in modern financial history. The largest bankruptcy in history (at that time) was WorldCom, followed by the second largest with Global Crossing. Lucent Technologies lost nearly all of its value, and the largest telecom company at the time, Nortel, lost 90% of its value, eventually going bankrupt in 2009.
And then of course the great recession hit, tech companies took a massive blow, Microsoft, Google, Intel, Apple and other tech giants lost 50% of their stock value in a matter of months. You don't lose 50% of your value because people think you have a promising future.
It wouldn't be until the explosive rise of smartphones and close-to-zero interest rates that sentiment turned around and tech companies ballooned in value in what would end up being the longest bull run in U.S. history.
>followed by white-collar business formation as customers realize that product quality went to shit when all the people were laid off.
These will be rare boutique affairs. Based on how mass production and cheap shipping played out, most people value price over quality. The economy will rearrange itself around those savings, making boutique products and services expensive.
>mass cheap fake media will likely lead to its fragmentation as any old Joe with a ChatGPT account can put out mass quantities of bullshit.
We have this today. And that's not a "same as it ever was" dismissal. Today, there are a lot of terminally online people posting the equivalent of propaganda (and actual propaganda). Social media pushes hot takes in audiences' faces, a portion of them reshare it, and it spreads exponentially. The only limitation to propaganda today is how much time the audience spends staring at the "correct" content provider.
In managing a large, enterprise-sized code base, I experience the opposite: I can guarantee a much more homogeneous quality across the code base.
What I'm seeing is the opposite of slop, and at a lower cost.
Today, I literally made a large and complex migration of all of our endpoints. It took the AI 30 minutes, including all frontends using these endpoints. Works flawlessly; technical-debt principal down.
I don't doubt it completed the initial coding work in a short time, but the fact that you've equated that with flawless execution is on the concerning-scary spectrum. I can only assume you're talking "compiles-runs-ship it"
The danger is not generating obvious slop, it's accepting decent and convincing outputs as complete and absolving ourselves of responsibility.
Look friend, I really hope you can realize how you sound in your post. You're extraordinarily confidently saying that you refactored some ambiguous endpoints in 30 minutes. Whenever I see someone act that confidently about refactoring, a thousand alarms go off in my head. I hope you see how it sounds to others. Like, at least spend longer than a lunch break on it with just a tad more diligence. Or hell, maybe even consider LYING about how much time you spent on it. But my point is that your shortcuts will burn you. If you want to go down that path, I'm happy to be a witness to the eventual schadenfreude.
My issue isn't with the fact that you used AI. My issue is with how confident you are that it worked well and exactly to spec. I'm very well aware of what these systems can do. Hell, I've been able to get postgres to boot inside linux inside postgres inside linux inside postgres recently with these tools. But I'm also acutely aware of the aggressive modes that these systems can break in.
So again, which company should we all avoid so that we can avoid your, specifically your, refactoring?
This is about "slop bias". I'd wager that empowering everyone, especially those in power positions, to ship 50x more code will produce more code that is slop than not. You strongly oppose this because it's possible for you to update an API?
I'm stuck on the power-position thing because I'm living it. I'm pro-AI but there are AI-transformation waves coming in and mandating top-down. From their green-field position it's undeniable crush-mode killin' it. Maintenance of all kinds is separate and the leaders and implementors don't pay this cost. Maybe AI will address everything at every level. But those imposing this world assume that to be true, while it's the line-engineers and sales and customer service reps that will bear the reality.
I think this is the idea you need to entertain / ponder more on.
I largely agree with you; what I don't agree with is the weighting of the individual elements.
My point was that I could do a 30-minute cleanup in order to streamline hundreds of endpoints. Without AI, I would not have been able to justify this migration for business reasons.
We get to move faster, also because we can shorten deprecation tails and generally keep code bases fit more easily.
In particular, we have dropped the external backoffice tool, so we have a single monorepo.
An AI does tasks all the way from the infrastructure (setting policies on resources) to the frontends.
Equally, if a resource is not referenced in our codebase, we know with 100% certainty it is not in use and can be cleaned up.
Unused code audits are being done on a weekly schedule. Like our sec audits, robustness audits, etc.
I'm not a doomer either, but I do think this arc is a human arc: there's going to be a lot of collateral damage. To your point, Agents with good stewardship can also implement hygiene and security practices.
It's important we surface potential counter metrics and unintended side effects. And even in doing so the unknown unknowns will get us. With that said, I like this positive stewardship framing, I'll choose to see and contribute to that, thanks!
Until that day we had roughly zero AI code in the code base (additions or subtractions). So in all reasonable terms I am a late adopter.
For code bases, AI does not concern me. We have for quite some time worked with systems that are too complex for a single person to comprehend, so this is a natural extension of abstraction.
On the other hand, I am super concerned about AI and society: the impact on human well-being of "easy" AI relationships over difficult human connection, and continued human alienation and relational violation (I think the "woke" discourse will go on steroids).
I think society is going to be much less tolerant. And that frightens me.
[Philosophy disclaimer] So in a code base, diversity is probably a bad idea; OK, that makes sense. But in an agentic world, if everything is run through the Perfect Harness, then are humans intentionally just triggers? Not even that: what are humans even needed for? Everything can be orchestrated. I'm not against this world; it's an ideal outcome for many, and it's not my place to say whether it's inevitable.
What I'm conflicted on is whether it even "works" in terms of outcomes. Like, have we lost the plot? Why have any humans at all? One-person billion-dollar company incoming. Software aside, is the premise even valid? One person's inputs multiplied by N thousand agents -> ??? -> profit
This is either a very remarkable or a very frightening statement. You're claiming flawless execution within the same day as the change.
If you're unable to tell us which product this is, can you at least commit to report back in a month as to how well this actually went?
But we run 90% test coverage, e2e tests, etc. None of these had been altered, and all are passing.
Migrations are generally not that high risk if you have a code base in alright shape.
Social media would like a word...
I know this sounds like "the moderate position" to people but you are accepting that something logarithmic is somehow in fact exponential (these are inverse functions of one another) based on no evidence or argument.
Here is Sam Altman, the one man in the world with the most incentive to overstate AI capability, accepting the extremely-well-known logarithmic growth: https://blog.samaltman.com/three-observations
What we see in reality is a basically-linear growth pattern due to pushing exponentially more resources into this logarithm.
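To spell out the arithmetic (a toy model; the log-scaling and the exponential spend are assumptions, but they match what Altman's post describes):

    C(R) = k \log R, \qquad R(t) = R_0 e^{rt}
    \Rightarrow\quad C(t) = k \log R_0 + k r t

i.e., exponentially growing inputs pushed through a logarithm yield only linear growth in capability - exactly the "basically-linear" pattern above.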
Even using the models we have today, we have revolutionized VFX, video production, and graphic design.
Similarly, many senior software engineers are reporting 2-10x productivity increases.
These tools are some of the most useful of my career. I don't even think the general consumer public needs "AI" in their products. If we just create control surfaces that let experts harness the speed-up and shape and control the outcomes, we're going to be in a very good spot.
These alone will have ripple effects throughout the economy and innovation. We've barely begun to tap into the benefits we have already.
We don't even need new models.
But are they making 2-10x the compensation compared to before these tools? If not, these tools are not really useful to you; they are useful to your employer. The most shocking thing I find about LLM-assisted development is how gleefully we are just handing all this value over to our employers, while simultaneously believing that they are great because we're producing more. Totally bonkers!
You could turn the table and say that you can now launch your own business with far fewer resources.
Who needs financial capital if you can do it all with solo / small team labor capital?
Gossip Goblin ditched his studio and now a16z is trying to throw him money, which he's turned down. He's turning everyone down.
https://www.youtube.com/watch?v=-Rzl7nUdEs4
Dude is legit talented and doesn't need studio capital anymore.
This is the end of the Hollywood nepotism pyramid, where limited production capital was available to only a handful of directors.
We're kind of at the start of a revolution here. I'd be way more worried if I were Disney or Paramount.
Couldn't you take a sabbatical and end it with a brand new SaaS you own and control? That's entirely within reach now.
The people this is going to hurt are the ICs that don't have a go-getting type personality where they take full-stack ownership: marketing, branding, design, customer relationships, etc. If you can do those things, you're going to be a rock star with total autonomy.
You ought to see what the indie game devs are doing with AI (when they aren't getting yelled at on Steam by the haters). It's legitimately incredible. Game designers are taking on full-stack ownership over the entire experience, and they're making some incredible stuff.
What percentage of developers can do these things? 1%? 0.1%? 0.01%? A very small percentage of developers have the desire to take on the full stack, the temperament of good entrepreneurs, the product judgment of good Product Managers, and the ability of good Project Managers to juggle dependencies and timeframes. What about the rest of them? The remaining 99+% of us are just handing value over to our employers and getting a 5% raise in return--if we're lucky.
So, the fact that a small percentage of rockstar developers can capture the full value of AI-assisted development reinforces the point that a small number of people/businesses are capturing that value. The vast majority of workers are not capturing any value.
A peasant villager was sentient without a single book, film or song. You don’t need this much data to be sentient. They’re using a stupid method, and a better one will be discovered some day.
In AI terms, we are in the pre-Pentium 4 era.
[1] https://tailstrike.com/database/01-june-2009-air-france-447/
My other immediate thought -- Tesla's autopilot. I've never used it so I'm not sure I'm fully correct here, but apparently it requires you to be vigilant and take over in certain situations? Wonder how well that works out in practice.
I always struggled with coding before 2023, but I made ends meet, put food on the table, could work sane hours, and knew what I needed to do. Logically I should have been happy that I did not have to grind on code - and some days I truly am - but that it would yield such poor quality of life at such a high cost was not what I expected...
What I do feel is the issue: I end up having to do everything myself to keep costs down, because the "hire another dev vs. do it with AI" consideration is real, and it has collateral damage. I spend more time trying to build AI agents to do the work, and there are 1 or 2 fewer jobs I create.
[0] https://www.bma.org.uk/news-and-opinion/medical-degree-appre...
Also you forgot the link?
But, thanks to all the companies working on open-weight models, I'm starting to think this might no longer happen. Currently open-weights models are said to be just months behind the top players (and I think we should really try to do what we can to keep it that way).
I'm wondering what the predictions would be in the case where AI becomes very powerful, but the models are also generally available.
Two possibilities come to mind. In the first, all the money no longer spent on employment would go towards hardware. New hardware manufacturers or innovators could jump in and create a bit more employment, but eventually it would probably all progress in one direction: the only finite resource in the chain, the materials/minerals needed for the hardware. Those materials might become the new "petrol". It's possible that eventually we would have built enough chips to power all the AI we need without needing more extraction, but I wouldn't underestimate our ability to waste resources when they feel abundant.
In the second possibility, alongside a very powerful open-weight LLM, there could be big performance advancements, which would make the hardware no longer the bottleneck. But I'm struggling to imagine this scenario; maybe we would all be better off? Maybe we would all just be depressed because most people won't feel "useful" to society or their peers anymore?
I would encourage folks to look at the following industries: nuclear safety, commercial aviation, remote surgery. These industries have dealt with the issues of automation for much longer than we have as programmers.
In the research I've done, these industries went through a similar journey in the 20th century as we are now: once something becomes automated enough, the old way simply won't work. You have to evolve new frameworks and procedures to deal with it.
So in the case of aviation they developed CRM and SRM - how to manage the airplane as a crew and how to manage it as a solo operator. Remember that modern airplanes are highly automated!! The human pilot is not typically hands-on-wheel for most of the flight.
In the case of surgeons, they found that de-skilling without regular practice can occur in as little as four weeks! So to combat that, some surgeons are now required to practice in simulated environment to keep their skills sharp.
My feeling is that 'aphyr is right in the short-to-medium term. Current market forces and US regulatory posture (or lack thereof) mean there are fewer rules and less enforcement. IMHO the results are depressingly predictable, but the train has left the station with enough momentum that there's no stopping it. If we survive long enough to make it past the medium term, things will change.
Learning to Learn by the late Dr Richard Hamming. See especially Chapter 2.
A point Hamming makes is that when transitions from hand to machine production occurred, usually what is built ends up changing as the old techniques don't transfer 1:1 from the old world.
So for instance, we went from nuts and bolts to rivets and welding (Dr Hamming's literal example). This required builders to produce an equivalent product to the old, built with different techniques - and crucially! - under tighter control limits.
The reason things are going all over the place with AI at the moment is that it's speed, speed, speed. They had an all hands at my company recently where the top brass talked about AI. The only thing mentioned was speed - go faster, do more, etc. Not a single soul talked about quality.
But if you know your software engineering wisdom you know that you can only pick two when it comes to speed, scope, or quality. It's going to get real dumb for a while until people realize/remember quality is how you achieve speed.
I can only guess as to how much content you would have to explore on that axis.
My one ask is that people stop putting "CEOs" on a pedestal any time the topic comes up, like they're an alien life form and oh no, they're going to do something terrible. There are good company executives and shitty ones. You should try to start a company and see if you can be one of the better ones.
I highly recommend reading Marx. Your post touches on related Marxist topics like the "Fetishism of Commodities" (Software as Witchcraft) and the Labor Theory of Value.
Brushing the socialism aside (been there, seen that), it discusses deskilling as an inevitable consequence of technology. IMO, LLMing puts that on steroids and eats higher up the mental chain.
Might I suggest a viewing of the 2025 film "Bugonia"?
And who are you? An account created for one post? There is a pattern of green accounts with usernames vaguely related to the subject matter of their comments.
An unintended side effect I've noticed is that it normalizes bad CEO behavior for those who take in a lot of "CEOs bad" grist (Reddit, Threads, even Hacker News). When someone, usually early in their career, takes a job with a bad CEO after years of reading "CEOs bad" content online, they can go into a learned-helplessness mode, because they think the behavior they're seeing is normal. They don't believe changing jobs would help, because they've learned from social media that their CEO's bad behavior is actually normal.
This has become a frequent topic in a rotational mentorship program where I volunteer: early-career folks join some toxic startup and stay because the internet told them all CEOs are like this. We have to shake them free from those ideas and get them to realize that there are good and bad companies out there and they have options.
> We have to shake them free from those ideas and get them to realize that there are good and bad companies out there and they have options.
Not everyone does have options, though. This is why instead of telling people to just avoid the bad CEOs, workers should unionize and collectively bargain against the bad CEOs. I'm sure I'll be seeing a lot of class warfare generalizations about "unions bad" in response to this suggestion.
I literally did this 12 years ago based on this reasoning; it's good you're trying to counter that with the next generation.
With that said, I do wish there was more discourse around systemic issues rather than the usual finger-pointing towards rival social groups. Unfortunately I feel like our language gets in the way, systems issues are more abstract, but "bad people" are more visceral and easy to talk about.
>They don’t believe changing jobs
Um, yea, where did you get these ideas.
Most CEOs want to be CEOs for the potentially vast amounts of wealth they can make from the position. When you're making 20-200x what the average person makes, going back to a regular job is pretty much out of the question.
Then when you start making that kind of money you quickly become disconnected from the rest of humanity. [Insert meme: "How much does a banana cost? Like $10?"]
Vast wealth disparity commonly causes the issues that you say are being normalized by people online, so I think you'd need quite a bit more evidence for that being the case than for the already-existing hypothesis.
Mainly because "CEOs and billionaires" have fucked us over time and again: with their lobbying and bribing, with their power grabs, with their consolidation of news, entertainment, streaming, and social media properties, with their participation in the military-industrial complex, with their censorship and partisanship, and with their rent seeking and worsening of their products...
The endless re-rise of Marxism has made people assume that any punching is appropriate in the first place, and it's just a question of who. Saying "these are the people it's okay to punch" is dystopian.
This sort of prompting is only necessary now because LLMs are janky and new. I might have written this in 2025, but now LLMs are capable of saying "wait, that approach clearly isn't working, let's try something else," running the code again, and revising their results.
There's still a little jankiness but I have confidence LLMs will just get better and better at metacognitive tasks.
UPDATE: At this very moment, I'm using a coding agent at work and reading its output. It's saying things like:
> Ah! The command in README.md has specific flags! I ran: <internal command>. Without these flags! I missed that. I should have checked README.md again or remembered it better. The user just viewed it, maybe to remind me or themselves. But let's first see what the background task reported. Maybe it failed because I missed the flags, or passed because the user got access and defaults worked.
AI is already developing better metacognition.
I recently discovered an example of this phenomenon in a completely unrelated area: navigation. About a week ago, I realized that I couldn't remember the exact turns to reach a certain place I started driving to recently, even after having driven there about 3-4 times over a period of a month. Each time I had used Google Maps. When I used to drive pre-Google-Maps, I would typically develop a good spatial model of a route on my third drive. This skill seems to have atrophied now. Even when I explicitly decide to drive without Google Maps, and make mental notes of the turns, my retention of new routes is now much weaker than it used to be. Thankfully, routes I retained before becoming Google Maps dependent, are still there.
> And so it is that you by reason of your tender regard for the writing that is your offspring have declared the very opposite of its true effect. If men learn this, it will implant forgetfulness in their souls. They will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks.
It feels like hexing the technical interview come to real life ;)
For example, I'm now relying on Soteria, the Greek goddess of safety, salvation, and preservation from harm, to act as my database administrator.
It’s only fair that they would receive the same amount. But then how can the former category continue to fulfill their obligations?
They can't. Just like the steel workers who lost their jobs in the 1970s.
"Unavailable Due to the UK Online Safety Act" https://www.theregister.com/2025/02/06/uk_online_safety_act_...
According to the Ofcom regulation checker [1] (linked to by The Register article), the Online Safety Act does not apply to this content.
Here's the most pertinent section (emphasis mine):
> Your online service will be exempt if... Users can only interact with content generated by your business/the provider of the online service. Such interactions include: comments, likes/dislikes, ratings/reviews of your content including using emojis or symbols. For example, this exemption would cover online services where the only content users can upload or share is comments on media articles you have published...
[1]: https://ofcomlive.my.salesforce-sites.com/formentry/Regulati...
(conveniently, there is no risk to yourself if you happen to be wrong or misinformed.)
no, you are doing more than that.
you are saying that everyone who has a different interpretation of the parts you are quoting is misinformed.
that is an opinion, which you are stating as fact, as someone unaffected by the outcome.
My point is simply that the Ofcom quote clearly states that user comments on an article are not subject to the Online Safety Act. I assume this is a fact, as it's from the horse's mouth.
Some people appear to be basing their opinions on the assumption that the OSA does apply to such comments (hence my use of the offending word).
I mean even the site itself says it really shouldn't be used for legal advice...
On top of that, none of this matters until said law is settled under a case. Most often it's the first judge and the set of appeals after that point that define how the law is actually implemented. Everything before that is bluster and potential risk.
As soon as your blog allows comments which other people can read, then you're allowing people to interact with content not generated by your business.
> If enough people lose their jobs we may be able to mobilize sufficient public enthusiasm for however many trillions of dollars of new tax revenue are required. On the other hand, US income inequality has been generally increasing for 40 years, the top earner pre-tax income shares are nearing their highs from the early 20th century, and Republican opposition to progressive tax policy remains strong.
I think we are, in general, a highly naive, gullible class of people: we were conditioned, programmed, and put into environments where being this way was the norm and was rewarded. The leaders and resource extractors, whom we gullibly allow to trample over our dignity and our rights, take advantage of this and reinforce it through lobbying and influence over the mainstream culture and media campaigns around us. Further, if social media becomes a threat to their status, they have been shown to employ their influence there too, through censorship and more. We would therefore be best served by learning how not to be gullible, and by growing some balls.
There it is, an actual em-dash in the wild, written by hand.
For what it's worth I think it's pretty reasonably good prose, not merely somewhat passable
Yes, AF447 crashed due to lack of training for a specific situation. And yet, air travel is safer than ever.
Yes, that Tesla drove into a wall, and yet robotaxis exist, work well, and are significantly safer than human drivers.
Yes, there are a lot of "witchcraft" approaches to working with AI, but there are also significant accelerations coming out of the field that have nothing to do with witchcraft.
Yes, AI occasionally makes very stupid mistakes - but ones any competent engineer would have guardrails in place against.
And so a lot of the piece spends time arguing strawmen propped up by anecdotes. And that detracts from the deeply necessary discussion kicked off in the second part, on labor shock, capital concentration, and fever dreams of AI.
The problem with AI isn't that it's useless. It's that it's already extremely useful - and that's exactly what will disrupt the world.
Specifically, AI companies want to inflate the utility of AI because that's how they make money. There should be guardrails where appropriate. Unfortunately, as usual, we need to make mistakes before we can learn from them.
Robotaxis do exist, but they are not all made equal. Tesla's, for instance, are 4x worse than human drivers: https://electrek.co/2026/02/17/tesla-robotaxi-adds-5-more-cr...
Read up on Cluster B personality disorders (borderline, narcissism, sociopaths/psychopaths) and you see the similarities. Love bombing, gaslighting, a shared fantasy, etc. It's very interesting and scary at the same time.
Humans are also distinctly bad at noticing certain kinds of bugs in software: off-by-one errors, deadlocks, or any bug where you've stared at the code for days without noticing the one missing or extra semicolon. But LLMs can generate a tsunami of subtly wrong code in the time it takes a reviewer to notice one typo and miss all the rest.
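A contrived example of the kind of thing that sails past review (my illustration, nothing from a real codebase) - this looks plausible at a glance but is wrong twice over:

    def last_n_average(xs, n):
        # Intended: average of the last n elements.
        # Bug: xs[-n:-1] silently drops the final element, so this
        # averages n-1 items (shifted by one) while still dividing by n.
        window = xs[-n:-1]          # correct would be xs[-n:]
        return sum(window) / n

    print(last_n_average([1, 2, 3, 4, 5], 3))  # 2.33..., not the expected 4.0

One of these in a diff is findable. A few hundred at once is where review breaks down.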
Fruit flies like a banana. Time flies like an arrow.
[1] The movie Arrival is based on this novella.
I believe the technical term is vigilance degradation?
>> You would fire these people, right?
Okay, now imagine a different colleague. One who writes a solid first draft of any boilerplate task in seconds, freeing you to focus on architecture instead of plumbing. A dev who never gets defensive when you rewrite their code, never pushes back out of ego, and never says "that's not my job." A pair programmer who's available at 3 AM on a Sunday when prod is down and you need to think out loud. One who remembers every API you've forgotten, every flag in every CLI tool, every syntax quirk in a language you use twice a year, or even every day.
You'd want that person on your team, right? In fact, you would probably give them a promotion.
Here's the thing: the original argument describes real failure modes, but then commits a subtle sleight of hand. It personifies the tool as a colleague with agency, then condemns it for lacking the judgment that agency implies. But you don't fire a table saw because it doesn't know when to stop cutting, right? You learn where to put your hands.
Every flaw in that list is, at the end of the day, a flaw in the workflow, not the tool. Code with security hazards? That's what reviews are for. And AI-generated code gets reviewed at far higher rates than the human code people have been quietly rubber-stamping for decades. Commits failing tests? Then your CI pipeline should be the gate, not a promise. Deleted your home directory? Then it shouldn't have had the permissions to do that in the first place. In fact, the whole "deleted my home directory" shit is the same thing as "our intern deleted the prod database". We all know that the response to the latter is "why did they have permission to prod in the first place??" AI is the same way, but for some god damn reason people apply totally different standards to it.
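To make the "permissions" point concrete, here's a minimal sketch of the idea (the paths and names are hypothetical; a real harness would use OS-level sandboxing rather than an in-process check):

    from pathlib import Path

    # Hypothetical sandbox root: the only tree the agent may write to.
    WORKSPACE = Path("/srv/agent/workspace").resolve()

    def safe_write(candidate: str, data: str) -> None:
        """Refuse any write that resolves outside the agent's workspace."""
        target = Path(candidate).resolve()
        if not target.is_relative_to(WORKSPACE):   # Python 3.9+
            raise PermissionError(f"refusing {target}: outside {WORKSPACE}")
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(data)

Same logic as not handing the intern prod credentials: reviews and CI catch the bad changes, and the sandbox caps the blast radius when something slips through.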
Er, just to be clear, I am not personifying these tools. This entire section is a critique of the attempt to frame LLMs as "coworkers".
If I purchased a table saw and that table saw irregularly and unpredictably jumped past its safeties (as we have plenty of evidence that LLMs [0] do), then I would [1] immediately stop using that saw, return it for a refund, alert the store that they're selling wildly unsafe equipment, and alert the relevant regulators that a manufacturer is producing and selling wildly unsafe equipment.
[0] ...whether "agentic" or not...
[1] ...after discovering that yes, this is not a defective unit, but this model of saw working as designed...
Scary scenarios like AIs deleting home directories are the result of the developers explicitly bypassing those safeties.
(And before anyone brings pitchforks out, this is what they wrote in a previous article:
> “Cool it already with the semicolons, Kyle.” No. I cut my teeth on Samuel Johnson and you can pry the chandelierious intricacy of nested lists from my phthisic, mouldering hands. I have a professional editor, and she is not here right now, and I am taking this opportunity to revel in unhinged grammatical squalor.
)
My life was made poorer for knowing that semicolons are apparently a sin, but richer for the rebellion.
That seems very practical and well-reasoned to me.
My fault for reading this article half asleep and wanting to thank Aphyr for their writing. I should have instead written 5 paragraphs pedantically criticizing minor aspects of their post while completely missing the point. Or maybe I should be offering my expert legal advice (I watched Suits once) on the UK Online Safety Act.
Welcome to web development, buddy.
> how ML might change the labor market
Human labor is expensive. If LLMs do make things cheaper and faster to produce, you don't need as many humans anymore. Again, assuming the improvement is real, there absolutely will be headcount shrinkage at existing businesses. What remains to be seen is how much cheaper machines make the work. 1.5x? 2x? 10x? 100x?
> unlike sewing machines or combine harvesters, ML systems seem primed to displace labor across a broad swath of industries [...] The question is what happens when [..] all lose their jobs in the span of a decade
It's more like hand tools -> power tools; a concept applied to many things. Everyone will adopt them, and you'll need fewer workers who'll work faster with less skill. You get a gradual labor force shrinkage, but also an increase in efficiency, so it's not like a hole is opening up in your economy. A strong economy can create new jobs, from either private or public sources.
> ML allows companies to shift spending away from people and into service contracts with companies like Microsoft
The price of hardware, as it always has been, is on a downward trend, while the efficiency of open weights is going up (it will plateau eventually, but it's still going up). We already spend $20,000 on servers, whether buying them once for on-prem or renting them in AWS. ML is just another piece of software running on another piece of hardware.
> if companies are successful in replacing large numbers of people with ML systems, the effect will be to consolidate both money and power in the hands of capital
That ship left port like 30 years ago dude. Laborers have no power in the 21st century.