And this is probably coming, a few years from now. Because remember, Apple doesn't usually invent new products. It takes proven ones and then makes its own much nicer version.
Let other companies figure out the model. Let the industry figure out how to make it secure. Then Apple can integrate it with hardware and software in a way no other company can.
Right now we are still in very, very, very early days.
These kinds of risks can only be _consented to_ by technical people who correctly understand them, let alone borne by them, but if this shipped there would be thousands of Facebook videos explaining to the elderly how to disable the safety features and open themselves up to identity theft.
The article also confuses me because Apple _are_ shipping this, it’s pretty much exactly the demo they gave at WWDC24, it’s just delayed while they iron this out (if that is at all possible). By all accounts it might ship as early as next week in the iOS 26.4 beta.
[1]: https://simonwillison.net/2025/Mar/8/delaying-personalized-s...
OpenClaw is very much a greenfield idea and there's plenty of startups like Raycast working in this area.
The OS maker does not have to make all the killer software. In fact, Apple's pretty much the only game in town that's making hardware and software both.
For example: https://x.com/michael_chomsky/status/2017686846910959668.
The one you linked to looks clearly like a pump-and-dump scam, though.
These days it is insecure however because they backdoored the e2ee and kept it backdoored for the FBI, so now Signal is the only messenger I am reachable on.
Blue bubble snobbery is presently a mark of ignorance more than anything else.
But if you connect those dots you've got people trying to date by having an AI respond to texts from potential dates which seems like you're immediately in red-flag-city and good luck keeping that secret for long enough to get whatever it is you want.
Forget about dating. If you want the AI to be able to send texts from your number, and you own an iPhone, I think your only other choice would be to port your number to Google Voice?
Yeah I’m trying to wrap my head around what sort of reads like “It is messed up that people avoid talking to eachother because of software because it messes up people’s ability to use software to avoid talking to eachother”
(Yes android users are discriminated against in the dating market, tons of op eds are written about this, just google it before you knee jerk downvote the truth)
If someone is shallow enough to write you off for that, is that someone you want as your partner?While this was true about ten years ago, it's been a while since we've seen this model of software development from Apple succeed in recent years. I'm not at all confident that the Apple that gave us Mac OS 26 is capable of doing this anymore.
Privacy is definitely good but it's not at all an example of the success mentioned in the parent comment. It's deep in the company culture.
The software has been where most of the complaints have been in recent years.
A "bicycle for the mind" got replaced with a "kiosk for your pocketbook".
The Vision Pro has an amazing interface, but it's set up as a place to rent videos and buy throwaway novelty iPad-style apps. It allows you to import a Mac screen as a single window, instead of expanding the Mac interface, with its Mac power and flexibility, into the spacial world.
Great hardware. Interesting, but locked down software.
If Tim Cook wanted to leave a real legacy product, it should have been a Vision Pro aimed as an upgrade on the Mac interface and productivity. Apple's new highest end interface/device for the future. Not another mid/low-capability iPad type device. So close. So far.
$3500 for an enforced toy. (And I say all this as someone who still uses it with my Mac, but despairs at the lack of software vision.)
I've thought this too. Apple might be one of the only companies that could pull off bringing an existing consumer operating system into 3D space, and they just... didn't.
On Windows, I tried using screen captures to separate windows into 3D space, but my 3090 would run out of texture space and crash.
Maybe the second best would be some kind of Wayland compositor.
AirTag is a perfect example of their hardware prowess that even Google fails to replicate to this date.
Tiny open source projects can just say "use at your own risk" and offload all responsibility.
An agent that can truly “use your computer” is incredibly powerful, but it's also the first time the system has to act as you, not just for you. That shifts the problem from product design to permission, auditability, and undoability.
Summarizing notifications is boring, but it’s also reversible. Filing taxes or sending emails isn’t.
It feels less like Apple missing the idea, and more like waiting until they can make the irreversible actions feel safe.
All steps before it are reversible, and reviewable.
Bigger problem is attacker tricking your agent to leak your emails / financial data that your agent has access to.
Imagine if the government would just tell everyone how much they owed and obviated the need for effing literal artificial intelligence to get taxes done!
>> respond to emails
If we have an AI that can respond properly to emails, then the email doesn't need to be sent in the first place. (Indeed, many do not need to be sent nowadays either!)
Actually most of the things people use it for is of this kind, instead actually solving the problem (which is out of scope for them to be fair) it’s just adding more things on top that can go wrong.
Except this doesn't stand up to scrutiny, when you look at Siri. FOURTEEN years and it is still spectacularly useless.
I have no idea what Siri is a "much nicer version" of.
> Apple can integrate it with hardware and software in a way no other company can.
And in the case of Apple products, oftentimes "because Apple won't let them".
Lest I be called an Apple hater, I have 3 Apple TVs in my home, my daily driver is a M2 Ultra Studio with a ProDisplay XDR, and an iPad Pro that shows my calendar and Slack during the day and comes off at night. iPhone, Apple Watch Ultra.
But this is way too worshipful of Apple.
There are lots of failed products in nearly every company’s portfolio.
AirTags were mentioned elsewhere, but I can think of others too. Perfected might be too fuzzy & subjective a term though.
Both of which have been absolutely underwhelming if not outright laughable in certain ways.
Apple has done plenty right. These two, which are the closest to the article, are not it.
And then some of its misinterpretations were hilariously bad.
Even now, I get at a technical level that CarPlay and Siri might be separate "apps" (although CarPlay really seems like it should be a service), and as such, might have separate permissions but then you have the comical scenario of:
Being in your car, CarPlay is running and actively navigating you somewhere, and you press your steering wheel voice control button. "Give me directions to the nearest Starbucks" and Siri dutifully replies, "Sorry, I don't know where you are."
> And this is probably coming, a few years from now.
Given how often I say "Hey Siri, fast forward", expecting her to skip the audio forward by 30 seconds, and she replies "Calling Troy S" a roofing contractor who quoted some work for me last year, and then just starts calling him without confirmation, which is massively embarassing...
This idea terrifies me.
Happened to me too while being in the car. With every message written by Siri it feels like you need to confirm 2 or 3 times (I think it is only once but again) but it calls happily people from your phone book.
That's a pretty optimistic outlook. All considered, you're not convinced they'll just use it as a platform to sell advertisements and lock-out competitors a-la the App Store "because everyone does it"?
It's a huge, diverse ecosystem of players and that's probably why Android has always gotten the coolest stuff first. But it's also its achilles' heel in some ways.
First Mover effect seems only relevant when goverment warrants are involved. Think radio licenses, medical patents, etc. Everywhere else, being a first mover doesnt seem to correlate like it should to success.
See social media, bitcoin, iOS App Store, blu-ray, Xbox live, and I’m sure more I can’t think of rn.
There are plenty of Android/Windows things that Apple has had for $today-5 years that work the exact same way.
One side isn’t better than the other, it’s really just that they copy each other doing various things at a different pace or arrive at that point in different ways.
Some examples:
- Android is/was years behind on granular permissions, e.g. ability to grant limited photo library access to apps
- Android has no platform-wide equivalent to AirTags
- Hardware-backed key storage (Secure Enclave about 5 years ahead of StrongBox)
- system-wide screen recording
Google has been making their own phone hardware since 2010. And surely they can call up Qualcomm and Samsung if they want to.
Ten years from now, there will be no ‘agent layer’. This is like predicting Microsoft failed to capitalize on bulletin boards social media.
Apple will either capitalise on this by making their operating systems more agentic, or they will be reduced to nothing more than a hardware and media vendor.
Things actually can "do what I mean, not what I say", now. Truly fascinating to see develop.
It’s not a critical flaw in the entirety of the LLM ecosystem that now the computers themselves can be tricked into doing things by asking in just the right way. Anything in the context might be a prompt injection attack, and there isn’t really any reliable solution to that but let’s hook everything up to it, and also give it the tools to do anything and everything.
There is still a long way to go to securing these. Apple is, I think wisely, staying out of this arena until it’s solved.
My point is that it won’t be a ‘layer’ like it is now and the technology will be completely different from what we see as agents today.
The current ‘agent’ ecosystem is just hacks on top of hacks.
Kids can barely hand write today.
Once neural interfaces are in, it's over for keyboards and displays likely too.
That was...like 4 macbooks ago. I still have keyboards from that era. I still have speakers and monitors from that era kicking around.
We are definitely, definitely not the last generation to use keyboards.
I love keyboards, I love typing. I'm rocking an Ergodox daily with a wooden shell that I built myself over ten years ago, with layers of macros that make it nearly incomprehensible for another person to use. I've got keyboard storage. I used to have a daily habit of going to multiple typing competition websites, planting a flag at #1 in the daily leaderboard and moving on to the next one.
Over the last year the utility of voice interfaces has just exploded though and I'm finding that I'm touching the keyboard less and less. Outside of projects where I'm really opinionated on the details or the architecture it increasingly feels like a handicap to bother manually typing code for a lot of tasks. I'm honestly more worried about that physical skill atrophying than dulling on any ability to do the actual engineering work, but it makes me a bit sad. Like having a fleet of untiring tractors replacing the work of my horse, but I like horses.
Of course AI will keep improving and more automation is a given.
It's obviously broken, so no, Apple Intelligence should not have been this.
It would be fine if I could just ignore it, but they are infecting the entire industry.
which obviously apple can't do. only an indie dev launching a project with an obvious copyright violation in the name can get away with that sort of recklessness. it's super fun, but saying apple should do it now is ridiculous. this is where apple should get to eventually, once they figure out all the hard problems that moltbot simply ignores by doing the most dangerous thing possible at every opportunity.
lol,no, you don't "put skin in the game for getting security right" by launching an obviously insecure thing. that's ridiculous. you get security right by actually doing something to address the security concerns.
This is not a train that Apple has missed, this is a bunch of people who’ve tied, nailed, tacked, and taped their unicycles and skateboards together. Of course every cool project starts like that, but nobody is selling tickets for that ride.
What people are talking about doing with OpenClaw I find absolutely insane.
Based on their homepage the project is two months old and the guy described it as something he "hacked together over a weekend project" [1] and published it on github. So this is very much the Raspberry Pi crowd coming up with crazy ideas and most of them probably don't work well, but the potential excites them enough to dabble in risky areas.
people are buying Mac Minis specifically to run AI agents with computer use. They’re setting up headless machines whose sole job is to automate their workflows. OpenClaw—the open-source framework that lets you run Claude, GPT-4, or whatever model you want to actually control your computer—has become the killer app for Mac hardware
That makes little sense. Buying mac mini would imply for the fused v-ram with the gpu capabilities, but then they're saying Claude/GPT-4 which don't have any gpu requirements.Is the author implying mac minis for the low power consumption?
> Look at who’s about to get angry about OpenClaw-style automation: LinkedIn, Facebook, anyone with a walled garden and a careful API strategy.
Browser automation tools have existed for a very long time. Openclaw is not much different in this regard than asking an LLM to generate you a playwright script. Yes, it makes it easier to automate arbitrary tasks, but it's not like it's some sort of breakthrough that completely destroys walled gardens.
If you’re heavily invested in Windows, then you’d probably go for a small x86 PC.
I use agentic coding, this is next level madness.
I don't understand why, but I've seen it enough to start questioning myself...
If Apple were to ever put something like that into the hands of the masses every page on the internet would be stuffed with malicious prompts, and the phishing industry would see a revival the likes of which we can only imagine.
This could have come in any form, a platform as the author points out for instance.
I have a couple of ideas, how about a permissions kit? Something where before or during you sign off on permissions. Or how about locked down execution sandboxes specifically for agentic loops? Also - why is there not yet (or ever?) a model trained on their development code/forums/manuals/data?
Before OpenClaw, I could see the writing on the wall. The ai ecosystem is not congruent to Apple's walled garden. In many ways because they have turned their backs on those 'misfits' their early ad-copy praised.
This 'misfit' mentality is what I like so much about the OpenClaw community. It was visible from it's very beginning with the devil-may-care disregard for privacy and security.
It sounds to me like they still have the hardware, since — according to the article — "Mac Minis are selling out everywhere." What's the problem? If anything, this is validation of their hardware differentiation. The software is easy to change, and they can always learn from OpenClaw for the next iteration of Apple Intelligence.
Why is Apple's hardware being in demand for a use that undermines its non-Chinese competition a sign of missing the ball versus validation for waiting and seeing?
Reality is the exact opposite. Young, innovative, rebellions, often hyper motivated folks are sprinting from idea to implementation, while executives are “told by a few colleagues” that something new, “the future-of foo” is raising up.
If you use openclaw then that’s fantastic. If you have an idea how to improve it, well it is an open source, so go ahead, submit a pull request.
Telling Apple you should do what I am probably too lazy to do, is kind of entitlement blogging that I have nearly zero respect for.
Apparently it’s easier to give unsolicited advice to public companies than building. Ask the interns at EY and McKinsey.
Maybe the author left out something very real. Apple is a walled-garden monopoly with a locked-down ecosystem and even devices. They are also not alone in this. As far as innovation goes, these companies stifle innovation. Demanding more from these companies is not entitlement.
It’s a 1987 ad like video showing a professor interacting with what looks like the Dynabook as an essentially AI personal assistant. Apple had this vision a long time ago. I guess they just lost the path somewhere along the way.
(Ok, I suspect this is one of the main problems.. there may be others?)
Are people's agents actually clicking buttons (visual computer use) or is this just a metaphor?
I'm not asking if CU exists, but rather is this literally the driver of people's workflows? I thought everyone is just running Ralph loops in CC.
For an article making such a bold technological/social claim about a trillion dollar company, this seems a strange thing to be hand wavey about.
Let OpenClaw experiment and beta test with the hackers who won't mind if things go sideways (risk of creating Skynet aside), and once we've collectively figured out how to create such a system that can act powerfully on behalf of its users but with solid guardrails, then Apple can implement it.
What if you don't want to trust your computer with all your email and bank accounts? This is still not a mass market product.
The main problem I see here is that with restricted context AI is not able to do much. In order to see this kind of "magic" you have to give it all the access.
This is neither safe or acceptable for normie customers
This is because the simple reality of this new technology is that this is not the local maxima. Any supposed wall you attempt to put up will fail - real estate website closes its API? Fine, a CUA+VLM will make it trivial to navigate/extract/use. We will finally get back to the right solution of protocols over platforms, file over app, local over cloud or you know the way things were when tech was good.
P.S: You should immediately call BS when you see outrageous and patently untrue claims like "Mac minis are sold out all over.." - I checked my Best Buy in the heart of SF and they have stock. Or "that its all over Reddit, HN" - the only thing that is all over Reddit is unanimous derision towards OpenClaw and its security nightmares.
Utterly hate the old world mentality in this post. Looked up the author and ofcourse, he's from VC.
Don't underestimate the capitalists. We've seen this many times in the past--most recently the commercialization of the Internet. Before that, phones, radio and television.
"An idiot admires complexity, a genius admires simplicity." Terry A. Davis
You're right on the liability front - Apple still won because everyone bought their hardware and their margins are insanely good. It's not that they're sitting by waiting to become irrelevant, they're playing the long game as they always do.
They don't say here is a 1000 $ iphone and there is a 60% chance you can successfully message or call a friend
The other 40% well? AGI is right around the corner and can US govt pls give me 1 trillion dollar loan and a bailout?
Author spoke of compounding moats, yet Apple’s market share, highly performant custom silicon, and capital reserves just flew over his head. HN can have better articles to discuss AI with than this myopic hot take.
I used to think this was because they didn’t take AI seriously but my assumption now is that Apple is concerned about security over everything else.
My bet is that Google gets to an actually useful AI assistant before Apple because we know they see it as their chance to pull ahead of Apple in the consumer market, they have the models to do it, and they aren’t overly concerned about user privacy or security.
> the open-source framework that lets you run Claude, GPT-4, or whatever model you want to
And
> Here’s what people miss about moats: they compound
Swapping an OpenAI for an Anthropic or open weight model is the opposite of compounding. It is a race to the bottom.
> Apple had everything: the hardware, the ecosystem, the reputation for “it just works.”
From what I hear OC is not like that at all. People are going to want a model that reliably does what you tell it to do inside of (at a minimum) the Apple ecosystem.
and the very next line (because i want to emphasize it
> That trust—built over decades—was their moat.
This just ignores the history of os development at apple. The entire trajectory is moving towards permissions and sandboxing even if it annoys users to no end. To give access to an llm (any llm, not just a trusted one acc to author) the root access when its susceptible to hallucinations, jailbreak etc. goes against everything Apple has worked for.
And even then the reasoning is circular. "So you build all your trust, now go ahead and destroy it on this thing which works, feels good to me, but could occasionally fuck up in a massive way".
Not defending Apple, but this article is so far detached from reality that its hard to overstate.
However this does not excuse Apple to sit with their thumbs up their asses for all these years.
They've been wildly successful for all of those years. They've never been in the novel software business. Siri though one could argue was neglected, but it was also neglected at Amazon Alexa and Google home stuff still sucks too (mostly because none of them made any money and most of their big ideas for voice assistants never came true).
> They could have charged $500 more per device and people would have paid it.
I sincerely doubt that. If Apple charged $500 for a feature it would have to be completely bulletproof. Every little failure and bad output would be harshly criticized against the $500 price tag. Apple's high prices are already a point of criticism, so adding $500 would be highly debated everywhere.
Welcome to the future I guess, everyone is a bot except you.
I don't pretend to know the future (nor do I believe anyone else who claims to be able to), but I think the opposite has a good chance of happening too, and hype would die down over "AI" and the bubble bursts, and the current overvaluation (imo at least. I still think it is useful as a tool, but overhyped by many who don't understand it.) will be corrected by the market; and people will look back and see it as the moment that Apple dodged a bullet. (Or more realistically, won't think about it at all).
I know you can't directly compare different situations, but I wonder if comparisons can be made with dot-com bubble. There was such hype some 20-30 years ago, with claims of just being a year or two away from, "being able to watch TV over the internet" or "do your shopping on the web" or "have real-time video calls online", which did eventually come true, but only much, much, later, after a crash from inflated expectations and a slower steady growth.*
* Not that I think some claims about "AI" will ever come true though, especially the more outlandish ones such as full-length movies made by a prompt of the same quality made by a Hollywood director.
I don't know what a potential "breaking point" would be for "AI". Perhaps a major security breach, even _worse_ prices for computer hardware than it is now, politics, a major international incident, environmental impact being made more apparent, companies starting to more aggressively monetize their "AI", consumers realising the limits of "AI", I have no idea. And perhaps I'm just wrong, and this is the age we live in now for the foreseeable future. After all, more than one of the things I have listed have already happened, and nothing happened.
This is my guess for the demand side: most people will drift away as the novelty wears off and they don't find it useful in their daily lives. It's more a "fading point" than a "breaking point."
From the investment/speculation side: something will go dramatically against the narrative. OpenAI's attempted "liquidity event" of an IPO looks like WeWork as investors get a look at the numbers, Oracle implodes in a mountain of debt, NVidia cuts back on vendor financing and some major public players (e.g. Coreweave) die in a fire. This one will be a "breaking point."
So yeah, the market isn’t really signaling companies to make nice things.
Nah if they are actually out of stock (I've only seen it out of stock at exceptional Microcenter prices; Apple is more than happy to sell you at full price) it is because there's a transition to M5 and they want to clear the old stock. OpenClaw is likely a very small portion of the actual Mac mini market, unless you are living in a very dense tech area like San Francisco.
One thing of note that people may forget is that the models were not that great just a year ago, so we need to give it time before counting chickens.
I’m sure apple et al will eventually have stuff like OpenClaw but expecting a major company to put something so unpolished, and with such major unknowns, out is just asinine.
Steve Jobs
Saved you a click. This is the premise of the article.
I guess now I’ll just use an AI agent to do the same thing instantly :(
Straight up bullshit.
I do not like reading things like this. It makes me feel very disconnected from the AI community. I defensively do not believe there exist people who would let AI do their taxes.
OpenClaw is a symbol of everything that's wrong with AI, the same way that shitty memecoins with teams that rugpull you, or blockchain-adjacent centralized "give us your money and we pinky swear we are responsible" are a symbol of everything wrong with Web3.
Giving everyone GPU compute power and open source models to use it is like giving everyone their own Wuhan Gain of Function Lab and hoping it'll be fine. Um, the probability of NO ONE developing bad things with AI goes to 0 as more people have it. Here's the problem: with distributed unstoppable compute, even ONE virus or bacterium escaping will be bad (as we've seen with the coronavirus for instance, smallpox or the black plague, etc.) And here we're talking about far more active and adaptable swarms of viruses that coordinate and can wreak havoc at unlimited scale.
As long as countries operate on the principle of competition instead of cooperation, we will race towards disaster. The horse will have left the barn very shortly, as open source models running on dark compute will begin to power swarms of bots to be unstoppable advanced persistent threats (as I've been warning for years).
Gain-of-function research on viruses is the closest thing I can think of that's as reckless. And at least there, the labs were super isolated and locked down. This is like giving everyone their own lab to make designer viruses, and hoping that we'll have thousands of vaccines out in time to prevent a worldwide catastrophe from thousands of global persistent viruses. We're simply headed towards a nearly 100% likely disaster if we don't stop this.
If I had my way, AI would only run in locked-down environments and we'd just use inert artifacts it produces. This is good enough for just about all the innovations we need, including for medical breakthroughs and much more. We know where the compute is. We can see it from space. Lawmakers still have a brief window to keep it that way before the genie cannot be put back into the bottle.
A decade ago, I really thought AI would be responsible developed like this: https://nautil.us/the-last-invention-of-man-236814/ I still remember the quaint time when OpenAI and other companies promised they'd vet models really strongly before releasing them or letting them use the internet. That was... 2 years ago. It was considered an existential risk. No one is talking about that now. MCP just recently was the new hotness.
I wasn't going to get too involved with building AI platforms but I'm diving in and a month from now I will release an alternative to OpenClaw that actually shows the way how things are supposed to go. It involves completely locked-down environments, with reproducible TEE bases and hashes of all models, and even deterministic AI so we can prove to each other the provenance of each output all the way down to the history of the prompts and input images. I've already filed two provisional patents on both of these and I'm going to implement it myself (not an NPE). But even if it does everything as well as OpenClaw and even better and 100% safely, some people will still want to run local models on general purpose computing environments. The only way to contain the runaway explosion now is to come together the same way countries have come together to ban chemical weapons, CFCs (in the Montreal protocol), let the hole in the ozone layer heal, etc. It is still possible...
This is how I feel:
https://www.instagram.com/reels/DIUCiGOTZ8J/
PS: Historically, for the last 15 years, I've been a huge proponent of open source and an opponent of patents. When it comes to existential threats of proliferation, though, I am willing to make an exception on both.