A decade and a half is an insane timeline in the tech industry, and the huge majority of users use Siri the same way today as they did 15 years ago: setting a timer or an alarm.
If it had seen zero improvements over those 15 years, the situation wouldn't be much different from today.
Every few years I would try to use it for a few days, then quit in frustration at how useless it was. Accidentally activating Siri is a major frustration point of using Apple products for me.
All chat bots suffer this flaw.
GUIs solve it.
CLIs could be said to have it, but there is no invitation to guess, and no one pretends you don’t need the manual.
And furthermore - aren't there shells that will give you the --help if you try to tab-complete certain commands? Obviously there's the issue of a lack of standardization for how command-line switches work, but broadly speaking it's not difficult to have a list of common (or even uncommon) commands and how their args work.
(spends a few minutes researching...)
This project evidently exists, and I think it's even fairly well supported in e.g. Debian-based systems: https://github.com/scop/bash-completion.
"Hunt the verb" means that the user doesn't know which commands (verbs) exist. Which a neophyte at a blank console will not. This absolutely is a problem with CLIs.
Many users like myself enjoy a good manual and will lean into a CLI at every opportunity. This is absolutely counter to the value proposition of a natural language assistant.
But the point is that none of that is intrinsic or interesting to the underlying idea; it's just of annoying practical relevance to interfacing with APIs today.
> but there is no invitation to guess, and no one pretends you don’t need the manual
Which is basically what you're saying too? The problem with voice UIs and some LLM tools is that it's unclear which options and tools exist, and there's no documentation for them.
I’m looking at you, Photos sync.
EDIT: just noticed this exact problem is on the front page in its own right (https://eclecticlight.co/2025/11/30/last-week-on-my-mac-losi...)
I literally just experienced this with RCS failing to activate. No failure message; I dug into the logs, which said userinteractionrequired. Front page of HN, nobody knows; Apple's corporate response: "that's interesting, no, you can't talk to engineering".
After the board swap and the call saying they wouldn't work on it since the issue was "resolved", I read the RCS spec definition document to fall asleep to. It answers exactly what that status meant: Apple never implemented handling for it. My follow-up post: https://wt.gd/working-rcs-messaging
Knowing that a company had competent product designers that made a good product, but then shitcanned the working product for a bunch of amateur output from people who don't understand the very basics of UI, from the one company that made good UI its primary feature for decades... well, it just felt like full-on betrayal. The same thing happened with the absolutely shitty Apple Music, which I never, ever use, because it's so painful to remember what could have been with iTunes...
Will it sync? When? Who knows? You’re on WiFi with a full battery and charging? So? Might be a minute, might be an hour. Oh, you restarted Photos? Who cares? Not Photos.
I've found I appreciate having Siri for a few things, but it's not good enough to make it something I reach for frequently. Once burned, twice shy.
This is why LLMs are the first conversational interface to actually have a chance of working, once you give them enough tools.
are there solutions to the error rates when picking from dozens or even hundreds of tools i'm not aware of?
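One common mitigation (the names and scoring below are my own toy illustration, not any specific product's API) is to retrieve a shortlist of candidate tools before the model picks, so it chooses among a handful instead of hundreds. A minimal sketch, using token overlap as a stand-in for real embedding similarity:

```python
def score(query: str, description: str) -> int:
    # Toy relevance score: number of shared lowercase tokens.
    q, d = set(query.lower().split()), set(description.lower().split())
    return len(q & d)

def shortlist(query: str, tools: dict, k: int = 3) -> list:
    # Rank tool descriptions against the query and keep only the top k,
    # so the model's final choice is over a small candidate set.
    ranked = sorted(tools, key=lambda name: score(query, tools[name]), reverse=True)
    return ranked[:k]

# Hypothetical tool registry for illustration.
tools = {
    "set_timer": "set a countdown timer",
    "play_music": "play a song or album by an artist",
    "weather": "get the current weather forecast",
    "lights": "turn the lights on or off",
}
```

With a real system you'd swap the token overlap for embedding similarity, but the narrowing step is the same: `shortlist("play the album abbey road", tools, k=2)` ranks `play_music` first, and only that short list is handed to the model.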
Casuals are in there--nontechnical folks for whom "brick breaker deluxe wants to access your contacts" might raise an eyebrow. The stalked are in there--malicious apps that track location of the install-ee are unfortunately not uncommon. The one-device-for-multiple-lives folks are in there (if your work email/contacts are on your phone, it's a good thing that your dating app has to ask permission before acquiring your phone's contacts). So are the forgetful--that periodic "hey, this app has had permissions for ages, do you still want it to have that access?" thing not only helps folks clean up their perms, it reminds lots of folks about services they forgot they paid (or worse, forgot they are still paying) for.
It’s such trash. Constant conditioning for garbage.
Timers and alarm clocks it is.
If it doesn't know where you are then you might live in a Faraday cage.
I get the sci-fi "wow" appeal, but even the folks who tried to build Minority Report-style 3D interfaces gave up after realizing tired arms make for annoyed users.
You know, I have talked to ChatGPT for maybe 100 hours over the past 6 months. It gets my accent, it switches languages, it humors me. It understands what I am saying even if it hallucinates once in a while.
If you can get ChatGPT's level of comprehension, you can do a lot with computers. Maybe not vim-level editing, but every single function in a car you're driving should be controllable by voice, and so should a lot of phone and computer functions.
Not to mention the likely need for continuous internet connectivity and service upkeep. Car companies aren't exactly known for good software governance.
I think well-done voice commands are a great addition to a car, especially for rentals. When figuring out how to do something in a new car, I have to choose between safety, interruption (stopping briefly), or not getting my desired function changed.
Most basic functions can be voice-controlled without Internet connectivity. You should only need that for conversational topics, not for controlling car functions.
I don't own a car, but I rent one occasionally on vacation. In every one I've rented that I can remember since they started having the big touchscreens that connect with your phone, the voice button on the steering wheel would just launch Siri (on CarPlay), which seems optimal: just have the phone software deal with it, because the car companies are bad at software.
It seems to work fine for changing music when there's no passenger to do that, subject only to the usual limitations of Siri sucking—but I don't expect a car company to do better, and honestly the worst case I can remember with music is that it played the title track of an album rather than the album, which is admittedly ambiguous. Now I just say explicitly "play the album 'foo' by 'bar' on Spotify" and it works. It's definitely a lot safer than fumbling around with the touchscreen (and Spotify's CarPlay app is very limited for browsing anyways, for safety I assume, but then my partner can't browse music either, which would be fine) or trying to juggle CDs back in the day.
Just that... nobody is willing to pay much for a thing that will do some basic search, dictate a recipe, or do unit conversion, or add a thing to a list.
> Alexa, turn on the bedroom lights.
> OK lights turn on
In the evening:
> Alexa, turn on the bedroom lights.
> I'm sorry, I don't know a device called "bedroom lights".
How is it even possible to build a computer system that behaves like this?
Also quite good for making shopping lists, with some bonus amusement when you get a weird transcription and have to try to work out that "cats and soup" is "tonkatsu sauce" several days after you added it to the list.
1. Checking the current temp or weather
2. Setting an alarm, timer, or reminder
3. Skipping a music track or stopping the music altogether roughly 3 seconds after hearing the command, or 1 second after you assume it didn't work
<end of list>
For example, I say "play comically long album title by artist on Spotify"; it thinks about that for five seconds, does the bing noise, then says "playing comically long album title [special remastered edition] by artist on Spotify", and then a few seconds later starts playing the album. And if you don't wait through that whole thing, it will just decide that actually you didn't want to hear the album.
It is ridiculously useless for most things though. Like I’ll ask it a question on my Apple Watch and it will do a web search and give me a bunch of useless links.
Home Assistant can even share devices with Home app, so you can still use "Siri, turn off the lamp" to have it answer "you don't have any alarms set".
A whole bunch of assistants have gotten way worse in the last decade by chasing features at the expense of utility. I don't care about whatever new feature my speaker has, but if it fails to play a song or check the weather, I'm PISSED.
I was excited when I recently got an iPhone 16 Pro - it comes with Apple Intelligence! Surely this is how Siri leaps into the future and starts doing things like translating for me, or responding with a photo and some basic facts when I ask who Ariana Greenblatt is, or letting me convert from crore to USD (it seems to give results for rupees every time), or...
Anyways, I asked it something basic, and Siri said it would have to use Apple Intelligence. Not like, prompting me if I want to use it, just saying it's needed, then turning off. I'm pretty confused as to what Apple Intelligence is at this point, since I assumed it would be Siri. "Hey Apple Intelligence" doesn't do anything, so I ask ChatGPT. It informs me that AI is, in fact, part of Siri. I... do not know why it gave me that response.
Back to timers and alarms.
Edit - this is your daily reminder that you can NO LONGER SHUT OFF IPHONES BY HOLDING DOWN THE POWER BUTTON.
It used to be able to set a timer or alarm 100% of the time; now it sometimes decides it needs to ask ChatGPT for help.
What is it about Siri’s architecture that causes “Set bedroom light to 30%”, a command that worked for years, to randomly stop working on a random Tuesday with no OS update or home change?
I mean, what on earth are they doing on the back end…?
Things that seemed to work reliably for me 10 years ago but now do not:
1. "Call mom". Siri has apparently forgotten who my mother is. I tried "Hey Siri <name> is my mother" and I got an error. I'm sure it's resolvable but come on.
2. "Directions to <destination>" This always used to fail when it couldn't find places, but lately, when I'm driving, Siri will respond "Getting directions to <destination>" and then... nothing. No directions come up. I have to do it 2-3 times to have the directions actually start.
Interesting that you've also had that problem.
A feature I would love is to toggle "answer calls on speakerphone" based on location, so that I can answer a call with my phone on the desk while I'm at home and not have my ear blasted off taking a call when I'm walking down the street.
Edit: to be clear, Siri doesn’t. Still no reason it shouldn’t be able to.
When it works!
I’ve had days where it goes wonky and says something went wrong for anything I ask. How is it that with modern phones the voice recognition and whatnot isn’t running locally?
Then one time, in a job interview of all things, (I'm a PM and was asked for an example of a product I liked or didn't like) I went into this spiel, and what do you know, Siri texted the photo to my wife. :-)
So somewhere along the way they did add that feature, and apparently I didn't realize I had missed checking for it.
When discussing a Jeopardy answer with my wife, I said "Hey Siri, who was Pol Pot". Siri said, "OK, calling Scott". So it woke up my friend at 1am.
And if I hear another "I found this on the web", I'm going to scream.
Siri is so bad it makes me want to go back to a pixel.
"Hey Siri, what city am I in?"
"Calling Ian"
"Siri, play some music"
"Sorry you will need to unlock your phone to do that"
"Siri, play some music"
<music starts playing>
I'm not saying that's the way to solve the problem, but I refuse to believe that it has to be as bad as it is. The worst part is that it's still better than the alternative of leaving the iOS ecosystem.
Apple is just so bizarre in general. I would say "nowadays" but I think they have always been like this. It took them how many decades to add a unit converter to the iPhone? And after all that time, they buried it in a menu in the Calculator app?
People increasingly seem to forgo the idea of retaining the data for themselves because they find AI products so fascinating / useful that they're just not caring, at least for the moment. I think this might swing back in the favor of Apple at one point, but right now it is kind of fascinating how liberally people throw everything at hosted AI models.
Not to mention the iOS keyboard has gotten so bad in the last year that it took me 3x longer to type this comment (I use the swipe keyboard). I had to fix at least a dozen typos.
Every now and then when they screw up, they’ll have a mea culpa with the press. They haven’t done that with Siri or the keyboard yet.
Anything you ask an Android device or an Alexa device to do goes to their clouds to be 100% processed there.
Apple tried to make a small and focused interface that could do a limited set of things on device without going to the cloud to do it.
This was built around the idea of "Intents" and it only did the standard intents... and app developers were supposed to register and link into them.
https://developer.apple.com/documentation/intents
Some of the things didn't really get fleshed out, and some are "oh, that's in there?" (Restaurant reservations? Ride booking?) and feel more like the half-baked MySQL interfaces in PHP.
However, as part of that privacy focus, you can create a note (and dictate it) with Siri without a data connection. Your "start workout" command doesn't leave your device.
Part of that is privacy. Part of that is that Apple was trying to minimize its cloud spend (on GCP or AWS) by keeping as much of that activity on device as possible. It wasn't entirely on device, but a lot more of it was than on Android... and Alexa is a speaker and microphone hooked up to AWS.
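To make that architecture concrete, here is a toy sketch of an intent registry in the spirit of that design: trigger phrases map to on-device handlers, with no network round trip. All names and phrases here are my own illustration, not Apple's actual SiriKit API:

```python
HANDLERS = {}

def intent(*phrases):
    # Decorator registering a handler for one or more trigger phrases,
    # loosely mimicking how apps declare the intents they can handle.
    def register(fn):
        for phrase in phrases:
            HANDLERS[phrase] = fn
        return fn
    return register

@intent("start workout", "begin workout")
def start_workout() -> str:
    return "workout started"

@intent("create note")
def create_note() -> str:
    return "note created"

def handle(utterance: str) -> str:
    # Exact-phrase matching only; a real assistant layers speech
    # recognition and NLU on top, but the dispatch stays local.
    fn = HANDLERS.get(utterance.lower().strip())
    return fn() if fn else "not understood"
```

The upside of this design is exactly what's described above: `handle("start workout")` never leaves the device. The downside is also visible: anything outside the registered set falls straight to "not understood".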
This was ok, kind of meh, but ok pre-ChatGPT. With ChatGPT the expectations changed and the architecture that Apple had was not something that could pivot to meeting those expectations.
https://en.wikipedia.org/wiki/Apple_Intelligence
> Apple first implemented artificial intelligence features in its products with the release of Siri in the iPhone 4S in 2011.
> ...
> The rapid development of generative artificial intelligence and the release of ChatGPT in late 2022 reportedly blindsided Apple executives and forced the company to refocus its efforts on AI.
ChatGPT was as much a blindside to Apple as the iPhone was to Blackberry.
1. Apple is big enough that it needs to take care of edge cases like offline & limited cell reception, which affect millions in any given moment.
2. Launching a major UI feature (Siri) that people will come to rely on requires offline operation for common operations like basic device operations and dictation. Major UI features shouldn't cease to function when they enter bad reception zones.
3. Apple builds devices with great CPUs, which allows them to pursue a strategy of using edge compute to reduce spend.
4. A consequence of building products with good offline support is they are more private.
5. Apple didn't even build a full set of intents for most of their apps, hence 'remind me at this location' doesn't even work. App developers haven't either, because ...
6. Siri (both the local version and remote service) isn't very good, and regularly misunderstands or fails at basic comprehension tasks that do not even require user data to be understood or relayed back to devices to execute.
I don't buy that privacy is somehow an impediment to #5 or #6. It's only an issue when user data is involved, and Apple has been investing in techniques like differential privacy to get around those limitations to some extent. But that is further downstream from #5 and #6.
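For a sense of how differential privacy lets a vendor learn aggregates without trusting individual reports, here is the classic randomized-response mechanism, a simple flavor of local DP. This is a textbook sketch, not Apple's actual implementation:

```python
import random

def randomized_response(value: bool, p_truth: float, rng: random.Random) -> bool:
    # With probability p_truth, report honestly; otherwise report a fair
    # coin flip. Any single report is therefore deniable.
    if rng.random() < p_truth:
        return value
    return rng.random() < 0.5

def estimate_rate(responses, p_truth: float) -> float:
    # Invert the known bias: E[reported] = p_truth * true_rate + (1 - p_truth) * 0.5
    reported = sum(responses) / len(responses)
    return (reported - (1 - p_truth) * 0.5) / p_truth
```

With enough responses, the aggregate estimate converges on the true rate even though no individual answer can be trusted, which is the whole point.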
Are you referring to https://security.apple.com/blog/private-cloud-compute/?
The only way that AI will ever be able to replace each of us is if it gathers our entire audio, text, etc. history. PCC seemed like the only viable option for a pro-AI yet pro-privacy person such as myself. I thought PCC was one of the most thoughtful things I had ever seen a FAANG create. Seriously, whoever pushed that should get some kind of medal.
Are you saying that there is no technical solution for privacy and AI to coexist? Not only that, but that was the blocker?
I am genuinely interested if anyone can provide a technical answer.
They should actually make something useful first, and then work backwards to making it private before releasing it.
If Apple takes the position that the UX has to fit in around the privacy requirements, so what? Privacy is a core pillar of their product identity—a built-in hallucinating compliments machine isn't.
If you've never been to China, you need to look no further than the streets to understand this (cameras everywhere, social credit system, etc.)
What does seem slightly odd is that Apple has probably saved billions by failing to be dragged into the current model war.
Not the first to bring mp3 players to the market, nor phones, nor tablets. Market leader every time.
They could have just stayed in a corner talking about privacy, offer a solid experience while everything else drowns in slop, researched UX for llms and come 5 years later with a killer product.
I don't get why they went for the rush. It's not like AI is killing their hardware sales either.
For one thing, the iPad (market-leading tablet) and the iPhone (market-leading pocket touchscreen device) were not their first attempt at doing that. That would be the Newton, which was an actual launched product and a commercial failure.
For another thing, even Apple can't just become the market leader by doing nothing. They need to enter late with a good product, and having a good product takes R&D, which takes time. With MP3 players, smartphones, and tablets, they didn't exactly wait until the industry was burnt through before they came in with their offering; they were just later (with the successful product) than some other people who did it worse. They were still doing R&D during those years when they were "waiting."
Apple could still "show up late" to AI in a few more years or a decade, using their current toe-dipping to inform something better, and it would still fit into the picture you have of how they "should've done it." Not to mention, Apple's also lost its way before with things like convoluted product lines (a dozen models of everything) and experimental products (the Newton then, Apple Vision now); messing up for a while also isn't exactly bucking history.
Most of their current products seem to be decaying in the death march towards the next yearly release. UX and UI are becoming more and more poorly thought out (see their latest design language). They half-pursue ideas and don’t manage to deliver (VR, the Apple car, etc.).
I see cargo culting and fad chasing like any average leadership, only with a fatter stack of cash supporting the endeavour.
They already lost this superpower in the EU and I think Japan, India, Brazil too. Early next year they've got their US antitrust trial, and later in the year are some class actions challenging their control over app distribution, and at least two pieces of draft legislation are circulating that would require allowing competing apps to be defaults.
If they need another two years they might face an entrenched and perhaps even better competitor, while their own app needs to be downloaded from the App Store.
I see Apple dusting off its OG playbook.
We're in the minicomputing era of AI. If scaling continues to bear fruit, we'll stay there for some time. Potentially indefinitely. If, however, scaling plateaus, miniaturisation retakes precedence. At that point, Apple's hardware (and Google's mindshare) incumbency gains precedence.
In the meantime, Apple builds devices and writes the OS that commands how the richest consumers on Earth store and transmit their data. That gives them a default seat at every AI table, whether they bother to show up or not.
Apple also doesn't have actual privacy since their focus was using the word strategically against their competitors, not actually protecting user data.
> Subramanya, who Apple describes as a "renowned AI researcher," spent 16 years at Google, where he was head of engineering for Gemini. He left Google earlier this year for Microsoft. In a press release, Apple said that Subramanya will report to Craig Federighi and will "be leading critical areas, including Apple Foundation Models, ML research, and AI Safety and Evaluation."
I don't see how Google + Copilot mindset even touches on privacy. I wouldn't be surprised if we users will be forced to pay even more personal data in the near future.
big company bad, or do you have examples?
From a technology or engineering perspective, I have no idea how to work with Apple.
And for audio production - I'm just a dabbler, really, but I've been able to do some really impressive things with just GarageBand and a Fender Mustang Micro amp-plug over USB-C. It "just works" unlike my experience on Linux recently, where there are lots of little bits that are genius, but I couldn't manage to figure out how to get a basic midi synth working with a DAW that had a UI that was designed for humans. (Jack is amazing, though - being able to do arbitrary audio filter chains with random pieces of software is seriously cool.)
I continue to be impressed by ChromeOS. With the Linux development environment (Debian VM), it is a brilliant work environment.
Add Android apps as well and ChromeOS is an awesome convergence platform. There are Chromebooks that are high enough quality that I don't miss anything about Apple OS or Microsoft Windows.
> And for audio production
For specialty use-cases, driver support will favour Windows and Apple OS.
And gaming is still Windows-first, although Linux is improving.
- Call "person"
- Call "business" (please don't say "I don't see so and so in your contacts" and on a second try, work)
- Find "place" (while driving)
- Define "phrase or word" (please don't say "I found this on the web")
- Set a timer or alarm
- Check the messages (in a sane way)
- Set reminders (this one surprisingly works well)
- Use intents correctly (I just want to be able to say "play 99% invisible in Overcast")
It doesn't need to do all the fancy things they showcased last year. It just needs to do the basics really well and build from there.
But Android has been 100% accurate for simple commands for over a decade for me. Things like:
- "weather." tells me forecast of where I am.
- "alarm at 8am" and "alarm in 30 minutes" works as expected.
- calendar commands also work
My favourite is "go home" which opens Google map with a route to home.
These things just work. I don't recall last time I had to repeat myself.
Failing to find any way to get the alarm thing back, I turned off the entire assistant thing.
E.g., I can talk to it like I would ChatGPT and it works well. But I can't be like "hey, I want to get dinner with my wife on our anniversary, please book the best available option in my city for fine dining".
It's still way better than Siri, which feels like a voice CLI to me (same as Alexa, which is very low quality IME)
Edit: why in gods name are people downvoting me for politely asking about someone’s differing experience?
Not true for me at all, it fails at the most basic tasks, sometimes even at tasks it has done before. Three examples:
- "Timer 5 minutes" -> Loading spinner is shown. Siri disappears after a few seconds. No error, no confirmation. I then have to manually check if the timer was set or not (it was not).
- "Turn on the lights in the living room" to which it responds "Sorry, I cannot do that". I have Phillips Hue lights that are connected to Apple Home, of course Siri can do that. It did that before.
- "Add toothpaste to my shopping list". The shopping list is a list I have in Reminders. It then tries to search for the query on Google. I then tried "Add toothpaste to the list shopping list in Reminders", which worked, but if I have to be this wordy, it is no longer convenient.
There are many more simple cases in which Siri always / sometimes fails. I also have the feeling that it performs far worse if asked in my native language (German) than in English.
"Here is what I found about "The Dragonborn comes at 25" on the Internet" opens Safari
I pretty much only use it when I can’t look at the phone so I’m not sure if it’s still there.
I can think of one time recently where, no matter how I prompted it to play an album (decades old, but probably triple platinum), it kept playing some Cardi B song with the band's name in the title instead... but that's probably like a 1-in-2000-requests problem. Maybe it's a genre thing?
But mainly I wanted to share that video because Craig Federighi calling the AI/ML team "AI/MLess" is one heck of a burn
From TFA
Some of the teams that Giannandrea oversaw will move to Sabih Khan and Eddy Cue, such as AI Infrastructure and Search and Knowledge. Khan is Apple's new Chief Operating Officer who took over for Jeff Williams earlier this year ... Apple CEO Tim Cook thanked Giannandrea ...
Seems like Khan is preparing the mothership for when he eventually assumes the CEO role from Cook.

Nah, IS&T has always been under the CFO, and apparently some fraction of AIML is headed under them.
The various interviews I've watched of his (and some of the leaked news) shows he's still quite deep in the tools.
1980s - silicon graphics / general magic
1990s - chief technologist, netscape
early 2000s - CTO Tellme (speech recognition)
late 2000s - CTO Metaweb (knowledge graph) -> acquired into Google
2010s - Google head of Machine Intelligence, Search, Gmail Smart Reply, etc, then took over Google Search and ML driven ranking (BERT)
2018 -> SVP ML/AI Apple to merge Siri/Core ML/all AI offerings under one roof
2023-2025 - led Apple Intelligence push
March 2025 - removed as head of Siri
Dec 2025 - retirement
would love to do an exit interview with him on the last 4 decades in building ai assistants!
-- https://x.com/markgurman/status/1995617560373706942?s=20
CV of his successor Amar Subramanya: 16 years at GDM, head of eng for the Gemini chatbot/training. Joined Microsoft in July this year... and now poached to Apple. lmao.
Now I'm weighing more on the Apple side for not making it better.
Never heard of Tellme, but it sounds impressive on a resume.
Metaweb was a good open-source fact database which subsequently got walled off once Google bought it.
Google Search works significantly worse now than it did under Amit, and I say that as both a user and a websearch Xoogler. (JG took over about a year after I left Google).
Siri is the subject of this article.
"Who is speaking?"
The same person who has been speaking the last hundred times, dammit!
All of that to realize Siri was kind of boring. Funny thing is it’s been over a decade and it’s maybe 20% better than it was at launch. MAYBE.
I don’t want to blame this one guy for all of that, but part of me can’t help but point at the leader and ask “you couldn’t have done better than… that?”
While Eddy Cue seems to be Apple's SaaS man, I can't say I'm confident that separating AI development and implementation is a good idea, or that Apple's implementation will not fall outside UI timeframes, given their other availability issues.
Unstated really is how good local models will be as an alternative to SaaS. That's been the gambit, and perhaps the prospective hardware-engineering CEO signals some breakthrough in the pipeline.
https://news.ycombinator.com/item?id=43436174
An organization of Apple's size doesn't fail due to the mistakes of a single person, unless that person is the CEO.
Haven’t found anything else it’s useful for.
And then there's my Exchange business plan telling me over and over, in a giant popup screen, that Copilot is here - and then saying it's not available yet.
Are there new functions I don't know about? I... can't think of anything else they'd add, but I literally do not understand what their engineers and managers working on Siri were doing on a daily basis. They must have been writing some code at some point. Did it just never launch? Am I simply ignorant?
Now, it'll show a loading indicator for 5-6 seconds and then do nothing at all... or do something entirely unrelated to my request (e.g. responding to "hey siri, how much is fourteen kilograms in pounds" by playing a song from my music library).
My personal favourite is Siri responding to a request to open the garage door, a request it had successfully fielded hundreds of times before, by placing a call to the Tanzanian embassy. (I've never been to Tanzania. If I have a connection to it, it's unknown to me. The best I can come up with is Zanzibar sort of sounds like garage door.)
This could be personalized, 'does this user do this kind of thing?' which checks history of user actions for anything similar. Or it could be generic, 'is this the type of thing a typical user does?'
In both cases, if it's unfamiliar you have a few options: try to interpret it again (maybe with a better model), raise a prompt with the user ('do you want to do x?'), or if it's highly unfamiliar, auto cancel the command and say sorry.
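A toy sketch of that triage idea, with made-up thresholds and a token-overlap similarity standing in for a real model:

```python
def similarity(a: str, b: str) -> float:
    # Jaccard overlap of lowercase tokens; a toy stand-in for a real
    # similarity model over the user's command history.
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def triage(command: str, history: list,
           execute_at: float = 0.5, confirm_at: float = 0.2) -> str:
    # Compare the parsed command against past commands and decide whether
    # to run it, ask the user first, or auto-cancel with an apology.
    best = max((similarity(command, past) for past in history), default=0.0)
    if best >= execute_at:
        return "execute"
    if best >= confirm_at:
        return "confirm"
    return "cancel"
```

Against a history of garage-door commands, "open the garage door" would execute outright, while something as out-of-character as "call the Tanzanian embassy" would fall below both thresholds and get cancelled rather than dialed.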
Problem is, Siri is already damaging Apple's reputation with how useless it is..
It's been like this for many versions, but I dunno if it does it with the latest.
Just checked I’m on iOS 18.2.
It could potentially help tremendously but for that they would need to understand the usefulness of LLMs and tool usage.
I hope they don’t do anything remotely like that at Apple.
I am completely okay with the Apple approach to date (privacy and late mover cost advantage over progress and burning money/raising prices).
At this point, their investment to ship a better Siri is nearly zero if they take an open source model and run it on the device. Did John really mishandle it, or did he realise this and decide not to burn $BILS of cash and play the long game instead?
I worked pretty closely with him and his team for a bit at Google, and he seemed like a great human being, in addition to being a great engineer. I wouldn't read too much into a few-month stint at Microsoft.
Perhaps you are not getting it rammed down your throat because you’re not a business user? On personal editions one area where AI has been a failure is taking over the search bar, but you’re right, you can disable it.
We'd have working voice assistants by now. We're held up by the incessant need to game "engagement" and seek rent.
In reality users just want a goddamn voice interface to their phone. Set a timer, remind me of x next time I'm at location y. Turn on the lights. Set home air conditioning to 72.
Simple, trivial bullshit that has absolutely no monetizable worth. Because it's not profitable enough it's not worth developing at all. I'm half convinced the only reason siri and google assistant even still exist is solely and exclusively because the "other guy" has it.
People argue innovation is impossible without capitalism. I argue innovation is impossible with capitalism. If your idea isn't profitable enough it's not worth any amount of investment regardless of how beneficial the idea might be.
Then LLMs came and it still wasn't "real enough."
Honestly he’s had one hell of a career. Even if Siri sucked.
Still. Idle hands, he should get back on that horse if he can. Go do more stuff.
What did he even do for Apple's AI strategy for 7 years?
Apple is still far behind in doing anything useful with AI.
[Now I remember- he came in with the metaweb acquisition at a time when search was going all-in on knowledge graphs and semantics]