Siri: ... "Here's what I found on the web for 'what is my ETA'"
From the outside I don't know the cause, but contrary to their usual reputation for tight integration between parts of their products, it seems like Siri is fundamentally broken in some organizational way.
It was one of the most surreal moments of my career. It was an emergency and the team was dialed in to saving this guy and the entire moment was just super "wtf!!" We all looked at each other and nobody really did anything. We all already hated Vocera with the fire of 10,000 suns and that dumbass moment was the ultimate example of how dumb it was. The MD just moved on and barked at the ER tech to go get x-ray, and to this day I am surprised he didn't rip the Vocera off his collar and throw it across the room or put it on the ground and stomp on it.
Vocera was always a buggy piece of crap that was the miserable bane of our existence. After the code (the guy didn't make it) one of the nurses was smoking outside on the benches and was like "I didn't even know we had a pizza truck."
The most awesome voice system I ever used was the Motorola push-to-talk walkie-talkies we used when I was a medic. Those things were bombproof and never lost signal, never f*cked up, just always worked; batteries lasted for weeks and were reliable af. Those things were the freaking BEST lol.
Thanks technology.
That's not even a command-interpretation issue, it's simply broken functionality between 100% stock Google software.
"Get directions to <restaurant in the city I live>"
"Getting directions to <restaurant with same name 800 miles away>"
Now they basically never work.
I want my location to be Singapore (Singapore SIM/card etc.), I want my UI to be in English, and I want Siri to speak Cantonese to me.
For reasons only Apple knows, I have to use English (Singapore) as both my UI and Siri language or Apple Intelligence will not turn on. As if the engineers who developed Siri/Apple Intelligence had never thought about the needs of those who speak more than one language.
(/s)
But what’s really lacking is a model for multiple people sharing a single computing experience in real life. Companion mode in Google Meet or Spotify Jam are two attempts but both still force you through the one user, one device path.
Two adults sitting in a car shouldn’t have to constantly think about whose phone is connected to CarPlay. Especially when they’re part of the same Apple “family” and on a Spotify family plan.
Two people seamlessly interacting with one “system” would break all sorts of auth and other assumptions, but it seems worth figuring out as computing becomes more and more prevalent in every facet of life.
Imagine harnessing the desktop devices within the home for a family-focused OpenClaw, Xgrid-style, rather than offloading to some server far away that is unaware of the general context of a household.
For the CarPlay use case at least some kind of ad hoc “party” entity that all the devices flow into might be interesting. I’m thinking about how with the original StarCraft game one disc had a license for up to 8 instances of the game to play via LAN so you could have a single license key allow a whole LAN party. Some system like that where the auth flows through a “primary” account but everyone in the “party” contributes their own entitlements to it and can provide input.
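A minimal sketch of what that "party" aggregation might look like, assuming a toy data model (the names `Member`, `Party`, and the entitlement strings are all hypothetical, not any real CarPlay or Spotify API): auth flows through a primary account, but every member's entitlements get pooled into the shared session.

```python
from dataclasses import dataclass, field

@dataclass
class Member:
    """One person in the ad hoc party, with their own entitlements."""
    name: str
    entitlements: set[str] = field(default_factory=set)

@dataclass
class Party:
    """Auth flows through the primary account, but everyone in the
    party contributes their own entitlements to the session."""
    primary: Member
    others: list[Member] = field(default_factory=list)

    def combined_entitlements(self) -> set[str]:
        pooled = set(self.primary.entitlements)
        for member in self.others:
            pooled |= member.entitlements
        return pooled

    def can_play(self, feature: str) -> bool:
        return feature in self.combined_entitlements()

driver = Member("driver", {"spotify_premium"})
passenger = Member("passenger", {"audiobooks"})
car_session = Party(primary=driver, others=[passenger])
print(car_session.can_play("audiobooks"))  # prints True
```

The hard parts the comment alludes to (revocation when someone leaves the party, and which account gets billed) are exactly what this toy model glosses over.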
This feels like an under-explored space. I want to allow my guests to use <tech thing> for a period of time; I don't want to have to worry about permanent breakages or security issues. What can I delegate and how?
And also living without it doesn’t really affect Apple’s bottom line. But yeah I wish I had an AI assistant in my iPhone which would text back my parents with what I’m doing today and reply to their needless updates I get since buying them smartphones.
Siri in general seems to be, for me at least, superfluous. The answer to most questions I ask is “I don’t know” or “I didn’t catch that” or “I can’t”. AI in general is still causing me major question marks, especially when it comes to the valuations right now on the stock market. This morning I was watching Bloomberg at the European open and noticed one of my stocks wasn’t really moving as usual, and the presenter then announced that the Nordic markets were closed today because of the Ascension Day public holiday. So I googled “is the Danish stock market open today?” and naturally Google’s AI was the top link, proudly announcing “Yes! The Danish market is open today, here are the hours yadda yadda”. I scrolled down and found the actual link to the market and it showed that, of course, the market is closed, it’s Ascension Day. So I asked the Google AI, “are you sure about that?” and it thought again and found out that “no, the Danish stock market is closed today. I apologise for telling you it was open without checking”. Honest to god, this is the tech that’s putting Nvidia at a $5.5 trillion valuation and keeping the market at all-time highs right now? A technology that makes even Google worse?
This is a very tricky one.
>> Know that my son has a test on Thursday and hasn’t opened the revision material since Monday. A gentle nudge, not a surveillance report
This feels like a surveillance report to me. The extent to which adults should surveil their children's devices is hotly disputed. There's one faction which thinks total surveillance should be mandatory (as a solution to the age verification problem or otherwise), and others which believe that children can and should have privacy (are you absolutely sure you should be monitoring your seventeen year old's conversation with their girlfriend?).
Not to mention that it's tracking a family member's interaction with a third party. We can pre-emptively assume the school knows about and approves of this one, at least.
> Track our medication schedule and ping people (or me, if someone misses a window) without turning into a clinical monitoring tool.
This feels like the sort of thing where you have weeks of meetings trying to work out whether HIPAA applies or not. It would definitely be valuable. It's also a problem if it's wrong, even if that's entirely down to user error. So people make do with the ad hoc version: general-purpose calendar entries.
(not to mention the period tracker use case: you want to be careful with technologies which provide an evidence trail that the government have announced they want to use against you)
I disagree; it's complicated enough that even humans using paper calendars and talking to each other over dinner sometimes get this stuff wrong. It seems like it would be a nightmare to actually implement, using AI or not, and given the high error rates people are reporting in other threads here, I would have no faith in Apple's or Google's ability to actually do this.
It will get it right most of the time, but sometimes it puts her as wildly younger than she is, and once it even said she wasn't born yet so I should prepare for xyz.
Siri is occasionally useful in the car if I'm by myself but mostly in the better than nothing sense.
I think it's fair to say that a technology Amazon was, at one point, going to fill a building in downtown Boston to further develop has been extremely underwhelming.
I don't think it is these PR issues that cause Apple such consternation, partly as -- even as someone who lives a personal life filled with such corner cases -- I just don't think those are complex issues to solve, but mostly as Apple never seems to put much thought into corner cases like this anywhere else in their business, even when it doesn't butt up against the skewed demographics of software developers (such as how Cydia had much better handling of independent developers and joint projects than Apple's App Store still does 15 years later, and the what ifs were often fascinating to handle).
In reality, the "what ifs" that Apple gets stuck on are lower level, and can even sometimes be spun in a sympathetic light: "what if a domestic abuser manages to automate so much of your software that they essentially have persistent spyware on your device" or "what if a user scripts something to the point where their phone doesn't work quite right and constantly needs tech support" or "what if people share so much of their content with someone else that they share private information without realizing"...
...but -- as is the case with their App Store restrictions that sometimes are reasonable but almost always are not -- the truth is their implied concerns are selfish: "what if a family only buys one iPad for their two or even three kids and we lose 10% of our hardware revenue" or "what if some college roommates declare themselves a family and start sharing purchases of movies and books" or "what if kids in high school (aka, 13+) can still agree to screen time limits they can't change and then don't spend as much time engaged with their phone".
It isn't just that Apple has merely not implemented some of the stuff in this article or doesn't understand what people want: instead, as their business model (like almost all big tech business models) is inherently extractive and even a bit exploitative, their need to optimize for profit is a tradeoff against what people want, so they go out of their way (in ways that are sometimes ridiculous, such as how payments work for family sharing) to make some of these use cases so broken that it forces and/or misleads their customers into spending more (and sometimes a lot more) money to work around the otherwise-arbitrary limitations.
The vision is to create an "operating system for your family" that delivers the right information to parents, via the channel they want it delivered through, at the right time. You can check it out at https://helloleto.com — I just launched a few weeks ago and am starting to onboard parents while I work to improve the onboarding.
The overall goal aligns: help make the lives of parents easier, using technology that works automatically behind the scenes. Would love to chat more / answer any questions that you (or anyone else) may have about this—we've got the technology to help parents and it's time to finally do it.
These sorts of things are exactly what hand-rolled setups à la OpenClaude are great for: the potential for insane privacy disaster is still there, but in that case you have no one to blame but yourself.
Large tech companies aren’t going to take that heat for features that aren’t really monetizable.
That means you've got a bunch of really neat features built into the operating system which in practice you can only use in one of the twenty times they'd be useful, because the other people you want to use them with aren't using an iPhone. I'm thinking here of things like AirDrop only working between Apple devices, or the support for sharing audio from your phone/TV with someone else who's also got a pair of Bluetooth headphones only working if those headphones have an Apple logo on them.
I'd consider building the system out as an MCP server rather than trying to bundle the agent with it. I had an AI build something out that is just a tasklist that works the way I think about tasks, which I've been using both personally and professionally. It's an MCP server only, which I can expose on the internet with OAuth. It has been surprisingly fun to use, because the AI can spontaneously interact with the information in ways I didn't program in. I have a recurring task with an AI to give me a dump of my current top tasks once a day to my phone.
Professionally, I'm working between a lot of different teams with their own Jira boards and I needed something to use myself to organize and prioritize tasks that can't be prioritized within one place in Jira. With the Atlassian MCP server hooked up to the same agent as my code it is fairly trivial to attach a Jira bug to a task and then prompt the AI to do whatever to the bug attached to this task. I put an explicit field for it in to the task definition but you don't even really need that, just putting the bug in the description is all that was really necessary.
The point I am trying to make here is, you don't even really have to "design" a product at this point. You just need to expose things to the AI so that when the user makes some vague statement about what they want to do it can convert that into concrete calls. The AI and the user will do things with it that you didn't even think of, and users can just add things by saying things in the descriptions of various tasks. I've mentioned how even if AI were to freeze today for the next 10 years we'd still be learning how to use AI and getting more out of it... this is I think a still under-explored application space.
Seems like there would be more of a market for it.
Although, I guess most software has "user" and "organization", and family kind of slots into the 2nd one. But most of that software isn't oriented to the needs of actual families.
Based on the author's list, what's needed is some kind of dashboard that integrates many different systems together. The 2000s were kind of moving in that direction, with different platforms being interoperable, and UIs being highly customizable.
I'm sorry to read that. Looks like it's good that Apple didn't build that yet.
I mean, that's not actually true.
It requires a shit tonne of context and also has a fucktonne of bad outcomes that people accept with ChatGPT but not with Apple products.
> Know that my son has a test on Thursday and hasn’t opened the revision material since Monday. A gentle nudge, not a surveillance report.
That requires two bits of context that are hard to find:
1) That there is an exam. Ideally it'll be in the calendar, but whose exam is it actually? The creator's, the invitee's, or the calendar owner's?
2) That certain actions on the web == revision. That requires knowing what the exam is about, what the official study material is, and, more importantly, cross-account access to web history.
> Track our medication schedule and ping people (or me, if someone misses a window) without turning into a clinical monitoring tool.
How do you non-intrusively test that medication has been taken? How do you know it's the right pills? How do you upload the prescription to do that? How do you handle power of attorney? How do you not get sued when people rely on it?
> Coordinate pickup times, grocery lists, meal plans–the sort of mundane family logistics that currently live in a group chat and three different apps.
Again, sharing raw data with a model to build context. How do you screen for privacy? How do you make sure that talking about private stuff (like a love interest, etc.) doesn't leak into other contexts?
> Better family e-mail, better event handling, better package tracking across household members.
Define better.
Look, as someone who worked on AR/AI assistant glasses: it's trivial to make something like that which works 80% of the time. You can't make it secure though, because it requires removing a bunch of privacy barriers that stop fraud and stop stuff leaking to third parties.
It's a really hard problem to crack to be accurate, private, and secure all at once. You can pick one, at best.
That's not a trivial thing to build. By what criteria should a show match what you've already seen if you watched Shrinking, Below Deck, and Silo this past month?
Things with boats? Jason Segel? Post-apocalyptic stuff?
It has a recommendation engine that is based on my watch history. It's not perfect. But it usually is helpful.
If this small indie team can build one, I'm sure apple could spin one up if they so chose. They just choose not to, because they have another agenda.
It’s very hard to do both things well and at Apple scale it’s nearly impossible.
This is what enabled us to win despite FindMy being launched a few years after us.
As a shameless plug I’m building a family AI team as a startup within our larger 600 person org.
https://chrishulls.medium.com/life360-is-building-a-family-a...
In my case, I have the ability to access a large, unrelated family group about 3000 miles away because my email address is similar to someone in the family.
That kind of fuck-up would be front page of the NY Times for Apple.
Siri isn't lame because of the lack of a frontier LLM. Siri is a massive failure to simply code it to do obvious things, which is a UX failure, which is ironic given Apple's reputation as the UX leader. I guess it's a low bar considering the competition from MS and Goog.
Over the last 6 years, I have fully bought into the ecosystem and it constantly disappoints me. I invite the UX team to spend a few days watching me struggle with their fragmented ecosystem. But I warn them not to let me get started on Apple TV (the streaming app), where the enshittification takes the crown over all of their competitors. They seem to have jumped the shark past the give-the-consumer-great-value stage.
No you don't. You want to gain trust with him and talk. And talk with the educational team.
"A gentle nudge, not a surveillance report."
That's exactly what it's going to end up being.
I’m also not sure how any of this can happen given that Apple seems intent on making their apps harder to use and less interested in the users’ preferences over time. They are running away from elegant solutions and simple just-works software.