The ideal implementation of AI for Apple is probably to finally make Siri work. This isn’t necessarily fancy: just let me set some calendar events without knowing the magic words, or tell it to open Overcast and play the new Gastropod episode. Better yet, for power users, let me set up reusable shortcuts using natural language.
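Concretely, the "shortcuts from natural language" idea is, at its core, intent matching. A toy sketch in Python (the intent names, patterns, and `parse_command` helper are all invented for illustration, not any real Siri or Shortcuts API):

```python
import re

# Hypothetical intent table: regex -> intent name. A real assistant
# would use an LLM or a trained classifier, not hand-written patterns.
INTENT_PATTERNS = [
    (r"open (?P<app>[\w\s]+?) and play (?P<item>.+)", "play_in_app"),
    (r"set (?:a )?calendar event (?:for )?(?P<event>.+)", "create_event"),
]

def parse_command(utterance: str) -> dict:
    """Match a free-form utterance to an intent without 'magic words'."""
    text = utterance.lower().strip()
    for pattern, intent in INTENT_PATTERNS:
        m = re.search(pattern, text)
        if m:
            return {"intent": intent, **m.groupdict()}
    return {"intent": "unknown"}

print(parse_command("Open Overcast and play the new Gastropod episode"))
```

The point of the sketch is the shape of the problem: extract a structured intent plus slots from loose phrasing, then hand it to the app. The hard part Siri keeps failing at is the "loose phrasing" half.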
The most important part of this is it doesn’t necessarily feel like AI. The user does not like AI for its own sake or the weirdos who ramble about putting them into a permanent underclass. The user likes messaging their friends and playing music.
Too much of this hype cycle has no user in mind.
Isn’t this the proverbial “faster horse”? I.e., let me do exactly what I can do now, in a very slightly different, possibly very slightly more convenient way?
If the user asks for a faster horse and you sell them a trebuchet, you lose, no matter how fast the trebuchet would technically get them to their destination.
This isn’t unprecedented; it’s what happened in the dotcom bubble as well. But then that tech started getting used properly too. So I think it’s a matter of time before Claude Code levels of value are available to normal users.
Please elaborate
Wouldn't the simplest solution be to auction off Siri's back end the way Apple does Safari's search bar in iOS?
I’m picturing a combination of on-board facilities and online services from the Apple cloud that Apple product owners could use to flag and filter LLM slop. As a value-added proposition, iPhone users who read HN or use TikTok would see clear UI-level indications of when they’re interacting with slop, with options to kill it.
In my estimation it would provide platform benefits without losing capabilities, leverage Apple’s hardware-not-advertising positioning, fix critical issues of spam and scams, and let them market a higher calibre of online experience. Also, they could un-eff Siri: “play album X starting at track Y”, come on, it’s 2026.
"You have to work backwards from the customer experience."
AI was never going to be on Apple's roadmap in a significant way because it's in their DNA to differentiate technology from products.
I remember the first meeting I went to at another company that was just a guy talking over a PowerPoint. I couldn’t believe we didn’t have the data or time to ask probing questions. We’re just supposed to take this guy at his word? Crazy.
All the major AI companies are trying to manufacture their own ecosystems to become less disposable. They'll get away with it for a while, but only insofar as hardware prevents advanced use. Once we get that hardware[1] there will only be two types of AI companies: hardware manufacturers, and labs. Just like sync became trivial and ancillary, so will AI inference.
I agree with Gruber's take, if the seller is Apple.
We are in the midst of a paradigm shift, and the perspective in the Daring Fireball post aligns exactly with this author’s perspective:
https://rebecca-powell.com/posts/return-on-intelligence-01-e...
It may well be that the user interface of your "phone", and how you use it, changes over time as we progress toward AGI, but as long as Apple keeps to the Jobs aesthetic of making well-designed products that get out of the way and just "do the thing", they should be fine. Of course Apple will eventually fall, as all companies do, but I don't think the reason will be that the "phone" market was rendered obsolete by AI.
Perhaps if phones become more of a "pocket assistant" than a device to run discrete apps, then they will become harder to differentiate based on software, and more of a generic item rather than a status/luxury one ... who knows? Anyone else have any theories about how Apple may eventually fall?
There is one potential AI risk to Apple: that they are at a disadvantage from not having their own frontier models and datacenters to run them on. But I think there will always be someone willing to sell them API access, and they will adapt as needed. Good-enough AI is only going to get cheaper to train and serve, and Apple not trying to compete in this area may well turn out to have been a great decision, just as Microsoft seems to be doing fine letting OpenAI take all the risk.
It's not going away in the next few years. Which means Apple doesn't have to rush to release an AI product for the sake of it à la Giannandrea.
That’s the thing; the LLM itself - the chat window - can’t be the whole product for an industry. It’s a technology that you build things with.
Today I wanted to book a public transport ticket in Germany, but it was simply too hard to keep copy-pasting screenshots from the app to ChatGPT. This seems like a very easy problem to solve and standardise at the OS level, but no one seems to want to do it.
I agree it's not a totally different "product", but it does require some thought. Apple can't sleep on this.
To me there are cool things, but nothing so great that I’d cry if LLMs were deleted. By contrast, mRNA vaccines, gene therapy, and CRISPR seem more impactful in reality, just to mention things from 2020.
But you specified America, so I guess no.
A funny story that happened the other day: A friend knew he had to be at dinner at a place across town but he forgot why he had to be at that dinner. While we were waiting for his rideshare to come, he was flipping through every kind of app trying to reconstruct the original context for his appointment.
In theory, this is where AI should shine. He should have been able to say "Hey Siri, pull up all of the info that references tonight's dinner appointment" and AI should be the unified interface into a bunch of app-specific data pools.
But of course he never in a million years would have thought of using Siri to do that, because of how bad Siri is.
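The "unified interface into app-specific data pools" idea is, mechanically, just federated search. A toy sketch in Python (the apps, records, and keyword matching are all made up for illustration; a real assistant would query each app's indexed data, not a dict):

```python
# Hypothetical per-app data pools the assistant could search across.
APP_DATA = {
    "calendar": [
        {"text": "Dinner at Luigi's, 7pm, across town", "date": "tonight"},
    ],
    "messages": [
        {"text": "Don't forget: dinner is for Ana's promotion!", "date": "monday"},
        {"text": "Lunch tomorrow?", "date": "today"},
    ],
    "email": [
        {"text": "Dinner reservation confirmed: Luigi's, party of 6", "date": "monday"},
    ],
}

def unified_search(query: str) -> list[tuple[str, str]]:
    """Return (app, record_text) pairs whose text mentions a query keyword."""
    keywords = {w for w in query.lower().split() if len(w) > 3}
    hits = []
    for app, records in APP_DATA.items():
        for rec in records:
            if any(k in rec["text"].lower() for k in keywords):
                hits.append((app, rec["text"]))
    return hits

for app, text in unified_search("dinner appointment tonight"):
    print(f"[{app}] {text}")
```

One query fans out across the calendar entry, the message explaining why, and the reservation email, which is exactly the context reconstruction my friend was doing by hand, app by app.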
Even considering that it’s sometimes wrong or hallucinating, it’s doing an important job by beginning to eliminate gatekeeping, be it centered on cost or access.
Coding adjacent, but my small town's small businesses have all dramatically improved their websites with LLMs. Folks who didn't have them before can now build them. Folks who had to rely on a web designer no longer have to.
Yes. Code looks intimidating if you aren't used to it (and don't have an IDE). And there are lots of steps between having a file of code and having a hosted website.
Internet:
- made communication possible; all the information diffusion was only possible because of the internet
- all sorts of small interactions and serendipitous communication through social media were due to the internet
- the computation and simulation required were possible because of the internet
Sometimes things make other things possible in subtle but real ways that are overdetermined. You can't articulate how AI will help a person materially in first-order effects. But it will.
>>technologies have built-in politics that stem from the political views and goals of the people building the technology.
First, it's not just technology that has built-in politics. It's everything; think of the t-shirts, cups, and hats sold at political rallies. Second, how does this even hold up in the context of AI? Who do you credit with building "AI"? Is it just the bunch of founders listed in the article? What about Geoffrey Hinton? What about Turing, or Shannon, or Leibniz?
The practical implementation is what leads to the autocratic and/or fascist-like tendencies. LLMs in their current state take massive amounts of money/compute/energy to make. Resources in those quantities are typically managed by corporations or governments. Corporations are not democracies. Corporations also have liability considerations they have to work around. And they have to do all this without pissing off the government they operate under too much. So yes, this is almost always going to lead to a situation that is not individual-friendly. The implementation ends up opinionated because it must. There are only a small number of implementations, and the company has much less freedom in what it outputs than the average 'open all the freedom gates' idiot thinks.
Really the only solution here, if possible, is hoping that we can train LLMs/AI with far fewer resources in the future. If so, that could lead to a proliferation of different models optimized for different purposes. But at the end of the day we must remember that all models are biased, and that includes human brains. Both AI and brains are a map, not the territory. We are defined by what we filter out.
i agree that specific implementations of a technology (claude, gemini, qwen) are never neutral, but any tech itself (llms as a concept) is neutral; you can implement it in any way you want. you can make an llm trained on diverse data, tuned for anti-fascist opinions, using solar power and recycled hardware to be carbon neutral. the reason nobody is really doing it is just good old wealth inequality. as long as only big corporations can afford to use and develop llms, or any other tech, it will be biased to benefit them. that's why it's so important to democratize it.
and for the open source part, the fact that it started as a libertarian movement doesn't mean it can't also be socialist. it's going against the capitalist norm of exclusive property rights (including ip) and profit at all costs. sharing the product of your labor with everyone for free is one of the biggest things you can do to help; it's like the online equivalent of putting food in the community fridge.
open llms let you fine-tune them to add the missing underrepresented perspectives. you can run them locally with zero climate impact. you can analyze them in depth to reveal biases the devs never noticed or don't want you to see. none of that is possible with closed source. the right thing to do is not to avoid using ai at all costs, but to do everything you can to make it good. your skills and hardware access are a privilege. use it.