150 points by ch_sm 4 hours ago | 13 comments
  • coffeefirst 2 hours ago
    Agreed.

    The ideal implementation of AI for Apple is probably to finally make Siri work. This isn’t necessarily fancy, just let me set some calendar events without knowing the magic words or tell it to open Overcast and play the new Gastropod episode. Better yet, for power users, let me set up reusable shortcuts using natural language.

    The most important part of this is it doesn’t necessarily feel like AI. The user does not like AI for its own sake or the weirdos who ramble about putting them into a permanent underclass. The user likes messaging their friends and playing music.

    Too much of this hype cycle has no user in mind.

    • tobr 22 minutes ago
      > This isn’t necessarily fancy, just let me set some calendar events without knowing the magic words or tell it to open Overcast and play the new Gastropod episode. Better yet, for power users, let me set up reusable shortcuts using natural language.

      Isn’t this the proverbial “faster horse”? I.e., let me do exactly what I can do now, in a very slightly different, possibly very slightly more convenient way?

      • kibwen 10 minutes ago
        If the user asks for a faster horse and you sell them a car, you win.

        If the user asks for a faster horse and you sell them a trebuchet, you lose, no matter how fast the trebuchet would technically get them to their destination.

      • whatshisface 15 minutes ago
        The whole point of AI is that if something different happens, it's not you doing it.
    • samrus 2 hours ago
      Absolutely agreed. It feels like tech companies forgot that they are supposed to add value to users. They've been shoving random AI use cases down their users' throats with no regard for whether it works for the user's flow or not, when there's so much value to be had from AI in normal products. Claude Code is the best at this right now, probably because the engineers themselves are users.

      This isn't unprecedented; it's what happened in the dotcom bubble as well. But that tech eventually started getting used properly too. So I think it's a matter of time before Claude Code levels of value are available to normal users.

      • new_account_100 23 minutes ago
        > When there's so much value to be had from AI in normal products.

        Please elaborate

        • MrDarcy 17 minutes ago
          Replace search for one.
    • JumpCrisscross 2 hours ago
      > ideal implementation of AI for Apple is probably to finally make Siri work

      Wouldn't the simplest solution be to auction off Siri's back end the way Apple does Safari's search bar in iOS?

      • danaris an hour ago
        No, because what Siri does needs to be tightly integrated in ways that search does not.
    • bonesss an hour ago
      I have a grander vision for an ideal Apple “AI”: anti-AI.

      I’m picturing a combination of on-board facilities and online services from the Apple cloud that Apple product holders could use to flag and filter LLM slop. As a value-added proposition, iPhone users who read HN or used TikTok would see clear UI-level indications of when they’re interacting with slop, with options to kill it.

      In my estimation it would provide platform benefits without losing capabilities, leverage Apple’s hardware (not advertising) positioning, fix critical issues of spam and scams, and let them market a higher calibre of online experience. Also, they could un-eff Siri: “play album X starting at track Y”, come on, it’s 2026.

    • WillAdams 2 hours ago
      The thing which kills me is a lot of this was working back in the Newton days.
  • rglover 3 hours ago
    Steve already gave away the secret [1] (must watch) a long time ago:

    "You have to work backwards from the customer experience."

    AI was never going to be on Apple's roadmap in a significant way because it's in their DNA to differentiate technology from products.

    [1] https://youtu.be/oeqPrUmVz-o?si=ndUU1H5D3pNifWss

    • otterley an hour ago
      "Working backwards" is also, famously, Amazon's philosophy. It's one of my most cherished takeaways from working there.
      • justonceokay an hour ago
        Having extracted the medicine from the poison, I am very glad that Amazon was my first corporate work experience. Many of the leadership principles and cultural norms there are actually very good ideas when not taken to extremes.

        I remember the first meeting I went to at another company: it was just a guy talking over a PowerPoint. I couldn’t believe we didn’t have the data or time to ask probing questions. We’re just supposed to take this guy at his word? Crazy

    • paulddraper an hour ago
      What was Siri?
    • micheletyson 3 hours ago
      [dead]
  • hresvelgr 3 hours ago
    This is a similar argument to "Dropbox is a feature, not a product" and it definitely rings true in this instance too. I remember the litany of applications that only supported sync through Dropbox. It had no ecosystem; its saving grace was that no one else was yet operating a similar service at that scale.

    All the major AI companies are trying to manufacture their own ecosystems to become less disposable. They'll get away with it for a while, but only insofar as hardware prevents advanced use. Once we get that hardware[1] there will only be two types of AI companies: hardware manufacturers, and labs. Just like sync became trivial and ancillary, so will AI inference.

    [1] https://taalas.com/the-path-to-ubiquitous-ai/

    • basch 2 hours ago
      And the differentiating factor on hardware will be the seamlessness of the interface, in software: the combination of voice, eye tracking, swiping, capture of intent, being able to mumble to myself at a volume only my device can hear. The hardware needs to be little more than something that gets out of the way and acts as an input device with a battery.
  • pizlonator 13 minutes ago
    AI seems to be a product if you're Anthropic (the seller) and any enterprise with a software team (the buyer).

    I agree with Gruber's take, if the seller is Apple.

  • junto 2 hours ago
    The answer as always in these situations is to zoom out.

    We are in the midst of a paradigm shift, and the perspective in the daring fireball post aligns exactly with this author’s perspective:

    https://rebecca-powell.com/posts/return-on-intelligence-01-e...

    • lioeters 28 minutes ago
      Really enjoyed that article, thanks for the link. I agree there can be a bubble and a genuine paradigm shift at the same time. We're going through our first wave of attempts, more or less wrong, but the general direction is right: the future will never be the same.
  • HarHarVeryFunny 3 hours ago
    I totally agree - the phone as a form factor is not going away. People are always going to want to have a mobile communicator/computer, and want one with a screen and all-day battery life. The phone is not going to be replaced by smart glasses or some other wearable or screen-less pocket device.

    It may well be that the user interface of your "phone", and how you use it, changes over time as we progress toward AGI, but as long as Apple keeps to the Jobs aesthetic of making well-designed products that get out of the way and just "do the thing", they should be fine. Of course Apple will eventually fall, as all companies do, but I don't think the reason for it will be that the "phone" market was rendered obsolete by AI.

    Perhaps if phones become more of a "pocket assistant" than a device to run discrete apps, then they will become harder to differentiate based on software, and more of a generic item rather than a status/luxury one ... who knows? Anyone else have any theories of how Apple may eventually fall?

    There is one potential AI risk to Apple, that they are at a disadvantage due to not having their own frontier models and datacenters to run them on, but I think there will always be someone willing to sell them API access, and they will adapt as needed. Good enough AI is only going to get cheaper to train and serve, and Apple not trying to compete in this area may well turn out to have been a great decision, just as Microsoft seem to be doing fine letting OpenAI take all the risk.

    • enos_feedler 43 minutes ago
      I think the vision of a pocket assistant versus discrete apps is very much Apple. Remember the original iPhone had no App Store. The App Store is kind of a pain to deal with. If I had to bet, this starts with Apple pivoting Swift Playgrounds into Playground and releasing it across all devices. The programming language becomes invisible. The live canvas is the document.
    • JumpCrisscross 2 hours ago
      > the phone as a form factor is not going away

      It's not going away in the next few years. Which means Apple doesn't have to rush to release an AI product for the sake of it à la Giannandrea.

      • SoftTalker an hour ago
        That's really the point of the article. As long as the phone is the (or at least a significant) conduit for our use of AI technology, Apple is in a good spot, and it's the same spot where they have historically done very well.
  • jmount an hour ago
    This is important to think through: does one have a product, tech, tool, or even just a feature? A given thing is not necessarily at the bottom of this stack, but also not always at the top.
    • ako 32 minutes ago
      Really depends on the company and who you're selling to. For a car company a tire is a feature, for other companies it's their product.
  • wiseowise 3 hours ago
    Anything is a product if you can sell it.
  • wslh an hour ago
    If capable humanoid robots are really closer than most people think, I'd be surprised if Apple isn't exploring them. That may be the counterexample to "AI is not a product": a physical AI product where hardware, sensors, UX, privacy, and integration matter as much as the model.
    • wolttam 40 minutes ago
      In that case the robot as a whole is the product and the model is just a part of the technology making it possible.

      That’s the thing; the LLM itself - the chat window - can’t be the whole product for an industry. It’s a technology that you build things with.

  • simianwords an hour ago
    Why is every consumer hardware company sleeping on AI? The best product is Openclaw and it is embarrassing.

    Today I wanted to book a public transport ticket in Germany, but it was simply too hard to keep copy-pasting screenshots from the app into ChatGPT. This seems like a very easy problem to solve and standardise at the OS level, but no one seems to want to do it.

    I agree it's not a totally different "product", but it does require some thought. Apple can't sleep on this.

  • amazingamazing 2 hours ago
    GPT-3.5 is nearly 4 years old. What’s a non-coding use case enabled by LLMs that materially improves the average person’s life? For the sake of conversation, let’s say the average person is some random person in middle America.

    To me there are cool things, but nothing so great that if LLMs were deleted I’d cry about it. By contrast, mRNA vaccines, gene therapy, and CRISPR seem more impactful in reality, just to mention things from 2020.

    • raincole 35 minutes ago
      Translation. If the said random person is interested in any media from non-English-speaking countries: anime, manhwa, cultivation web novels.

      But you specified America, so I guess no.

      • amazingamazing 33 minutes ago
        Translation existed before LLMs though, in hundreds of languages.
        • raincole 29 minutes ago
          And? Coding existed before LLMs too.
    • shalmanese 2 hours ago
      Apple's problem might be that they were right too early, which is sometimes worse than being wrong. The original vision of Siri was substantively correct about how AI would supercharge our phones, but huge parts of the vision got forgotten when Siri was acquired by Apple and the original founders left. The original technical choices around Siri constrained it from evolving into something useful.

      A funny story that happened the other day: A friend knew he had to be at dinner at a place across town but he forgot why he had to be at that dinner. While we were waiting for his rideshare to come, he was flipping through every kind of app trying to reconstruct the original context for his appointment.

      In theory, this is where AI should shine. He should have been able to say "Hey Siri, pull up all of the info that references tonight's dinner appointment" and AI should be the unified interface into a bunch of app-specific data pools.

      But of course he's never in 1 million years would have thought about using Siri to do that because of how bad Siri is.

    • miguel_rdp 2 hours ago
      Access to a rational, imperfect yet functional expert in lots of everyday subjects: personal finance, making decisions and plans, relationships, taboo questions, the first steps of a medical/legal opinion, general problem solving and breakdown.

      Even considering that it’s sometimes wrong or hallucinating, it’s doing an important job by beginning to eliminate gatekeeping, be it centered on cost or access.

      • amazingamazing 2 hours ago
        I'm unconvinced. How do you trade this against the misinformation and scams that will be coming at unprecedented scale? In any case, isn't the value there really human expertise and search? At least with GPT-5, using it without search will almost certainly give you wrong information on a variety of topics, so the value seems to be in search, which is old tech.
        • wrxd 28 minutes ago
          100%. I would be happier to have a small model that can run locally, capable of searching the web, than a stand-alone frontier model.
    • JumpCrisscross 2 hours ago
      > What’s a non coding use case that’s enabled with LLMs that materially improves the average person’s life?

      Coding adjacent, but my small town's small businesses have all dramatically improved their websites with LLMs. Folks who didn't have them before can now build them. Folks who had to rely on a web designer no longer have to.

      • amazingamazing 2 hours ago
        Was it really that difficult to build a generic website with a template before? Using an LLM instead of a template seems like ridiculous overkill imho, but thanks for the anecdote.
        • JumpCrisscross 2 hours ago
          > Was it really that difficult to build a generic website with a template before?

          Yes. Code looks intimidating if you aren't used to it (and don't have an IDE). And there are lots of steps between having a file of code and having a hosted website.

          • amazingamazing 2 hours ago
            I don’t see how an LLM solves this. It’s not like an LLM hosts the website. Sites like Squarespace and WordPress let you modify your site without ever seeing code. They have graphical editors that you can stay in if you wish. I agree LLMs help, though, if you use a product.
    • simianwords an hour ago
      You can't easily articulate the way in which mRNA vaccines were made possible by the internet. But the internet definitely played an important part.

      The internet

      - made the communication possible; all the information diffusion was only possible because of the internet

      - enabled all sorts of small interactions and serendipitous communication through social media

      - made the required computation and simulation possible

      Sometimes things make other things possible in subtle but real ways that are overdetermined. You can't articulate how AI will help a person materially in first-order effects. But it will.

  • kordlessagain an hour ago
    [flagged]
  • oulipo2 3 hours ago
    AI is a political ideology masquerading as technology https://tante.cc/2026/04/21/ai-as-a-fascist-artifact/
    • dwa3592 3 hours ago
      I was honestly a bit intrigued to read that article, but it's built on a stack of weak arguments. For example:

      >>technologies have built-in politics that stem from the political views and goals of the people building the technology.

      First, it's not just technology that has built-in politics. It's everything; think of t-shirts, cups, and hats sold at political rallies. Second, how does this even hold up in the context of AI? Who do you credit for building "AI"? Is it just the bunch of founders listed in the article? What about Geoffrey Hinton? What about Turing or Shannon or Leibniz?

      • pixl97 2 hours ago
        Yeah, in itself AI is just AI.

        The practical implementation is what leads to the autocratic and/or fascist-like tendencies. LLMs in their current state take massive amounts of money/compute/energy to make. Those resources in large amounts are typically managed by corporations or governments. Corporations are not democracies. Corporations also have liability considerations they have to work around. And they have to do all this without pissing off the government they operate under too much. So yes, this is almost always going to lead to a situation that is not individual-friendly. The implementation ends up opinionated because it must. There are only a small number of implementations, and the company has much less freedom in what it outputs than the average 'open all the freedom gates' idiot thinks.

        Really the only solution here, if possible, is hoping that we can train LLMs/AI with far fewer resources in the future. If so, this could lead to a proliferation of different models optimized for different purposes. But we must remember that all models are biased, and this includes human brains. At the end of the day, both AI and brains are a map and not the territory. We are defined by what we filter out.

    • simianwords an hour ago
      These kinds of posts mean nothing; they're just agitprop to signal ideological belonging. No epistemic value whatsoever.
    • tancop 2 hours ago
      Another "AI is inherently evil" take coming from the "AI is inherently evil" blog.

      I agree that specific implementations of a technology (Claude, Gemini, Qwen) are never neutral, but the tech itself (LLMs as a concept) is neutral; you can implement it in any way you want. You could make an LLM trained on diverse data, tuned for anti-fascist opinions, using solar power and recycled hardware to be carbon neutral. The reason nobody is really doing it is just good old wealth inequality. As long as only big corporations can afford to use and develop LLMs or any other tech, it will be biased to benefit them; that's why it's so important to democratize it.

      And for the open source part, the fact that it started as a libertarian movement doesn't mean it can't also be socialist. It goes against the capitalist norms of exclusive property rights (including IP) and profit at all costs. Sharing the product of your labor with everyone for free is one of the biggest things you can do to help; it's like the online equivalent of putting food in the community fridge.

      open llms let you fine tune them to add the missing under represented perspectives. you can run them locally with zero climate impact. analyze them in depth to reveal biases the devs never noticed or dont want you to see. none of that possible with closed source. the right thing to do is not avoid using ai at all costs but do everything you can to make it good. your skills and hardware access are a privilege. use it.