383 points by nutanc 12 hours ago | 57 comments
  • daft_pink12 hours ago
    The real creepy thing is the way they force you to give up your data with these products. If it were just useful add-ons, it wouldn’t bother me, but the fact that Gemini requires you to turn activity history off, even on paid plans, to get the promise that they won’t train on your data or allow a person to view your prompts is insanity. If you’re paying $20 for Pro or $249.99 for Ultra, you should be able to keep activity history without them training on, reviewing, or storing your data for several years.
    • thomascgalvin11 hours ago
      I have a pixel watch, and my main use for it is setting reminders, like "reminder 3pm put the laundry in the dryer". It's worked fine since the day I bought it.
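      A command like that is simple enough to parse locally, with no cloud AI at all; a minimal sketch (the `parse_reminder` helper is hypothetical, not the watch's actual pipeline):

```python
import re
from datetime import time

def parse_reminder(utterance):
    """Parse a 'reminder <time> <task>' phrase into a (time, task) pair, or None."""
    m = re.match(
        r"reminder\s+(\d{1,2})(?::(\d{2}))?\s*(am|pm)?\s+(.+)",
        utterance.strip(),
        re.IGNORECASE,
    )
    if not m:
        return None
    hour, minute = int(m.group(1)), int(m.group(2) or 0)
    meridiem = (m.group(3) or "").lower()
    if meridiem == "pm" and hour < 12:
        hour += 12                      # 3pm -> 15:00
    elif meridiem == "am" and hour == 12:
        hour = 0                        # 12am -> 00:00
    return time(hour, minute), m.group(4)
```

      So "reminder 3pm put the laundry in the dryer" comes back as 15:00 plus the task text, no training consent required.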

      Last week, they pushed an update that broke all of the features on the watch unless I agreed to allow Google to train their AI on my content.

      • state_less10 hours ago
        My Android phone comes hobbled unless I give it all my data to be used for training data (or whatever). I just asked, "Ok Google, play youtube music." And it responded with, "I cannot play music, including YouTube Music, as that tool is currently disabled based on your preferences. I can help you search for information about artists or songs on YouTube, though. By the way, to unlock the full functionality of all Apps, enable Gemini Apps Activity."

        I'm new to Android, so maybe I can somehow still preserve some privacy and have basic voice commands, but from what I saw, it required me to enable Gemini Apps Activity with a wall of text I had to agree to in order to get a simple command to play some music to work.

        • bjelkeman-again8 hours ago
          That is the point when I turn around and walk away from that company.
          • state_less8 hours ago
            I'm almost there, but the mobile operating systems compatible with the phones I have are a snag at the moment.
        • SoftTalker8 hours ago
          Just stop talking to your computer and use the screen interface, that still works.
          • state_less8 hours ago
            When I'm on my bike, it's difficult. I will ride no handed and change a track, but it's more dangerous than it needs to be.

            I might switch back to my iOS device, but what I'd really like to do is replace the Android OS on this Motorola with a community-oriented open source OS. Then I could start working on piping the mic audio to my own STT model and executing commands on the phone.
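            The command-execution half of that is the easy part; a minimal sketch, assuming a local STT model (Vosk, whisper.cpp, etc.) supplies the transcript, with an illustrative phrase table and print-only actions:

```python
# Keyword dispatcher for transcribed voice commands. A local STT model is
# assumed to produce the transcript; the trigger phrases and actions below
# are illustrative stand-ins for real media-player hooks.

COMMANDS = {
    "play music": lambda: print("launching music player"),
    "next track": lambda: print("skipping to next track"),
    "pause": lambda: print("pausing playback"),
}

def dispatch(transcript):
    """Run the first action whose trigger phrase appears in the transcript."""
    text = transcript.lower()
    for phrase, action in COMMANDS.items():
        if phrase in text:
            action()
            return True
    return False
```

            Swap the lambdas for intent broadcasts (or whatever the OS exposes) and no wall-of-text agreement is involved.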

            • parineum7 hours ago
              Just stop if you need to adjust something?
              • therealpygon6 hours ago
                That seems a lot like corporate excuse-making: asking users to adjust their usage to compensate for the fact that a product someone purchased has been changed, and broken, in order to force agreement to a contract. That is called coercion in many places. But it seems like your recommended solution is that people just accept getting screwed so corporations can make more money…is that correct?
                • parineum an hour ago
                  I'm just saying, how often do you need to adjust your phone while you're cycling?

                  It's a step back to not be able to do it by voice but if you're concerned enough about your privacy, stopping once or twice during a ride doesn't sound like the end of the world.

                  I'm not saying it's fine that Google took away functionality but, from a practical perspective, it seems like OP was acting like there's no other option available to change tracks. There is and it's really not that inconvenient.

                • venturecruelty6 hours ago
                  I mean, any time you suggest "regulate companies" or "form a union", you get dogpiled. So until society gets its act together and collectively fixes these problems, the only immediate solution is to opt out.
              • Retric6 hours ago
                Google already broke the basic functionality they wanted; dropping Android now beats constantly looking for subpar workarounds.

                Microsoft pulled this same crap with Windows. Once they stop caring about their users, you've already lost; it's time to stop playing their game.

                • immibis6 hours ago
                  So just don't bring a phone at all?
                  • beanjammin6 hours ago
                    GrapheneOS or LineageOS on your phone gets rid of the AI cruft. Linux on your computer.

                    There are few things AI is truly very good at. Surveillance at scale is one of them. Given everything going on in the world these days it's worth considering.

                    • parineum an hour ago
                      OP wants AI to change the music. They just don't want the new EULA added to the mix.
                      • Retric 40 minutes ago
                        AI is a meaningless term when you extend it to cover everything; you may as well just call it software.

                        So yeah, the software's EULA changed for the worse. That's the underlying issue.

          • DerCed8 hours ago
            How long, though, until every input is AI-interpreted and your intention is "helpfully" translated to "what you meant"?
            • SoftTalker4 hours ago
              To be fair it seems to already be happening. My phone keyboard, always prone to interpolating what I type into utter nonsense, seems to have gotten worse in the past year or so.
      • Polizeiposaune9 hours ago
        We need consumer protection laws that protect against functional regressions like this -- if a widget could do X when I bought it, it should keep doing X for the life of the product and I shouldn't have to "agree" to an updated license for it to be able to keep doing X.
        • bigstrat20038 hours ago
          Or even updates that introduce new, undesired functionality. When I bought my PS4 (at launch), the section of the UI for video apps was pleasant and straightforward. It had the various video apps I had installed and that was it. Fast forward several years, and Sony updated the UI to prioritize showing apps that they wanted you to use (whether you had installed them or not), and even showed ads for movies and such.

          I don't think it's asking too much to not make my product worse after I buy it, and I think we need legislation to prevent companies from doing that. I'm not sure what that would look like, and the government is bought and paid for by those same companies, so it's unlikely we will see that. But we do need it.

          • mrcsharp7 hours ago
            > not make my product worse after I buy it

            How can such a law be written, and how could a lawyer litigate it in court? The way you've phrased it is very subjective. What objective measure could a court use to determine the degree of quality drop in a product over time?

            • staticautomatic 39 minutes ago
              As a lawyer I think this could potentially be litigated as a breach of the implied warranty of merchantability.
            • anonymouskimmer6 hours ago
              Easy, mandate that any UI changes be revertable for the life of the product, or until the company goes bankrupt.
              • mrcsharp2 hours ago
                How would that work in real life though? Now every change made to any program must be tested against an ever growing combination of enabled and disabled UI changes.
                • anonymouskimmer 40 minutes ago
                  I don't know, but I do know that on my web browser I can add and remove various buttons and right-click menu options. And on Linux I can skin my desktop environment in a variety of ways (Unity stopped working, I went to GNOME, which was glitching, and now I have something very much like Unity used to be in XFCE; unlike with a commercial product, I paid nothing for this).
              • Capricorn2481 an hour ago
                > Easy, mandate that any UI changes be revertable for the life of the product, or until the company goes bankrupt

                I'm aware people are annoyed with big UI overhauls that seemingly do nothing, but I don't think you understand what it would take to support what you wrote. You're describing something that gets exponentially harder to maintain as a product ages. It's completely prohibitive to small businesses. How many UI changes do you think are made in a year for a young product? One that is constantly getting calls from clients to add this or that? Should a company support 100 different versions of their app?

                I understand a small handful of companies occasionally allow you to use old UI, but those are cases where the functionality hasn't changed much. If you were to actually mandate this, it would make a lot of UIs worse, not better.

                As much as people want to act like there's a clear separation, a lot of UI controls are present or absent based on what business logic your server can do. If you are forced to support an old UI that does something the company cannot do anymore, you are forcing broken or insecure functionality. And this would be in the name of something nobody outside of Hackernews would even use. Most people are not aware there is an old.reddit.com.

                • anonymouskimmer 33 minutes ago
                  There are a couple of ways you can do this:

                  1) Have this law only apply B2C.

                  2) Stop having rolling feature updates except on an opt-in basis. It used to be that when I bought an operating system or a program it stayed bought, and only updated if I actively went out and bought an update. Rolling security updates are still a good idea, and if they break UI functionality then let the end customer know so that they can make the decision on whether or not to update.

                  For hosted software, such as Google office, is it really that much more difficult to host multiple versions of the office suite? I can see issues if people are collaborating, but if newer file formats can be used in older software with a warning that some features may not be saved or viewable, then the same can be done with a collaborative document vis-a-vis whatever version of the software is opening the document.

                  My wife recently went with 0patch and some other programs to cover her Win10 machine when Microsoft stopped updating it. She still got force-fed two updates having to do with patching errors in Windows' ESU feature that blocked people from signing up for the one year of ESUs. She let those updates happen without trying to figure out a way to block them, as they have no other impact on her operating system, but it would have been nice if Microsoft had been serious about ending the updates when it said it was.

                  I am not a programmer, but come on. This was done in the past with far less computational ability.

          • shigawire7 hours ago
            To me it seems crazy to legislate that the UI of software you have licensed cannot change.
            • CamperBob2 5 hours ago
              You don't do it that way. As the other poster suggests, you mandate that UI changes can always be rolled back by the user.

              It should be illegal for you to change a product you sold me in a way that degrades functionality or impairs usability, without offering me the option of either a full refund or a software rollback.

              If that causes pain and grief for server-based products, oh, well. Bummer. They'll get by somehow.

      • DrewADesign6 hours ago
        Yeah, my Pixel Watch went straight into the trash. All set. Based on my conversations with folks working on these products, it seems they simply can’t fathom why anybody is so concerned about privacy when giving it up yields so many useful products and services.
      • Teever11 hours ago
        Internationally coordinated action by consumers, taking a company to small claims court at the same time around the world to seek redress for defective products, would be an effective strategy.
        • Y_Y11 hours ago
          Are you proposing a "World Sue A Tech Giant Day"? A global bonanza of micro-litigation that bleeds AI-leviathans dry by a thousand cuts?

          I'm in, but let's have it in October or something when I'm less busy.

          • ryukoposting9 hours ago
            I like this idea, though I'm concerned about how we could make sure the courts are ready to handle the deluge of activity.

            Update: talked to some experts. IANAL, and they aren't either. This would be cataclysmic for the courts unless they knew it was coming AND every claim was filed correctly (fees paid, no errors, etc). Even if everything was done perfectly, it would be a ton of work and there's no way every case would be processed in a day. It's also likely that all the identical cases filed in a single jurisdiction would be heard together in a single trial. There's also weirdness when you consider where each claim is filed. Quote: "you may be in the right, but I can guarantee you would have a terrible time"

            • cj7 hours ago
              The point isn’t to win every individual case, is it?

              I assume the main point would be getting the attention of politicians who would step in and intervene. Especially if it’s a situation where the courts are truly overwhelmed.

            • immibis6 hours ago
              Why would that be anyone's problem? If users keep having to sue Apple to get stuff Apple was supposed to have given them, courts may impose higher and higher penalties until Apple starts just giving them to users without wasting anyone's time.
      • quantified10 hours ago
        Did you agree, or did you give up your data?
      • verisimi10 hours ago
        > I have a pixel watch

        You rented/leased a watch for an undefined amount of time.

    • polotics9 hours ago
      https://gitlab.com/natural_aliens/geminichatsaver/-/tree/mai... pull requests and any other feedback welcome
    • 1vuio0pswjnm7 4 hours ago
      When people pay for YouTube subscriptions to avoid ads, does YouTube/Google continue to collect and store data?

      Do the terms allow YouTube/Google to use the data collected for any purpose?

    • jacquesm8 hours ago
      And the fact that even if you don't want it and don't use it, they still charge you as if you do.
    • gishh6 hours ago
      People pay for this?
  • gorgoiler9 hours ago
    Lots of things in life seem to be the majority having to go along with the decisions of the minority. I remember in 2012 when Facebook put white chevrons for previous- and next-photo in the web photo gallery product and thinking how this one product decision by a handful of punks has now been foisted on the world. At the time I was really into putting my photography on FB and, somewhat pretentiously, it really pissed me off to start having UI elements stuck on it!

    Car dashboards without buttons, TVs sold with 3D glasses (remember that phase?), material then flat design, larger and larger phones: the list is embarrassing to type because it feels like such a stereotypical nerd complaint list. I think it’s true though — the tech PMs are autocrats and have such a bizarrely outsized impact on our lives.

    And now with AI, too. I just interacted with duck.ai, duck duck go’s stab at a bot. I long for a little more conservatism.

    • venturecruelty5 hours ago
      This is what happens when you let companies become empires, with the tacit agreement of your "democratically-elected" government. In no sane world should my electricity bill go up because Google wants me to put glue on pizza. Unfortunately, I don't think we live in a sane world.
    • 8bitsrule7 hours ago
      >the tech PMs are autocrats and have such a bizarrely outsized impact on our lives.

      They're the ones who are just asking for it ... they, themselves need more forceful training. It's up to us to move slower and fix things.

  • irusensei11 hours ago
    Microsoft is all about this. You know how they also force stuff you don't want on the OS? Somewhere within Microsoft there might be a dashboard where they show their investors people are using Bing and Copilot. Borderline financial scam if you think about it.
    • cons0le10 hours ago
      Copy and paste is not working reliably in Windows anymore; coincidentally, it's breaking at the same time Microsoft is moving to replace all copy/paste with OCR only. It's garbage.
      • keyle7 hours ago
        I haven't used Windows for years, but the sheer amount of commentary on recent changes, and the claims themselves, are beyond belief...

        It reads like a company that is only there to squeeze money out of existing customers and hell bent on revenues above growth. Like one of those portfolio acquisitions.

      • dandelionv1bes9 hours ago
        Genuinely, VSCode has been broken for me with copying due to it desperately trying to vibe code for me. You’ve reminded me to fix that.
        • dclowd99019 hours ago
          Had the same issue switching to Cursor. Cmd+K multiple-selection skip is no longer the key map. Drives me fucking nuts.
      • TZubiri9 hours ago
        I haven't noticed this, also how exactly would OCR copy paste work? In order to copy text I would need to select text, which would mean it's already encoded as text.
        • ArcHound 31 minutes ago
          I can imagine that instead of a text selection, they take a screenshot.

          Round trip through Recall and OCR, and here's your "text" or image for pasting.

          Sounds dumb. I know.

        • buildsjets6 hours ago
          Encoded as text, but converted to MS Comic Sans to introduce some OCR errors.
    • connicpu10 hours ago
      That's why this was the year I finally dropped Windows and VSCode forever. Not that hard for me because all the games I play work flawlessly in Proton, and I already used Linux at work.
      • miramba9 hours ago
        What is your replacement for VSCode?
        • gerdesj6 hours ago
          You can drop Windows and keep VSCode. I'm running it on this laptop (Kubuntu 25.04).

          To install it, browse to here: https://code.visualstudio.com/ (search: "vscode"). Click on "Download for Linux (.deb)" and then use Discover to install and open it; that's all GUI-based and rather obvious. You are actually installing the repository and using that, which means that updates will be done along with the rest of the system. There is also a .rpm option for RedHat and the like. Arch and Gentoo have it all packaged up already.

          On Windows you get the usual hit and miss packaging affair.

          Laughably, the Linux version of VSCode still bleats about updates being available, despite the fact that it is using the central package manager, something Windows sort of has but still "lacks" in MSI. Mind you, who knows what is going on; PowerShell apps have another package manager or two and it's all a bit confusing.

          It's odd that Windows apps (e.g. any non-Edge browser, LibreOffice, .pdf wranglers... anything not MS, and even then there are things like their PowerToys sort of apps) still need their own update agents and services, or manual installs.

          • typpilol5 hours ago
            I learned today that you can install vscode via winget now lol
            • gerdesj5 hours ago
              Yes, but winget is not the Windows central package manager. Actually, Windows does not have one, yet for some reason you get enforced updates from a central source.

              Why does Windows not have a formal source for safe software? One that the owner (MS) endorses?

              One might conclude that the reason MS won't endorse a source of safe software, and hence take responsibility for it, is that they are not confident in the quality of their own software, let alone someone else's.

        • connicpu3 hours ago
          I decided to finally learn a modal editor and installed Helix. Ideal for me, since it's very hackable if you're already familiar with Rust, and very easy to build from source. Plus, all I need is LSP support and I'm good; at work, clangd is all I need for an IDE.
        • xandrius6 hours ago
          VSCodium is pretty good.
        • brendoelfrendo8 hours ago
          Not who you responded to, but for a GUI editor I tend to like Zed, and for terminal I like Helix. Yes, Neovim is probably better to learn because Vim motions are everywhere, but I like Helix's more "batteries included" approach.
    • givemeethekeys8 hours ago
      They've been all about this since Windows 95.
  • firefoxd9 hours ago
    AI reminds me of the time Google+ was being shoved down our throats. If you randomly clicked on more than 7 hyperlinks on the internet, you'd magically sign up for Google Plus.

    Around that time, one of my employer's websites had added Google Plus share buttons to all the links on the homepage. It wasn't a blog, but imagine a blog homepage with previews of the last 30 articles; now each article had a Google Plus tag on it. I was called in to help because the load time for the page had grown from seconds to a few minutes: for each article, they were adding a new script tag and a Google Plus dynamic tag.

    It was fixed, but so many resources were wasted for something that eventually disappeared. AI will probably not disappear, but I'm tired of the busywork around it.

    • LogicFailsMe9 hours ago
      The difference was that Google Plus was actually kind of cool. I'm not excusing them shoving it down your throat, but at least it was well designed.

      Most of the AI efforts currently represent misadventures in software design at a time when my Fitbit Charge can't even play nice with my Pixel 7 phone. How does that even happen?

      • chanux5 hours ago
        I remember believing Google+ would win because it was quite nicely done. But I guess it never caught on with the masses enough to be successful by Google's definition of success (AdSense?).

        PS: I was thinking that I didn't notice it being shoved down my throat because I was high on the Kool-Aid. But I do remember when they shoved it into YouTube comments.

        • iteria an hour ago
          Google+ lost because when they launched, they didn't let everyone join. That meant people joined and couldn't bring their friends over, so they bounced off of it. By the time they opened it up to everyone, it had a bad reputation of being "dead", and then of being obnoxious when Google refused to allow it natural growth.

          I think they intended to be like Facebook and have a selective group of people join, but they just allowed any random set of people to join and then said you could bring 5 or some low number with you. That was never going to work for the rapid growth they wanted.

          I liked Google+, but Google really mismanaged it.

    • duxup2 hours ago
      I liked G+.

      It felt like I had some level of control of my feed and what I saw and for the time it existed the content was pretty good :(

    • insin8 hours ago
      All that time and effort that went into forcing Google+ everywhere and its legacy is just lots of people accidentally ending up with 2 YouTube accounts from when they were messing with that
  • wasmainiac11 hours ago
    > I will not allow AI to be pushed down my throat just to justify your bad investment.

    Pretty much my sentiment too.

    • exsomet11 hours ago
      The neat thing about all this is that you don’t get a choice!

      Your favorite services are adding “AI” features (and raising prices to boot), your data is being collected and analyzed (probably incorrectly) by AI tools, you are interacting with AI-generated responses on social media, viewing AI-generated images and videos, and reading articles generated by AI. Business leaders are making decisions about your job and your value using AI, and political leaders are making policy and military decisions based on AI output.

      It’s happening, with you or to you.

      • wasmainiac9 hours ago
        I do have a choice: I just stop using the product. When Messenger added AI assistants, I switched to WhatsApp. Now WhatsApp has one too, so I'm using Signal. My wife brought home a Win11 laptop; I didn't like the cheeky AI integration, and now it runs Linux.
        • apatheticonion5 hours ago
          Sadly, almost none of my friends care or understand (older family members or non-tech people). If I tried to convince friends to move to Signal because of my disdain for AI profiteering, they'd react as if I were trying to get them to join a church.
      • hansvm11 hours ago
        Reasonably far off topic:

        Visa hasn't worked for online purchases for me for a few months, seemingly because of a rogue fraud-detection AI their customer service can't override.

        Is there any chance that's just a poorly implemented traditional solution rather than feeding all my data into an LLM?

        • hermitcrab7 hours ago
          I run a small online software business and I am continually getting cards refused for blue chip customers (big companies, universities etc). My payment processor (2Checkout/Verifone) say it is 3DS authentication failures and not their fault. The customers tell me that their banks say it isn't the bank's fault. The problem is particularly acute for UK customers. It is costing me sales. It has happened before as well:

          https://successfulsoftware.net/2022/04/14/verifone-seems-to-...

          • immibis6 hours ago
            I've recently found myself having to pay for a few things online with bitcoin, not because they have anything to do with bitcoin, but because bitcoin payments actually worked and Visa/MC didn't!

            For all the talk in the early days of Bitcoin comparing it to Visa and how it couldn't reach the scale of Visa, I never thought it would be that Visa just decided to place itself lower than Bitcoin.

            Kind of the same as Windows getting so bad it got worse than Linux, actually...

        • fragmede11 hours ago
          If by "traditional solution" you mean a bunch of data is fed into creating an ML model, and then your individual transaction is fed into that and it spits out a fraud score, then no, they're not using LLMs. But at this high a level, what's the difference? Whether or not their ML model uses a transformer-based architecture, what difference does it make?
          • hansvm8 hours ago
            > what difference does it make

            Traditional fraud-detection models have quantified type I/II error rates, and somebody typically chooses parameters such that those errors are within acceptable bounds. If somebody decided to use a transformer-based architecture in roughly the same setup as before, there would be no issue; but if somebody listened to some exec's harebrained idea to "let the AI look for fraud" and just came up with a prompt/API wrapping a modern LLM, there would be huge issues.
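            To make "acceptable bounds" concrete: calibration is just picking a score threshold from held-out, known-legitimate transactions so the false-positive rate stays under a chosen target. A toy sketch on synthetic scores (the `threshold_for_fpr` helper is hypothetical, not any vendor's API):

```python
import random

def threshold_for_fpr(legit_scores, target_fpr):
    """Lowest score threshold whose false-positive rate on known-legitimate
    transactions does not exceed target_fpr."""
    ranked = sorted(legit_scores, reverse=True)
    allowed = int(target_fpr * len(ranked))  # legit transactions we may flag
    if allowed == 0:
        return float("inf")                  # flag nothing
    return ranked[allowed - 1]

random.seed(0)
# Synthetic fraud scores for legitimate traffic: mostly low, in (0, 1).
legit = [random.betavariate(2, 8) for _ in range(10_000)]
threshold = threshold_for_fpr(legit, target_fpr=0.01)
flagged = sum(s >= threshold for s in legit)
print(flagged / len(legit))
```

            By construction, exactly 1% of the held-out legitimate scores land at or above the threshold; a prompt wrapping an LLM gives you no analogous knob to turn.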

          • esseph10 hours ago
            One hallucinates data, one does not?
      • oxag3n6 hours ago
        Even if my favorite service is irreplaceable, I can still use it without touching the AI part of it. If the majority who use a popular service never touch its AI features, it will inevitably send a message to the owner one way or another: you are wasting money on AI.
        • ryanmcbride5 hours ago
          Nah, the owner will get a filtered truth from the middle managers, who present them with information that everything's going great with AI, and the lost money is actually because of those greedy low-level employees drinking up all the profit by working from home! The entire software industry has a massive truth-to-power problem that just keeps getting worse. I'd say the software industry in this day and age feels like Lord of the Flies, but honestly that feels too kind.
          • apatheticonion5 hours ago
            Exactly this. "AI usage is 20% of our customer base" "AI usage has increased 5% this quarter" "Due to our xyz campaign, AI usage has increased 10%"

            It writes a narrative of success even if it's embellished. Managers respond to data and the people collecting the data are incentivised to indicate success.

    • marginalia_nu6 hours ago
      It really gives me the same vibes as the sort of products that go all in on influencer marketing. Nothing has made me less likely to try "Raid Shadow Legends" than a bunch of youtubers faking enthusiasm about it.

      It's a sort of pushiness that hints not even the people behind the product are very confident in its appeal.

    • codaphiliac11 hours ago
      almost the same as RTO mandates:

      we’ll force you to come back to justify sunk money in office space.

      • aetherspawn8 hours ago
      I personally think all the gains in productivity that happened with WFH were just because people were stressed and WFH acted like a pressure-relief valve. But too much of a good thing and people get lazy (I'm seeing it right now: some people are filling full timesheets and not even starting, let alone getting through, a day's worth of work in a week), so the right balance is somewhere in the middle.

        Perhaps… the right balance is actually working only 4 days a week, always from the office, and just having the 5th day proper-off instead.

      I think people go through "grinds" to get big projects done, and then plateaus of "cooling down". I think every person only has so much grind to give, and extra days don't mean more work, so the ideal employee is one you pay for 3-4 days per week only.

        • rootusrootus7 hours ago
          We just need a metric that can't be gamed which will reliably show who is performing and who is not, and we can rid ourselves of the latter. Everyone else can continue to work wherever the hell they want.

          But that's a tall order, so maybe we just need managers to pay attention. It doesn't take that much effort to stay involved enough to know who is slacking and who is pulling their weight, and a good manager can do it without seeming to micromanage. Maybe they'll do this when they realize that what they're doing now could largely be replaced by an LLM...

      • venturecruelty6 hours ago
        Not for nothing did the endless WSJ and Forbes articles about "commuting for one hour into expensive downtown offices is good, actually" show up around the same time RTO mandates did.
      • datavirtue9 hours ago
        Don't forget about the poor local businesses. Someone needs to pay to keep the executives' lunch spots open.
        • venturecruelty6 hours ago
          Hey now. Little coffee shops and lunch spots and dry cleaners are what make cities worth living in in the first place.
    • booleandilemma5 hours ago
      I see comments like this one* and I wonder if the whole AI trend is a giant scam we're getting forced to play along with.

      * https://news.ycombinator.com/item?id=46096603

  • Aperocky8 hours ago
    > And let’s be clear: We don't need AGI (Artificial General Intelligence).

    In general, I think we want to have it, just like nuclear fusion, interplanetary and interstellar colonization, curing cancer, etc. etc.

    We don't "need" it, just as people in the 1800s didn't need electric cars or airports.

    Who owns AGI, or what purpose the AGI believes it has, is a separate discussion, similar to how airplanes can be used to transport people or to fight wars. Fortunately, today most airplanes are made to transport people and connect the world.

    • bluefirebrand7 hours ago
      > In general, I think we want to have it

      Outside of tech circles no one I talk to wants AI for anything

      • sothatsit5 hours ago
        All of my family members bar one use ChatGPT for search, or to come up with recipes, or other random stuff, and really like it. My girlfriend uses it to help her write stories. All of my friends use it for work. Many of these people are non-technical.

        You don’t get to 100s of millions of weekly active users with a product only technical people are interested in.

      • silver_silver6 hours ago
        Can second this. Am the only tech worker among my friends and family and every single one of them reacts to AI the same as to crypto or NFTs
      • immibis6 hours ago
        AI does some things but it's nowhere near as good as it has to be to justify its valuations.
      • nickpp7 hours ago
        Do they want self driving cars or domestic help robots though?
        • novemp6 hours ago
          No.
          • nickpp5 hours ago
            They’re in a tiny tiny tiny minority then.
      • Aperocky5 hours ago
        I meant as a society we should want AGI, but I understand that's not how most feel.
      • marcinzm6 hours ago
        Yet somehow ChatGPT has almost a billion users. That's a lot of tech bros.
        • Bender6 hours ago
          Any time there is something new, everyone will sign up to try it out. Give it time. Once there are enough intrusive ads or subtle ads shimmed into answers, plus social manipulation and political bias once it hits critical mass, rewriting of history, squeezed rate limits, and more cost for lower rate limits, that number will drop - if they are honest about it and/or delete inactive accounts. The negative features will not creep in until they believe they have achieved critical corporate capture and dependency.
          • marcinzm6 hours ago
            The negatives don’t change the inherent demand from people for AI.
            • amrocha5 hours ago
              They actually do. I used to like twitter and now I don’t use it anymore because it’s gone to shit.

              People used to google stuff before it became click bait content and ads.

              Same thing is gonna happen with ai chatbots. You begrudgingly use them when you have to and ignore them otherwise.

        • jamesjyu6 hours ago
          What's even more impressive is the retention charts: https://x.com/far33d/status/1981031672142504146

          Every single cohort is smiling upward from the past two years. That is insane, especially at their scale! AI is useful to people.

  • mrcsharp8 hours ago
    > And let’s be clear: We don't need AGI (Artificial General Intelligence). We don't need a digital god. We just need software that works.

    We (You and I) don't. Shareholders absolutely need it for that line to go further up. They love the idea of LLMs, AI, and AGI for the sole reason that it will help them reduce the cost of labour massively. Simple as that.

    > I will use what creates value for me. I will not buy anything that is of no use to me.

    If only everyone was thinking this way. So many around these parts have no self-preservation tendencies at all.

    • input_sh7 hours ago
      I don't think any of the people at the top actually believe world's-most-average-answer generator is a path that leads us to AGI. It's just a marketing boogeyman and a handy excuse to remove any remnants of agency that the workforce currently has.
    • seizethecheese6 hours ago
      Nobody is using tools that create no value for them, that’s an absurd argument. Many are overclocking their AI use because of their self preservation instinct.
      • callc6 hours ago
        A bit more nuanced: nobody is using tools that have a perceived negative value.

        People pushing AI desperately want to convince us the value is positive. 10x productivity! Fire all your humans!

        In reality, depending on the use-case, the value is much smaller. And can be negative thinking about the total value over long term (incomprehensible shoddy foundations of large / long running projects)

        • rootusrootus6 hours ago
          It's a little alarming to hear from people whose managers have actually told them they expect 10x as much output. How could any competent person, no matter how optimistic, think it works that way? I guess the answer is in my question.

          I'm thankful at the moment that I work for a boring company. I want to leave for something more interesting, but at least I can say that our management isn't blowing a bunch of money on AI and trying to use it to get rid of expensive developers. Heck, they won't even pay for Claude Code. Tightwads.

        • seizethecheese6 hours ago
          Yeah, that’s right. I’m reacting to the author’s statement (I won’t use tools that have negative value) to assert that it’s essentially a vapid statement. I wish the author made arguments such as yours.
      • venturecruelty6 hours ago
        Have you ever met a smoker?
  • MinimalAction9 hours ago
    This might be obvious, but I think the only way forward is to disengage from the services offered by these mega-tech companies. Degoogling has become popular enough to foster open communities that prioritize their time and effort to keep software free from parasitic enterprises.

    For instance, I am fiddling with LineageOS on a Pixel (ironically enough) to minimize my exposure to Google's AI antics. That's not to say it is easy or sustainable, but enough of us need to stop participating in their bad bets to force that realization.

    • encomiast9 hours ago
      I'm hoping "degoogle" is the 2026 word of the year.
    • smt889 hours ago
      No one with a white-collar job in the US can get away from Google and Microsoft. We're forced to use one or the other, and some of us are forced to use both.

      That's not to mention all the other tech companies pushing AI (which is honestly all of them).

      • MinimalAction9 hours ago
        Agreed. At the level of companies, it is hard to find any practical solution. Personally, I am trying to do what I can.
    • IlikeKitties9 hours ago
      My healthcare provider's app in Germany refuses to work on anything that isn't a phone running an official Google^tm Verified^(r) and hardware-attested OS. Same with some banks.
      • MinimalAction8 hours ago
        I feel things like these should be illegal. There must be other genuine ways to verify the end user.
        • venturecruelty5 hours ago
          We managed to have healthcare before smartphones and Google. It's definitely possible.
  • mmastrac11 hours ago
    Is it possible to permanently disable Gemini on Android? I keep getting it inserted into my messages and other places, and it's horrible to think that I'm one misclick away from turning it on.
    • Y_Y11 hours ago
      Sorry, you've irrevocably consented by touching a button that appeared above what you were trying to tap half a millisecond earlier.
      • withinboredom10 hours ago
        That only happens with Apple, so it's fine.
    • ajkjk11 hours ago
      My feeling is we need laws to stop it
      • rubyfan11 hours ago
        The industry agrees with you, hence the regulatory capture.
      • Yizahi10 hours ago
        Too big to fail now
        • DavidPiper9 hours ago
          If it only takes a few years for a private entity to become "too big to fail" and quasi-immune to government regulation, we have a real problem.
          • fyrn_8 hours ago
            Ah yeah, and honestly we do seem to have a real problem. Here's hoping OpenAI doesn't get the bailout they seem to be angling for.
      • crazygringo10 hours ago
        You don't like some features being added to products so you want laws against adding certain features?

        I might not like a certain feature, but I'd dislike the government preventing companies from adding features a whole lot more. The thought of that terrifies me.

        (To be clear, legitimate regulations around privacy, user data, anti-fraud, etc. are fine. But just because you find AI features to be something you don't... like? That's not a legitimate reason for government intervention.)

        • dclowd99019 hours ago
          I think it's more about enforcing having easy mechanisms to opt out, which seem to be absent with regards to AI integration.

          It's better to assume good faith when providing a counter argument.

          • crazygringo8 hours ago
            That doesn't change anything. If there aren't any harms except that certain people don't "like" a feature, it's not the government's role to force companies to allow users to opt out of features. If you don't like a feature, don't buy the product. The government should not be micromanaging product design.
            • timando6 hours ago
              What product should I buy if I need a smartphone to e.g. pay for parking but I don't want a smartphone that tracks me?
              • crazygringo6 hours ago
                Take it up with your city council, if they're the ones requiring a smartphone to pay for parking.

                But also, you're going to have to be more specific about what tracking you're worried about. Cell towers need to track you to give you service. But the parking app only gets the data you enable with permissions, and the data the city requires you to give the app (e.g. a payment method). So I'm not super clear what tracking you're concerned about?

                If you don't use your smartphone for anything but paying for parking, I genuinely don't know what tracking you're concerned about.

            • materielle7 hours ago
              Why isn’t it the governments role?

              Because you think it’s not?

              What if I, and many other people, think that it is?

              • crazygringo7 hours ago
                Because it's ultimately a form of censorship. Governments shouldn't be in the business of shutting down speech some people don't like, and in the same way shouldn't be in the business of shutting down software features some people don't like. As long as nobody is being harmed, censorship is bad and anti-democratic. (And we make exceptions for cases of actual harm, like libelous or threatening speech, or a product that injures or defrauds its users.) Freedom is a fundamental aspect of democracy, which is why freedoms are written into constitutions so simple majority vote can't remove them.
                • anonymouskimmer5 hours ago
                  1) Integration or removal of features isn't speech. And has been subject to government compulsion for a long time (e.g. seat belts and catalytic converters in automobiles).

                  2) Business speech is limited in many, many ways. There is even compelled speech in business (e.g. black box warnings, mandatory sonograms prior to abortions).

                  • crazygringo5 hours ago
                    I said, "As long as nobody is being harmed". Seatbelts and catalytic converters are about keeping people safe from harm. As are black box warnings and mandatory sonograms.

                    And legally, code and software are considered a form of speech in many contexts.

                    Do you really want the government to start telling you what software you can and cannot build? You think the government should be able to outlaw Python and require you to do your work in Java, and outlaw JSON and require your API's to return XML? Because that's the type of interference you're talking about here.

                    • anonymouskimmer5 hours ago
                      Mandatory sonograms aren't about harm prevention. (Though yes, I would agree with you if you said the government should not be able to compel them.)

                      In the US, commercial activities do not have constitutionally protected speech rights, with the sole exception of "the press". This is covered under the commerce clause and the first amendment, respectively.

                      I assemble DNA, I am not a programmer. And yes, due to biosecurity concerns there are constraints. Again, this might be covered under your "does no harm" standard. Though my making smallpox, for example, would not be causing harm any more than someone building a nuclear weapon would cause harm. The harm would come from releasing it.

                      But I think, given that AI has encouraged people to suicide, and would allow minors the ability to circumvent parental controls, as examples, that regulations pertaining to AI integration in software, including mandates that allow users to disable it (NOTE, THIS DOESN'T FORCE USERS TO DISABLE IT!!), would also fall under your harm standard. Outside of that, the leaking of personally identifiable information does cause material harm every day. So there needs to be proactive control available to the end user regarding what AI does on their computer, and how easy it is to accidentally enable information-gathering AI when that was not intended.

                      I can come up with more examples of harm beyond mere annoyance. Hopefully these examples are enough.

            • bluefirebrand5 hours ago
              > If there aren't any harms except that certain people don't "like" a feature

              The reason I don't like these sorts of features is because I think they are harmful, personally

            • novemp6 hours ago
              In a democratic society, "government" is just a tool that the people use to exercise their will.
        • ajkjk6 hours ago
          Yes. I think laws should be used to shut down things that are universally disliked but for which there is no other mechanism for accountability. That seems like obviously the point of laws.
        • venturecruelty6 hours ago
          >You don't like some features being added to products so you want laws against adding certain features?

          Correct, especially when the features break copyright law, use as much electricity as Belgium, and don't actually work at all. Just a simple button that says "Enabled", and it's off by default. Shouldn't be too hard, yeah? You can continue to use the slop machine, that's fine. Don't force the rest of us to get down in the trough with you.

          • crazygringo6 hours ago
            I have no problem with a company voluntarily choosing to make it a toggle.

            I have a big problem with a government forcing companies to enable toggles on features because users complain about the UX.

            If there are problems with copyright, that's an issue for the courts -- not a user toggle. If you have problems with the electricity, then that's an issue for electricity infrastructure regulations. If you think it doesn't work, then don't use it.

            Passing a law forcing a company to turn off a legal feature by default is absurd. It's no different from asking a publisher to censor pages of a book that some people don't like, and make them available only by a second mail-order purchase to the publisher. That's censorship.

            • venturecruelty5 hours ago
              >I have a big problem with a government forcing companies to enable toggles on features because users complain about the UX.

              I have a big problem with companies forcing me to use garbage I don't want to use.

              >If there are problems with copyright, that's an issue for the courts -- not a user toggle.

              But in the meantime, companies can just get away with breaking the law.

              >If you have problems with the electricity, then that's an issue for electricity infrastructure regulations.

              But in the meantime, companies can drive up the cost of electricity with reckless abandon.

              >If you think it doens't work, then don't use it.

              I wish I lived in your world where I can opt out of all of this AI garbage.

              >Passing a law forcing a company to turn off a legal feature by default is absurd.

              "Legal" is doing a lot of heavy lifting. You know the court system is slow, and companies running roughshod over the law until the litigation works itself out "because they're all already doing it anyway" is par for the course. AirBnB should've been illegal, but by the time we went to litigate it, it was too big. Spotify sold pirated music until it was too big to litigate. How convenient that this keeps happening. To the casual observer, it would almost seem intentional, but no, it's probably just some crazy coincidence.

              >It's no different from asking a publisher to censor pages of a book that some people don't like, and make them available only by a second mail-order purchase to the publisher. That's censorship.

              Forcing companies to stop being deleterious to society is not censorship, and it isn't Handmaid's Tale to enforce a semblance of consumer rights.

              • crazygringo5 hours ago
                > I have a big problem with companies forcing me to use garbage I don't want to use.

                That pretty much sums it up. And the answer is: too bad. Deal with it, like the rest of us.

                I have a big problem with companies not sending me a check for a million dollars. But companies don't obey my whims. And I'm not going to complain that the government should do something about it, because that would be silly and immature.

                In reality, companies try their best to build products that make money, and they compete with each other to do so. These principles have led to amazing products. And as long as no consumers are being harmed (e.g. fraud, safety, etc.), asking the government to interfere in product decisions is a terrible idea. The free market exists because it does a better job than any other system at giving consumers what they want. Just because you or a group of people personally don't like a particular product isn't a reason to overturn the free market and start asking the government to interfere with product design. Because if you start down that path, pretty soon they're going to be interfering with the things you like.

                • anonymouskimmer5 hours ago
                  > The free market exists because it does a better job than any other system at giving consumers what they want.

                  Bull. Free markets are subject to a lot of pressures, both from the consumers, but also from the corporate ownership and supply chains. The average consumer cannot afford a bespoke alternative for everything they want, or need, so are subject to a market. Within the constraints of that market it is, indeed, best for them if they are free to choose what they want.

                  But from personal experience I know damn sure that what I really really want is often not available, so I'm left signalling with my money that a barely tolerable alternative is acceptable. And then, over a long enough period of time, I don't even get that barely tolerable alternative anymore as the company has phased it out. Free markets, in an age of mass production and lower margins, universally mean that a fraction of the market will be unable to buy what they want, and the alternatives available may mean they have to go without entirely. Because we have lost the ability to make it ourselves (assuming we ever had that ability).

                • venturecruelty5 hours ago
                  >That pretty much sums it up. And the answer is: too bad. Deal with it, like the rest of us.

                  I am dealing with it, thanks, by fighting against it.

                  >I have a big problem with companies not sending me a check for a million dollars. But companies don't obey my whims. And I'm not going to complain that the government should do something about it, because that would be silly and immature.

                  Because as we all know, forcing you to use the abusive copyright laundering slop machine is exactly morally equivalent to not getting arbitrary cheques in the mail.

                  >In reality, companies try their best to build products that make money, and they compete with each other to do so.

                  In the Atlas Shrugged cinematic universe, maybe. Now, companies try to extract as much as they can by doing as little as possible. Who was Google competing with for their AI summmary, when it's so laughably bad, and the only people who want it are the people whose paycheques depend on it, or people who want engagement on LinkedIn?

                  >The free market exists because it does a better job than any other system at giving consumers what they want.

                  Nobody wants this!

                  >Because if you start down that path, pretty soon they're going to be interfering with the things you like.

                  I mean, they're doing that, too, and people like you look down your nose and tell me to take that abuse as well. So no, I'm not going to sit idly by and watch these parasites ruin what shreds of humanity I have left.

        • MangoToupe10 hours ago
          What is so terrifying about exerting democratic control over software critical to exist in society?
          • crazygringo10 hours ago
            > over software critical to exist in society?

            I don't know what that means grammatically.

            But you could ask, what is so terrifying about exerting democratic control over people's free speech, over the political opinions they're allowed to express?

            The answer is, because it infringes on freedom. As long as these AI features aren't harming anyone -- if your only complaint is you find their presence annoying, in a product you have a free choice in using or not using -- then there's no democratic justification for passing laws against them. Democratic rights take precedence.

            • collingreen10 hours ago
              This is the argument against all customer protection as well as things like health codes, right?

              Nobody is FORCING you to go to that restaurant so it's antidemocracy to take away their freedom to not wash their hands when they cook?

              • crazygringo8 hours ago
                Please see the part of my comment where I say as long as it's not harming anyone.
            • lenkite9 hours ago
              > As long as these AI features aren't harming anyone

              Why do you say this? They are clearly harming the privacy of people. Or do you not believe in privacy as a right? A lot of people do - democratically.

              • crazygringo8 hours ago
                If you can show it's harming privacy, then regulate the privacy. That's legitimate. But I assume you're talking about AI training, not feature usage.

                Trying to regulate whether an end-user feature is available just because you don't "like" AI creep is no different from trying to regulate that user interfaces ought to use flat design rather than 3D effects like buttons with shadows. It would be an illegitimate use of government power.

            • MangoToupe6 hours ago
              > But you could ask, what is so terrifying about exerting democratic control over people's free speech, over the political opinions they're allowed to express?

              What an asinine comparison lol

            • smt889 hours ago
              You're trying to make it sound like a corporation's right to force AI on us is equivalent to an individual's right to speech, which is idiotic on its face. But I'd also point out that speech is regulated in the US, so you're still not making the point you think you're making.

              And as far as I'm concerned, as long as Google and Apple have a monopoly on smartphone software, they should be regulated into the ground. Consumers have no alternatives, especially if they have a job.

              • crazygringo8 hours ago
                It's not "idiotic on its face" and that's not appropriate for HN. Please see the guidelines:

                https://news.ycombinator.com/newsguidelines.html

                Code and software are very much forms of speech in a legal sense.

                And free speech is regulated in cases of harm, like violent threats or libel. But there's no harm here in any legal sense. People are just unhappy with the product UX -- that there are buttons and areas dedicated to AI features.

                Companies should absolutely have the freedom to build the products they want as long as there's no actual harm. If you merely don't like a UX, use a competing product. If you don't like the UX of any product, then tough. Products aren't usually perfectly what you want, and that's OK.

                • smt886 hours ago
                  You're completely ignoring the most important point I raised, which is that I can't use a competing product. I can't stop using Microsoft, Google, Meta, or Apple products and still be a part of my industry or US society.
                  • crazygringo5 hours ago
                    So what's your argument?

                    You're not being forced to use the AI features. If you don't want to use them, don't use them. There's zero antitrust or anticompetitive issue here.

                    Your argument that Google and Apple should be "regulated into the ground" isn't an argument. It's a vengeful emotion or part of a vague ideology or something.

                    If I want blenders to be sold in bright orange, but the three brands at my local store are all black or silver, I really don't think it's right for the government to pass a law requiring stores to carry blenders in bright orange. But that's what you're asking for: for the government to determine which features software products have.

                    • smt882 hours ago
                      > You're not being forced to use the AI features. If you don't want to use them, don't use them

                      You can't turn them off in many products, and Microsoft's and Google's roadmaps both say that they're going to disable turning them off, starting with using existing telemetry for AI training.

                      > Your argument that Google and Apple should be "regulated into the ground" isn't an argument. It's a vengeful emotion or part of a vague ideology or something.

                      You're just continuing to ignore that all of this is based on their market dominance. There are literally two options for smartphone operating systems. For something that's vital to modern life, that's unacceptable and gives users no choice.

                      If a company gets to enjoy a near-monopoly status, it has to be regulated to prevent abuse of its power. There's a huge amount of precedent for this in industries like telecom.

                      > If I want blenders to be sold in bright orange, but the three brands at my local store are all black or silver, I really don't think it's right for the government should pass a law requiring stores to carry blenders in bright orange

                      Do you really not see the difference between "color of blender" and "unable to turn off LLMs on a device that didn't have any on it when I bought it"?

            • esseph10 hours ago
              Newsflash

              If voters Democratically decide to do something, that's democracy at work.

              • crazygringo8 hours ago
                "Newsflash", the entire point of constitutions that enumerate rights is that fundamental rights and freedoms may not be abridged even by majority decision.

                If a Supreme Court strikes down a majority-passed law limiting free speech guaranteed by the Constitution, that's democracy at work.

                • esseph8 hours ago
                  If they can't be abridged, then why do we have amendments?

                  And no, that would be the courts at work, which may or may not be beholden to the whims of other political figures.

                  • crazygringo8 hours ago
                    It takes more than majority vote to add a new amendment.

                    Go ahead and try, but I don't think you'll find that an amendment to restrict people's freedoms is going to be very popular. Because it will be seen as anti-democratic.

    • saratogacx9 hours ago
      I uninstalled the Gemini app and disabled the Google app. They seem to be heavily linked, so removing it may do the trick. As a practice I don't use any Google apps if I can find a good replacement, so I am not sure if Messages is impacted.
  • thih99 hours ago
    What I also dislike about AI is that it promotes a mainframe-like development workflow. Schedule your computation, pay for the usage, etc. Any chance this particular trend stops or reverses? Are we ever going to have local AI that is, in practice, comparable and sufficient?
    • simianparrot8 hours ago
      You don’t need it for development. Neither locally nor on the mainframe. Money saved.
  • duxup2 hours ago
    I think the problem is, the folks in charge feel like they don't have a choice.

    They spent a ton of money, and/or they see everyone's LinkedIn posts or fantastic news stories by someone selling BS, and they're afraid to say the emperor has no clothes.

  • tim33310 hours ago
    In addition to the annoyances mentioned, the pushing of AI may be leading to a massive waste of money and resources. I'm sure that if, instead of shoving AI in whether you want it or not, they said "pay $1 if you want AI," the number of data centers needed would be reduced dramatically.
    • burningChrome9 hours ago
      >> the pushing of AI may be leading to a massive waste of money and resources

      And massive amounts of energy to run these new fangled AI data centers. Not sure if you lumped that in with "resources", but yes we're already seeing it:

      A typical AI data center uses as much electricity as 100,000 households, and the largest under development will consume 20 times more, according to the International Energy Agency (IEA). They also suck up billions of gallons of water for systems to keep all that computer hardware cool.

      https://www.npr.org/2025/10/14/nx-s1-5565147/google-ai-data-...

    • polynomial6 hours ago
      Zero attractor would likely be a safe bet.
  • Waterluvian10 hours ago
    VSCode feels like it’s in the “brand withdrawal” phase of its lifespan. I’ve turned off the sneakily named “Chat” and yet it still shows the chat sometimes when I toggle the bottom bar visibility.

    > We will work with the creators, writers, and artists, instead of ripping off their life's work to feed the model.

    I’m not sure I have an idea of what this might look like. Do they want money? What might that model look like? Do they want credit? How would that be handled? Do they want to be consulted? How does that get managed?

    • bloppe9 hours ago
      It probably starts with a reexamination of copyright law, which has always been a pragmatic rather than a principled system, but has not noticeably changed since the digital revolution.

      Copyright is meant to encourage more publication by providing publishers with (temporary) control over their products after having released them to the public. Once the copyright expires, the work enters the public domain, which is really the end goal of copyright law. If publishers start to feel like LLMs are undermining that control, they might publish less original work, and therefore less stuff will eventually enter the public domain, which is considered bad for society. We're already seeing some effects of this as traffic (and ad revenue) to various websites has fallen significantly in the wake of LLM proliferation, which lowers the incentive to publish anything original in the first place.

      Anyway, I'm not sure how best to adapt copyright law to this new world. Some people have thought about it though: https://en.wikipedia.org/wiki/Copyright_alternatives

    • serial_dev9 hours ago
      Just like one of crypto’s biggest real world uses ended up being scams, LLMs are tools for bypassing copyright and enabling plagiarism with plausible deniability.
  • analog317 hours ago
    One issue might just be money. "Forcing" us to buy new laptops every 3 years has to be planned well enough in advance for the new OS and hardware to be ready, but meanwhile, it may have occurred to a lot of people that hanging onto the old stuff for a few more years might make the most sense right now.

    Also, even many home users may be finding that they interact less and less with the "platform" now that everything including MS Office runs from a browser. I can barely remember when the differences between Windows and Linux were even relevant to my personal computer use. This was necessitated by having to find a good way to accommodate Windows, MacOS, iOS, and Android all at once.

    • 1970-01-017 hours ago
      That's an interesting take. We really don't need TPUs and GPUs to run an OS, but that doesn't mean they won't try to sell it to us as a necessity.
    • anonymouskimmer5 hours ago
      Yep. My personal laptop is 12 years old and my work laptop is 6. A replacement battery, some extra RAM, and a replacement fan (kind of hard to get) for my personal laptop a few years ago and it still does everything I want it to do.

      I cracked the screen on my work laptop last year, but IT set me up with a replacement screen. It's so much nicer having discrete buttons than a clickable trackpad, so I skipped on an upgrade. Still does everything I need it to do for work (including working with Windows 11).

      And the vast majority of things I do on either laptop involves a web browser.

  • arealaccount11 hours ago
    > We don't need to do it this quarter just because some startup has to do an earnings call.

    What startups are doing earnings calls?

    • nutanc4 hours ago
      Well, not exactly earnings calls in the classical sense, but haven't you heard about these startups announcing how they have scaled to $100 million in 3 months, etc.? Maybe revenue calls every quarter.
    • collingreen10 hours ago
      All the public ones?
      • TZubiri9 hours ago
        I'd say at that point they are no longer startups, they've already started up
        • nutanc4 hours ago
          I would say in the AI age almost every business is a startup as per PG's definition [https://paulgraham.com/growth.html]
        • crabmusket9 hours ago
          Lots of businesses like to claim being a "startup" as it brings connotations of innovation, dynamism, coolness, being the "next big thing" etc. There are many senses of the word, and it can be used in different ways (e.g. I work at a small business which has some elements of startup culture, and it's not an incorrect way to give people a sense of what it's like here - but we're definitely well established) but I think often being one of the "cool kids" is part of the motivation.
  • Duanemclemore11 hours ago
    The Copilot button that comes on new laptops is the Darkest Pattern I have ever seen. UI exploitation that has jumped the software / hardware gap.

    A student will be showing me something on their laptop, their thumb accidentally grazes it because it's larger than the modifier keys and positioned so this happens as often as possible. The computer stops all other activity and shifts all focus to the Copilot window. Unprompted, the student always says something like "God, I hate that so much."

    If it was so useful they wouldn't have to trick you into using it.

    • zahlman10 hours ago
      > Unprompted, the student always says something like "God, I hate that so much."

      ... Dare I ask how Copilot typically responds to that? (They're doing voice detection now, right?)

      > If it was so useful they wouldn't have to trick you into using it.

      They delude themselves that they're doing no such thing. Of course the feature is so useful that you'd want to be able to access it as easily as possible in any context.

      • collingreen10 hours ago
        And the telemetry doesn't lie! Look how many people are clicking that button! KPIs go brrrrrrr
        • sethops19 hours ago
          What's sad is how real this is.
      • Duanemclemore9 hours ago
        BTW, the context is that one thing I teach is 3d modeling software, so the students are following my instructions to enter keyboard commands. It's usually Rhino3d where using the spacebar to repeat the last command is common.

        > ... Dare I ask how Copilot typically responds to that? (They're doing voice detection now, right?)

        Just as a generalization, the dozen or so times this has happened this semester the pop-up is accompanied by an "ugh" then after the window pops up from the taskbar the student immediately clicks back into the program we're using. It seems like they're used to dealing with it already. I haven't seen any voice interaction.

        I mean, the statistics say the students use AI plenty - they just seem annoyed by the interruption. Which I can agree with.

        > They delude themselves that they're doing no such thing. Of course the feature is so useful that you'd want to be able to access it as easily as possible in any context.

        Exactly.

    • politelemon10 hours ago
      I'm having a hard time believing any of this, and am tempted to think this might be in bad faith. It's true it's a bit ambitious on their part to have replaced the right-side key, but it isn't larger than normal keys and it's not positioned any differently. Working with hundreds of laptops and humans, several ham-fisted, on a daily basis, I've not seen this at all.

      Further, a dark pattern is where you are led to expect one outcome but are pulled insidiously towards another. This doesn't really fall into that definition.

  • winddude7 hours ago
    "But we bought too many GPUs! We spent billions on infrastructure! They have to be put to work!"

    right... none of them are saying that. They could probably use more GPUs considering the price of GPUs and memory are skyrocketing and the supply chain can't keep up. It's about experimentation, they need real users and real feedback to know how to improve the current generation of models, and figure out how to monetise them.

    • seizethecheese6 hours ago
      The author and more than half of the comments here are hallucinating reasons to be angry about AI. Ironic, really.
      • creata5 hours ago
        In what universe is having things that you do not want shoved on you an invalid reason to be angry?
        • seizethecheese5 hours ago
          The comment I’m replying to is an example of one hallucination, specifically that AI is being pushed because companies have too many GPUs when in reality they have too few. There are several other hallucinations in this thread.
          • nutanc4 hours ago
            This will need a separate blog post. But when you give something away for free, you will run out of that resource. So yes, companies have too few GPUs to give their services away for free, but too many GPUs for their paid services.
            • seizethecheese4 hours ago
              This may be true but it doesn’t change that the idea that AI is being pushed due to a GPU glut is pure hallucination
      • venturecruelty6 hours ago
        Add "patronizing proponents" to the pile.
  • rmnclmnt9 hours ago
    > We don't need AGI (Artificial General Intelligence). We don't need a digital god. We just need software that works.

    Yeah, I think the days of working software are over (at least deterministically)

  • charlesabarnes5 hours ago
    I think the market forces that are supposed to constrain these companies are broken.

    Users don't have many other options to switch to. Even if they did, the b2b/advertising revenue they get makes up for any losses they may take on the consumer side.

  • dreamcompiler8 hours ago
    There's a humorous vignette in season 2 of "A Man on the Inside":

    Our heroes are in the office of a tech billionaire who says "See that coffee machine? Just speak what you want. It can make any coffee drink."

    So one character says "Half caf latte" and the machine does nothing except open a small drawer.

    "You have to spit in the drawer so it can collect your DNA--then it will make your coffee" the billionaire says.

    This pretty much sums up the whole tech scene today. Any good engineer could build that coffee machine, but no VC would fund her company without the spit drawer.

  • osigurdson11 hours ago
    The worst usage of AI is “content dilution” where you take a few bullet points and generate 5 paragraphs of nauseating slop. These days, I would gladly take badly written content from humans filled with grammatical errors and spelling mistakes over that.
    • jjav11 hours ago
      > generate 5 paragraphs of nauseating slop

      Which then nobody will ever read, they'll just copy it into the AI bot to summarize into a few bullet points.

      The amount of waste is quite staggering in this back and forth game.

      • andrei_says_11 hours ago
        > Which then nobody will ever read, they'll just copy it into the AI bot to summarize into a few bullet points.

        Which more often than not will lose or distort the original intention behind the first 5 bullet points.

        Which is why I avoid using LLMs for writing.

      • harvey910 hours ago
        It's pretty awesome that we now have nondeterministic .Zip
    • 9 hours ago
      undefined
  • rlt10 hours ago
    I don’t think we’re at the “companies bought too many GPUs” stage yet. My understanding is they still can’t get enough GPUs, or data centers to put them in, or power to run them. Most companies don’t even own them; they rent from the clouds.

    We are, however, at the “we need an AI strategy” stage, so execs will throw anything and everything at the wall to see what sticks.

    • protocolture7 hours ago
      >“companies bought too many GPUs”

      Nuance.

      They have insufficient GPUs for the amount of training going on.

      If you assume there's a plateau where the benefits of constant training no longer outweigh the costs, then they probably have too many GPUs.

      The question is how far away is that plateau.

    • bdbdbdb10 hours ago
      They can't get enough GPUs right now, with VCs pumping money into them, but that can very quickly turn into "we're out of money, what do we do with all these GPUs?"
    • brazukadev6 hours ago
      > My understanding is they still can’t get enough GPUs

      Enough for what? The adoption is slowing down fast, people are not willing to pay for a chatbox and a 10% better model won't change that.

  • cnees11 hours ago
    Amazon's Price History feature certainly doesn't need to open their AI assistant, but in addition to the graph I came for, I get a little summary of the graph. I really hope they aren't using an LLM for that when all it's doing is telling me it's the lowest price in 30 days.
    • crabmusket9 hours ago
      That's the kind of lazy bullshit idea that, to me, exemplifies the AI hype slop era we're in. The point of a chart is to communicate visually. If the chart isn't clear without a supplemental explanation, why is it there?

      If user research indicates your chart isn't clear enough, then improve the chart. But what are the odds they did any user research? They probably just ran an A/B test and saw number go up because of the novelty factor.

  • gosub10011 hours ago
    If AI was as great as they pretend, there would be no need to force it on us.
    • DavidPiper9 hours ago
      It's the classic "You don't need to tell a child that something is fun. The more times you tell a child something is fun, the more they will doubt you. It's easy to tell if something is fun, because it's fun."
    • AGI-slop11 hours ago
      Its the only game in town, and reasonably expected to be close to the last.
      • InexSquirrel10 hours ago
        What do you mean by that?
        • AGI-slop9 hours ago
          A few days ago I took a photo of some water pipes and asked ChatGPT to review it.

          Unbeknownst to me, there was an issue. It pointed out multiple signs of slow leaks and then described what I should do to test and repair it easily.

          I see a lot of negative energy about the 'AI' tech we have today to the point where you will get mass downvoted for saying something positive.

          • nateburke6 hours ago
            Did you follow through with the full repair? How long did it take? What was the materials cost?
          • creata5 hours ago
            If you did a writeup of this case, you could change some minds.
          • brazukadev6 hours ago
            A knowledgeable friend or family member could have helped with that too. AI is helpful, it is just not trillion dollars helpful.
  • shmerl11 hours ago
    As usual, those who spent too much money on it use it as a way to show their investors they didn't waste all that money and to get them to spend even more. That's why it's so messed up.
  • Isamu12 hours ago
    The AI push is not just hype, it’s a scramble for cash. For now the only game plan is to scale up massively with a giant investment gamble, to try to get beyond the obvious limitations that threaten to burst the bubble.

    Plus, the general economic outlook is negative, and AI is the bright spot. They are striving to keep growth up amid downward pressure.

  • hastily31147 hours ago
    I think the reason AI is being pushed everywhere right now is simply that companies want to experiment with it.

    They want to explore what is possible and what sticks with users.

    The best way to do this is to just push it in their apps as many places as possible since 1. you get a nice list of real world problems to try and solve. 2. You have more pressure on devs to actually make something that works because it is going into production. 3. You get feedback from millions of users.

    Also, by working heavily with their AI, they will discover areas that can be improved and thus make the AI itself better.

    They don't care that it is annoying, unhelpful or uneconomical because the purpose is experimentation.

    • stephen_g4 hours ago
      Or they could beta test these features first with users who opt in, and learn that most users hate them there, instead of tacking AI nonsense onto everything and beta testing crappy features nobody wants on real users?
    • protocolture7 hours ago
      Most of these companies have an advanced-features user group. Leave it there until it's desired.
  • hermitcrab8 hours ago
    >I will not allow AI to be pushed down my throat just to justify your bad investment.

    Unfortunately these reckless investments are likely to cause massive collateral damage to us 'little people'. But I'm sure the billionaires will be just fine.

  • YmiYugy6 hours ago
    > I hear the complaints from the tech giants already: "But we bought too many GPUs! We spent billions on infrastructure! They have to be put to work!"

    > Not my problem.

    I don't know about the author specifically, but the bubble popping would be a very bad thing for many people. People keep saying that this bubble isn't so bad because it's concentrated on the balance sheets of very deep-pocketed tech companies who can survive the crash. I think that is basically true, but a lot is riding on the stock valuations of these big tech companies, and lots of bad stuff will happen when they crash. It's obviously bad for the people holding these stocks, but I think these tech stocks are so big that there is a real risk of widespread contagion.

    • fhrhfhfhdj6 hours ago
      The only thing worse than popping the AI bubble is trying to inflate it even larger with a government bailout. The longer the government tries to protect the bubble, the more extreme and destructive the level of capital misallocation is going to become. They should have popped it years ago before we were trying to build nuclear reactors to keep our internet chatbots from taking down the power grid.
  • ChrisMarshallNY10 hours ago
    This seems like a fairly reasoned screed. I can't find much to disagree with.

    I am, personally, quite optimistic about the potential for "AI," but there's still plenty of warts.

    Just a few minutes ago, I was going through a SwiftUI debugging session with ChatGPT. SwiftUI View problems are notoriously difficult to debug, and the LLM got to the "let's try circling twice widdershins" stage. At that point, I re-engaged my brain, and figured out a much simpler solution than the one proposed.

    However, it gave me the necessary starting point to figure it out.

  • ziggure8 hours ago
    I love that the picture in the article is AI-generated
  • beefnugs3 hours ago
    They have already proven they absolutely COULD tag everything with evidence of how much came from an original author/artist. Google's image generator has that hidden signature feature that can mark output as AI-generated, but they don't take the step further back to the real source. Just "it came from AI", and, of course, pay us.
  • gnarlouse6 hours ago
    Also don’t shove it up our
  • btbuildem10 hours ago
    I mean, we're in the upslope stage of the hype/bubble cycle. Once this pops and 80% of invested people lose their shirts, the long-term adoption cycle will play out much more reasonably, more like OP wishes.
  • nacozarina12 hours ago
    assume it is being ‘done wrong’, not due to the usual trifecta of greed/evil/stupidity, but due to socio-economic pressure that demands this approach.

    AI really needs R&D time, where we first figure out what it’s good for and how best to exploit it.

    But R&D for SW is dead. Users proved to be super-resilient to buggy or mis-matched sw. They adapt. ‘Good-enough’ often doesn’t look it. Private equity sez throw EVERYTHING at the wall and chase what sticks…

  • seizethecheese7 hours ago
    > Right now, the frantic pace of deployment isn't about utility; it's about liquidity. It’s being shoved down our throats because some billionaires need to make some more billions before they die.

    > We see the hallucinations. We see the errors. Let's pick the things which work and slowly integrate it into our lives. We don't need to do it this quarter just because some startup has to do an earnings call.

    Citation needed?

    To me this is a contentless rant. AI is about old billionaires getting richer before they die? It’s at least a lot more than that.

    Seems like some people tried AI in 2023, got negative affinity for it, then just never update with new information. In my personal use, hallucinations are way down, and it’s just getting more and more useful.

  • snorbleck11 hours ago
    yeah, we're f&%^ed
    • ponector10 hours ago
      Fired?
      • mikestew9 hours ago
        Your regex parser is broken if your answer is a five char string.
  • newsclues12 hours ago
    If AI was amazing you wouldn’t need to push it, people would demand it!

    You need to push slop, because people don’t really want it.

  • akomtu10 hours ago
    AI can be thought of as a parasitic lifeform: it feeds on truth and excretes slop. We know AI is no good for us, but those pushing it have a nefarious plan: make people dependent on it, so we can't get rid of this parasite without destroying our society.
  • DaveZale12 hours ago
    try watching a televised American football game. So many ads for AI. Of course ads appeal most to the gullible.
    • skeeter202011 hours ago
      Have you seen the Workday ads featuring all the washed-up rock stars? They're pushing managing people AND AI agents - using AI. sigh...
  • zahlman10 hours ago
    As I click into this thread from the front page, "Writing a Good Claude.md" appears immediately above it. Sigh.
    • Flipflip797 hours ago
      What’s the problem there exactly? There’s nothing necessarily diametrically opposed in the two articles. They’re almost complementary, really.
    • protocolture7 hours ago
      "How to voluntarily use claude to get what you want"

      "I dont like being forced to use AI products"

      Whats the issue?

      • brazukadev6 hours ago
        HN might be the one pushing AI down its users' throats, with so many top stories all day, every day.
  • jeffbee7 hours ago
    This article has basically nothing to say, and all of the comments here on HN are just projecting their priors into the vacuum.
    • mrcsharp7 hours ago
      It is one more voice against the frantic nature of this topic and the corporations desperate to push for it. It is no different than one more post about Claude.md making it to the front page.
      • jeffbee6 hours ago
        It spends no time establishing the facts of franticness and desperation.
  • bentt8 hours ago
    This is just another example of getting the actual problem wrong.

    When he says "billionaires making more billions" it's really off the mark. These people are not forcing AI down our throats to make billions.

    They are doing it so they can win.

    Winning means victory in a zero sum game. This is a game that is zero sum because the people that play it think that way. However, the point is not to make money. That's a side effect.

    They want to win so the other guys don't. That means power, growth, prestige, and winning. Winning just to win.

    Once people start to understand that this is the prime directive for the elite in the tech business, the easier it is for everyone to defend against it.

    The way you defend against it is to make it non-zero sum. Spread the ideas out. Give people choices. Act more like Tim Berners Lee and less like Zuck. This will mean less money, sure, but more to the point, it deprives anyone of being "the winner". We should all celebrate any moves that take power away from the few and redistribute it to many. Money will be made in the process, but that's okay.

  • 6510 hours ago
    Oh how I'd like the AI bubble to pop already when ROIs don't justify the cost. I like AI for things like getting recommendations or classifying images. And yet execs feel the need to force every possible use case down our throats even if they don't make any sense or make quality worse.

    E.g. Programming - and I do judge not only those who use AI to code but execs who force people to use AI to code. Sorry, I'd like to know how my code works. Sorry, you're not an efficient worker, you're just making yourself dumber and churning out garbage software. It will be a competitive advantage for me when slop programmers don't know how to do anything and I can actually solve problems. Silicon Valley tech utopians cannot convince me otherwise. I don't think poorly socialized dweebs know much about anything other than their AI girlfriends providing them with a simulation of what it feels like to not be lonely.

  • zzzeek12 hours ago
    > We will work with the creators, writers, and artists, instead of ripping off their life's work to feed the model.

    i support this but the Smarter Than Me types say it's impossible. It's not possible to track down an adequate number of copyright holders, much less get their permission, much less afford to pay them, for the number of works required in order to get the LLM to achieve "liftoff".

    I would think that as I use Claude for coding, it would work just as well if it didn't suck down the last three years of NYT articles as if it did. There's a vast amount of content that is in the public domain, and if you're ChatGPT you can make deals with a bunch of big publishers to get more modern content too. But that's my know-nothing take.

    maybe the issue is more about the image content. Screw the image content (and definitely the music content, spotify pushing this slop is immensely offensive), pay the artists. My code OTOH is open source, MIT licensed. It's not art at all. Go get it (though throw me a few thousand bucks every year because you want to do the right thing).

    • FLT811 hours ago
      It's not 'impossible', it's economically unviable. There's a difference. We really should be mandating that companies that don't pay fair market prices for the data they use to train their models must open source everything as reparation to humanity.
    • creata5 hours ago
      > i support this but the Smarter Than Me types say it's impossible.

      Adobe has been training models on only licensed media. I'm not sure if it's all their models or just some of them, and I haven't seen the results, but someone's doing it.

      • zzzeek4 hours ago
        How ironic it would be for Adobe, the literal 500-ton dreadnought of copyright, patent, and intellectual-property ownership, to be sucking down training content from unlicensed sources. At least something is not entirely wrong in the world today.
    • NoraCodes7 hours ago
      Why do AI companies get to do whatever they want in order to meet their business goals ("liftoff")?
    • bgwalter11 hours ago
      It is not an axiom that LLMs even have the right to achieve "liftoff". They are obvious instruments of plagiarism that often just reorder sentences so as not to get caught. They can be forbidden.

      If you don't mind the oligarchs stealing your code, that is your prerogative. Many other people do mind.

      • 11 hours ago
        undefined
  • hmans8 hours ago
    [dead]
  • eur0pa12 hours ago
    [flagged]
    • stack_framer12 hours ago
      And this is the new, fashionable, easy way to insult someone. Just say their work sounds like AI, and job done.
      • jsheard12 hours ago
        In this case they're not doing themselves any favors by leading their posts with obviously AI generated images. That just primes the reader to suspect the author is slopping it up before they even start reading.
      • nutanc12 hours ago
      You are right. And that "work sounds like AI" is an insult says everything you need to know about what AI generates :)
      • poly2it12 hours ago
        Do you want AI pushed down your throat?
    • 12 hours ago
      undefined
    • 12 hours ago
      undefined
  • xnx11 hours ago
    [flagged]
    • suprfsat11 hours ago
      Yeah, my Windows rebooted and then bam, this web page opened itself.
      • malfist11 hours ago
        And the subscription fee for your browser went up 40% because adding anti-AI content is a value add. And no, you can't opt out.
  • chasing0entropy12 hours ago
    [flagged]
    • procaryote12 hours ago
      The article renders well even if you block javascript though
      • zahlman10 hours ago
        GP's point is that the page attempts to run unnecessary JS and that this is objectionable.
    • happytoexplain12 hours ago
      You're implying they're the same in some way, but you haven't explained why.
      • chasing0entropy11 hours ago
        Disable your script execution and cookie storage for the OP site and then attempt to view it. The page and content load fine; the host injecting coercive messages to get you to enable tracking cookies and scripts is the same dynamic by which AI has been integrated into everything.

        You either comply or face unnecessary roadblocks. OP has complied by sharing the link. My right to choose tracking cookies and script execution is parallel to my right not to be forced to use AI. This issue has to be addressed universally, as it is not simply "no AI" on the web; it's the freedom to use the web versus forced compliance with a violation of that freedom.

    • dbalatero11 hours ago
      Hyperbole much? These are two different issues, you can of course write your own blog post about that.

      Benefit of the doubt: this person wants to get their word out and it's more energy than they had to track down a pristine, pure, sparkling blogging engine.

    • acedTrex12 hours ago
      Can you explain javascript and cookies similarity to LLMs?
    • dawnerd12 hours ago
      You have the ability to turn that off…
  • andy9912 hours ago
    [flagged]
    • dbalatero11 hours ago
      I think the author cares about both.

      Did that single sentence in this relatively short, 36 sentence post really make you flip the table as hard as you imply? That's surprising if so.

    • 1shooner11 hours ago
      On the contrary, cannibalizing the commercial viability of original content creation is possibly the most short-sighted aspect of the current AI push. That isn't 'political', it's just a relatively conservative assessment of the content market.
  • mwkaufma10 hours ago
    >> It is time to do AI the right way.

    Some "No True Scotsman"-flavored cope.

  • rvz11 hours ago
    Ok fair enough, feel more of the AGI.
  • stego-tech11 hours ago
    Replace AI with Blockchain. With cloud. With big data. With ERP. With service models. With basically almost any fad since virtualization. It’s the same thing: over-hyped tools with questionable value being shoe-horned into every possible orifice so the investors can make money.

    I’m not opposed to any of the above, necessarily. I’ve just always been the type to want to adopt things as they are needed and bring demonstrable value to myself or my employer and not a moment before. That is the problem that capital has tried to solve through “founder mode” startups and exploitative business models: “It doesn’t matter whether you need it or not, what matters is that we’re forcing you to pay for it so we get our returns.”

    • manphone11 hours ago
      The difference is the level of investment and consumer application for each service; most customers would never be able to tell you what an ERP is.
    • tayo428 hours ago
      How could you say cloud is overhyped when we're at a point where running physical machines is a rare and specialized skill and companies couldn't run their own hardware anymore?
    • aurareturn11 hours ago

        Replace AI with Blockchain. With cloud. With big data. With ERP. With service models. With basically almost any fad since virtualization. It’s the same thing: over-hyped tools with questionable value being shoe-horned into every possible orifice so the investors can make money.
      
      They said the exact same thing when electricity was invented too.

      Gas companies said electricity was a fad. Some doctors said electric light harms the eyes. It's too expensive for practical use. Need too much infrastructure investment. AC will kill people with shocks. Electrification will destroy jobs, said gas lamp unions. It's unnatural, said some clergy. And on and on and on...

      • stego-tech8 hours ago
        Thing is though, most folks don't recall that electricity itself wasn't the fad, but some methods of generation were. Same with vehicles, as everyone tried every fuel until we gradually found the ones that worked best in each engine or vehicle.

        As I'm seeing from the multitude of folks scrutinizing my clearly exhaustive list of every single technology fad ever (sarcasm, for those unclear), they're missing the forest for the trees, and are therefore likely to hit every single rung when they fall down a ladder. I'm not saying X or Y wasn't valuable or useful, only that the way they're shoved into every single orifice by force is neither of those things to those taken advantage of in the quest for utility.

        The point isn't that advancements are bad, the point is the way we force them down the throats of everyone, everywhere, all the time creates a waste of skill and capital for the returns of a very select group of people. The point is that fads are bad, and we should let entities (companies and people) find use in what's offered naturally rather than force-feeding garbage to everyone to see what's actually palatable.

      • drivebyhooting10 hours ago
        Electric light indeed harms the eyes.
      • exasperaited10 hours ago
        This post is just one multipurpose category error.
      • XorNot10 hours ago
        What an utterly bizarre comparison.

        Even the block chain comparison isn't valid because it didn't consist of an "AI" button getting crammed into every single product and website, turned into a popover etc.

        • aurareturn9 hours ago
          There are nuances to the examples.

          For example, I'm not a big fan of blockchain. In fact, I think crypto is just 99% scam.

          But big data led to machine learning and LLMs, right? Cloud led to cheaper software with faster deploy times, right? In fact, cloud also means many browser-based apps replacing old Windows apps.

          None of these were fads. They are still tremendously useful today.

  • atleastoptimal9 hours ago
    Boomers in the manager class love AI because it sells the promise of what they've longed for for decades: a perfect servant that produces value with no salary, no need for breaks, no pushback, no workers comp suits, etc.

    The thing is, AI did suck in 2023, and even in 2024, but recently the best AI models are veering into not-sucking territory, which, when you look at it from a distance, makes sense: if you throw the smartest researchers on the planet and billions of dollars at a problem, something will eventually give and the wheels will start turning.

    There is a strange blindness many people have on here, a steadfast belief that AI will just never end up working or will always be a scam, but the massive capex on AI now is predicated on the eventual turning of fledgling LLMs into self-adaptive systems that can manage any cognitive task better than a human. I don't see how the improvements we've seen over the past few years in AI aren't heading in that direction.

    • bkolobara8 hours ago
      It still kinda sucks though. You can make it work, but you can also easily end up wasting a huge amount of time trying to make it do something that it's just incapable of. And it's impossible to know upfront if it will work. It's more like gambling.

      I personally think we have reached some kind of local maximum. I work 8 hours per day with claude code, so I'm very much aware of even subtle changes in the model. Taking into account how much money was thrown at it, I can't see much progress in the last few model iterations. Only the "benchmarks" are improving, but the results I'm getting are not. If I care about some work, I almost never use AI. I also watch a lot of people streaming online to pick up new workflows and often they say something like "I don't care much about the UI, so I let it just do its thing". I think this tells you more about the current state of AI for coding than anything else. Far from _not sucking_ territory.

  • AGI-slop11 hours ago
    AI will with certainty increase the productivity of some, and the rest will fall behind, perhaps dramatically. Effectiveness with AI can still be a grind beyond simple prompting, and the expensive AI tools we're getting are heavily subsidized right now; that may not always be the case.
  • qustrolabe9 hours ago
    What examples of AI integrations annoy you? Because I have such a wonderful time randomly discovering AI integrations where they actually fit nicely: 1) marimo's documentation has an ask button to quickly get some help, kind of like a way smarter RAG; 2) Postman has AI that can write little scripts to visualize responses however you want (for example, I turned a bunch of user ids into profile links so that I could visit all of them); 3) the Grok button on each Twitter post is just amazing for quickly getting into what a post even references and talks about; 4) Google's AI Mode has saved me many clicks, and even just Gemini being able to quickly fetch when a certain TV show goes live and make a reminder is amazing
    • evil-olive9 hours ago
      > Grok button on each Twitter post is just amazing to quickly get into what post even references and talks about.

      when Charlie Kirk was shot, and the video was posted to Twitter, people asked Grok to "fact-check" it... and Grok told them the videos were fake and Kirk was alive. [0]

      Grok also spread misinformation about the identity of the shooter. [1]

      > On Friday morning, after Utah Gov. Spencer Cox announced that the suspect in custody was Robinson, Grok's replies to X users' inquiries about him were contradictory. One Grok post said Robinson was a registered Republican, while others reported he was a nonpartisan voter. Voter registration records indicate Robinson is not affiliated with a political party.

      and that's just one particularly egregious event in a long string of problems, such as the MechaHitler thing. [2] and the Elon Musk piss-drinking thing. [3]

      so if you're going to defend these "AI" integrations as being useful and helpful...I dunno, Grok is probably not a good example to point to.

      0: https://www.engadget.com/ai/grok-claimed-the-charlie-kirk-as...

      1: https://www.cbsnews.com/news/ai-false-claims-charlie-kirk-de...

      2: https://www.npr.org/2025/07/09/nx-s1-5462609/grok-elon-musk-...

      3: https://www.404media.co/elon-musk-could-drink-piss-better-th...

    • croes9 hours ago
      On Windows: Notepad and Edge Developer Tools