27 points by andsoitis 15 hours ago | 17 comments
  • aleksiy123 14 hours ago
    Is there a name for the thing where "plausible"-sounding, easily digestible narratives with nothing to back them up are used to explain complex interactions that no one really fully understands?

    While Google definitely has issues, this ain't the root cause, even if there were only one root cause.

    • aleksiy123 14 hours ago
      Also, rereading the tweet, the central point seems to be asking why the adoption curve of AI by (individual?) developers within Google seems to be the same as at other companies.

      I think this raises the question: why should we expect the AI adoption curve of Google devs to be any different from any other company's?

      Adoption has been pretty rapid everywhere from what I can see; the tooling itself is still fairly unstable as everyone rapidly iterates.

      The productivity gains are there, but they seem much more modest than people like to admit (10-20%).

    • Spooky23 14 hours ago
      Shitposting.

      Yeah Google has a fundamentally broken engineering organization because some influencer dude thinks they should vibe code, with technology those presumably incompetent engineers substantially invented.

    • mtklein 14 hours ago
      The closest term I know is "just-so story".
  • solarkraft 14 hours ago
    This is so doomy, as if not being all in were the biggest mistake you could make.

    > My Google friend and I had this conversation over a month ago. I didn't share it because I wanted to look around a bit, and see if it's really as bad as all that. I've been talking to people from dozens of companies since then. And yeah. It's as bad as all that.

    > They may have moats and high walls, but the horde is coming for them all the same.

    Can somebody explain how engineering getting a bit cheaper justifies this hysteria?

    From the little I know, it seems that there is plenty more to running a software business than engineering, especially if you don’t include project management or product vision in that.

    Maybe I’m an AI laggard or naive, but I see plenty of things that can’t easily be automated because I tried.

    Maybe I’ll be automated away tomorrow by somebody who believes harder ...

    • headius 11 hours ago
      A simple rule to follow right now: never trust someone who's selling an AI coding product to tell you how great AI is at coding.

      It sounds like my experience has been similar to yours. I have found a few places where agentic coding produces pretty good results: generally very small patches that could have been written by anyone. I give the tool credit for finding small bugs that nobody noticed before.

      On the larger or novel tasks I've thrown at these models, including some of the top-tier options, the tools have either produced incorrect solutions, solutions written in a very inefficient way, or solutions that actually introduced more problems. I've taken some of these same challenges to other AI experts who couldn't believe the tool failed. None of them were able to get good results either.

      Everybody is desperate to carve out their slice of the AI Gold Rush right now before it all condenses down and developers realize they can't give up all agency to coding tools trained on the great mass of garbage that's out there. If at some point these tools truly do make developers 10x more efficient, they'll naturally get adopted. Hype chasers and product marketers are not the ones to listen to right now.

  • pensatoio 14 hours ago
    This is a very smug post with very little substance.
  • 827a 14 hours ago
    > Most of the industry has the same internal adoption curve: 20% agentic power users, 20% outright refusers, 60% still using Cursor or equivalent chat tool.

    The 20% in the agentic power user camp broadly refuse to do any external educational communication. They're only interested in pulling the ladder up behind them; that is, if they are climbing the ladder at all, and it's reasonable to have doubts about this because they broadly allow very little observability into their processes.

  • code51 15 hours ago
    Anything from Steve Yegge must come with an AI bias disclaimer now.
    • Zafira 14 hours ago
      He was also effectively paid $300,000 to facilitate a cryptocurrency rug pull on Gas Town, bowing out after the rug pull because Gas Town required his “full attention”. [0]

      Everything he says now is suspect.

      [0] https://steve-yegge.medium.com/steveys-birthday-blog-34f4371...

      • bayarearefugee 14 hours ago
        Yeah, I used to respect him as a tech blogger, but you can't wash that crypto stink off once it gets on you.
  • firefoxd 14 hours ago
    I'm not sure what the expectation is supposed to be. Is Google supposed to have a 100% adoption rate? Even OpenAI and Anthropic don't have that.

    I have managers asking similar things at work: how can we increase AI adoption in dev teams? Why though? How does it benefit the manager? How can we increase the vim adoption rate for dev teams?

    Google has thousands of mature products. You don't just throw a single solution (AI) at all problems.

  • pingou 14 hours ago
    What does this have to do with a hiring freeze? They say 20% at Google are agentic power users. Can't they teach the others? Why would someone from outside be somehow more likely to use AI, or more likely to influence their colleagues?
  • riskassessment 14 hours ago
    Can someone explain to me how and in what way Claude Code is considered "agentic" and Cursor/Gemini CLI/Antigravity are not?
    • gbalduzzi 14 hours ago
      Gemini CLI is definitely agentic, and Cursor and Antigravity have agentic tools.

      Claude Code is simply considered the best agentic tool, not the only one lol

  • stanfordkid 14 hours ago
    They basically wrote the equivalent of Claude Code and launched it as a product... how does their adoption curve lag behind John Deere's?
  • skizm 14 hours ago
    > How is it that a handful of companies are taking off like a spaceship, and the rest, including Google, are mired in inaction?

    Which companies? Not counting companies directly benefiting from selling AI.

    • dpark 14 hours ago
      Agree. This is the same hand-wavy “some people” that lazy writers use when they can’t back up their claims with actual data.
  • krackers 14 hours ago
    > just cancelled IntelliJ for a thousand engineers

    IntelliJ can't cost more than the AI provider subscriptions, and it will actually handle large refactors without breaking your codebase.

    • dpark 14 hours ago
      But if you take away their IDEs, they’ll be forced to use the AI! What could possibly go wrong with this plan?
  • eigen-vector 13 hours ago
    Yegge is basically posting straight-up slop these days. He's making some extremely confident claims about Google after talking to one person, without stopping to think whether that person is the most informed on this.

    Somehow Yegge also has a problem with the adoption curve being consistent with other companies'. Whatever that means.

  • awkward 14 hours ago
    Why would we expect John Deere to have a low AI adoption curve? Its main product line is just as computerized as Tesla's.
  • dieortin 14 hours ago
    It’s interesting that your code not being 100% AI slop will get your engineering org called “utterly mediocre” nowadays
  • danielovichdk 14 hours ago
    20 years at Google and this is the shit he's relaying. Fire his ass.
  • bayarearefugee 14 hours ago
    > How is it that a handful of companies are taking off like a spaceship, and the rest, including Google, are mired in inaction?

    All these companies are "taking off like a spaceship", so... where is all the (quality, non-slop) space traffic?

    I use LLMs. I believe LLMs (especially combined with agentic coding) increase coding productivity and, in the right hands, can produce non-slop. But by and large, on a macro level, everything still feels pretty much the same industry-wide as it did last year, five years ago, and ten years ago.