381 points by cheeaun 3 hours ago | 43 comments
  • anonymous9082132 hours ago
    Microsoft employee (VP of something or other, for whatever Microsoft uses "VP" to mean) doing damage control on Bluesky: https://bsky.app/profile/scott.hanselman.com/post/3mez4yxty2...

    > looks like a vendor, and we have a group now doing a post-mortem trying to figure out how it happened. It'll be removed ASAFP

    > Understood. Not trying to sweep under rugs, but I also want to point out that everything is moving very fast right now and there’s 300,000 people that work here, so there’s probably be a bunch of dumb stuff happening. There’s also probably a bunch of dumb stuff happening at other companies

    > Sometimes it’s a big systemic problem and sometimes it’s just one person who screwed up

    This excuse is hollow to me. In an organization of this size, it takes multiple people screwing up for a failure to reach the public, or at least it should. In either case -- no review process, or a failed review process -- the failure is definitionally systemic. If a single person can on their own whim publish not only plagiarised material, but material that is so obviously defective at a single glance that it should never see the light of day, that is in itself a failure of the system.

    • HelloNurse15 minutes ago
      > "everything is moving very fast"

      Then slow down.

      With this objective lack of control, sooner or later your LLM experiments in production will drive into a wall instead of hitting a little pothole like this diagram.

    • p_ingan hour ago
      You’re incorrect on how the publishing process works. If a vendor wrote the document, it has a single repo owner (all those docs are in GitHub) who would need to sign off on a PR. There aren't multiple layers or really any friction to get content on learn.msft.
      • anonymous90821329 minutes ago
        I suggested that if there is no review process, it is a systemic issue, and that if there is a review process that failed to catch something this egregious, it is a systemic issue. My supposition is that regardless of how the publishing process works, there is a systemic failure here, and I made no claims as to how it actually works, so I'm not sure where the "you're incorrect on how it works" is coming from.
      • RobotToaster24 minutes ago
        I've seen better review processes in hobby projects
        • HelloNurse12 minutes ago
          Neither deadlines nor cheap work for hire help any sort of review process, while a hobby project is normally done by someone who cares.
    • hansmayer19 minutes ago
      > This excuse is hollow to me. In an organization of this size, it takes multiple people screwing up for a failure to reach the public, or at least it should.

      Completely with you on this, plus I would add the following thoughts:

      I don't think the size of the company should automatically be a proxy measure for a certain level of quality. Surely you can have slobs prevailing in a company of any size.

      However - this kind of mistake should not be happening in a valuable company. Microsoft is currently still priced as a very valuable company, even with the significant corrections post Satya's crazy CapEx commitments from 2 weeks ago.

      However, it seems the mistakes, errors, and "vendors without guidelines" have recently piled up a bit too much for a company supposedly worth 3-4T USD, culminating in this weird, random, but very educational case. If anything, it's an indicator that Microsoft may not really be as valuable as it is currently still perceived to be.

    • prmoustachean hour ago
      > In either case -- no review process, or a failed review process -- the failure is definitionally systemic.

      Orthographic and grammar errors should have been corrected, but do you really expect a review process to identify that a diagram is a copy of one some rando already published on the internet years ago?

      • pointlessonean hour ago
        It’s not just a copy. It’s a caricature of a copy with plenty of nonsense in it: typos and weird “text”, broken arrows, etc. Even a cursory look gives a feeling that something’s fishy.
        • tharos4733 minutes ago
          Weird text was already deemed acceptable by Microsoft in their documentation, as they machine-translated most screenshots instead of recreating them in different locales, leading to the same problems as this image.
        • toong42 minutes ago
          "Legal reviewed it and did not flag any issues!"
      • logifailan hour ago
        Shouldn't "where are we sourcing our content" be part of any publication review process?
      • clortan hour ago
        plenty of people on the internet recognised it immediately, so sure, he may have been a rando when he created it, but not so much 15 years later..
        • Freak_NL41 minutes ago
          Just that tiny image on his blog was enough for me to go "oh yeah, I used his diagram to explain this type of git workflow to colleagues a decade ago". Someone should have spotted that right away.
      • sznio43 minutes ago
        No. I'd expect that "continvouclous morging" gets caught.
      • michaelt32 minutes ago
        Here is the original: https://nvie.com/posts/a-successful-git-branching-model/

        Here is the slop copy: https://web.archive.org/web/20251205141857/https://learn.mic...

        The 'Time' axis points the wrong way, and is misspelled, using a non-existent letter - 'Tim' where the m has an extra hump.

        It's pretty clear this wasn't reviewed at all.

    • nxobjectan hour ago
      A postmortem for that but not Copilot in notepad.exe? Priorities…
    • Anon4Now15 minutes ago
      > everything is moving very fast right now

      Now that's an interesting comment for him to include. The cynic in me can think of lots of reasons from my YouTube feed as to why that might be so. What else is going on at Microsoft that could cause this sense of urgency?

    • tabs_or_spaces2 hours ago
      An entire post mortem for a morged diagram is wild
    • adityaathalye2 hours ago
      Oldest trick in the book... Shoot the vendor.
    • xxran hour ago
      Yeah, isn't this why we're told everything "moves so much slower at a bigco" than at a startup?
    • theolivenbauman hour ago
      Seems like this is going to be the year of AI slop being released everywhere by Microsoft. Just wish they'd put as much effort into a post mortem for this one as they're doing for a diagram on a blog post https://github.com/microsoft/onnxruntime/issues/27263#issuec...
    • thunfischtoast2 hours ago
      Microsoft seems to have thrown quality assurance overboard completely. Vibe generate everything, throw it at a wall, see what sticks. Tech bros are so afraid of regulation they even drop regulation inside their own companies. (just kidding)
      • nhinck2an hour ago
        It's not just throwing QA out, they are actively striving for lower quality because it saves money.

        They're chasing that sweet cost reduction by making cheap steel without regard for what it'll be used for in the future.

      • bonesss30 minutes ago
        Just a thought: the timeline of the vibe techs rolling out and the timeline of increasing product rot, sloppiness, and user-hostile “has anyone ever actually used this shit!?!” moments coming out of MS overlap.

        Vibing won’t help out at all, and years from now we’re gonna have project math on why 10x-LLM-ing mediocre devs on a busted project that’s behind schedule isn’t the play (like how adding more devs to a late project generally makes it more late). But it takes years for those failures to aggregate and spread up the stack.

        I believe the vibing is highlighting the missteps from the wave right before it, which was cloud-first, cloud-integrated, cloud-upselling that cannibalized MS’s core products, multiplied by the massive MS layoff waves. MS used to have a lot of devs who made a lot of culture and who are simply gone. The weakened offerings, breakdown of vision, and platform enshittification have been obvious for a while. And then ChatGPT came.

        Stock price reflects how attractive stocks are for stock purchasers on the stock market, not how good something is. MS has been doing great things for their stock price.

        LLMs make getting into emacs and Linux and OSS and OCaml easier than ever. SteamOS is maturing. Windows Subsystem for Linux is a mature bridge. It’s a bold time for MS to be betting on brand loyalty and product love, even if their shit worked.

    • 7bitan hour ago
      Any excuse that tries to play down its own fault by pointing out that other companies also have faults is dishonest.

      And that's exactly what happened here.

  • tombert2 minutes ago
    Is there a single thing that Microsoft doesn’t half-ass? Even if you wanted to AI-generate a graph, how hard is it to go into Paint or something and fix the text?

    I have been having oodles of headaches dealing with exFAT not being journaled and having to engineer around it. It’s annoying because exFAT is basically the only filesystem used on SD cards since it’s basically the only filesystem that’s compatible with everything.

    It feels like everything Microsoft does is like that though; superficially fine until you get into the details of it and it’s actually broken, but you have to put up with it because it’s used everywhere.

  • KronisLVa minute ago
    > take someone's carefully crafted work, run it through a machine to wash off the fingerprints, and ship it as your own.

    I don’t even care about AI or not here. That’s like copying someone’s work, badly, and either not understanding or not giving a shit that it’s wrong? I’m not sure which of those two is worse.

  • ezst2 hours ago
    Waiting for the LLM evangelists to tell us that their box of weights of choice did that on purpose to create engagement as a sentient entity understanding the nature of tech marketing, or that OP should try again with quatuor 4.9-extended (that really ships AGI with the $5k monthly subscription addon) because it refactored their pet project last week into a compilable state, after only boiling 3 oceans.
    • meibo2 hours ago
      Glorp 5.3 Fast Thinking actually steals this diagram correctly for me locally so I think everyone here is wrong
    • Longwelwind22 minutes ago
      Using an LLM to generate an image of a diagram is not a good idea, but you can get really good results if you ask it to generate a diagram.io SVG (or a Miro diagram through their MCP).

      I sometimes ask Claude to read some code and generate a process diagram of it, and it works surprisingly well!
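
      Something like this toy sketch, purely to illustrate the "structured text instead of pixels" idea (it assumes the third-party Python graphviz package plus a local Graphviz install; the branch names are made up):

        # Emit a small git-flow-style diagram as DOT text, then render SVG.
        # Reviewable, diffable, and immune to "tiന്ന"-style letter mangling.
        import graphviz

        dot = graphviz.Digraph("gitflow_toy", format="svg")
        dot.attr(rankdir="LR")

        for branch in ("main", "develop", "feature/login", "release/1.0"):
            dot.node(branch, branch, shape="box")

        dot.edge("develop", "feature/login", label="branch off")
        dot.edge("feature/login", "develop", label="merge back")
        dot.edge("develop", "release/1.0", label="branch off")
        dot.edge("release/1.0", "main", label="merge + tag")

        print(dot.source)                        # DOT source, easy to review in a PR
        dot.render("gitflow_toy", cleanup=True)  # writes gitflow_toy.svg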

    • shaky-carrousel8 minutes ago
      You're holding the LLM wrong.
    • nolok21 minutes ago
      It's microsoft's AI though, not even the totally crazed evangelists like that one.
  • cwal372 hours ago
    LinkedIn is also a great example of this stuff at the moment. Every day I see posts where someone clearly took a slide or a diagram from somewhere, then had ChatGPT "make it better" and write text for them to post along with it. Words get mangled, charts no longer make sense, but these people clearly aren't reading anything they're posting.

    It's not like LinkedIn was great before, but the business-influencer incentives there seem to have really juiced nonsense content that all feels gratingly similar. Probably doesn't help that I work in energy which in this moment has attracted a tremendous number of hangers-on looking for a hit from the data center money funnel.

    • ChristianJacobs2 hours ago
      LinkedIn is a masquerade ball dressed up as a business oriented forum. Nobody is showing their true selves, everyone is either grinding at their latest unicorn potential with their LLM BFF or posting a "thoughtful" story that is 100% totally real about a life changing event that somehow turns into a sales pitch at the end...
      • ozim2 hours ago
        There are people who write genuinely interesting stuff there as well.

        I use block option there quite a lot. That cleans up my experience rather well.

    • varjag16 minutes ago
      Of course they aren't. The text to go with those diagrams is also machine generated.
  • rmunn2 hours ago
    Similar story. I'm American but work and live outside the US, so I don't know how likely this would be if I had ordered from Amazon. But I ordered a rug for my sons' room from this country's equivalent to Amazon (that is, the most popular order-online-and-we-ship-to-you storefront in this country), and instead of what I ordered (a rug with an image showing the planets, with labels in English) I got an obviously AI-generated copy of the image, whose letters were often mangled (MARS looked like MɅPS, for example). Thankfully the storefront allowed me to return it for a refund, I ordered from a different seller on the second try, and this time I received a rug that precisely matched the image on the storefront. But yes, there are unscrupulous merchants who are using AI to sloppily copy other people's work.
    • fnands10 minutes ago
      Another similar story: My aunt passed away last year, and an acquaintance of my cousin sent her one of those "hug in a box" care packages you can buy off Amazon.

      Except when it was delivered, this one said "hug in a boy" and "with heaetfelt equqikathy" (whatever the hell that means). When we looked up the listing on Amazon it was clear it was actually wrong in the pictures, just well hidden with well placed objects in front of the mistakes. It seems like they ripped off another popular listing that had a similar font/contents/etc.

      Luckily my cousin found it hilarious.

  • hansmayer37 minutes ago
    This is hilarious actually. I am starting to lean into the "AI-dangerous" camp, but not because the chatbot will ever become sentient. It's precisely because of the increasingly widespread adoption of unreliable tools by the incompetent but self-confident Office Worker (R).
    • pjc5029 minutes ago
      Automatic Soldier Sveijk.
  • Animatsan hour ago
    This is so out of hand.

    There's this. There's that video from Los Alamos discussed yesterday on HN, the one with a fake shot of some AI generated machinery. The image was purchased from Alamy Stock Photo. I recently saw a fake documentary about the famous GG-1 locomotive; the video had AI-generated images that looked wrong, despite GG-1 pictures being widely available. YouTube is creating fake images as thumbnails for videos now, and for industrial subjects they're not even close to the right thing. There's a glut of how-to videos with AI-generated voice giving totally wrong advice.

    Then newer LLM training sets will pick up this stuff.

    "The memes will continue" - White House press secretary after posting an altered shot of someone crying.

    • pjc5026 minutes ago
      The war on facts continues. Facts are hard, they require a careful chain of provenance. It's much cheaper to just make up whatever people want to hear, safe in the knowledge that there will never be any negative consequences for you. Only other people, who aren't real anyway.
    • nxobjectan hour ago
      > recently saw a fake documentary about the famous GG-1 locomotive

      It wouldn’t happen to be a certain podcast about engineering disasters, now, would it?

  • nippoo2 hours ago
    They've taken it down now and replaced it with an arguably even less helpful diagram, but the original is archived: https://archive.is/twft6
    • yoz-y2 hours ago
      Wow it’s even worse than I thought. I thought that convictungly morhing would be the only problem. The nonsense and inconsistent arrowheads, the missing annotations, the missing bubbles. The “tirm” axis…

      That this was ever published shows a supreme lack of care.

      • shaky-carrousel2 minutes ago
        And that's what they dared to show to the public. I shudder thinking about the state of their code...
      • quietbritishjiman hour ago
        The turn axis is great! Not only have they invented their own letter (it's not r, or n, or m, but one more than m!), it points the wrong way.
      • zephenan hour ago
        Is it truly possible to make GitFlow look worse than reality?
    • rzmmm41 minutes ago
      It looks like typical "memorization" in image generation models. The author likely just prompted the image.

      The model makers attempt to add guardrails to prevent this, but it's not perfect. It seems a lot of large AI models basically just copy the training data and add slight modifications.

      • pjc5028 minutes ago
        Remember, mass copyright infringement is prosecuted if you're Aaron Swartz but legal if you're an AI megacorp.
  • jezzamon2 hours ago
    "continvoucly morged" is such a perfect phrase to describe what happened, it's poetic
    • alex_suzuki2 hours ago
      It's the sound of speaking when someone is stuffing AI down your throat.
    • ChrisArchitect2 hours ago
      Was reading the word morged thinking it was some new slang I hadn't heard of. Incredible.
      • nvaderan hour ago
        If it wasn't before, it will be now.
      • thebruce87m2 hours ago
        I propose:

        Morge: when an AI agent is attempting to merge slop into your repo.

        • Balinaresan hour ago
          Lifehack: you can prevent many morges by banning user claude on GitHub. Then GitHub will also tell you when a repo was morged up.

          Do your part to keep GitHub from mutating into SourceMorge.

      • adityaathalye2 hours ago
        Same! I was about to go duck-searching for meaning, but thanks to jezzamon for pointing it out.

        brb, printing a t-shirt that says "continvoucly morged"

        • kuerbel35 minutes ago
          You could add one of those Microslop memes that are going around.
    • FeistySkink2 hours ago
      Part of the VC/CM pipeline.
    • ares6232 hours ago
      "Babe, wake up. New verb for slop just dropped."

      It's a perfectly cromulent word.

  • xxran hour ago
    When I read the title, I thought "morg" was one of those goofy tech words that I had missed but whose meaning was still pretty clear in context (like a portmanteau of "Microsoft" and "borged," the latter of which I've never heard as a verb but still works). I guess it's a goofy tech word now.
  • Brian_K_White2 hours ago
    Please let morged become a thing.
    • rossant27 minutes ago
      A mix between merged, morphed, and morgue. I love it. Should be nominated as word of 2026.
    • ares6232 hours ago
      Satya yelled "it's morgin' time" and then morged all over the place.
      • zephenan hour ago
        If you've got the tiന്ന, we've got the morge.
  • adzm2 hours ago
    > Till next 'tim'

    It took me a few times to see the morged version actually says tiന്ന

    • zahlmanan hour ago
      For the curious:

        $ python -c 'print(list(map(__import__("unicodedata").name, "ന്ന")))'
        ['MALAYALAM LETTER NA', 'MALAYALAM SIGN VIRAMA', 'MALAYALAM LETTER NA']
      
      (The "pypyp" package, by Python core dev and mypy maintainer Shantanu Jain, makes this easier:)

        $ pyp 'map(unicodedata.name, "ന്ന")'
        MALAYALAM LETTER NA
        MALAYALAM SIGN VIRAMA
        MALAYALAM LETTER NA
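
      As a rough follow-up sketch (my own illustration, not from the article): the same module can flag letters from unexpected scripts in supposedly English diagram text, which is exactly how the fake 'm' gives itself away:

        import unicodedata

        def suspicious_chars(text, expected_prefix="LATIN"):
            # Letters whose Unicode names don't start with the expected script name.
            return [
                (ch, unicodedata.name(ch, "UNKNOWN"))
                for ch in text
                if ch.isalpha()
                and not unicodedata.name(ch, "UNKNOWN").startswith(expected_prefix)
            ]

        print(suspicious_chars("tiന്ന"))
        # [('ന', 'MALAYALAM LETTER NA'), ('ന', 'MALAYALAM LETTER NA')]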
  • bulbar27 minutes ago
    Good example of the fact that LLMs, at their core, are lossy compression algorithms that are able to fill in the gaps very cleverly.
  • crossroadsguyan hour ago
    Something tangential..

    > people started tagging me on Bluesky and Hacker News

    Never knew tagging was a thing on Hacker News. Is it a special feature for crème de la crème users?

    • OJFord5 minutes ago
      Don't think so; I expect they just mean replying to comments to mention it, or that they posted another article and people commented about having seen this and asking whether it's from another article of theirs, etc.
  • zahlmanan hour ago
    I'm glad I actually checked TFA before asking here if "morging" referred to some actual technical concept I hadn't previously heard of.
    • ccozanan hour ago
      If we are here, let's at least coin it for something relevant!
  • aftergibsonan hour ago
    Archive.org shows this went live last September: https://web.archive.org/web/20250108142456/https://learn.mic...

    It took ~5 months for anyone to notice and fix something that is obviously wrong at a glance.

    How many people saw that page, skimmed it, and thought “good enough”? That feels like a pretty honest reflection of the state of knowledge work right now. Everyone is running at a velocity where quality, craft and care are optional luxuries. Authors don’t have time to write properly, reviewers don’t have time to review properly, and readers don’t have time to read properly.

    So we end up shipping documentation that nobody really reads and nobody really owns. The process says “published”, so it’s done.

    AI didn’t create this, it just dramatically lowers the cost of producing text and images that look plausible enough to pass a quick skim. If anything it makes the underlying problem worse: more content, less attention, less understanding.

    It was already possible to cargo-cult GitFlow by copying the diagram without reading the context. Now we’re cargo-culting diagrams that were generated without understanding in the first place.

    If the reality is that we’re too busy to write, review, or read properly, what is the actual function of this documentation beyond being checkbox output?

    • LauraMedia41 minutes ago
      You are assuming: A) that everyone who saw this would go as far as to post publicly about it (and not just chuckle / send it to their peers privately), and B) that any post about this would reach you/HN and not potentially be lost in the sea of new content.
    • anonymous908213an hour ago
      > readers don’t have time to read properly

      > So we end up shipping documentation that nobody really reads

      I'd note that the documentation may have been read and noticed as flawed, but a random person noticing that it's flawed is just going to sigh, shake their head, and move on. I've certainly been frustrated by inadequate documentation before (that describes the majority of all documentation, in my experience), but I don't make a point of raising a fuss about it, because I'm busy trying to figure out how to actually accomplish the goal for which I was reading the documentation rather than stopping what I'm doing to complain about how bad it is.

      This says nothing to absolve everyone involved in publishing it, of course. The craft of software engineering is indeed in a very sorry state, and this offers just one tiny glimpse into the flimsiness of the house of cards.

      • LauraMedia39 minutes ago
        I usually would post it in our dev slack chat and rant for a message or two how many hours were lost "reverse-engineering" bad documentation. But I probably wouldn't post about it on here/BlueSky.
  • chromehearts2 hours ago
    Billions must morge
    • reddalo15 minutes ago
      Developors, developors, developors, developors!
  • shaky-carrousel9 minutes ago
    > The AI rip-off was not just ugly. It was careless, blatantly amateuristic, and lacking any ambition, to put it gently. Microsoft unworthy.

    LOL, I disagree. It's very on brand for Microslop.

  • kgeist42 minutes ago
    >What's dispiriting is the (lack of) process and care: take someone's carefully crafted work, run it through a machine to wash off the fingerprints, and ship it as your own.

    "Don't attribute to malice what can be adequately explained by stupidity". I bet someone just typed into ChatGPT/Copilot, "generate a Git flow diagram," and it searched the web, found your image, and decided to recreate it by using as a reference (there's probably something in the reasoning traces like, "I found a relevant image, but the user specifically asked me to generate one, so I'll create my own version now.") The person creating the documentation didn't bother to check...

    Or maybe the image was already in the weights.

  • bschwindHN15 minutes ago
    > The AI rip-off was not just ugly. It was careless, blatantly amateuristic, and lacking any ambition, to put it gently.

    That pretty much describes Microsoft and all they do. Money can't buy taste.

    He was right:

    https://www.youtube.com/watch?v=3KdlJlHAAbQ

  • alex_suzuki2 hours ago
    From TFA:

    > the diagram was both well-known enough and obviously AI-slop-y enough that it was easy to spot as plagiarism. But we all know there will just be more and more content like this that isn't so well-known or soon will get mutated or disguised in more advanced ways that this plagiarism no longer will be recognizable as such.

    Most content will be less known and the ensloppified version more obfuscated... the author is lucky to have such an obvious association. Curious to see if MSFT will react in any meaningful way to this.

    Edit: typo

    • Ylpertnodi2 hours ago
      > Most content will be less known and the enslopified version more obfuscated...

      Please, everyone: spell 'enslopified' with two 'p's: ensloppified.

      Signed, Minority Report Pedant

  • jronan hour ago
    Morged > Oneshotted
  • AndroTux2 hours ago
    “It was careless, blatantly amateuristic, and lacking any ambition, to put it gently. Microsoft unworthy.”

    Seems to be perfectly on brand for Microsoft, I don’t see the issue.

    • blibblean hour ago
      LLM infested crap, directly pushed to customers without any pushback

      so standard Microslop

  • zkmon2 hours ago
    That old beautiful git branching model got printed into the minds of many. Any other visual is not going to replace it. The flood of 'plastic' incarnations of everything is abominable. Escape to jungles!!
    • noufalibrahiman hour ago
      Indeed. I don't remember all the details of the flow but the aesthetics of the diagram are still stuck in my head.
  • beefletan hour ago
    Developer BRUTALLY FRAME-MORGED by Microsoft AI
  • bayindirh2 hours ago
    Sorry, but isn't this textbook Microsoft? Aside from being more blatant, careless, and on the nose, what's different from past Microsoft?

    These people distilled the knowledge of AppGet's developer to create the same thing from scratch and then "thanked(!)" him for being that naive.

    Edit: Yes, after experiencing Microsoft for 20+ odd years, I don't trust them.

  • bitwize2 hours ago
    I love it when the LLM said "it's morgin' time" and proceeded to morg all over the place.
    • ares6232 hours ago
      One step closer to the Redditification of HN. And it is entirely because of the content out there nowadays.
      • nxobjectan hour ago
        Ha, I think a user since 2007’s earned the right to do that once in a while.
      • debugnik2 hours ago
        Maybe you're missing the reference to the Morbius movie joke, which sounds surprisingly fitting. It's not like older HNers never made funny references.

        Edit: Apparently you didn't.

        • bitwizean hour ago
          The commenter you're responding to a) independently made the exact same reference; b) has a username like that of Jared Leto's other Disney tentpole flop role...
          • debugnikan hour ago
            Well spotted, I guess they're pushing for HN's redditification then.
        • theodrican hour ago
          HN is a Serious Place. We're here to make money. Please leave your jokes at home.
  • dotdi2 hours ago
    I guess this image generation feature should never have been continvoucly morged back into their slop machine
  • kshri2438 minutes ago
    I can already tell this is probably some AI Microslop fuck up without even clicking on the article.

    EDIT: Worse than I thought! Who in their right mind uses AI to generate technical diagrams? SMDH!

  • whirlwin2 hours ago
    The new Head of Quality in Microsoft has not started working there yet, so it's business as usual at MS... And now with AI slop on top

    Ref: https://www.reddit.com/r/technology/comments/1r1tphx/microso...

  • isoprophlexan hour ago
    > The AI rip-off was not just ugly. It was careless, blatantly amateuristic, and lacking any ambition, to put it gently. Microsoft unworthy.

    lmao where has the author been?! this has been the quintessential Microsoft experience since windows 7, or maybe even XP...

  • usefulposter2 hours ago
    Hey, it's just like the Gas Town diagrams.

    https://news.ycombinator.com/item?id=46746045

  • WesolyKubeczekan hour ago
    I propose to adopt the word „morge”, a verb meaning „use an LLM to generate content that badly but recognizably plagiarizes some other known/famous work”.

    A noun describing such a piece of slop could be „morgery”.

    • nvaderan hour ago
      I read through all the proposals in this discussion and I like yours the best out of them.

      Seconded!

  • larodian hour ago
    Everything you publish from now on will be stolen and reused one way or another.
  • zephen2 hours ago
    On the one hand, I feel for people who have their creations ripped off.

    On the other hand, it makes sense for Microsoft to rip this off, as part of the continuing enshittification of, well, everything.

    Having been subjected to GitFlow at a previous employer, after having already done git for years and version control for decades, I can say that GitFlow is... not good.

    And, I'm not the only one who feels this way.

    https://news.ycombinator.com/item?id=9744059

  • ali-aljufairi7 minutes ago
    [dead]
  • VerifiedReports2 hours ago
    [flagged]
  • marssaxman2 hours ago
    It seems to me rather less likely that someone at Microsoft knowingly and deliberately took his specific diagram and "ran it through an AI image generator" than that someone asked an AI image generator to produce a diagram with a similar concept, and it responded with a chunk of mostly-memorized data, which the operator believed to be a novel creation. How many such diagrams were there likely to have been, in the training set? Is overfitting really so unlikely?

    The author of the Microsoft article most likely failed to credit or link back to his original diagram because they had no idea it existed.

    • zahlmanan hour ago
      Yes, but from OP's perspective this is a distinction without a difference.
  • poojagill2 hours ago
    looks like a vendor, and we have a group now doing a post-mortem trying to figure out how it happened. It'll be removed ASAFP
  • pwndByDeath2 hours ago
    • aobdev2 hours ago
      Check the article, AI interpreted the phrase “continuously merged” as “continvoucly morged”
    • tra32 hours ago
      I too was confused until I looked at the included screenshot.

      This is just another reminder that powerful global entities are composed of lazy, bored individuals. It’s a wonder we get anything done.

      • locusofself2 hours ago
        we are also stressed, scared for our jobs and bombarded by constant distraction
    • ChristianJacobs2 hours ago
      You apparently did not read the article. "Morged" is a word the LLM that ripped off the article author's diagram hallucinated.
      • zahlmanan hour ago
        > You apparently did not read the article.

        Please don't say things like this in comments (see https://news.ycombinator.com/newsguidelines.html).

        I don't think "LLM" and "hallucinated" are accurate; different kinds of AI create images, and I get the impression that they generally don't ascribe semantics to words in the same way that LLMs do, and thus when they draw letter shapes they typically aren't actually modelling the fact that the letters are supposed to spell a particular word that has a particular meaning.

  • yokoprime2 hours ago
    A somewhat contrarian perspective is that this diagram is so simple and so widely used, and has been reproduced (i.e. redrawn) so many times, that it is very easy to assume it does not have a single origin and that it's public domain.
    • zahlmanan hour ago
      That's pretty hard to reconcile with OP's claim:

      > In 2010, I wrote A successful Git branching model and created a diagram to go with it. I designed that diagram in Apple Keynote, at the time obsessing over the colors, the curves, and the layout until it clearly communicated how branches relate to each other over time. I also published the source file so others could build on it.

      If you mean that the Microsoft publisher shouldn't be faulted for assuming it would be okay to reproduce the diagram... then said publisher should have actually reproduced the diagram instead of morging it.

    • blibble44 minutes ago
      it's not public domain, it's copyrighted

      what's the bet that the intention here was explicitly to attempt to strip the copyright

      so it could be shoved on the corporate website without paying anyone

      (the only actual real use of LLMs)

  • amdiviaan hour ago
    I'm failing to understand the criticism here

    Is it about the haphazard deployment of AI-generated content without revising/proofreading the output?

    Or is it about using some graphs without attributing their authors?

    If it's the latter (even if partially), then I have to disagree with that angle. A very widespread model isn't owned by anyone, surely; I don't have to reference Newton every time I write an article on gravity, no? But maybe I'm misunderstanding the angle the author is coming from.

    (Sidenote: if it was meant in a lighthearted way then I can see it making sense)

    • sixeyes37 minutes ago
      did you read the article? this is explicitly explained! at length!

      not at all about the reuse. it's been done over and over with this diagram. it's about the careless copying that destroyed the quality. nothing was wrong with the original diagram! why run it through the AI at all?

    • matthewmacleod40 minutes ago
      > Other than that, I find this whole thing mostly very saddening. Not because some company used my diagram. As I said, it's been everywhere for 15 years and I've always been fine with that. What's dispiriting is the (lack of) process and care: take someone's carefully crafted work, run it through a machine to wash off the fingerprints, and ship it as your own. This isn't a case of being inspired by something and building on it. It's the opposite of that. It's taking something that worked and making it worse. Is there even a goal here beyond "generating content"?

      I mean come on – the point literally could not be more clearly expressed.