122 points by napolux 3 hours ago | 16 comments
  • brushfoot 2 hours ago
    Even without hacks, Copilot is still a cheap way to use Claude models:

    - $10/month

    - Copilot CLI for Claude Code type CLI, VS Code for GUI

    - 300 requests (prompts) on Sonnet 4.5, 100 on Opus 4.6 (3x)

    - One prompt only ever consumes one request, regardless of tokens used

    - Agents auto plan tasks and create PRs

    - "New Agent" in VS Code runs agent locally

    - "New Cloud Agent" runs agent in the cloud (https://github.com/copilot/agents)

    - Additional requests cost $0.04 each

    • piker 2 hours ago
      +1. I see all these posts about tokens, and I'm like "who's paying by the token?"
      • Hrun0 an hour ago
        > +1. I see all these posts about tokens, and I'm like "who's paying by the token?"

        When you use the API

        • smallerize 28 minutes ago
          Yes. That is the question.
      • paulddraper 22 minutes ago
        Most LLM usage?

        There are some exceptions, e.g. Claude Max

    • indigodaddy 37 minutes ago
      So 100 Opus requests a month? That's not a lot.
      • likium 16 minutes ago
        For $10 flat, with requests of up to 128k tokens, they’re losing money: 100 requests * 100k tokens is 10M tokens. At current API pricing that’s $50 in input tokens alone, not even accounting for output!
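
The back-of-envelope math in this comment can be checked with a quick script. Note the per-token rate is an assumption inferred from the comment's own "$50" figure, not an official price list.

```python
# Sanity-check of the comment's arithmetic. The per-token price is an
# assumption derived from the "$50" figure above, not an official rate.
requests_per_month = 100
tokens_per_request = 100_000     # well under the 128k cap per request
usd_per_million_input = 5.00     # assumed from the $50 figure

total_tokens = requests_per_month * tokens_per_request
input_cost = total_tokens / 1_000_000 * usd_per_million_input

print(total_tokens)  # 10000000
print(input_cost)    # 50.0 -- vs. a $10/month subscription, output not counted
```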
    • andrewmcwatters 38 minutes ago
      [dead]
  • g947o 2 hours ago
    > Note: Initially submitted this to MSRC (VULN-172488), MSRC insisted bypassing billing is outside of MSRC scope and instructed me multiple times to file as a public bug report.

    Good job, Microsoft.

  • ramon156 2 hours ago
    The last comment is a person pretending to be a Microsoft maintainer. I have a gut feeling that these kinds of people will only increase, and we'll have vibe engineers scouring popular repositories to ""contribute"" (note that the suggested fix is vague).

    I completely understand why some projects are in whitelist-contributors-only mode. It's becoming a mess.

    • albert_e 2 hours ago
      On the other hand ... I recently had to deal with official Microsoft Support for an Azure service degradation / silent failure.

      Their email responses were broadly all like this -- fully drafted by GPT. The only thing I liked about that whole exchange was that GPT was readily willing to concede that all the details and observations I included point to a service degradation and failure on Microsoft's side. A purely human mind would not have so readily conceded the point without some hedging or dilly-dallying, or keeping some options open to avoid accepting blame.

      • datsci_est_2015 44 minutes ago
        > The only thing i liked about that whole exchange was that GPT was readily willing to concede that all the details and observations I included point to a service degradation and failure on Microsoft side.

        Reminds me of an interaction I was forced to have with a chatbot over the phone for “customer service”. It kept apologizing, saying “I’m sorry to hear that.” in response to my issues.

        The thing is, it wasn’t sorry to hear that. AI is incapable of feeling “sorry” about anything. It’s anthropomorphizing itself and aping politeness. I might as well have a “Sorry” button on my desk that I smash every time a corporation worth $TRILL wrongs me. Insert South Park “We’re sorry” meme.

        Are you sure “readily willing to concede” is worth absolutely anything as a user or consumer?

        • wat10000 2 minutes ago
          Better than actual human customer agents who give an obviously scripted “I’m sorry about that” when you explain a problem. At least the computer isn’t being forced to lie to me.

          We need a law that forces management to be regularly exposed to their own customer service.

      • szundi an hour ago
        [dead]
    • Cyphus an hour ago
      I wholly agree, the response screams “copied from ChatGPT” to me. “Contributions” like these comments and drive by PRs are a curse on open source and software development in general.

      As someone who takes pride in being thorough and detail-oriented, I cannot stand when people provide the bare minimum of effort in response. Earlier this week I created a bug report for an internal software project on another team. It was a bizarre behavior, so out of curiosity and a desire to be truly helpful, I spent a couple hours whittling the issue down to a small, reproducible test case. I even had someone on my team run through the reproduction steps to confirm it was reproducible in at least one other environment.

      The next day, the PM of the other team responded with a _screenshot of an AI conversation_ saying the issue was on my end for misusing a standard CLI tool. I was offended on so many levels. For one, I wasn’t using the CLI tool in the way it describes, and even if I was it wouldn’t affect the bug. But the bigger problem is that this person thinks a screenshot of an AI conversation is an acceptable response. Is this what talking to semi technical roles is going to be like from now on? I get to argue with an LLM by proxy of another human? Fuck that.

      • bmurphy1976 34 minutes ago
        That's when you use an LLM to respond, pointing out all the ways the PM failed at their job. I know it sucks, but fight fire with fire.

        Sites like lmgtfy existed long before AI because people will always take shortcuts.

      • belter an hour ago
        >> The next day, the PM of the other team responded with a _screenshot of an AI conversation_ saying the issue was on my end for misusing a standard CLI tool.

        You are still in time to coach a model into creating a reply saying they are completely wrong, and send back a screenshot of that reply :-)) Bonus points for having the model include disparaging comments...

    • iib 2 hours ago
      Some were already in that mode, and even stricter, for other reasons: the Cathedral model, described in "The Cathedral and the Bazaar".
      • ForOldHack 26 minutes ago
        I come to YCombinator, specifically because for some reason, some of the very brightest minds are here.
    • markstos 2 hours ago
      Nowhere in the comment do they assert that they work for Microsoft.

      This is peer review.

      • cmeacham98 2 hours ago
        It's not peer review, it's just AI slop. I do agree they don't seem to be intentionally posing as an MS employee.
      • PKop 2 hours ago
        Let's just say they are pretending to be helpful, how about that?

        > "Peer review"

        No, unless your "peers" are bots who regurgitate LLM slop.

        • markstos 2 hours ago
          You think they lied about reproducing the issue? It’s useful to know if a bug can be reproduced.
          • cmeacham98 2 hours ago
            We cannot know for sure but I think it's reasonably likely (say 50/50). Regurgitating an LLM for 90% of your comment does not inspire trust.
          • PKop 14 minutes ago
            Yes, of course I think they lied, because a trustworthy person would never consider 0-effort regurgitated LLM boilerplate as a useful contribution to an issue thread. It's that simple.
      • usefulposter 2 hours ago
        It's performative garbage: authority roleplay edition.

        Let me slop an affirmative comment on this HIGH TRAFFIC issue so I get ENGAGEMENT on it and EYEBALLS on my vibed GitHub PROFILE and get STARS on my repos.

    • falloutx an hour ago
      Exactly. I have seen these know-it-all comments on my own repos, and also on tldraw's issues. They add nothing to the conversation; they just paste the conversation into some coding tool and spit out the info.
    • ForOldHack 27 minutes ago
      Everyone is a maintainer of Microsoft. Everyone is testing their buggy products as they leak information like a wire-only umbrella. It is sad that more people who use Copilot don't know that they are training it, at a cost of millions of gallons of fresh drinking water.

      It was a mess before, and it will only get worse, but at least I can get some work done 4 times a day.

    • RobotToaster 2 hours ago
      > I completely understand why some projects are in whitelist-contributors-only mode. It's becoming a mess.

      That repo alone has 1.1k open pull requests, madness.

      • embedding-shape 2 hours ago
        > That repo alone has 1.1k open pull requests, madness.

        The UI can't even be bothered to show the number of open issues, 5K+ :)

        Then they "fix it" by making issues auto-close after 1 week of inactivity, while PRs submitted 10 years ago remain open.

        • PKop 2 hours ago
          > issues auto-close after 1 week of inactivity, meanwhile PRs submitted 10 years ago remains open.

          It's definitely a mess, but based on the massive decline in signal vs noise of public comments and issues on open source recently, that's not a bad heuristic for filtering quality.

  • direwolf20 42 minutes ago
    Who would report this? Are they hoping for a bug bounty, or do they know their competitors are using the technique?
    • cess11 28 minutes ago
      They tried to report it to MSRC, likely to get a bounty, and when they were stiffed there and advised to make it public, they did.

      I would have done the same.

  • jlarocco 10 minutes ago
    I'm sure they'll fix this, but it would be funny if the downfall of AI was the ability to use it to hack around its own billing.
  • peacebeard 2 hours ago
    My guess is either someone raised this internally and was told it was fine, or knew but didn't bother raising it since they knew they’d be blown off.
  • sciencejerk 2 hours ago
    I have confirmed that many of these AI agents and agentic IDEs implement business logic and guardrails LOCALLY, on the device.

    (Source: submitted similar issue to different Agentic LLM provider)
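
As a sketch of why on-device enforcement fails (all names here are hypothetical, not from any real product): a quota check that lives only in the client can simply be patched out, and if the server never re-checks, the bypass is invisible to billing.

```python
# Hypothetical sketch of the anti-pattern: the premium-model quota is
# enforced only inside the client the user controls.
class LocalAgentClient:
    def __init__(self, premium_quota: int):
        self.premium_quota = premium_quota  # lives on-device only

    def request(self, model: str, prompt: str) -> str:
        # Client-side guardrail: refuse premium calls once quota is gone.
        if model.startswith("premium") and self.premium_quota <= 0:
            raise RuntimeError("quota exhausted")
        if model.startswith("premium"):
            self.premium_quota -= 1
        return f"[{model}] response to: {prompt}"  # stand-in for the API call

client = LocalAgentClient(premium_quota=0)
try:
    client.request("premium-opus", "hi")
except RuntimeError as e:
    print("blocked:", e)

# A local "patch" removes the guardrail entirely; nothing server-side notices.
client.premium_quota = 10**9
print(client.request("premium-opus", "hi"))
```

The fix is the usual one: treat the client as untrusted input and enforce quota where the request is actually served.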

  • light_hue_1 2 hours ago
    Why would you report this?!

    A second time. When they already closed your first issue. Just enjoy the free ride.

    • anonymars 2 hours ago
      Some part of me says, let their vibing have a cost, since clearly "overall product quality going to shit" hasn't had a visible effect on their trajectory
  • blibble 2 hours ago
    the "AI" bot closing the issue here is particularly funny
    • anonymars 2 hours ago
      Vibes all the way down. "Please check out this other slop issue with 5-600 other tickets pointed to it" -- I was going to ask, how is anyone supposed to make sense of such a mess, but I guess the answer is "no human is supposed to"
  • zkmon 2 hours ago
    Nothing compared to pirated CDs with Office and Windows, 20 yrs back.
    • stanac 2 hours ago
      They don't care; they would rather let you use pirated MS software than have you move to Linux. There is a repo on GitHub with PowerShell scripts for activating Windows/Office, and they let it sit there. Just checked: the repo has 165K stars.

      This could be the same: they know devs mostly prefer to use Cursor and/or Claude over Copilot.

      • jlarocco 2 minutes ago
        Home users are icing on the cake. Suing them for piracy is a bad look (see the RIAA), and using Windows and Office at home reinforces using them at work.

        On the other hand, since they own GitHub they can (in theory) monitor the downloads, check for IPs belonging to businesses, and use it as evidence in piracy cases.

      • anonymars 2 hours ago
        What's the direct cost to Microsoft of someone pirating an OS vs. making requests to a hosted LLM?
  • thenewwazoo 2 hours ago
    Every time I see something about trying to control an LLM by sending instructions to the LLM, I wonder: have we really learned nothing of the pitfalls of in-band signaling since the days of phreaking?
    • quadrature 2 hours ago
      Sure but the exploit here isn’t prompt injection, it is an edge case in their billing that isn’t attributing agent calls correctly.
      • thenewwazoo 2 hours ago
        That's fair - I suppose the agent is making a call with a model parameter that isn't being attributed, as you say.
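
A minimal sketch of the server-side fix this implies (names hypothetical): meter on the model the backend actually routed to, never on a client-supplied label.

```python
# Hypothetical billing attribution done from server-side facts only.
PREMIUM_MODELS = {"opus", "sonnet"}

def meter_request(served_model: str) -> dict:
    """Charge based on what the router actually served, ignoring any
    model label the client put in its payload."""
    premium = served_model in PREMIUM_MODELS
    return {
        "model": served_model,
        "premium": premium,
        "requests_charged": 1 if premium else 0,
    }

# A client claiming a cheap model in its request body changes nothing:
# the router's own record of the served model drives the charge.
print(meter_request("opus"))   # charged as a premium request
print(meter_request("base"))   # not charged
```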
    • cpa 2 hours ago
      It reminds me of when I used to write Lisp, where code is data. You can abuse reflection (and macros) to great effect, but you never feel safe.

      See also: string interpolation and SQL injection, (unhygienic) C macros
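
The SQL case makes the in-band vs. out-of-band distinction concrete; a minimal sqlite3 sketch:

```python
# In-band failure: data spliced into the control channel (the SQL text).
# Parameterized queries keep the two channels separate.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

evil = "alice' OR '1'='1"

# String interpolation: the input becomes part of the SQL itself.
rows = conn.execute(f"SELECT * FROM users WHERE name = '{evil}'").fetchall()
print(len(rows))  # 1 -- the injected OR '1'='1' matched every row

# Parameterized: the input stays data, out of band from the SQL.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (evil,)).fetchall()
print(len(rows))  # 0 -- no user is literally named that
```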

    • direwolf20 37 minutes ago
      Phreaking exploited an intentional decision: without in-band signaling they could have carried fewer channels on each link.
    • Mountain_Skies 2 hours ago
      It'll be a sad day for Little Bobby Tables if in-band signaling ever goes out of fashion.
  • AustinDev 3 hours ago
    Is it just me or is Microsoft really phoning it in recently?
    • dotancohen an hour ago
      You must be new here.

      Microsoft notoriously tolerated pirated Windows and Office installations for about a decade and a half, to solidify their usage as de facto standard and expected. Tolerating unofficial free usage of their latest products is standard procedure for MS.

    • falloutx an hour ago
      By recently, you mean since 2007
      • Ygg2 an hour ago
        By recently I assume they mean since Windows 7, or alternatively since Windows 10 -- 2009-2015.

        Last decade it was misstep after misstep.

    • VerifiedReports 2 hours ago
      Recently? They've been shipping absolute trash for 15 years, and still haven't reached the bottom apparently.
      • mrweasel an hour ago
        Thinking back, you're probably correct, but it seems like they were actively trying to create something good back then. That might just be me only seeing the good parts, with .NET and SQL Server. Azure was never good, and we've known why for over a decade: their working conditions suck and people don't stay long, resulting in things being held together by duct tape.

        I do think some things in the Microsoft ecosystem are salvageable, they just aren't trendy. The Windows kernel can still work; .NET and their C++ runtime, Win32 / WinForms, Active Directory, Exchange (on-prem) and Office are all still fixable and will last Microsoft a long time. It's just boring, and Microsoft apparently won't do it, because: no subscription.

      • reppap 2 hours ago
        Azure keeps randomly breaking our resources without any service health notifications or heads-up; it's very fun living in Microsoft's world.
      • orphea 2 hours ago
        .NET is actually, unironically, good. But yes, it is one of few exceptions, unfortunately.
      • my_throwaway23 2 hours ago
        To be fair, Windows 7 was quite good in my opinion.

        Wait, what year is it?

        • ReptileMan 2 hours ago
          Windows 2000 Server and Windows Server 2003 were their last great desktop OSs
    • PlatoIsADisease 2 hours ago
      Their software seems like it. Their sales team is brutal.
  • VerifiedReports 2 hours ago
    Billing for what?
    • rf15 2 hours ago
      Access to the premium models. This much should have been evident from reading the ticket.
    • numpad0 2 hours ago
      > Copilot Chat Extension Version: 0.37.2026013101

      > VS Code Version: 1.109.0-insider (Universal) - f3d99de

      Presumably there is such a thing as a freemium, payable "Copilot Chat Extension" product for VS Code. Interesting, I guess.

  • pixelmelt 2 hours ago
    Was good while it lasted, I hope Microsoft continues their new tradition of vibe coding their billing systems :p
    • scrubs 2 hours ago
      Oh that was pithy, mean, and just the right amount of taking-it-personally. Well done!
  • huflungdung 2 hours ago
    [dead]