- $10/month
- Copilot CLI for a Claude Code-style CLI, VS Code for a GUI
- 300 requests (prompts) on Sonnet 4.5, 100 on Opus 4.6 (3x multiplier)
- One prompt only ever consumes one request, regardless of tokens used
- Agents auto plan tasks and create PRs
- "New Agent" in VS Code runs agent locally
- "New Cloud Agent" runs agent in the cloud (https://github.com/copilot/agents)
- Additional requests cost $0.04 each
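To make the quota arithmetic in the list concrete, here is a minimal back-of-envelope sketch. It assumes the numbers above are accurate and that the 3x multiplier is applied before comparing against the 300-request allowance, with anything over that billed at $0.04 per request; the model names and usage figures are purely illustrative, and the real billing rules may differ.

```python
# Rough sketch of the plan's quota math, based only on the list above.
INCLUDED_REQUESTS = 300   # premium requests included in the $10 plan
OVERAGE_PRICE = 0.04      # dollars per request beyond the allowance
MULTIPLIER = {"sonnet-4.5": 1, "opus": 3}  # assumed per-prompt request cost

def monthly_cost(prompts: dict[str, int], base_price: float = 10.0) -> float:
    # Each prompt consumes one request times the model's multiplier.
    used = sum(count * MULTIPLIER[model] for model, count in prompts.items())
    overage = max(0, used - INCLUDED_REQUESTS)
    return base_price + overage * OVERAGE_PRICE

# Example: 200 Sonnet prompts + 50 Opus prompts -> 350 requests,
# 50 over the allowance -> $10 + 50 * $0.04 = $12.00
print(monthly_cost({"sonnet-4.5": 200, "opus": 50}))
```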
Good job, Microsoft.
I completely understand why some projects are in whitelist-contributors-only mode. It's becoming a mess.
Their email responses were broadly all like this -- fully drafted by GPT. The only thing I liked about that whole exchange was that GPT was readily willing to concede that all the details and observations I included point to a service degradation and failure on Microsoft's side. A purely human mind would not have conceded the point so readily without some hedging or dilly-dallying, or keeping some options open to avoid accepting blame.
Reminds me of an interaction I was forced to have with a chatbot over the phone for “customer service”. It kept apologizing, saying “I’m sorry to hear that” in response to my issues.
The thing is, it wasn’t sorry to hear that. AI is incapable of feeling “sorry” about anything. It’s anthropomorphizing itself and aping politeness. I might as well have a “Sorry” button on my desk that I smash every time a corporation worth $TRILL wrongs me. Insert South Park “We’re sorry” meme.
Are you sure “readily willing to concede” is worth absolutely anything as a user or consumer?
We need a law that forces management to be regularly exposed to their own customer service.
As someone who takes pride in being thorough and detail-oriented, I cannot stand when people respond with the bare minimum of effort. Earlier this week I created a bug report for an internal software project on another team. It was a bizarre behavior, so out of curiosity and a desire to be truly helpful, I spent a couple of hours whittling the issue down to a small, reproducible test case. I even had someone on my team run through the reproduction steps to confirm it was reproducible on at least one other environment.
The next day, the PM of the other team responded with a _screenshot of an AI conversation_ saying the issue was on my end for misusing a standard CLI tool. I was offended on so many levels. For one, I wasn’t using the CLI tool in the way it described, and even if I had been, it wouldn’t affect the bug. But the bigger problem is that this person thinks a screenshot of an AI conversation is an acceptable response. Is this what talking to semi-technical roles is going to be like from now on? I get to argue with an LLM by proxy of another human? Fuck that.
Sites like lmgtfy existed long before AI because people will always take shortcuts.
There's still time to coach a model into writing a reply saying they are completely wrong, and send back a screenshot of that reply :-)) Bonus points for having the model include disparaging comments...
This is a peer-review.
> "Peer review"
Not unless your "peers" are bots who regurgitate LLM slop.
Let me slop an affirmative comment on this HIGH TRAFFIC issue so I get ENGAGEMENT on it and EYEBALLS on my vibed GitHub PROFILE and get STARS on my repos.
It was a mess before, and it will only get worse, but at least I can get some work done 4 times a day.
That repo alone has 1.1k open pull requests, madness.
The UI can't even be bothered to show the number of open issues, 5K+ :)
Then they "fix it" by making issues auto-close after 1 week of inactivity, while PRs submitted 10 years ago remain open.
It's definitely a mess, but based on the massive decline in signal vs noise of public comments and issues on open source recently, that's not a bad heuristic for filtering quality.
I would have done the same.
(Source: submitted a similar issue to a different agentic LLM provider)
A second time. When they already closed your first issue. Just enjoy the free ride.
This could be the same; they know devs mostly prefer to use Cursor and/or Claude over Copilot.
On the other hand, since they own GitHub they can (in theory) monitor the downloads, check for IPs belonging to businesses, and use it as evidence in piracy cases.
See also: string interpolation and SQL injection, (unhygienic) C macros
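For anyone who hasn't seen the pattern that comparison points at, here's a minimal sketch of the classic string-interpolation SQL injection it references. The in-memory SQLite table and names are illustrative stand-ins, not anything from the thread.

```python
import sqlite3

# Illustrative only: the classic failure mode the comment alludes to.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

user_input = "alice' OR '1'='1"  # attacker-controlled value

# Unsafe: string interpolation mixes data into the query text,
# so the quote in user_input rewrites the query's structure.
unsafe = f"SELECT name FROM users WHERE name = '{user_input}'"
print(conn.execute(unsafe).fetchall())   # returns every row

# Safe: a parameterized query keeps the value out of the SQL grammar.
safe = "SELECT name FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # returns []
```

Same idea as unhygienic C macros: untrusted or unescaped text gets spliced into a context where it is interpreted as structure rather than data.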
Microsoft notoriously tolerated pirated Windows and Office installations for about a decade and a half to solidify their usage as the de facto, expected standard. Tolerating unofficial free usage of their latest products is standard procedure for MS.
I do think some things in the Microsoft ecosystem are salvageable, they just aren't trendy. The Windows kernel can still work; .NET, their C++ runtime, Win32 / WinForms, Active Directory, Exchange (on-prem), and Office are all still fixable and will last Microsoft a long time. It's just boring, and Microsoft apparently won't do it, because: no subscription.
> VS Code Version: 1.109.0-insider (Universal) - f3d99de
Presumably there is such a thing as a freemium, payable "Copilot Chat Extension" product for VS Code. Interesting, I guess.