As far as Visual Studio Code goes, I've not really used it much, but it makes sense since it's Microsoft's free editor, so you will be a product and you will be marketed to. I do use Visual Studio though, and it does show Copilot in the UI by default, but there is an option to "hide Copilot" from the UI which does what is advertised. I will probably remove my important projects from GitHub though, more so they are not used for LLM training than for any other reason.
The “whatever reason” can be to build a portfolio to apply for jobs. Or worse, to more quickly build trust to exploit vulnerable projects.
https://www.techdirt.com/2025/09/04/why-powerful-but-hard-to...
P.S: Most people just do it either to "light-up" their Github profile for job applications or just to get cheap swag...
And good luck stopping people from pasting from ChatGPT or Gemini or whatever. Those are free, unlike Copilot agent PRs which cost money, which is part of why I don’t see any.
I guess some people just have too much time and will happily waste it on useless complaints.
They're still seen by a lot of people as a sign of project maturity and use. My unfounded suspicion is that if they all disappeared tomorrow, people would be a lot more likely to try alternative code forges.
I've been using codeberg of late, more because of their politics than anything, but in all honesty the user experience between github/gitlab/codeberg/sourcehut/gitea is near identical.
1. Network effects; people already have an account.
2. Free CI, especially free Mac and Windows CI.
- 2000 minutes of free compute time with GitHub Actions
- a free Docker Hub alternative with unlimited pulls (they say you're limited to 500 MB, but I currently have probably 20+ GB of images on my free account)
They have the community aspect AND the freebies
This is a strong signal, but what it signals is confused. How much of it is the nature of the user base in actually reporting issues? Suppose the project receives regular fixes and issues are promptly closed on average — how much of that is because the project has to constantly respond to external factors, and how much is due to developers doing constant fire-fighting on an intrinsically poor design and not getting around to new functionality? Suppose there are lots of outstanding issues — how many of them are effectively duplicates that nobody's bothered to close yet?
They keep closing the ticket and saying it's "with the engineering team". I keep reopening and asking for resolution, escalation, or progress.
GitHub did have working and professional support in the past but in 2025 they are just malicious.
It's surreal.
I did mention it in this Discussions thread a while back (which the support agent at one point in July hilariously linked me to asking if I had read, making it clear they hadn't done so themselves).
https://github.com/orgs/community/discussions/147437#discuss...
Initial support response mentioned here:
https://github.com/orgs/community/discussions/147437#discuss...
I checked my profile and Copilot is enabled with a "lock" icon; I cannot disable it. I have never enabled it.
Gitlab is of course adding more AI and corpo garbage, and once they prevent disabling these "features" on community editions we'll see a fork of gitlab, probably.
The assertion that github is some bustling hub of opportunity is a strange one. At best you get people more likely to contribute because they already signed up, and a contribution from somebody not willing to sign up to another free service or simply email you an issue report is a contribution worth missing.
I’d love to find a stripped-down solution focused on hosting code repos. I don’t think GitHub sees that as its core business anymore.
It’s quite easy to set up git to send patches via email. And you can always use a pastebin to host the diff if you’re sharing ideas. But I guess that’s not as visible as the GitHub dashboard.
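For anyone who hasn't tried the email workflow, it only takes a couple of commands. A minimal sketch (the branch name and recipient address here are placeholders, not from the comment above):

```shell
# Turn every commit on the current branch since 'main' into a
# mailable patch file (0001-*.patch, 0002-*.patch, ...)
git format-patch main

# Or, once git's sendemail settings are configured, mail them
# directly; the address is a placeholder
git send-email --to="maintainer@example.org" main
```

The recipient applies the patch files on their end with `git am`, preserving authorship and commit messages.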
It’s fascinating stuff and can be very useful. Why does it have to be rammed so hard? I’ve never quite seen anything like this.
Or maybe I have. It reminds me a little of the obviously astroturfed effort to ram crypto down people’s throats. But crypto was something most people didn’t have any actual utility for. A magic tireless junior intern who had memorized the entire Internet is actually useful.
If users don't want to engage with new AI features, the new AI features become unavoidable so that engagement goes up despite user preferences.
KPIs are a fantastic way for an organization to lose any touch with reality and can drive some truly bizarre decision-making.
https://web.archive.org/web/20220823105749/http://blogs.tedn...
This isn’t AI specific though. The whole industry runs this way because thoughtful decision making doesn’t scale easily. KPIs are easier.
Ostensibly most successful software is written in languages that aren't very good, with development methodologies that aren't very good, in organizational structures that aren't very good. Where is the existence proof? Why isn't software written in a good language, using a good methodology with sane management winning the race?
The race isn't a competition, it's a death march. If you want to 'win' the death march, prioritize survival above everything else, especially quality and correctness.
(I don't strictly follow this philosophy myself, a good engineer will always ask, why not both. Just make sure you identify endurance as the most important strategy)
But I'd hope you'd admit quality and correctness aren't free attributes? They have a cost. I can churn out low-quality code way faster than I can produce code I'm proud of. Suppose I attach myself to the quality of the code, get stumped by some bug, become frustrated, and take a break from project_a to work on something else. While working on project_b to "clear my mind", I fall in love with project_b, or it gets more popular, or whatever that "pressure" happens to be. Project_a has no remaining developers, and it is dead now. Thus, quality has had a negative impact on its survival.
Suitability has a tenuous connection and dependence on quality and correctness. (which I believe are synonyms for the same core idea?)
But why are so many businesses (the ones that still survive) so demoralizingly dysfunctional? Because they're run by individuals who don't value quality and correctness above [other attribute]. Given the choice between increasing money (which is effectively the same thing as market share; and when talking about survival, popularity is the same thing as suitability) and increasing quality, they will always make the decision that ensures their survival (by chance, not by intent; that's the orthogonality). Eventually they'll turn that knob too far, degrade their quality enough, and create an ecological niche for someone else to take over. (A competitor that maybe they acquire before it becomes a real risk to their survival/popularity, again choosing to make money/survive over a decision targeting quality.)
Would *you* rather make money, or write something high quality? I use and love marginalia, so I think I can guess the answer. (Thank you so much for building something that actually meaningfully improves the internet btw!) Are there decisions you could make that would trade the quality to become more popular, or make more money? Yes, I'm sure, but you don't seem to be trying to become the next google.
It's propping up the US economy and businesses mostly look at B2B signals. Keeping "demand" for AI high at e.g., Microsoft, keeps "demand" high on NVIDIA, CoreWeave, et al.
All of the boats are floating in the bathtub and nobody wants to be the one to pull the drain plug.
> When I joined this company, my team didn't use version control for months and it was a real fight to get everyone to use version control. Although I won that fight, I lost the fight to get people to run a build, let alone run tests, before checking in, so the build is broken multiple times per day. When I mentioned that I thought this was a problem for our productivity, I was told that it's fine because it affects everyone equally. Since the only thing that mattered was my stack ranked productivity, so I shouldn't care that it impacts the entire team, the fact that it's normal for everyone means that there's no cause for concern.
Do not underestimate the ability of developers to ignore good ideas. I am not going to argue that AI is as good as version control. Version control is a more important idea than AI. I sometimes want to argue it's the most important idea in software engineering.
All I'm saying is that you can't assume that good ideas will (quickly) win. Your argument that AI isn't valuable is invalid, whether or not your conclusion is true.
P.S. Dan Luu wrote that in 2015, and it may have been a company that he already left. Version control has mostly won. Still, post 2020, I talked to a friend, whose organization did use git, but their deployed software still didn't correspond to any version checked into git, because they were manually rebuilding components and copying them to production servers piecemeal.
Writing unit tests where needed makes you more productive in the long run. Writing in modern languages makes you more productive. Remember how people writing assembly thought compiled languages would rot your brain!
But people just resist change and new ways of doing things. They don't care about actual productivity, they care about feeling productive with the tools they already know.
It's a hard sell when an application moves a button! People don't like change. Change is always a hard sell to a lot of people, even when it benefits them.
The teams making changes to software are, on average, moderately worse than the teams who originally developed it, if only because they missed out on the early development experience. They often don't fully understand the context and reasons for the original design, and rather than reasoning from first principles when making updates, they copy the aspects they notice superficially while undermining the principles the software was originally built on.
Even when the changes are independently advantageous, it is common for changes to one part of a system to gratuitously break a variety of other parts that are dependent on it. Trying to manage and fix a complex web of inter-dependent software which is constantly changing and breaking is an overwhelming challenge for individual humans, and unfortunately often not a sufficient priority for groups and organizations.
No, I don't remember that, and I've been around a while. (I'm sure one could find a handful of examples of people saying that, but one can also find examples of people sincerely saying the earth is flat.) It was generally understood that the code emitted by early, simple compilers on early CISC processors wasn't nearly as good as hand-tuned assembly, but that the trade-off could be worthwhile. Eventually compilers did get good enough to reduce the cases where hand-tuned assembly could make a difference to essentially nothing, but this was identified through benchmarking by the people who used assembly the most themselves.
If you want to sell us on change, please stop lying right to our faces.
They eventually got there, (and I expect AI will eventually get there too), but it took a lot of evolution.
Modern ISA designers (including those evolving the x86_64 ISA) absolutely take into account just how easy it is for a compiler to target their new instructions. x86 in modern times has a lot of RISC influence once you get past instruction decode.
Debatable? It has positive effects for organizations and for the society, but from a selfish point of view, you gain relatively little from writing tests. In your own code, a test might save you debugging time once in a blue moon, but the gains are almost certainly offset by the considerable effort of writing a comprehensive suite of tests in the first place.
Again, it's prudent to have tests for more altruistic reasons, but individual productivity probably ain't it.
> Writing in modern languages makes you more productive.
With two big caveats. First, for every successful modern language that actually makes you more productive, there's 20 that make waves on HN but turn out to be duds. So some reluctance is rational. Otherwise, you end up wasting time learning dead-end languages over and over again.
Second, it's perfectly reasonable to say that Rust or whatever makes an average programmer more productive, but it won't necessarily make a programmer with 30 years of C++ experience more productive. This is simply because it will take them a long time to unlearn old habits and reach the same level of mastery in the new thing.
My point is, you can view these through the prism of rational thinking, not stubbornness. In a corporate setting, the interests of the many might override the preferences of the few. But if you're an open-source developer and don't want to use $new_thing, I don't think we have the moral high ground to force you.
It’s much more than this. You feel it when you make a change and you are super confident you don’t have to do a bunch of testing to make sure everything still behaves correctly. This is the main thing good automated tests get you.
Compilers did get better, and continue to--just look at my username. But in the early days one could make very strong, very reasonable, cases for sticking with assembly.
Well, how'd you describe web apps of today if not precisely brainrot?
> They don't care about actual productivity, they care about feeling productive
Funny you'd say that, because that describes a large portion of "AI coders". Sure they pump out a lot of lines of code, and it might even work initially, but in the long run it's hardly more productive.
> It's a hard sell when an application moves a button!
Because usually that is just change for the sake of change. How many updates are there every day that add nothing at all? More than updates that actually add something useful, at least.
You're assuming that the change is beneficial to people when you say this, but more often than not that just isn't true. Most of the time, change in software doesn't benefit people. Software companies love to move stuff around just to look busy, ruin features that were working just fine, add user hostile things (like forcing Copilot on people!), etc. It should be no surprise that users are sick of it.
I write a TON of one off scripts now at work. For instance, if I fight with a Splunk query for more than five minutes, I’ll just export the entire time frame in question and have GHCP (work mandates we use only GHCP) spit out a Python script that gets me what I want.
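To give a sense of the scale of these one-off scripts, they're usually just a few lines. A hypothetical example of the kind of throwaway analysis I mean (the file name and field name are made up, not from any real export):

```python
import csv
from collections import Counter

def top_values(path, field, n=10):
    """Count the most common values of one field in an exported CSV."""
    with open(path, newline="") as f:
        counts = Counter(row[field] for row in csv.DictReader(f))
    return counts.most_common(n)

# e.g. top_values("splunk_export.csv", "status")
```

Nothing clever, but it's exactly the sort of thing that's faster to generate than to fight a query language over.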
I use it with our internal MCP tools to review pull requests. It surfaces questions I didn’t think to ask about half the time.
I don’t know that it makes me more productive, but it definitely makes me more attentive. It works great for brainstorming design ideas.
The code generation isn’t entirely slop either. For the vast majority of corporate devs below Principal, it’s better than what they write, and it’s basic CRUD code. So that’s where all the hyper-productive magical claims come from. I spend most of my days lately bailing these folks out of a dead-end foxhole GHCP led them into.
Unfortunately, it’s very much a huge time sink in another way. I’ve seen a pretty linear growth in M365 Copilot surfacing 5 year old word documents to managers resulting in big emails of outdated GenAI slop that would be best summarized as “I have no clue what I’m talking about and I’m going to make a terrible technical decision that we already decided against.”
Big tech's general strategy is get-big-fast, and then become too-big-to-fail. Facebook, Uber, PayPal, etc. all followed it. The idea is to embed AI into people's daily behaviors whether they like it or not, and hook them. Then, once hooked, developers will clamor for it whether it is useful or not.
I've seen it all the time. Version control, code review, unit testing, all of these are top-down.
Tech tools like git instead of CVS and Subversion, or Node instead of Java, may be bottom-up. But practices are very much top-down, so I see AI fitting the pattern very well here. It feels very similar to code review in terms of the degree to which it changes developer practices.
Obviously developers invented these things and initially diffused the knowledge.
But you're agreeing with me that they then got enforced top-down. Just like AI. AI isn't new or different like this. Developers started using LLM's for coding, it "diffused" so management became aware, and then it becomes enforced.
There's a top-down mandate to use version control or unit testing or code review or LLM's. Despite plenty of developers initially hating unit tests. Initially hating code review. These things are all standard now, but weren't for a long time.
In contrast to things like "use git not Subversion" where management doesn't care, they just want you to use a version control.
AI seems to have primarily been pushed top-down from management long before any consensus has been reached from the devs on what it's even good for.
This is unusual; I suspect the reason is that (for once) the tech is more suitable for management functions than the dev stuff. Judging from the amount of bulletpointese generation and condensation I've seen lately anyway.
And there have been plenty of enthusiastic devs regarding LLM's.
And the idea that these things wait "until a consensus is reached" is just not true. These practices are often adopted with 1/3 of devs on board and 2/3 against. The whole point of top-down directives is that they're necessary because there isn't broad consensus among employees.
It was the same thing with mobile-first. A lot of devs hated it while others evangelized it, but management would impose it and it made phones usable for a ton of things that had previously been difficult. On the balance, it was a helpful paradigm shift imposed top-down even if it sometimes went overboard.
Early VCS was clunky and slow. If one dev checked out some files, another dev couldn't work on them. People wouldn't check them back in quickly, they'd "hoard" them. Then merges introduced all sorts of tooling difficulties.
People's contributions were now centrally tracked and could be easily turned into metrics, and people worried (sometimes correctly) management would weaponize this.
It was seen by many as a top-down bureaucratic Big Brother mandate that slowed things down for no good reason and interfered with developers' autonomy and productivity. Or even if it had some value, it wasn't worth the price devs paid in using it.
This attitude wasn't universal of course. Other devs thought it was a necessary and helpful tool. But the point is that tons of devs were against it.
It really wasn't until git that VCS became "cool", with a feeling of being developer-led rather than management-led. But even then there was significant resistance to its new complexity, in how complicated it was to reason about its distributed nature, and the difficulty of its interface.
AI does not have such a curve. It is top-down, from the start.
Management caught up and started to talk about them only years later.
I wasn't around to experience it but my understanding is that this is what happened in the 90's with object oriented programming - it was a legitimately useful idea that had some real grassroots traction and good uses, but it got sold to non-technical leadership as a silver bullet for productivity through reuse.
The problem then, as it is now, is that developer productivity is hard to measure, so if management gets sold on something that's "guaranteed" to boost it, it becomes a mandate and a proxy measure.
Let’s not let the smoke and mirrors dictate how we use the tool, but let us also not dismiss the tool just because it’s causing a fad.
Much like the internet era actually - obviously loads of value, but picking out the pets.coms from the amazon.coms ... well, it wasn't clear at the time which was which; probably both really (we buy our petfood online) except that only one of them had the cash reserves to make it past the dot com crash.
The core problem, as OP called out, is change aversion. It's just that for many previous useful changes, management couldn't immediately see the usefulness; if they could have, there would have been top-down pressure for those too.
Let's not forget that well-defined development processes with things like CI/CD, testing, etc only became widespread after DORA made the positive impact clearly visible.
Let's face it: Most humans are perfectly fine with the status quo, whatever the status quo. The outward percolation of good ideas is limited unless a forcing function is applied.
"AI", on the other hand, is shoved down people's throats by management and by those who profit from it in some way. There is nothing organic about it.
AI adoption is, for better or worse, voluntarily or not, very fast compared to other technologies.
The same could be true for coding agents too, or maybe not. Time will tell.
This adoption rate / shoving is insane. It is not based on anything but dollars.
No new real wealth can be created but financial wealth may transfer from the firms buying stuff to the large tech firms - thereby creating new financial wealth for big tech stockholders. In the long run the two should converge - but in the short run they can diverge. And I think that’s what we are seeing.
Hence the attempts to compare it with version control, with sliced bread, with plumbing and sanitation practices. Think of any big innovation and compare AI to it until people give in and accept that this is the biggest, bestest thing ever to have happened and it is spreading like wildfire.
Even AI wouldn't defend itself this passionately but it conquered some people's hearts and minds.
I use AI a lot myself, but being forced to incorporate it into my workflow is a nonstarter. I'd actively fight against that. It's not even remotely the same thing as fighting source control adoption in general, or refusing to test code before checking it in.
Wonder what'll happen to JPY once the Yen-carry unwinds from this massive hype-cycle - will probably hit 70 JPY to the dollar! Currently Sony Bank in Japan offers USD time-deposits at 8% pa. - that's just insanely high for what is supposed to be a stable developed economy.
Honestly I think the same thing happened with self-driving cars ~10 years ago.
Larry Page and Google's "submarine" marketing convinced investors and CEOs of automakers and tech companies [1] that they were going to become obsolete, and that Google would be taking all that profit.
In 2016, GM acquired Cruise for $1 billion or so. It seems like the whole thing was cancelled in 2023, written off, and the CEO was let go
How much profit is Waymo making now? I'm pretty sure it's $0. And they've probably gone through hundreds of billions in funding
How's Tesla Autopilot doing? Larry also "negatively inspired" Elon to start OpenAI with other people
I think if investors/CEOs/automakers had known how it was going to turn out, and how much money they were going to lose 10 years later, they might not have jumped on the FOMO train
But it turns out that AI is a plausible "magic box" that you extrapolate all sorts of economic consequences from
(on the other hand, hype cycles aren't necessarily bad; they're probably necessary to get things done. But I also think this one is masking the fact that software is getting worse and more user hostile at the same time. Probably one of the best ways to increase AI adoption is to make the underlying software more user hostile.)
[1] I think even Apple did some kind of self-driving car thing at one point.
https://en.wikipedia.org/wiki/Apple_car_project
>From 2014 until 2024, Apple undertook a research and development effort to develop an electric and self-driving car,[1] codenamed "Project Titan".[2][3] Apple never openly discussed any of its automotive research,[4] but around 5,000 employees were reported to be working on the project as of 2018.[5] In May 2018, Apple reportedly partnered with Volkswagen to produce an autonomous employee shuttle van based on the T6 Transporter commercial vehicle platform.[6] In August 2018, the BBC reported that Apple had 66 road-registered driverless cars, with 111 drivers registered to operate those cars.[7] In 2020, it was believed that Apple was still working on self-driving related hardware, software and service as a potential product, instead of actual Apple-branded cars.[8] In December 2020, Reuters reported that Apple was planning on a possible launch date of 2024,[9] but analyst Ming-Chi Kuo claimed it would not be launched before 2025 and might not be launched until 2028 or later.[10]
> In February 2024, Apple executives canceled their plans to release the autonomous electric vehicle, instead shifting resources on the project to the company's generative artificial intelligence efforts.[11][12] The project had reportedly cost the company over $1 billion per year, with other parts of Apple collaborating and costing hundreds of millions of dollars in additional spend. Additionally, over 600 employees were laid off due to the cancellation of the project.[13]
Please don't use Hacker News for political or ideological battle. It tramples curiosity.
It's also no coincidence America has built no rail in many decades while centrally planned China built a massive HSR network in the past 15 years.
e.g. in 2018, over 7 years ago, I was simply pointing out that people like Chris Urmson (who had WORKED ON self-driving for decades) and Bill Gurley said self-driving would take 25+ years to deploy (which seems totally accurate now)
https://news.ycombinator.com/item?id=16353541
And I got significant pushback
Actually I remember some in-person conversations with MUCH MORE push back than that, including from some close friends.
They believed things because they were told by the media it would happen
People told me in 2018 that their 16 year old would not need to learn how to drive, etc. (In 2025, self-driving is not available in even ONE of their end points for a trip, let alone two end points)
Likewise, at least some people are convinced now that "coding as a job is going away" -- some people are even deathly depressed about it
Hacker News goes for anything that they think they might be able to make money off of, just like all middle-class people. They evaluate events based on how they could affect them personally. Actual plausibility isn't even secondary, they simply defer to the salesmen (whom they admire and hope one day to be.)
Sometimes it is good to disregard the opinion of experts who are absolutely sure something can't be done; it might be a prerequisite to making it happen.
The four minute mile comes to mind.
Beliefs are powerful: they can enable you to reach goals, become prisons of the mind that trap you, or turn into delusions when feedback is disregarded.
[0]: https://youtu.be/040ejWnFkj0?si=7yI3eKkirJdTWPwR
[1]: https://en.wikipedia.org/wiki/Clanker
[2]: https://youtu.be/RpRRejhgtVI?si=aZUVcsY8VyR_jbBA
I suspect stuff like lane following assist and adaptive cruise control
1) will ultimately provide the path to self driving eventually
2) wasn’t particularly helped by the hype cycle
1 is impossible to say at this point; for 2, I guess somebody who works in the field can come along and correct me.
That's where we are.
Waymo has been slow and steady, and has built something pretty great.
> In 2016, GM acquired Cruise for $1 billion or so. It seems like the whole thing was cancelled in 2023, written off, and the CEO was let go

It was shut down because they had a collision that made front-page news across the country, which was followed by a cover-up. Their production lines were shut down, all revenue operations ceased, and the permits they needed to operate were withdrawn. It's not like the decision was random.

> How much profit is Waymo making now? I'm pretty sure it's $0.
Profit is a fuzzy concept for even the most transparent private companies, but Waymo's revenue is likely in the hundreds of millions. They've received around $12B in funding, not hundreds of billions. https://www.cbtnews.com/waymo-hits-10m-driverless-rides-eyes...
But yeah, certainly 5-7 years behind the initial schedule. Which I guess was more of your point.
Let’s see it work in Minnesota in the winter where you can’t see lane markings, everything is white, and the camera lenses immediately get covered with road salt spray.
It's important to not confuse activity, with progress, with results.
At the same time, it's important to not confuse or downplay results, with progress, with activity.
There seems to be activity, progress, and results. It seems to be speeding up.
I don't have any preference for or against Tesla. Just observing.
What can incremental progress do to make a camera see through road salt deposited on its lens? I call bullshit. There isn't any incremental path because it's not physically possible. The photons are stopped by the salt. No amount of "AI" or what the fuck ever else will change this. There is no path towards "progress" here.
I don't operate from an assumption that cameras will remain the same as they are today.
Your comment did remind me about Comma, though.
which is what people like Chris Urmson and Bill Gurley already said prior to 2018 (see my sibling comment)
https://en.wikipedia.org/wiki/List_of_predictions_for_autono...
We're going to end up with complete autonomy
Ultimately you'll be able to summon your car anywhere … your car can get to you. I think that within two years, you'll be able to summon your car from across the country
---
(Also, in 2018 I said I'd be the first to buy a car where I could sleep behind the wheel while going from SF to Portland or LA. That obviously doesn't exist now.
Anyone want to take a bet on whether this will be possible in 2032, 7 years from now? I'd bet NO, but we can check in 2032 :-) )
1) crypto: raise funding, buy crypto as collateral, raise more funding with said collateral, rinse and repeat.
2) gpu datacenters: raise funding, buy gpus as collateral, raise more funding, buy more gpus, rinse and repeat.
3) zero day options: average folks want a daily lottery thrill. rinse and repeat.
All of the above are fed by fomo and to some extent hype, and ripe for a reckoning.
Marketers are trying to keep their jobs, sales people are trying to keep their jobs, etc.
I think my time frame is firmly after the invention of Excel but before the web was its own thing.
On the energy side, Google recently estimated that an average Gemini inference consumes around 0.24 Wh, which is roughly the same as running a microwave for a single second. Older rule-of-thumb comparisons put the figure closer to 3–6 seconds of microwave use, or about 0.8–1.7 Wh per prompt. If you apply those numbers to U.S. usage, you get somewhere between 79 MWh and 550 MWh per day nationally, which translates to only a few to a few dozen megawatts of continuous load. Spread across the population, that works out to between 0.09 and 0.6 kWh per person per year — just pennies worth of electricity, comparable to a few minutes of running a clothes dryer. The bigger concern for the grid is not individual prompts but the growth of AI data centers and the energy cost of training ever-larger models.
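The arithmetic above can be sanity-checked quickly. A sketch, assuming (my numbers, not the comment's) roughly 330M prompts per day in the U.S. and a population of about 330M, i.e. about one prompt per person per day:

```python
# Back-of-envelope check of the per-prompt energy figures.
# Assumptions: ~330M US prompts/day, ~330M US population.
PROMPTS_PER_DAY = 330e6
POPULATION = 330e6

def grid_impact(wh_per_prompt):
    """Return (MWh/day, MW of continuous load, kWh/person/year)."""
    mwh_per_day = wh_per_prompt * PROMPTS_PER_DAY / 1e6
    mw_continuous = mwh_per_day / 24
    kwh_person_year = mwh_per_day * 1e3 * 365 / POPULATION
    return mwh_per_day, mw_continuous, kwh_person_year

for wh in (0.24, 1.7):  # Google's estimate vs. the older rule of thumb
    print(wh, grid_impact(wh))
```

With 0.24 Wh/prompt this lands near the 79 MWh/day and 0.09 kWh/person/year figures; with 1.7 Wh/prompt it lands near the 550 MWh/day and 0.6 kWh/person/year figures, so the comment's range is internally consistent under that prompt-volume assumption.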
The tire geometry causes a bit of oversteering, but they generally corner well, etc.
One of the major issues I'm seeing is how little technical people have been involved in the application of AI, which leaves non-technical people to pontificate and try things.
With any new tech, after the hype is gone, what remains, is adopted and used.
The internet, social media, smartphones, all seemed foreign.
LLMs are no different. They will solve things other technologies haven't before.
LLMs are only as good as the users using them. Users only get better at using AI by putting in the repetitions. It's not a tool that's alive, or a psychic.
For example, I work in operations, so most of what I touch is bash, Ansible, Terraform, GitHub workflows and actions, and some Python. Recently, our development team demonstrated a proposed strategy to use GitHub Copilot: assign it a JIRA ticket, let it generate code within our repos, and then have it automatically come back with a pull request.
That approach makes sense if you are building web or client-side applications. My team, however, focuses on infrastructure configuration code. It is software in the sense that we are managing discrete components that interact, but not in a way where you can simply hand off a task, run tests, and expect a PR to appear.
Large language models are more like asking a genie. Even if you give perfectly clear instructions, the result is often not exactly what you wanted. That is why I use Copilot and Gemini Code Assist in VS Code as assistive tools. I guide them step by step, and when they go off track, I can nudge them back in the right direction.
To me, that highlights the gap between management’s expectations and the reality of how these tools actually work in practice.
Doesn’t change the fact that it’s stupid, annoying, and bad design, but I don’t know that outright deception is needed to explain it.
Yes! "Forced features" are a misguided effort to drive internal usage metrics. There are other ways to let users know about new features, short of forcing it on them obnoxiously.
It's a rather perverse cycle.
Microsoft is a software company. If this gave them an extreme competitive edge, they wouldn't have released it to begin with!
Sometimes people are resistant to use things that improve their life and have to be convinced to work in their own self interest.
https://www.cnn.com/2022/05/14/business/grocery-shopping-car...
When I first heard about git, I knew that it would be very useful in the future, even if I had to spend some time and effort in mastering it. Same with CI, project planners, release engineering, etc. Nobody had to convince me to use them. But AI just doesn't belong to that category, at least in my experience. It misses results that a simple web/site search reveals. And it makes mistakes or outright hallucinates in ways even junior developers don't. It's in an uncanny valley between the classic non-AI services and plain old manual effort with disadvantages of both and advantages of neither. Again, others may not agree with this experience. But it's definitely not unique to me. The net gain/loss that AI brings to this field is not clear. At least not yet.
Since right now there is an air of competition, I would guess that these companies believe it's winner-take-all, and are doing their “one monopoly to aid another” to grab this market before there's another verb-leader (like ChatGPT for LLMs, or Google for search).
It could also be that they think that people won’t know how good they are until they try it, that it has to be seen to be believed. So getting people to touch it is important to them.
But I think I agree with you: it's so heavy-handed that it makes me want to abandon the tools that force it on me.
It's just the Copilot popups that are hardcoded in VS Code right now, despite no extension being installed, that are very annoying and I'd like those to go away.
But then who made it critical over the intervening years? That's on us.
It's easy to knee jerk on HN but let's try to do better than this.
> But then who made it critical over the intervening years? That's on us.
That's blaming the victim. The vast majority of open source projects were hosted on GH since before Microsoft's acquisition. I remember back in 2018 when my team made the decision to move from Bitbucket to GitHub, the main consideration was platform quality, but also the community we were getting access to.
As the centralized git repo, it allows devs to collaborate, by exchanging code/features, tracking issues and doing code reviews. It also provides dependencies management ("Package") and code building/shipping (GH Actions).
Sure, if you usually spend one day or more writing code locally, you're fine. But if you work on multiple features a day, an outage, even of 30 minutes, can have a big impact on a company because of the multiplier effect on all the people affected.
This is a sign that their CTOs should be replaced. Not that github is critical.
They should fucking learn how to code because no one in their right mind would depend on such an external service that can be easily replaced by cloning repos locally or using proxies like Artifactory. Even worse when you know that Microsoft is behind it.
Yes, most companies don't have good practices and suck at maintaining a basic infrastructure, but it doesn't mean GitHub is the center of the internet. It's only a stupid git server with PRs.
I feel like you’re missing a few features here
There's a whole generation on HN who came up after Microsoft's worst phase, and have spent the last five years defending MS on this very forum.
They're convinced that any bad thing Microsoft does is a "boomer" grudge, and will defend MS to the end.
I hope I'm never so weak-minded that I tie my identity and allegiance to a trillion-dollar company. Or any company that I haven't founded, for that matter.
I would say all of those things were present before the acquisition, enough that Microsoft itself started to use the site for its own open source code hosting.
Do you think they would have bought it otherwise? Same for NPM, they got bought for huge sums of money because they were "critical" already.
And since the acquisition, they have built it out to be critical. Similar to what Meta did with Instagram. Instagram wasn't critical when Meta purchased it, but now it is the cornerstone of any business's online presence as it has been built out.
And git lives on regardless of GitHub
Regulators can (and do) stop purchases which can be considered harmful to consumers. Just look at the Adobe/Figma deal.
The same could not be said for Figma, where if lost, you'd end up looking at the company that tried to buy it. That's what those laws are for.
When GitHub goes down, the company I work at is pretty much kneecapped for the duration of the outage. If you’re in the middle of a PR, waiting for GitHub actions, doing work in a codespace, or just need to pull/fetch/push changes before you can work, you’re just stuck!
It's definitely not easy to self-host GitLab for hundreds of devs working on hundreds of projects. Especially if you use it as your CI/CD pipeline, because now you also have to manage your workers.
Why do companies choose to pay GitHub instead of self-hosting their own GitLab instance? For the same reason they pay Microsoft for their email instead of self-hosting it.
From the discussion:
> Allow us to block Copilot-generated issues (and PRs) from our own repositories
> ... This says to me that github will soon start allowing github users to submit issues which they did not write themselves and were machine-generated. I would consider these issues/PRs to be both a waste of my time and a violation of my projects' code of conduct¹.
> Note: Because it appears that both issues and PRs written this way are posted by the "copilot" bot, a straightforward way to implement this would be if users could simply block the "copilot" bot. In my testing, it appears that you have special-cased "copilot" so that it is exempt from the block feature.
How does one see that a user, e.g. "chickenpants" submitted an issue or PR that was generated by "Copilot"? Isn't there only one creator?
On bigger PRs, I regularly have diffs that take seconds to load. The actions also started hanging a lot more often and will run for 30 minutes stuck in some kind of loop unless they time out or I cancel them manually. This did not use to happen before, or at least not as frequently as it does now.
Finally, when I try to cancel the hung actions, the cancel button never gets disabled after I click it, and it is possible to click it multiple times without any effect. Once clicked, it surely shouldn't be possible to click it again unless the API call failed.
Clearly there is a quality decrease happening here.
Making things work properly is terribly passé in this brave new world of magic nonsense-generating robots.
You see this with Google Docs, too; after about a decade of stagnation, Google _finally_ started adding a few features (basic Markdown support, say, better comments, a few other bits and pieces) around 2022... And it finally got a bit less slow. But now that seems to have come to a shuddering halt; once more Docs stagnates, but it has about a hundred Gemini buttons now! It also feels like it's getting slower and buggier again.
I don't want AI getting in the way on Github. I don't want an unremovable AI button in my Office 365 mail client. I don't want to get nagging AI popups every. single. time. I open the GCP console.
A year or two I was ambivalent about AI, and willing to give it a try. These days? I actively hate it. Like all nagging ads: if you have to force it on me this badly, how can it be anything but complete garbage?
||api.individual.githubcopilot.com/agents/github-commit-message-generation$xhr,domain=github.com
Considering that they force it upon users and user cannot disable it, this sounds like a worthless metric.
I get an email every month telling me that my Copilot access has been renewed for another month. I'm probably being counted amongst those 20M users.
I could stand at the train station and yell "Cthulhu is our saviour" all day and later claim that the word of Cthulhu reached thousands of people today.
HOTSPUR: Why, so can I, or so can any man; But will they come when you do call for them?
"How sharper than a serpent's tooth it is to have a thankless child!“
My father used it frequently when we were kids. I found out decades later it was a quote from King Lear.
I don't; any ideas what's different?
It's mandatory, on the orders of a senior manager who has no background in software development, for all developers in my department to have a Copilot subscription. I've never used it for anything, and I imagine it's the same for most of my colleagues (we do highly specialised embedded development with in-house custom everything: compiler, standard library, operating system, hardware), and it seems no one is interested in whether it's used or not.
Consequently Microsoft is being paid $240 a year per person to do nothing whatsoever, which is surely a great business for them.
And before that they posted their open source code to a centralized site that wasn't open source.
This is one of those things where of course it was going to happen. GitHub was VC funded, they were going to either exit to a big company or try to become one.
Eventually the bill was going to come due and everyone knew this. You can choose to rely on VC subsidized services but the risk is you are still dependent on them when they switch things up.
I think GitHub added the “pull request” as a really useful add-on to git, and that really made it take off.
Oddly, I used self-hosted git at an academic institution. I liked it because it was set up to use “hooks” https://git-scm.com/book/en/v2/Customizing-Git-Git-Hooks after check-ins. This became much harder when we were pushed off to a commercial host (GitLab, a GitHub competitor).
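The hooks mechanism mentioned here is just executable scripts under `.git/hooks/`. A minimal sketch of a server-side `post-receive` hook (the log path is made up for illustration):

```shell
# Install a post-receive hook: git runs it once after every push,
# feeding one "old-sha new-sha refname" line per updated ref on stdin.
cat > .git/hooks/post-receive <<'EOF'
#!/bin/sh
while read old new ref; do
    echo "$ref updated to $new" >> /tmp/push.log
done
EOF
chmod +x .git/hooks/post-receive
```

Anything executable works here, which is what made self-hosting attractive: you could trigger builds, notifications, or deployments directly from a push.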
For the sake of correctness, the concept of pull requests was not introduced by Github. It already existed in git in the form of the 'request-pull' subcommand. The fundamental workflow is the same. You send the project maintainer a message requesting a pull of your changes from your own online clone repo. The difference is that the message was in the form of an email. Code reviews could be conducted using mails/mailing lists too.
This is not the same as sending patches by email. But considering how people hate email, I can see why it didn't catch on. However, Torvalds considered this implementation to be superior to GitHub's and once complained about the latter on GitHub itself [1].
[1] https://github.com/torvalds/linux/pull/17#issuecomment-56546...
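For illustration, the built-in workflow described above boils down to a single command (the tag, URL, and branch here are placeholders, not from the source):

```shell
# Generate a plain-text pull-request message covering everything on
# 'main' since tag v1.0, fetchable from your public clone.
# The output (fetch URL, shortlog, diffstat) is what you would then
# email to the maintainer.
git request-pull v1.0 https://example.com/yourname/project.git main
```

The maintainer fetches from the listed URL and reviews locally; the review conversation itself happens on the mailing list rather than in a web UI.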
How some people, like you sir, are able to recall such minute events, is amazing.
Oh! That's easy. I forgot that it is 13+ years old! XD
Added later: Your comment made me look up more details about it. It was a widely discussed comment at the time. The HN discussion about it is as interesting as the comment itself [1].
Personally, I remember the initial selling point of GitHub being that it was more "social" than any other forges at the time, since we were all wrapped up in the Web 2.0 hype and what not. I think they pushed that on their landing page back in the day too.
It was basically Twitter, but redone specifically for developers, and focused on code rather than random thoughts.
When I started using it, public repositories were free, and private repositories needed a paid account.
The ToS did not require public repos to be open source; only permission for basic operations like fork (the button that clones, not creating derivative works) and download was required.
I'm pretty sure the term "pull request" existed before GitHub. (Meaning writing an email saying "I have changes in my copy of repo that I want you to merge into the main repo".) But GitHub put an UI around it, and they may've been the first to do that.
Negative. The only thing GitHub added to the parlance is "forks" which are essentially like namespaced branches in the same repo.
I worked for a company that used the on-prem version of their forge back in the 00s, and I remember liking it a lot. It felt novel, cool, and useful to have fully interlinked bug tracking, version control, documentation, project management and release management.
I really don't remember it like this at all. I do remember looking for actually open source forges and choosing Gitorious, which was then bought and shutdown by GitLab (and projects were offered to be seamlessly migrated, which worked well, and somehow we ended up being hosted on an open core platform, but that's another story).
GitHub always looked like the closed platform the whole open source world somehow elected to trust for their code hosting despite being proprietary, and then there was this FOMO where if you weren't on GitHub, your open source software would not find contributors, which still seems to be going strong btw.
I understand there was hope that GitHub would be open sourced, but I don't think there was any reason to believe it would happen.
Yeah, I don't think I myself had good reasons beyond "They seem like the good guys who won't sell out", but I was also way younger and more naive at that point (it was like 15 years ago after all).
I think I mostly just drank the Kool-Aid of what you mentioned as "if you weren't on GitHub, your open source software would not find contributors". There was a lot of "We love Open Source and Open Source loves us" from GitHub that I guess was confusing to a young, formative mind who wanted it to become like the projects they wanted to host. This hope was especially fueled when they started open sourcing parts of GitHub, like the Gollum stuff for rendering the wikis.
It looks like private repos started being free in 2019.
https://github.blog/news-insights/product-news/new-year-new-...
It's like using Instagram or Facebook. It's not at all a matter of individual choice when all your friends are on one single platform.
Sure you can host your code anywhere, but by not using GitHub you are potentially missing out on a very vibrant community.
It's all Microsoft to blame. It bought the medium and took an entire community hostage in the process just for the sake of profit.
As an aside, I don’t really see GitHub as a whole as a community. It’s a go-to place with network effects, but network effects don’t by themselves imply “community”.
People aren't morally reprehensible because they prefer convenience over hardship. People like using easy things, and they like making money. This means that people will make easy things so other people will give them money. If you don't like it, make easy things that work the way you like them, run them ethically, and don't sell them to anyone.
To clarify my point isn't that anyone is morally reprehensible. My point is that using a free VC-backed service is like selling an implied option. You don't know when they're going to invoke the option, but eventually they will. And often it will be when you've gotten used to the income from selling the option.
It's not a question of morality or judgment, it's just meant to be a description of what the game we're playing is.
> If you don't like it, make easy things that work the way you like them, run them ethically, and don't sell them to anyone.
I'm trying to
Being VC backed isn't a deciding factor for adopting a forge. It's the community that drives adoption.
> I don’t really see GitHub as a whole as a community.
It's basically a social network on top of a source code forge. You have a profile that is individually identifiable, you can open issues and contribute to discussions on pull requests. All this can be tracked back to every individual while they collaborate and make connections while they contribute to each other. How is this not a community?
OP is arguing that VC should be a deciding factor. The “community” wouldn’t exist if people had made that a deciding factor.
A social network is not a community. It may contain many communities. GitHub has communities around projects. But GitHub as a whole isn’t a community.
Counterpoint is that is what companies are supposed to do. They are made to make money, the end. The only hope against this for humans is regulation, and that has fallen off the face of the earth. It’s like humans are doomed to repeat the late 19th and early 20th century era over and over.
In the organization:
Organization -> Settings -> Copilot -> Access... Turn it off.
Any tips for finding other interesting codeberg hosted projects?
Also, I remember there was Radicle https://radicle.xyz
Any Radicle users?
> the most popular community discussion in the past 12 months has been a request for a way to block Copilot, the company's AI service, from generating issues and pull requests in code repositories.
but Microsoft doesn't automatically make these issues and PRs. Users have to trigger it.
I mean, I do think you should be able to block the `copilot` user, but I looked at this user's repos and their most popular one has a total of 3 PRs, with no Copilot ones.
I also checked the Rust compiler which is obviously waaaay more popular and it appears to have had zero copilot PRs.
I think it's just an unfortunate fact now in 2025 that if you look after a text box online, you're going to have to deal with AI sludge in one way or another. If you don't want to do that, close the text box.
I mean, if Microsoft is "training" on your source code without consent (and potentially violating licenses), that is a huge problem.
> I also checked the Rust compiler which is obviously waaaay more popular and it appears to have had zero copilot PRs
How do you asess whether some PR was made by an AI(like the user did)?
Not what this is about.
> How do you assess whether some PR was made by an AI (like the user did)?
Searched for PRs authored by copilot or mentioning copilot.
So GitHub Copilot forces your PR to be tagged as co-authored by Copilot, or can users be slick and omit the mention?
Sometimes i wonder why this hasn't happened anyway: Internet's not what it used to be.
And maybe companies would eventually take the hint, after they banned all actual contributors for having foul fingers (very unlikely, I know).
thanks for attending my ted talk
That said, I would have a hard time justifying paying for it in my personal life because it's really that expensive. I look forward to 10 years from now when local ML is good enough or free.