Let's review the current state of things:
- Terminal CLI agents are several orders of magnitude less $$$ to develop than forking an entire IDE.
- CC is dead simple to onboard (use whatever IDE you're using now, with a simple extension for some UX improvements).
- Anthropic is free to aggressively undercut their own API margins (and middlemen like Cursor) in exchange for more predictable subscription revenue + training data access.
What does Cursor/Windsurf offer over VS Code + CC?
- Tab completion model (Cursor's remaining moat)
- Some UI niceties like "add selection to chat", etc.
Personally I think this is a harbinger of where things are going. Cursor was fastest to $900M ARR and IMO will be fastest back down again.
I think the only way Cursor and other UX wrappers still win is if on-device models, or at least open-source models, catch up in the next 2 years. Then I can see a big push for UX if models are truly a commodity. But as long as Claude is much better, then yes, they hold all the cards. (And they don't have a bigger company to have a civil war with, like OpenAI does.)
Input │ Output │ Cache Create │ Cache Read
916,134 │ 11,106,507 │ 199,684,538 │ 2,767,614,506
As an example, here's my usage. Massive daily usage for the past two months.
Do you have a citation for this?
It might be at a loss, but I don’t think it is that extravagant.
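As a rough check on the "at a loss" question, here's the usage table above priced at Opus list API rates. The per-token prices are my assumption (Opus 4 list pricing: $15/M input, $75/M output, cache writes at 1.25x input, cache reads at 0.1x input) and may not match the actual model mix used:

```python
# API-price equivalent of the usage table above (all figures in tokens).
# Assumed Opus 4 list prices, $ per million tokens -- verify current pricing.
prices = {
    "input": 15.00,
    "output": 75.00,
    "cache_create": 18.75,  # cache write = 1.25x input price
    "cache_read": 1.50,     # cache read  = 0.1x input price
}
usage = {
    "input": 916_134,
    "output": 11_106_507,
    "cache_create": 199_684_538,
    "cache_read": 2_767_614_506,
}
total = sum(usage[k] * prices[k] / 1e6 for k in usage)
print(f"API-equivalent: ${total:,.0f}")              # ~ $8,742
print(f"Per day over 2 months: ${total / 60:,.0f}")  # ~ $146
```

At those assumed rates, a flat monthly subscription covering this volume is heavily discounted relative to pay-as-you-go, though whether it is below Anthropic's marginal cost is a separate question.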
Also, you're probably talking about a distilled DeepSeek model.
I am on Max and I can work 5 hrs+ a day easily. It does fall back to Sonnet pretty fast, but I don't seem to notice any big difference.
The reason they are talking about building new nuclear power plants in the US isn't just for a few training runs, it's for inference. At scale, the AI tools are going to be extremely expensive.
Also note China produces twice as much electricity as the United States. Software development and agent demand is going to be competitive across industries. You may think, oh I can just use a few hours of this a day and I got a week of work done (happens to me some days), but you are going to end up needing to match what your competitors are doing - not what you got comfortable with. This is the recurring trap of new technology (no capitalism required.)
There is a danger to independent developers becoming reliant on models. $100-$200 is a customer acquisition cost giveaway. The state of the art models probably will end up costing hourly what a human developer costs. There is also the speed and batching part. How willing is the developer to, for example, get 50% off but maybe wait twice as long for the output. Hopefully the good dev models end up only costing $1000-$2000 a month in a year. At least that will be more accessible.
Somewhere in the future these good models will run on device and just cost the price of your hardware. Will it be the AGI models? We will find out.
I wonder how this comment will age, will look back at it in 5 or 10 years.
Probably because I am an old man, but I don’t personally vibe with full time AI assistant use, rather I will use the best models available for brief periods on specific problems.
Ironically, when I do use the best models available to me it is almost always to work on making weaker and smaller models running on Ollama more effective for my interests.
BTW, I have used neural network tech in production since 1985, and I am thrilled by the rate of progress, but worry about such externalities as energy use, environmental factors, and hurting the job market for many young people.
There are a lot of parts in the near term to dislike here, especially the consequences for privacy, adtech, energy use. I do have concerns that the greatest pitfalls in the short terms are being ignored while other uncertainties are being exaggerated. (I've been warning on deep learning model use for recommendation engines for years, and only a sliver of people seem to have picked up on that one, for example.)
On the other hand, if good enough models can run locally, humans can end up with a lot more autonomy and choice with their software and operating systems than they have today. The most powerful models might run on supercomputers and just be solving the really big science problems. There is a lot of fantastic software out there that does not improve by throwing infinite resources at it.
Another consideration is while the big tech firms are spending (what will likely approach) hundreds of billions of dollars in a race to "AGI", what matters to those same companies even more than winning is making sure that the winner isn't a winner takes all. In that case, hopefully the outcome looks more like open source.
I don’t see how that can be true, but if it is…
One of us is definitely using Claude Code incorrectly.
Nobody's asking for $200 in single-line diffs in less than a day - right?
You mean… it’s almost exactly like working with interns and jr developers? ;)
It rears its head regardless of what sociopolitical environment you place us in.
You’re either competing to offer better products or services to customers…or you’re competing for your position in the breadline or politburo via black markets.
And, since there is no global super-state, the world economy is a market economy, so even if every state were a state-owned planned economy, North Korea style, still there would exist this type of competition between states.
So yeah it basically comes down to your definition of "worker-owned". What fraction of worker ownership is necessary? Do C-level execs count as workers? Can it be "worker-owned" if the "workers" are people working elsewhere?
Beyond the "worker-owned" terminology, why is this distinction supposed to matter exactly? Supposing there was an SV startup that was relatively generous with equity compensation, so over 50% of equity is owned by non-C-level employees. What would you expect to change, if anything, if that threshold was passed?
If the workers are majority owners, then they can, for example, fire a CEO that is leading the company in the wrong direction, or trying to cut their salaries, or anything like that.
Estimating productivity gains is a flame war I don’t want to start, but as a signal: if the CC Max plan goes up 10x in price, I’m still keeping my subscription.
I maintain top-tier subscription to every frontier service (~$1k/mo) and throughout the week spend multiple hours with each of Cursor, Amp, Augment, Windsurf, Codex CLI, Gemini CLI, but keep on defaulting to Claude Code.
Are you doing front end, back end, full stack, or model development itself?
Are you distilling models to train your own?
I have never heard of someone using this many subscriptions.
Is this for your full time job or startup?
Why not use Qwen or DeepSeek and host it yourself?
I am impressed with what you are doing.
As to “why”: I’ve been coding for 25 years, and LLMs are the first technology that has a non-linear impact on my output. It’s simultaneously moronic and jaw-dropping. I’m good at what I do (e.g., merged fixes into Node), and Claude/o3 regularly find material edge cases in code I was confident in. Then they add a test case (as per our style), write a fix, and update docs/examples within two minutes.
I love coding and the art&craft of software development. I’ve written millions of lines of revenue generating code, and made millions doing it. If someone forced me to stop using LLMs in my production process, I’d quit on the spot.
Why not self host: open source models are a generation behind SOTA. R1 is just not in the same league as the pro commercial models.
Yup 100% agree. I’d rather try to convince them of the benefits than go back to what feels like an unnecessarily inefficient process of writing all code by hand again.
And I’ve got 25+ years of solid coding experience. Never going back.
i've tried agent-style workflows in copilot and windsurf (on claude 3.5 and 4), and honestly, they often just get stuck or build themselves into a corner. they don’t seem to reason across structure or long-term architecture in any meaningful way. it might look helpful at first, but what comes out tends to be fragile and usually something i’d refactor immediately.
sure, the model writes fast – but that speed doesn't translate into actual productivity for me unless it’s something dead simple. and if i’m spending a lot of time generating boilerplate, i usually take that as a design smell, not a task i want to automate harder.
so i’m honestly wondering: is cc max really that much better? are those productivity claims based on something fundamentally different? or is it more about tool enthusiasm + selective wins?
Which frameworks & libraries have you found work well in this (agentic) context? I feel much of the JS library landscape does not do enough to enforce an easily-understood project structure that would "constrain" the architecture and force modularity. (I might have this bias from my many years of work with Rails, which is highly opinionated in this regard.)
I think Fiction LiveBench captures some of those differences via a standardized benchmark that spreads interconnected facts through an increasingly large context to see how models can continue connecting the dots (similar to how in codebases you often have related ideas spread across many files)
https://fiction.live/stories/Fiction-liveBench-May-22-2025/o...
> I’ve written millions of lines of revenue generating code
This is a wild claim. Approx 250 working days in a year, 25 years coding. Just one million lines would be phenomenal output, at 160 lines per day forever. Now you are claiming multiple millions? Come on.
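The back-of-envelope arithmetic above, spelled out:

```python
# 160 lines per day, every one of ~250 working days, for 25 years
lines_per_day = 160
working_days_per_year = 250
years = 25
total_lines = lines_per_day * working_days_per_year * years
print(total_lines)  # 1000000
```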
1. Before wife&kids, every weekend I would learn a library or a concept by recreating it from scratch. Re-implementing jQuery, fetch API via XHR, Promises, barebones React, a basic web router, express + common middlewares, etc. Usually, at least 1,000 lines of code every weekend. That's 1M+ over 25 years.
2. My last product is currently 400k LOCs, 95% built by me over three years. I didn't one-shot it, so assuming 2-3x ongoing refactors, that's more than 1M LOCs written.
3. In my current product repo, GitHub says for the last 6 months I'm +120k,-80k. I code less than I used to, but even at this rate, it's safely 100k-250k per year (times 20 years).
4. Even in open source, there are examples like esbuild, which is a side project from one person (cofounder and architect of Figma). esbuild is currently at ~150k LOCs, and GitHub says his contributions were +600k,-400k.
5. LOCs are not the same. 10k lines of algorithms can take a month, but 10K of React widgets is like a week of work (on a greenfield project where you know exactly what you're building). These days, when a frontend developer says their most extensive UI codebase was 100k LOCs in an interview, I assume they haven't built a big UI thing.
So yes, if the reference point is "how many sprint tickets is that", it seems impossible. If the reference point is "a creative outlet that aligns with startup-level rewards", I think my statement of "millions of lines" is conservative.
Granted, not all of it was revenue-generating - much was experimental, exploratory, or just for fun. My overarching point was that I build software products for (great) living, as opposed to a marketer who stumbled into Claude Code and now evangelizes it as some huge unlock.
10 years would make 500k and you just cross a million at 20.
So that would have to be 20 years straight of that style of working and you’re still not into plural millions until 40 years.
If someone actually produced multiple millions of lines in 25 years, it would have to be a side effect of some extremely verbose language where trivial changes take up many lines (maybe Java).
Ultimately, my not using the best tools for my personal research projects has zero effect on the world but I am still very curious what elite developers with the best tools can accomplish, and what capability I am ‘leaving on the table.’
It’s so stupid fast to get running that you aren’t out anything if you don’t like it.
There was no way I was going to switch to a different IDE.
My app builds and runs fine on Termux, so my CLAUDE.md says to always run unit tests after making changes. So I punch in a request, close my phone for a bit, then check back later and review the diff. Usually takes one or two follow-up asks to get right, but since it always builds and passes tests, I never get complete garbage back.
There are some tasks that I never give it. Most of that is just intuition. Anything I need to understand deeply or care about the implementation of I do myself. And the app was originally hand-built by me, which I think is important - I would not trust CC to design the entire thing from scratch. It's much easier to review changes when you understand the overall architecture deeply.
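For anyone curious what that looks like in practice, here's a minimal, hypothetical CLAUDE.md in that spirit (CLAUDE.md is Claude Code's real project-instructions file; the contents below are invented, not the commenter's actual setup):

```markdown
# CLAUDE.md

## Build & test
- After every change, run the unit tests and make sure they pass
  before reporting back.
- Never consider a task done while the build is red.

## Scope
- Keep diffs small: one logical change per request.
- Do not redesign architecture; ask before touching module boundaries.
```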
I found Opus significantly more capable at coding than Sonnet, especially for tasks that are poorly defined; thinking mode can fill in a lot of missing detail, and you just need to edit a little before letting it code.
"Agentic" workflows burn through tokens like there's no tomorrow, and the new Opus model is so expensive per-token that the Max plan pays for itself in one or two days of moderate usage. When people report their Claude Code sessions costing $100+ per day, I read that as the API price equivalent - it makes no sense to actually "pay as you go" with Claude right now.
This is arguably the cheapest option on the market right now in terms of results per dollar, but only if you can afford the subscription itself. There's also a time/value component here: on Max x5, it's quite easy to hit the usage limits of Opus (fortunately the limit resets every 5 hours or so); Max x20 is only twice the price of Max x5 but gives you 4x more Opus; a better model means less time spent fighting with and cleaning up after the AI. It's expensive to be poor, unfortunately.
I've yet to use anything but copilot in vscode, which is 1/2 the time helpful, and 1/2 wasting my time. For me it's almost break-even, if I don't count the frustration it causes.
I've been reading all these AI-related comment sections and none of it is convincing me there is really anything better out there. AI seems like break-even at best, but usually it's just "fighting with and cleaning up after the AI", and I'm really not interested in doing any of that. I was a lot happier when I wasn't constantly being shown bad code that I need to read and decide about, when I'm perfectly capable of writing the code myself without the hassle of AI getting in my way.
AI burnout is probably already a thing, and I'm close to that point already. I do not have hope that it will get much better than it is, as the core of the tech is essentially just a guessing game.
So I vibe coded it. I was extremely specific about how the back end should operate and pretty vague about the UI, and basically everything worked.
But there were a few things about this one: first, it was just a prototype. I wanted to kick around some ideas quickly, and I didn't care at all about code quality. Second, I already knew exactly how to do the hard parts in the back end, so part of the prompt input was the architecture and mechanism that I wanted.
But it spat out that html app way way faster than I could have.
It is also BYOA or you can buy a subscription from Zed themselves and help them out. I currently use it with my free Copilot+ subscription (GitHub hands it out to pretty much any free/open source dev).
Since they announced that you can use the Pro subscription with Claude Code, I've been using it much more and I've never ever been rate limited.
The basic concept is out there.
Lots of smart people studying hard to catch up to also be poached. No shortage of those I assume.
Good training data still seems the most important to me.
(and lots of hardware)
Or does the specific training still involve lots of smart decisions all the time?
And those small or big decisions make all the difference?
We’d probably see more companies training their own models if it was cheaper, for sure. Maybe some of them would do very well. But even having a lot of money to throw at this doesn’t guarantee success, e.g. Meta’s Llama 4 was a big disappointment.
That said, it’s not impossible to catch up to close to state-of-the-art, as Deepseek showed.
But the truth is, having experience building models at this scale requires working in a high-level job at a major FAANG/LLM provider. Building what Meta needs is not something you can do in your basement.
The reality is the set of people who really understand this stuff and have experience working on it at scale is very, very small. And the people in this space are already paid very well.
The basic concept is out there: run very fast.
Lots of people running every day who could be poached. No shortage of those I assume.
Good running shoes still seem the most important to me.
2. Cost to train is also prohibitive. Grok's data centre has 200,000 H100 graphics cards. Impossible for a startup to compete with this.
It's funny to me, since xAI is literally the "youngest" in this space and recently made Grok 4, which surpasses all frontier models.
It literally is not impossible.
I assume startup here means the average one, that has a little bit less of funding and connections.
Money is a "less" important factor. I don't say it doesn't matter, but it matters much less than you would think.
xAI was just spun out to raise more money / fix the x finance issues.
It’s the difference between running a marathon (impressive) and winning a marathon (here’s a giant sponsorship check).
Coding startups also try to fine-tune OSS models to their own ends. But this is also very difficult, and usually just done as a cost optimization, not as a way to get better functionality.
- Anthropic doesn't use the inputs for training.
- Cursor doesn't have $900M ARR. That was the raise. Their ARR is ~$500m [1].
- Claude Code already supports the niceties, including "add selection to chat", accessing the IDE's realtime warnings and errors (built-in tool 'ideDiagnostics'), and using the IDE's native diff viewer for reviewing edits.
[1] https://techcrunch.com/2025/06/05/cursors-anysphere-nabs-9-9...
But the chat UX is so simple it doesn't take up any extra brain-cycles. It's easier to alt-tab to and from; it feels like slacking a coworker. I can have one or more terminal windows open with agents I'm managing, and still monitor/intervene in my editor as they work. Fits much nicer with my brain, and accelerates my flow instead of disrupting it
There's something starkly different for me about not having to think about exactly what context to feed to the tool, which text to highlight or tabs to open, which predefined agent to select, which IDE button to press
Just formulate my concepts and intent and then express those in words. If I need to be more precise in my words then I will be, but I stay in a concepts + words headspace. That's very important for conserving my own mental context window
Their base is $20/mth. That would equal 3.75M people paying a sub to Cursor.
If literally everyone is on their $200/mth plan, then that would be 375K paid users.
There’s 50M VS Code + VS users (May 2025). [1] 7% of all VS Code users having switched to Cursor does not match my personal circle of developers. 0.7% . . . Maybe? But, that would be if everyone using Cursor were paying $200/month.
Seems impossibly high, especially given the number of other AI subscription options as well.
[1] https://devblogs.microsoft.com/blog/celebrating-50-million-d...
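The division above, made explicit (treating the $900M as ARR is the comment's hypothetical, not Cursor's actual figure):

```python
arr = 900_000_000                # hypothetical annual recurring revenue, $
base_price, max_price = 20, 200  # $/month plan tiers

monthly_revenue = arr / 12
print(f"{monthly_revenue / base_price:,.0f} subs at $20/mo")   # 3,750,000
print(f"{monthly_revenue / max_price:,.0f} subs at $200/mo")   # 375,000
```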
Last disclosed revenue from Cursor was $500mil. https://www.bloomberg.com/news/articles/2025-06-05/anysphere...
I actually do prefer the view that having the agent built into an IDE brings me, but I'll be damned if I'm forced to use Copilot/OpenAI. Second to that, the agent does have access to a lot more contextual tools by being built into the editor, like focused linting errors and test failures. Of course, that demands your development environment is set up correctly, and it could be replicated with Claude Code to some extent.
And so I’d say this isn’t a harbinger of the death of Cursor, instead proof that there’s a future in the market they were just recently winning.
They either need to create their own model and compete on cost, or hope that token costs come down dramatically so as to be too cheap to meter.
My mental model is that these foundation model companies will need to invest in and win in a significant number of the app layer markets in order to realize enough revenue to drive returns. And if coding / agentic coding is one of the top X use cases for tokens at the app layer, seems logical that they'd want to be a winner in this market.
Is your view that these companies will be content to win at the model layer and be agnostic as to the app layer?
You may be right about “they need to invest in and win” in order to have __enough__ revenue to outcompete the nation-state sized competition, but this stuff is moving way too fast for anyone to know.
It’s a huge risk as Cursor can get acquired, just like what this news article is about.
The bigger issue is the advantage Anthropic, Google and OpenAI have in developing and deploying their own models. It wasn't that long ago that Cursor was reading 50 lines of code at a time to save on token costs. Anthropic just came out and yolo'd the context window because they could afford to, and it blew everything else away.
Cursor could release a cli tomorrow but it wouldn't help them compete when Anthropic and Google can always be multiples cheaper
I don’t think this is true at all. The reason CC is so good is that they’re very deliberate about what goes in the context. CC often spends ages reading 5 LOC snippets, but afterwards it only has relevant stuff in context.
Prompt: https://gist.github.com/transitive-bullshit/487c9cb52c75a970...
- AI is not good enough yet to abandon the traditional IDE experience if you're doing anything non-trivial. It's hard finding use cases for this right now.
- There's no moat here. There are already a dozen "Claude Code UI" OSS projects with similar basic functionality.
The auto-regressive nature of these things means that errors accumulate, and IDEs are better placed to give that observability to the human than a coding agent is. I can course-correct more easily in an IDE, with clear diffs and code navigation, than by following a terminal timeline.
CC has some integration with VS Code; it is not all or nothing.
I resisted moving from Roo in VS Code to CC for this reason, and then tried it for a day, and didn't go back.
I am genuinely curious if any Cursor or Windsurf users who have also tried Claude Code could speak to why they prefer the IDE-fork tools? I’ve only ever used Claude Code myself - what am I missing?
While Zed's model is not as good the UI is so much better IMO.
The story I've heard is that Cursor is making all their money on context management and prompting, to help smooth over the gap between "you know what I meant" and getting the underlying model to "know what you meant"
I haven't had as much experience with Claude or Claude Code to speak to those, but my colleagues speak of them highly
It's quite interesting how little the Cursor power users use tab. Majority of the posts are some insane number of agent edits and close to (or exactly) 0 tabs.
It's interesting when I see videos or Reddit posts about Cursor and people getting rate limited and being super angry. In my experience, tab is the number one feature, and I feel like most people using agent mode are probably overusing it on tasks that would honestly take less time to do myself, or using models way smarter than they need to be for the task at hand.
Many of my co-workers do the same. VS Code is vastly inferior when it comes to editing and actual IDE features, so it is a non-starter when you do programming yourself.
I once tried AI tab-complete on Zed and it was all right but breaks my flow. Either the AI does the editing or I do it but mixing both annoys me.
I haven't tried the Claude Code VS Code extension. Has anyone replaced Cursor with this setup?
Besides that, the IDE seems poorly designed - some navigation options are confusing and it makes way too many intrusive changes (ex: automatically finishing strings).
I've since gone back to VS Code - with Cline (with OpenRouter and super cheap Qwen Coder models, Windsurf FREE, Claude Code with $20 per month) and I get great mileage from all of them.
I honestly don't know how great that is, because it just reiterates what I was planning anyways, and I can't tell if it's just glazing, or it's just drawing the same general conclusions. Seriously though, it does a decent job, and you can discuss / ruminate over approaches.
I assume you can do all the same things in an editor. I'm just comfortable with a shell is all, and as a hardcore Vi user, I don't really want to use Visual Studio.
Occasionally they lose their connection to the terminal in VSCode, but I’ve got no other integration complaints.
And I really prefer the bring-your-own-key model as opposed to letting the IDE be my middleman.
I can do most of what I want with cline, and I've gone back from large changes to just small changes and been moving much quicker. Large refactors/changes start to deviate from what you actually want to accomplish unless you have written a dissertation, and even then they fail.
I find just referencing this file over and over works wonders and it respects items that were already checked off really well.
I can get a lot done really fast this way in small enough chunks so i know every bit of code and how it works (tweaking manually of course where needed).
But I can blow through some tickets way faster than before this way.
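A hypothetical sketch of the kind of checklist file being described (every name below is invented for illustration):

```markdown
# tasks.md - auth refactor

- [x] Extract `AuthProvider` interface
- [x] Port the login endpoint
- [ ] Port token refresh
- [ ] Delete the legacy session code
```

Pointing the agent back at this file each turn keeps the completed items stable while it works through the remaining ones.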
Not if you want custom UI. There are a lot of things you can do in extension land (continue, cline, roocode, kilocode, etc. are good examples) but there are some things you can't.
One thing I would have thought would be really cool to try is integrating it at the LSP level and using all that good stuff, but the people trying (I think there was a company from .il trying) either went closed or didn't release anything noteworthy...
I've been using Augment for over a year with IntelliJ, and never understood why my colleagues were all raving about Cursor and Windsurf. I gave Cursor a real try, but it wasn't any better, and the value proposition of having to adopt a dedicated IDE wasn't attractive to me.
A plugin to leverage your existing tools makes a lot more sense than an IDE. Or at least until/if AI agents get so smart that you don't need most of the IDE's functionality, which might change what kinds of tooling are needed when you're in the passenger seat rather than the driver's seat.
So an extension will never be able to compete with Copilot.
Does anyone have a comparison between this and OpenAI Codex? I find OpenAI's thing really good actually (vastly better workflow than Windsurf). Maybe I am missing out, however.
What are the UX improvements?
I was using the Pycharm plugin and didn’t notice any actual integration.
I had problems with PyCharm's terminal - not least of which was the default 5k-line scrollback, which, while easy to change, was the worst part of CC for me at first.
I finally jumped to using iterm and then using pycharm separately to do code review, visual git workflows, some run config etc.
But the actual value of PyCharm (and I've been a real booster of that IDE) has shrunk due to CC, and moving out of the built-in terminal is a threat to my usage of the product.
If the plugin offered some big value I might stick with it but I’m not sure what they could even do.
Plus, the recently launched OpenCode, an open-source CC, is gaining traction fast.
There was always very little moat in the model wrapper.
The main value of CC is the free tool built by people who understand all the internals of their own models.
VSCode & CoPilot now offer it.
Is it as good? Maybe not.
But they are really working hard over there at Copilot and seem to be catching up.
I get an Edu license for Copilot, so just ditched Cursor!
I truly do not understand people's affinity for a CLI interface for coding agents. Scriptability I understand, but surely we could agree that CC with Cursor's UX would be superior to CC's terminal alone, right? That's why CC is pushing IDE integration -- they're just not there yet.
I can't stand the UX, or VS Code's UX in general. I vastly prefer having CC open in a terminal alongside neovim. CC is fully capable of opening diffs in neovim or otherwise completely controlling neovim by talking to its socket.
They're likely artificially holding it back, either because it's a loss leader they want to use in a very specific way, or because they're planning the next big boom/launch (maybe with a new model to build hype?).
> - Tab completion model (Cursor's remaining moat)
What is that? I have Gemini Code Assist installed in VSCode and I'm getting tab completion. (yes, LLM based tab completion)
Which, as an aside, I find useful when it works, but also often extremely confusing to read. Say in C++ I type
int myVar = 123
The editor might show int myVar = 123;
And it's nearly impossible to tell that I didn't enter that `;`, so I move on to the next line instead of pressing tab, only to find the `;` wasn't really there. That's probably an easy example; literally, on 1 in 6 lines I type, I can't tell what is actually in the file and what is being suggested. Any tips? Maybe I just need to set some special background color for suggested text.
PS: that tiny example is not an example of great tab completion. A better one is when I start editing 1 of 10 similar lines: I edit the first one, and it sees the pattern and auto-does the other 9. It can also do the "type a comment and it fills in the code" thing. Just trying to be clear that I'm getting LLM tab completion without using Cursor.
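One thing that can help, assuming VS Code or a fork of it: inline suggestions render as "ghost text", and its colors are themeable in settings.json. The `editorGhostText.*` keys are real VS Code theme slots; the color values below are just examples:

```json
{
  "workbench.colorCustomizations": {
    "editorGhostText.foreground": "#5f9ea0",
    "editorGhostText.background": "#10253a",
    "editorGhostText.border": "#2f4f6f"
  }
}
```

Setting a distinct background makes not-yet-accepted text visually unambiguous.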
I get all AI or none, so it’s always obvious what’s happening.
Completions are OK, but I did not enjoy the feeling of both us having a hand on the wheel and trying to type at the same time.
My experience with this has been hair-pullingly frustrating
Cursor's @Docs is still unparalleled and no MCP server for documentation fetching even comes close. That is the only reason why I still use Cursor, sometimes I have esoteric packages that must be used in my code and other IDEs will simply hallucinate due to not having such a robust docs feature, if any, which is useless to me, and I believe Claude Code also falls into that bucket.
I strongly disagree. It will put the wrong doc snippets into context 99% of the time. If the docs are slightly long then forget it, it’ll be even worse.
I never use it because of this.
My local ollama + continue + Qwen 2.5 coder gives good tab completion with minimal latency; how much better is Cursor’s tab completion model?
I’m still wary of letting an LLM edit my code, so my local setup gives me sufficient assistance with tab completion and occasional chat.
I am thinking about a new setup as I write this: in Emacs, I explicitly choose a local Ollama model or a paid API like Gemini or OpenAI, so I should just make calling Perplexity Sonar APIs another manual choice. (Currently I only use Perplexity from Python scripts.)
If I owned a company, I would frequently evaluate privacy and security aspects of using commercial APIs. Using Ollama solves that.
During the evaluation at a previous job, we found that Windsurf was waaaay better than anything else. They were expensive (to train on our source code directly), but the solution they offered outperformed the others.
Claude Code - agentic/autonomous coding use cases.
Both have their own place in programming, though there are overlaps.
That said, the creator of Claude Code jumped to Cursor so they must see a there there.
A lot of devs are not superstar devs.
They don't want a terminal tool, or anything they have to configure.
An IDE you can just download that 'just works' has value. And there are companies that will pay.
gemini cli is very expensive.
https://blog.google/technology/developers/introducing-gemini...
And even the switching isn't smooth: for me, when the switch happens it just gets stuck sitting there, so I have to restart the CLI.
There are IDE integrations where you can run it in a terminal session while perusing the files through your IDE, but it's not powering any autocomplete there AFAIK.
I love CC, but letting it auto-write changes is, at best, a waste of time trying to find the bugs after they start compounding.
I currently have a Copilot subscription that includes 4.1 for free, plus Sonnet 4 and Gemini Pro 2.5 with monthly limits. Thinking of switching to CC.
I am curious to know which Claude Code subscription most people are using... ?
Trivial/easy stuff - let it make a PR at the end and review in GitHub. It rarely gets this stuff wrong IME or does anything stupid.
Moderately complex stuff - let it code away, review/test it in my IDE and make any changes myself and tell claude what I've changed (and get it to do a quick review of my code)
Complex stuff - watch it like a hawk as it is thinking and interrupt it constantly asking questions/telling it what to do, then review in my IDE.
agentic tool + anthropic subsidized pricing.
Second part is why it has "exploded"
- > curl -fsSL http://claude.ai/install.sh | bash
- > claude
- > OAuth to your Anthropic account
Done. Now you have a SOTA agentic AI with pretty forgiving usage limits up and running immediately. This is why it's capturing developer mindshare. The simplicity of getting up and going with it is a selling point.
I doubt MS has ever made $900M off of VS Code.
What a harsh time to work for an AI startup as a rank and file employee! I wonder how the founders justify going along with it inside their mind.
[0] Character.ai CEO Noam Shazeer Returns to Google https://news.ycombinator.com/item?id=41141112 - 11 months ago (87 comments)
Edit: Thank you @jonny_eh for the clarification. I can't imagine it feels awesome being a leftover but at least you vested out. "Take the money and leave" is still a bit raw when the founders and researchers are now getting the initial payout + generous Google RSU's.
Hopefully Windsurf employees are treated well here.
Note: I worked at Character until recently.
[1] https://www.theverge.com/2024/8/2/24212348/google-hires-char...
$2.4 billion.
You've reminded me of when I first watched Idiocracy in 2006. At the time, I delighted in its comedic, sophomoric, and seemingly ridiculous take on a possible trajectory for humanity. But now much of it is actually coming to pass. It's sad.
P.s. As a sidenote, apparently I love all of Mike Judge's productions, which also include Office Space and Beavis and Butthead.
Character.ai reached out to me for an opportunity, but they've already been carved up.
I think it's great that the rank and file got some of their equity cash-out (based on the other comment), but I imagine it isn't an attractive prospect as a start-up to join at this point.
I just ignored the recruiter. I can't imagine there would be a second liquidity event.
Source: I was in GDM when character was acquired.
Otherwise why not merge all of engineering into ElGoog?
Windsurf’s value didn’t go to $0 overnight. The company will continue and their equity is likely still worth a decent amount wherever the company ends up.
Obviously a disappointing outcome for the people who thought life changing money was right around the corner, but they didn’t lose everything.
Edit: the people downvoting this clearly can't read, I made the exact same point as jonny_eh.
High interest rates make VC funding more expensive and now bigtech can swoop in, poach all the necessary staff and deprive investors of an exit.
What is the point any more?
Were I a Windsurf investor, I'd be pissed right now and calling my lawyer.
the only reason he'd walk away is because he thinks other opportunities are higher EV. if he believes this, a) the investors investment is likely worth virtually 0 anyway and b) if it's not, removing a leader who doesn't want to be there probably increases P(success) for the company and further increases the value of the investment.
founder departure isn't good for the narrative, but it's a symptom of an investment going bad, not often a cause.
The moat is paper thin.
GitHub has open sourced copilot.
The open source community is working hard on their own projects.
No doubt Cursor is moving fast to create amazing innovations, but if the competition only focuses on thin wrappers they are not worth the billion dollar valuations.
I love watching this space as it is moving extremely fast.
And after that, AGI will be open source.
In the end, ownership of data and compute will be the things that define the victors.
marketing > market > product
Even with AGI in hand, there will still be competition between offerings based on externalities, inertia, battle-testedness, or authority. Maybe super-intelligence would change that calculus, but you'd still probably find opportunities beyond just letting your pool of agents vibe code it.
Same for physical services like laborers, miners, and cooks, even taxi/bus drivers, for 99%+ of the world. Automation immensely improved their efficiency, and Modern Times is the past for half the globe, but AGI isn't the main facilitator.
Replace all (most*) Silicon Valley -and cousins- similar "products" and services, perhaps yes !
Because they didn't do their jobs properly?
What happened?
i like cursor fine, but check out the forum/subreddit to see people talking like addicts, pissed their fix is getting more expensive
i think this aggressive reaction is more pronounced for non-programmers who are making things for the first time. they tasted a new power and they don't want it taken away.
Look no further than founders in the sports-betting space, like the FanDuel founders. They borrowed a bunch of money at huge valuations because of hype and ignored the fact that, despite the product being exciting and popular, the margins are under 5%. The FanDuel founders sold for four-hundred-something million and walked away with nothing. It's now a multibillion-dollar company, after the new owners realized the product was marketing, not the vig. These AI companies are shifting toward their "marketing" eras.
This is nothing new. I'm not sure if it's "anti-consumer" as much as it's just a risky play from a brand and customer happiness viewpoint. Because your prices can be forced up by your supplier, and your customers will be mad at you, not at your supplier.
I do also think it is on consumers - in some part - to go into it with eyes open and do their research.
Thankfully a product like Cursor is a monthly sub and not a big up-front investment so if you don't like - or can't afford - the new pricing, you can just stop paying.
I'm not an extreme user of Cursor. It has become an essential part of my workflow, but I'm probably in the lower-to-medium band of users. I know a lot of my friends were spending $XXX/month on extra usage with them, while I've never gone beyond 50% of the included premium credits.
After their changes I'm getting hit with throttling multiple times a day, which likely means that the same thing happens to almost every Cursor user. So that means one or more of:
- They are jacking up the prices, to squeeze out more profit, so it looks good in the VC game
- They had to jack up the prices, so that they aren't running at a loss anymore (that would be a bad indicator regarding profitability for the whole field)
- They are really incompetent about simulating/estimating the impact of their pricing decisions, which also isn't a good future indicator for their customers
Whilst profits aren't important, you also can't burn all your current capital, so if the burn rate gets too high you have to put up prices, which seems to be what Cursor is doing.
Will users feel that a $200 subscription is worth it or not?
IOW, the market will slowly but surely drive the labour rate for programming down to the cost of the cheapest coding agent.
So, sure, boasting about a 10x speedup on boilerplate makes for good metrics, but let's not delude ourselves that programmers are going to be paid enough to afford the $200/month coding agent in the future.
So how do I fight this as a programmer?
Because I have no interest in devaluing my skills with this crap, but unfortunately many others are all in!
The thesis is that once you’re paying $200 a month, you’re beholden and won’t pay and compare it with anything else.
Good thing for consumers who use AI coding tools is that there is no lock-in like in Photoshop or similar software where you hone your skills for years to use particular tool. Switching from Cursor to any other platform would literally take 10 minutes.
Seems harsh and cultish to assume malice. He didn't say your parents have false credentials.
I would say calling out people and institutions like that is important to keep them honest, and if they aren't honest and are trying to grift/defraud people, then they deserve the reputation loss.
> He is losing close to zero by blocking you, but preventing a potential big loss.
That's great for Gary, but the rest of the world isn't there waiting to be optimized for his benefit. If people trust YC to incubate good talent but feel it's becoming a hub for grifters, then some accountability is in order. Institutions are beholden to their public stakeholders, even private institutions, because they still have people using and supporting them.
Since Claude Code is cli based, I reviewed my cli toolset: Migrated from iTerm2 to Ghostty and Tmux, from Cursor to NeoVim (my God is it good!).
Just had a 14h workday with this tooling. It’s so good that I complete the work of weeks and months within days! Absolutely beast.
At this point I am thinking IDEs do not reflect the changing reality of software development. They are designed for navigating project folders, writing / changing files. But I don’t review files that much anymore. I rather write prompts, watch Claude Code create a plan, implement it, even write meaningful commit messages.
Yes I can navigate the project with neovim, yes I can make commits in git and in lazygit, but my task is best spent in designing, planning, prompting, reviewing and testing.
The reality at most companies out there is much simpler than the challenges of a startup that needs to build state-of-the-art systems that scale to millions of users, etc. There are companies out there that make millions in areas you've never heard of, and their core business does not depend on software development best practices.
In our company we have an IT team with a median age of fifty: team members who have never developed software, who just maintain systems and delegate hard work to expensive consultants.
Now, in that setting, someone coming from a startup background is like someone coming from the future. I feel like a wizard who can solve problems in days, instead of waiting weeks or months for a consultant to solve them.
The thing is that those don't typically take weeks and months to build with conventional tooling. And I find it hard to believe that all you're doing is this type of integration work. But I suppose there are companies that need such roles.
> There are companies out there that make millions, in areas you‘ve never heard of, and their core business does not depend on software development best practices.
That is true.
I do think that this cowboy coding approach is doing these companies a disservice, especially where tech is not their main product. It's only creating more operational risk that on-call and support staff have to deal with, and producing more technical debt that some poor soul will inevitably have to resolve one day. That is, it all appears to work until one edge case out of thousands brings down the entire system. Which could all be mitigated, if not avoided, by taking the time to understand the system and by following standard software development processes, even if it does take longer to implement.
What you describe isn't new. This approach has existed long before the current wave of AI tooling. But AI tools make the problem worse by making it easier to ship code quickly without following any software development practices that ensure the software is robust and reliable.
So, it's great that you're enjoying these tools. But I would suggest you adopt a more measured approach and work closely with those senior and junior engineers, instead of feeling like a wizard from the future.
It sounds like you are moving very fast and probably have people just clicking "approve".
Good luck for the future to whoever owns your company!
In that setting someone with solid software engineering background using AI to solve problems is like a wizard from the team‘s perspective.
When I worked for startups I was constantly panicking about missing the latest tech trends, fearing I wouldn't be marketable if I didn't catch up. But in mature companies things move much slower. They work with decades-old technology. In that setting it's not the latest tech that counts, but being able to solve problems with whatever means you can.
> Who's reviewing all the code you are churning out with ai?
Writing code is the most tedious part, not reviewing.
Personally I've had mixed experiences when letting Sonnet 3.7 document my (and its) code and write commit messages. For some WIP stuff it's alright, but it soon gets out of hand: because it doesn't have a direct view into my mind, it ends up documenting what it sees instead of the intention behind it, which is totally fair but eh.
So yeah, mileage varies and agentic tools usually spit out more code and redundant comments than I'd like to review. I'm still waiting for a company to develop some sanity check for this somehow but snapshot testing and manual review aren't enough sadly.
I build most if not all of my stuff for work, and I ain't sharing that.
It's no panacea, but is there something to be had there? Abso-fucking-lutely. All of this would have been complete scifi at the beginning of this decade.
But I am exceedingly tired of phrases like “complete the work of weeks and months within days”. If AI is making devs 5x to 10x faster then I’d like to see some actual results. Internet is full of hypesters that make bombastic claims of productivity but never actually shown anything they’ve made.
Just putting this here because a lot of the time AI coding gets dismissed as something that can't do actual work, i.e. generate revenue, when really making money as a solo dev is already pretty rare, and if you're working at a corp instead, you're not going to just post your company name when asked what you're using AI for.
It's irrefutable that AI tools can be used to create software that generates revenue. What's more difficult is using them to create something that brings actual value into the world.
Patio11 famously built, ran for a number of years (profitably) and then sold a "wrapper for a random number generator" (bingocardcreator.com)
Value is in the eye of the beholder, and only tangentially related to the technical complexity or ingenuity.
My point is that the perceived value of a service or product is directly related to its competitive advantage, product differentiation, and so on. When the service is made from the same cookie cutter template as all the others, the only value that can be extracted from it is by duping customers who don't know better.
There are entire industries flooded with cheap and poorly made crap from companies that change brand names every week. Code generation tools have now enabled such grifters to profit from software as well.
I'm only half joking.
There are lots of people that only use LLMs in whatever UI the model companies are providing. I have colleagues that will never venture outside the ChatGPT website, even though with some effort they could make their tooling richer by using the API and building some wrapper or UI for it.
Also its easy to criticize from the sidelines but, do you have products that you made by yourself that are used by hundreds of thousands of people? I have 5 such sites, 2 of which I named above
Good on you for learning how AI tools work, but there's no way for anyone to tell whether your backend is self-managed or not, and practically it doesn't really matter. I reckon your users would get better results from proprietary models that expose an API than self-hosted open source ones, but then your revenue would probably be lower.
> Also its easy to criticize from the sidelines but, do you have products that you made by yourself that are used by hundreds of thousands of people? I have 5 such sites, 2 of which I named above
That's a lazy defense considering anyone is free to criticize anyone else's work, especially if they're familiar with the industry. Just like food and film critics don't need to be chefs and movie producers.
But I'll give you credit for actually building and launching something that generates revenue. I admit that that is more than I have managed with my personal projects.
thanks for sharing.
I'd consider that a liability, not an asset, but they were pretty happy with it.
AI is often used to pump out sites and apps that scam users, SEO spam, etc. So there is definitely a revenue stream that makes scammers and grifters excited for AI. These tools have increased the scope and reach of their scams, and provide a huge boost to their productivity.
That's partly why I'm curious about OP's work. Nobody who's using these tools while following best software engineering practices would claim that they're making them that much more productive. Reviewing the generated code and fixing issues counteracts whatever time is saved by generating code. But if they're vibe coding and don't even look at the code...
Say no more.
You don't review the code? Just test it works?
Usually in 2-3 prompts I can get a Python or shell script that reads some file list somewhere, reads some JSON/CSV elsewhere, combines them in various ways, and spits out some output to be ingested by some other pipeline.
I just test this code; if it works, it's good.
Never in my life would I put this in a critical system though. When I review these files they are full of tiny errors that would blow up in spectacular manner if the input was slightly off somewhere.
It’s good for what it is. But I’m honestly afraid of production code being vibe coded by these tools.
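The kind of throwaway glue script described above might look like this. Everything here (paths, field names, the join logic) is hypothetical, just to show the shape of the typical 2-3-prompt output:

```python
# Hypothetical glue script: join a newline-separated file list with per-file
# metadata from a CSV, and emit JSON for the next pipeline stage.
import csv
import io
import json


def combine(file_list_text: str, csv_text: str) -> str:
    """Keep only listed files that appear in the CSV, attaching their size."""
    files = [line.strip() for line in file_list_text.splitlines() if line.strip()]
    meta = {row["path"]: row for row in csv.DictReader(io.StringIO(csv_text))}
    combined = [{"path": p, "size": int(meta[p]["size"])} for p in files if p in meta]
    return json.dumps(combined)


if __name__ == "__main__":
    file_list = "a.txt\nb.txt\nmissing.txt\n"
    csv_data = "path,size\na.txt,10\nb.txt,20\n"
    print(combine(file_list, csv_data))
```

Note this is exactly the kind of script where "tiny errors that blow up if the input is slightly off" hide: a missing CSV column or a non-numeric size field would raise an exception here rather than fail gracefully.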
they don't review files anymore though.
But relax, no one is taking your Emacs from you :) I still like it, but am not a disciple anymore ;)
and yet you're pulling 14 hour workdays..
(Prompt caches are another thing; leaving it for the night and resuming the next day will cost you a little extra on resume, if you're using models via API pay-as-you-go billing.)
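As a rough sketch of what pay-as-you-go billing with prompt caching looks like, here's a back-of-the-envelope calculator plugged with the usage numbers quoted up-thread. The per-million-token rates are illustrative assumptions, not Anthropic's actual price sheet; swap in the current prices before drawing conclusions:

```python
# Back-of-the-envelope API cost estimate. Cache reads are typically billed
# far cheaper than fresh input tokens, which is why the huge cache-read
# count doesn't dominate the bill as much as you'd expect.
# NOTE: these rates are made-up placeholders for illustration only.
RATES_PER_MTOK = {
    "input": 3.00,
    "output": 15.00,
    "cache_create": 3.75,
    "cache_read": 0.30,
}

# Token counts from the usage report quoted earlier in the thread.
usage = {
    "input": 916_134,
    "output": 11_106_507,
    "cache_create": 199_684_538,
    "cache_read": 2_767_614_506,
}


def estimate_cost(counts: dict) -> float:
    """Sum each token count weighted by its (assumed) per-million-token rate."""
    return sum(counts[k] / 1e6 * RATES_PER_MTOK[k] for k in counts)


print(f"estimated API cost: ${estimate_cost(usage):,.2f}")
```

At these assumed rates, two months of that usage would run well into four figures on pay-as-you-go, which is the comparison people make against a flat Max subscription.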
At least in the Scale case there seemed to be some form of payout to employees and equity holders, but this takes it a whole lot further by just throwing out all other employees.
There is supposed to be the concept that “all common stock is the same”. These fake-acquisitions completely undermine that.
Nice plan I guess. Kind of obvious to spot though.
They’re just not enforceable against “rank and file” employees.
You think the only people in a company that matter are a few founders? It’s ok to screw over everyone else?
Maybe there’s more to the story.
Gentle reminder that more startups die by suicide than homicide, and that an early-stage startup is a total crapshoot.
This really is a whole new level of getting screwed.
Because for most people, they will end up being worth exactly zero in value. Less if they went and exercised those options prior to a liquidity event that may never happen.
Founders get a big pay day and leave within a couple years while 100 employees share a 1% of the company between themselves.
You sweet summer child.
Were you under the impression that venture capital is anything more than rent-seeking?
Any time I hear someone talk about more or less regulation, instead of talking about better or worse regulation, I suspect they are ideologists and trying to shift the narrative, or else they would be able to criticise based on actual merit.
if it was better it would have survived.
Not sure how you can claim this when:
1. It is still very much alive, and
2. The whole point GP is making is that what made it better got stripped from it because of the acquisition announcement
There are some alternatives like continue.dev or JetBrains' own AI offering, but nothing like Cursor or Claude Code. You can get Sonnet 3.7/4 through a JetBrains plugin or others, but Anthropic does not provide first-party support; same with Cursor.
JetBrains' Junie is supposedly the same thing, but there's no Rider support, and that's my current project, so I haven't gotten into it yet.
Windsurf was just disappointingly bad in intellij (like any other plugin I've tried so far)
Never tried Windsurf in its recent form, but we did evaluate it back when it was still called Codeium, and everyone liked Copilot better.
I am still a paid subscriber, but most of my usage is Claude Code now because Windsurf does not have Sonnet 4 included in their plan.
...which no one talks about anymore. Okay I guess you have a point.
And AI applied to biomedicine arguably already delivered some acceleration.
AI growth has slowed to a crawl, and it's priced itself out versus the cost of compute.
NVIDIA feels a lot like SUN.
> amazing documentary
Been there, done that: 2001, Startup Dot Com https://www.youtube.com/watch?v=cP4PGjnZwJE
But in any case, I just can't see how AI code editors like Windsurf or Cursor, without any proprietary model, can be valued at billions. What's the underlying IP that justifies these valuations?
Not sure how the VCs get their cut. I'm guessing that Google can balance it out by participating in rounds for other startups in that VC's portfolio.
Cursor's Accept / Reject feature for each change it makes in each file is nice whereas I have to use a diff tool to review the changes in Claude Code.
Also, if I go down a prompt alley that's a dead end, Cursor has the Restore Checkpoint feature to get back to the original prompt and try a different path. With Claude Code, you had better have committed the code to git, otherwise you end up with a mess you didn't want.
My company pays for both, but I mostly use Cursor unless I know I am doing a new project or some proof of concept, which Claude Code might have an edge on with a more mature TODO list feature.
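A minimal DIY stand-in for Cursor's Restore Checkpoint when using Claude Code is just what the comment above suggests: commit before each prompt, review with a diff, and reset if the prompt dead-ends. A sketch of that loop, demonstrated on a throwaway repo (assumes `git` is on your PATH; the file contents are hypothetical):

```python
# Checkpoint-before-prompt workflow, demonstrated end to end on a temp repo:
# commit a checkpoint, let the "agent" edit files, inspect the diff, and
# restore the checkpoint if the direction was a dead end.
import subprocess
import tempfile
from pathlib import Path


def git(repo: Path, *args: str) -> str:
    """Run a git command inside `repo` and return its stdout."""
    return subprocess.run(
        ["git", "-C", str(repo), *args],
        check=True, capture_output=True, text=True,
    ).stdout


repo = Path(tempfile.mkdtemp())
git(repo, "init", "-q")
git(repo, "config", "user.email", "dev@example.com")
git(repo, "config", "user.name", "dev")

# Checkpoint before letting the agent loose.
(repo / "app.py").write_text("print('v1')\n")
git(repo, "add", "-A")
git(repo, "commit", "-qm", "checkpoint: before prompt")

# Simulate the agent rewriting the file during a prompt.
(repo / "app.py").write_text("print('v2, possibly a dead end')\n")

# Review the changes, then roll back to the checkpoint if unhappy.
diff = git(repo, "diff")
git(repo, "checkout", "--", ".")
restored = (repo / "app.py").read_text()
print(restored, end="")
```

The same idea works from the shell with a plain `git commit` before each prompt and `git checkout -- .` (or `git restore .`) to undo; the point is that the checkpoint is cheap insurance Claude Code doesn't provide out of the box.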
Why not an acquisition?
How did Google get Windsurf and investors to agree to this maneuver that decapitated the leadership and key talent, without a big exit event for everyone?
My read of the article: "Here's x% of what OpenAI offered you, you waive legal challenges while we cherry-pick your people and license the tech in their heads, and you can keep the company, and everyone left behind can promote themselves to fill the vacancies."
If the people instead just quit their jobs and start working at Google … nothing to see here.
However, while Cursor and GH Copilot improved, Windsurf went in the opposite direction. On each update, I started to get more and more issues. The agent often tried to run shell commands, and it hung up, or I found minor UI bugs. One day, I decided to give GH Copilot another chance, and I was surprised by how it evolved, to the point that it worked better than Windsurf for my usage. I don’t know what happened internally at Windsurf, but I notice the degradation as a user. If my case indicates what happened to other users, maybe OpenAI saw declining subscriptions and canceled the deal.
> Google hires Windsurf CEO Varun Mohan, others in $2.4 billion AI talent deal
https://www.cnbc.com/2025/07/11/google-windsurf-ceo-varun-mo...
> Mohan and the Windsurf employees will focus on agentic coding efforts at Google DeepMind and work largely on Gemini. Google will not have any control over nor a stake in Windsurf, but it will take a non-exclusive license to some of Windsurf’s technology.
Sounds to me like they're "hiring" them like one "hires" a consultant?
Arguably they have the strongest product moat, and I wouldn’t be surprised if they beat OpenAI in a vertical coding model from that. Easy for them to have users generate evals and have model product feedback loop here.
Zed tab is a lot worse in comparison (partly because it’s slow)
Intellisense has done this for ages. A decade at least.
I wonder what happened with the OpenAI deal. Anyone have any guesses? My first guess is "Look at Claude Code, we can do this ourselves." But, I am likely thinking too simply.
edit: does this mean that Windsurf and its users will stop being iced-out by Anthropic? Or, is this the end of Windsurf?
You must be new around here.
This week I have been using Claude Code and Windsurf side by side. I would make change with one, stash it, ask the other for similar change and then would diff it.
Overall Windsurf was pretty on a par with Claude code.
Claude Code I think misses this. You can get an enterprise account if you commit to over, what.. 70 seats annually?
If you’re an individual you can get Max 5x/20x ..
But for smaller companies, I don’t think they are addressing that space. Am I wrong? Are there any Agentic tools like Claude Code that can provide a fixed cost per user?
I'm a rank and file dev at a non-big tech company and I got a call from a Windsurf sales rep this week who I had connected with on LinkedIn the day before (I never gave them my number). They told me my company was in talks with Windsurf about a licensing deal but that they would give me a 30 day trial of an enterprise account for use on personal projects to let me try it in advance. I guess the idea for them is to build enthusiasm among devs in the company?
Is this a standard sales strategy for products like this? It seems pretty aggressive to me but I'm just an engineer so I wouldn't know.
Otherwise, normally with the amount of capital raised by Windsurf, the founders must have signed some kind of non-compete for the event of a bad-leaver (which this obviously is). Guess covering these penalties was just part of Google's deal, hm?
UI is also worse compared to Claude.
They still have some work to do if they want to compete with Claude TBH.
Dude, I saw a lot crazier things happen on a monthly basis. And don't even get me started on the personal lives and partying that the show didn't display.
They raised A, B, and C round (according to CrunchBase), and then the founders just walk away and get a job/deal at Google?
It is hard to say no when Google/Meta gives you, say, $100M upfront and hundreds more, if not a billion-plus, in RSUs. After 3 rounds it is not unreasonable to hold only 5-10%.
10% of a company worth a few billion that is burning a lot of cash and needs to keep raising more rounds (i.e. more dilution) may be worth less than RSUs from a multi-trillion-dollar, publicly traded, liquid tech company today.
It is also quite hard to raise $5-10+ billion in cash. Only a handful of startups have ever done so.
Very few funds/investors can afford such large rounds. This was SoftBank's thesis for most of the last decade: compete by simply outfunding competing products in a market.
And it was a crazy deal to begin with; for reference, JetBrains, who have been building IDEs for 24 years, are valued at $7 billion.
Windsurf phones home on every code edit you make and puts a ~30% load on your servers or your workstation, depending on what you're running.
I would strongly discourage the use of Windsurf on your systems.
Case in point: the AI model they just built.
The issue isn’t an acquisition not working out, it’s that the founding/exec team felt it appropriate to arrange their own exits and abandon their team before even communicating that their “successful exit” wasn’t actually happening.
Windsurf's value to OpenAI was for the latter to "see the whole chessboard" of context, which is helpful when you're training models to be good at coding.
But codex (and Claude Code) fulfill this from the CLI, and it's a first-party utility, not an acquisition.
The libertarian spin on this would be government should have never scrutinized acquisitions and the result is just worse for everyone.
The progressive spin would be to now ban acquihires somehow, and then whatever new legal invention will be created next. I can imagine the next step being, creating a consulting company out of your startup and then selling yourself as consultants to big techs. Now you are neither acquired nor technically acqui-hired and the whackamole continues.
At some point, we need to realize the solution is the culture of people involved. If the government could just ask to reduce acquisitions to make the ecosystem more competitive and companies tried following it in spirit to the best of their ability, we might have much better results than whatever we have now. When culture degrades, the govt can’t trust companies, the companies can’t trust the govt, everything just gets worse, regardless of what rules you write and enforce.
> The progressive spin would be to now ban acquihires somehow, and then whatever new legal invention will be created next.
Progressive has become a moving target, but the pro-competition view would be to break up the massively concentrated companies that are further consolidating markets. That's what the Khan FTC was trying to do, but we need a Congress interested in a competitive marketplace, which we haven't had in a while.
Nothing to do with regulators.
Imagine backing this startup and the founder team takes a parachute...
Given the release of Claude Code, it was already over for them.
I hope no one works for them again.
I commented on the OG thread something like "weird since MSFT owns VS Code" and got downvoted to oblivion.
Yet here we are, always right :).
Google is having a hard time acquiring Wiz for $32B, and if the deal is blocked they owe $3.2B to Wiz. So why risk it when you can just spend the money to hire the talent behind the product and spend a few months building out a new one?