337 points by jcuenod a day ago | 35 comments
  • johnfna day ago
    Impressive seeing Google notch up another ~25 ELO on lmarena, on top of the previous #1, which was also Gemini!

    That being said, I'm starting to doubt the leaderboards as an accurate representation of model ability. While I do think Gemini is a good model, having used both Gemini and Claude Opus 4 extensively in the last couple of weeks, I think Opus is in another league entirely. I've been dealing with a number of gnarly TypeScript issues, and after a bit Gemini would spin in circles or actually (I've never seen this before!) give up and say it can't do it. Opus solved the same problems with no sweat.

    I know that's a fairly isolated anecdote and not necessarily fully indicative of overall performance, but my experience with Gemini is that it would really want to kludge on code in order to make things work, whereas I found Opus would tend to find cleaner approaches to the problem. Additionally, Opus just seemed to have a greater imagination? Or perhaps it has been tailored to work better in agentic scenarios? I saw it do things like dump the DOM and inspect it for issues after a particular interaction by writing a one-off Playwright script, which I found particularly remarkable. My experience with Gemini is that it tries to solve bugs by reading the code really, really hard, which is naturally more limited.

    Again, I think Gemini is a great model, I'm very impressed with what Google has put out, and until 4.0 came out I would have said it was the best.

    • joshmlewisa day ago
      o3 is still my favorite over even Opus 4 in most cases. I've spent hundreds of dollars on AI code gen tools in the last month alone and my ranking is:

      1. o3 - it's just really damn good at nuance, getting to the core of the goal, and writing the closest thing to quality production-level code. The only negatives are its cutoff window and cost, especially with its love of tools. That's not usually a big deal for the Rails projects I work on, but sometimes it is.

      2. Opus 4 via Claude Code - also really good and is my daily driver because o3 is so expensive. I will often have Opus 4 come up with the plan and first pass and then let o3 critique and make a list of feedback to make it really good.

      3. Gemini 2.5 Pro - haven't tested this latest release, but this was my prior #2 before last week. Now I'd say it's tied with or slightly better than Sonnet 4. Depends on the situation.

      4. Sonnet 4 via Claude Code - it's not bad but needs a lot of coaching and oversight to produce really good code. It will definitely produce a lot of code if you just let it go do its thing, but you won't get quality, concise, thoughtful code without more specific prompting and revisions.

      I'm also extremely picky and a bit OCD with code quality and organization in projects down to little details with naming, reusability, etc. I accept only 33% of suggested code based on my Cursor stats from last month. I will often revert and go back to refine the prompt before accepting and going down a less than optimal path.

      • spaceman_2020a day ago
        I use o3 a lot for basic research and analysis. I also find the deep research tool really useful for even basic shopping research

        Like just today, it made a list of toys for my toddler that fit her developmental stage and play style. Would have taken me 1-2 hrs of browsing multiple websites otherwise

        • jml7820 hours ago
          Gemini deep research runs circles around OpenAI deep research. It goes way deeper and uses way more sources.
      • vendiddya day ago
        I find o3 to be the clearest thinker as well.

        If I'm working on a complex problem and want to go back and forth on software architecture, I like having o3 research prior art and have a back and forth on trade-offs.

        If o3 was faster and cheaper I'd use it a lot more.

        I'm curious what your workflows are !

      • monkpita day ago
        Have you used Cline with opus+sonnet? Do you have opinions about Claude code vs cline+api? Curious to hear your thoughts!
      • jonplackett21 hours ago
        How do you find o3 vs o4-mini?
        • joshmlewis17 hours ago
          For coding at least, I don't bother with anything less than the top thinking models. They do have their place for some tasks in agentic systems, but time is money and I don't want to waste time trying to corral less skilled models when there are more powerful ones available.
          • jonplackett11 hours ago
            I have the same logic but opposite conclusion - o3 just takes SO LONG to respond that I often just use o4-mini
      • pqdbra day ago
        How do you choose which model to use with Claude Code?
        • joshmlewisa day ago
          I have the Max $200 plan so I set it to Opus until it limits me to Sonnet 4 which has only happened in two out of a few dozen sessions so far. My rule of thumb in Cursor is it's worth paying for the Max reasoning models for pretty much every request unless it's stupid simple because it produces the best code each time without any funny business you get with cheaper models.
          • sunshineraga day ago
            You can use the max plan in cursor? I thought it didn’t support calls via api and only worked in Claude code?
            • symbolicAGI20 hours ago
              I launch Claude Code in VS Code (similar to Cursor): > claude

              Then I use the /login command that opens a browser window to log into Claude Max.

              You can confirm Claude Max billing going forward in VS Code/Claude Code: /cost

              "With your Claude Max subscription, no need to monitor cost — your subscription includes Claude Code usage"

        • jasonjmcghee5 hours ago
          In case you're asking for the literal command...

          /model

      • It's interesting you say that because o3, while being a considerable improvement over OpenAI's other models, still doesn't match the performance of Opus 4 and Gemini 2.5 Pro by a long shot for me.

        However, o3 resides in the ChatGPT app, which is still superior to the other chat apps in many ways; in particular, the internet search implementation works very well.

        • svachaleka day ago
          If you're coding through chat apps you're really behind the times. Try an agent IDE or plugin.
          • joshmlewisa day ago
            Yeah, exactly. For everyone who might not know, the chat apps add lots of complex system prompting to handle and shape personality, tone, general usability, etc. IDEs also do this (Claude Code is one of the closest things to a "bare" model you can get), but they are at least guiding its behavior to be really good at coding tasks. Another reason is the Agent feature that IDEs have had for a few months now, which gives the model the ability to search/read/edit files across your codebase (a rough sketch of that loop is below). You may not like the idea of this and it feels like losing control, but it's the future. After months of using it I've learned how to get it to do what I want, but I think a lot of people who try it once and stop get frustrated that it does something dumb and just assume it's not good. That's a practice and skill problem, not a model problem.
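
            For anyone curious what that agent loop actually looks like under the hood, here's a rough, simplified sketch (the tool set, model id, and helper names are illustrative, not what any particular IDE ships):

                import json
                from pathlib import Path
                from openai import OpenAI

                client = OpenAI()

                def read_file(path: str) -> str:
                    """Tool the model can call to pull a file into its context."""
                    return Path(path).read_text()

                TOOLS = [{
                    "type": "function",
                    "function": {
                        "name": "read_file",
                        "description": "Read a file from the repository",
                        "parameters": {
                            "type": "object",
                            "properties": {"path": {"type": "string"}},
                            "required": ["path"],
                        },
                    },
                }]

                def run_agent(task: str, max_steps: int = 10) -> str:
                    messages = [{"role": "user", "content": task}]
                    for _ in range(max_steps):
                        resp = client.chat.completions.create(
                            model="gpt-4.1",  # placeholder model id
                            messages=messages,
                            tools=TOOLS,
                        )
                        msg = resp.choices[0].message
                        if not msg.tool_calls:  # no tool requested: the model considers itself done
                            return msg.content
                        messages.append(msg)
                        for call in msg.tool_calls:
                            args = json.loads(call.function.arguments)
                            result = read_file(**args)
                            messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
                    return "Stopped: step limit reached"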
            • jona777than20 hours ago
              This has been my experience. It has been something I’ve had to settle into. After some reps, it is becoming more difficult to imagine going back to regular old non-assisted coding sessions that aren’t purely for hobby.

              Your model rankings are spot on. I’m hesitant to make the jump to top tier premium models as daily drivers, so I hang out with sonnet 4 and/or Gemini 2.5 pro for most of the day (max mode in Cursor). I don’t want to get used to premium quality coming that easy, for some reason. I completely align with the concise, thoughtful code being worth it though. I’m having to do that myself using tier 2 models. I still use o3 periodically for getting clarity of thought or troubleshooting gnarly bugs that Claude gets caught looping on.

              How would you compare Cursor to Claude Code? I’m yet to try the latter.

            • Workaccount2a day ago
              IDEs are intimidating to non-tech people.

              I'm surprised there isn't a VibeIDE yet that is purpose-built to make it possible for your grandmother to execute code output by an LLM.

              • dragonwritera day ago
                > I'm surprised there isn't a VibeIDE yet that is purpose-built to make it possible for your grandmother to execute code output by an LLM.

                The major LLM chat interfaces often have code execution built in, so there kind of is, it just doesn't look like what an SWE thinks of as an IDE.

              • joshmlewisa day ago
                I have not used them, but I feel like tools like Replit, Lovable, etc. are for that audience. I totally agree IDEs are intimidating for non-technical people though. Claude Code is pretty cool in that way, where it's one command to install and pretty easy to get started with.
          • joshvma day ago
            An important caveat here is yes, for coding. Apps are fine for coming up with one-liners, or doing other research. I haven't found the quality of IDE-based code to be significantly better than what ChatGPT would suggest, but it's very useful to ask questions when the model has access to the code and can prompt you to run tests which rely on local data (or even attached hardware). I really don't trust YOLO mode so I manually approve terminal calls.

            My impression (with Cursor) is that you need to practice some sort of LLM-first design to get the best out of it. Either vibe code your way from the start, or be brutal about limiting what changes the agent can make without your approval. It does force you to be very atomic about your requests, which isn't a bad thing, but writing a robust spec for the prompt is often slower than writing the code by hand and asking for a refactor. As soon as kipple, for lack of a better word, sneaks into the code, it's a reinforcing signal to the agent that it can add more.

            It's definitely worth paying the $20 and playing with a few different clients. The rabbit hole is pretty deep and there are still a ton of prompt engineering suggestions from the community. It encourages a lot of creative guardrails, like using pre-commit to provide negative feedback when the model does something silly like trying to write a 200-word commit message (a sketch of that kind of hook is below). I haven't tried JetBrains' agent yet (Junie), but that seems like it would be a good one to explore as well since it presumably integrates directly with the tooling.
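
            As a concrete example of that guardrail idea, here's a minimal sketch of a commit-msg hook (a plain git hook rather than the pre-commit framework; the word limit is an arbitrary illustration):

                #!/usr/bin/env python3
                # .git/hooks/commit-msg -- reject bloated commit messages so the agent gets
                # immediate negative feedback instead of silently committing a 200-word essay.
                import sys

                MAX_WORDS = 72

                def main() -> int:
                    msg = open(sys.argv[1]).read()  # git passes the path to the message file
                    words = len(msg.split())
                    if words > MAX_WORDS:
                        print(f"commit message is {words} words; keep it under {MAX_WORDS}", file=sys.stderr)
                        return 1  # non-zero exit aborts the commit
                    return 0

                if __name__ == "__main__":
                    sys.exit(main())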

          • baw-baga day ago
            I am really struggling with this. I tried Cline with both OpenAI and Claude, with very weird results: often burning through credits to get nowhere, or just running out of context. I just got Cursor to try, so I can't say anything about that yet.
            • joshmlewisa day ago
              It's a skill that takes some persistence and trial and error. Happy to chat with you about it if you want to send me an email.
              • Vetcha day ago
                There is skill to it but that's certainly not the only relevant variable involved. Other important factors are:

                Language: with less common languages, syntax errors rise, and a common failure mode is the syntax of a more common language bleeding through.

                Domain: more so than what humans deem complex, quality is controlled by how much code and documentation there is for a domain. Interestingly, in a less common subdomain it will often revert to a more common approach (for example, working on shaders for a game that takes place in a cylinder geometry requires a lot more hand-holding than on a plane). It's usually not that they can't do it, but that they require much more involved prompting to get the context appropriately set up, and then managing the drift back to default, more common patterns. Related are decisions with long-term consequences; LLMs are pretty weak at these. In humans this comes with experience, so it's rare and another instance of low coverage.

                Dates: related is reverting to obsolete API patterns.

                Complexity: while not as dominant as domain coverage, complexity does play a role, with the likelihood of error rising as it increases.

                This means if you're at the intersection of multiple of these (such as a low coverage problem in a functional language), agent mode will likely be too much of a waste for you. But interactive mode can still be highly productive.

              • baw-baga day ago
                I really appreciate that. I will see how I get on and may well give you a shout. Thank you!
          • PeterStuer9 hours ago
            Depends. For devops chat is quite nice as the exploration/understanding is key, not just writing out the configs.
          • I think this is debatable. But I've used Cursor and various extensions for VS Code. They're all fine (though Cursor can fuck all the way off for stealing the `code` shell integration from VS Code), but you don't _need_ an IDE, as Claude Code has shown us (currently my primary method of vibe coding).

            It's mostly about the cost though. Things are far more affordable in the various apps/subscriptions. Token-priced APIs can get very expensive very quickly.

            • hirako200021 hours ago
              We are trading tokens and mental health for time?

              I used Cursor well over a year ago. It gave me a headache; it was very immature. I used Cursor more recently: the headache intensity increased. It's not Cursor, it's the senseless loops of hoping for the LLM to spit out something somewhat correct. Revisiting the prompt. Trying to become an elite in language protocols because we need that machine to understand us.

              Leaving aside the headache and its side effects: it isn't clear we haven't already maxed out on productivity-tool efficiency. Autocomplete. Indexed and searchable docs on a second screen rather than having to turn the pages of some reference book. Etc, etc.

              I'm convinced at this stage that we've already traded too far. So far beyond the optimal balance that these aren't diminishing returns; they're absolute declines.

              Engineers need to spend more time thinking.

              I'm convinced that engineers, if they were to choose, would throw this thing out and make space for more drawing boards, would take a 5-minute Solitaire break every hour, or take a walk.

              For some reason the constant pressure to go faster eventually makes its mark.

              It feels right to see thousands of lines of code written up by this thing. It feels aligned with the inadequate way we've been measured.

              Anyway. It can get expensive and this is by design.

              • throwaway31415520 hours ago
                > We are trading tokens and mental health for time?

                I have bipolar disorder. This makes programming incredibly difficult for me at times. Almost all the recent improvements to code generation tooling have been a tremendous boon for me. Coding is now no longer this test of how frustrated I can get over the most trivial of tasks. I just ask for what I want precisely and treat responses like a GitHub PR where mistakes may occur. In general (and for the trivial tasks I'm describing) Claude Code will generate correct, good code (I inform it very precisely of the style I want, and tell it to use linters/type-checkers/formatters after making changes) on the first attempt. No corrections needed.

                tl;dr - It's been nothing but a boon for this particular mentally ill person.

        • jorvia day ago
          What's most annoying about Gemini 2.5 is that it is obnoxiously verbose compared to Opus 4, both in explaining the code it wrote and in the number of lines and comments it adds, to the point where the output is often 2-3x longer than Opus 4's.

          You can obviously alleviate this by asking it to be more concise but even then it bleeds through sometimes.

          • joshmlewisa day ago
            Yes, this is what I mean by conciseness with o3. If prompted well, it can produce extremely high-quality code that blows me away at times. I've also had several instances now where I gave it slightly wrong context and other models just butchered the solution, proposing dozens of lines for a fix which I could tell wasn't right; after reverting and asking o3, it immediately went searching for another file I hadn't included and fixed it in one line. That kind of, dare I say, independent thinking is worth a lot when dealing with complex codebases.
            • jorvi8 hours ago
              Personally I'm still of the opinion that current LLMs are more of a very advanced autocomplete.

              I have to think of the guy posting that he fed his entire project codebase to an AI, it refactored everything, modularizing it but still reducing the file count from 20 to 12. "It was glorious to see. Nothing worked of course, but glorious nonetheless".

              In the future I can certainly see it get better and better, especially because code is a hard science that reduces down to control flow logic which reduces down to math. It's a much more narrow problem space than, say, poetry or visuals.

        • joshmlewisa day ago
          What languages and IDE do you use it with? I use it in Cursor, mainly with Max reasoning on. I spent around $300 on token-based usage for o3 alone in May, while still only accepting around 33% of suggestions. I made a post on X about this the other day, but I expect that rejection rate will go down significantly by the end of this year at the rate things are going.
          • drawnwrena day ago
            Very strange. I find reasoning has very narrow usefulness for me. It's great to get a project in context or to get oriented in the conversation, but on long conversations I find reasoning starts to add way too much extraneous stuff and get distracted from the task at hand.

            I think my coding model ranking is something like Claude Code > Claude 4 raw > Gemini > big gap > o4-mini > o3

            • joshmlewisa day ago
              Claude Code isn't a model in itself. By default it routes some to Opus 4 or Sonnet 4 but mostly Sonnet 4 unless you explicitly set it.
          • I'm using it with Python, VS Code (not integrated with Claude, just basic Copilot), and Claude Code. For Gemini I'm using AI Studio with repomix to package my code into a single file. I copy files over manually in that workflow.

            All subscription based, not per token pricing. I'm currently using Claude Max. Can't see myself exhausting its usage at this rate but who knows.

      • VeejayRampaya day ago
        we need to stop it with the anecdotal evidence presented by one random dude
    • batrata day ago
      What I like about Gemini is the search function, which is very, very good compared to others. I was blown away when I asked it to compose an email to a company that was sending spam to our domain. It literally searched and found not only the abuse email of the hosting company but all the info about the domain and the host (MX servers, IP owners, datacenters, etc.). Also, if you want to convert a research paper into a podcast, it did it instantly for me and it's fun to listen to.
    • baqa day ago
      I've been giving the same tasks to Claude 4 and Gemini 2.5 this week, and Gemini provided correct solutions while Claude didn't. These weren't hard tasks either; they were e.g. comparing SQL queries before/after a rewrite - Gemini found legitimate issues where Claude said all was OK.
    • Szpadela day ago
      In my experience this highly depends on the case. For some cases Gemini crushed my problem, but on the next one it got stuck and couldn't figure out a simple bug.

      The same goes for o3 and Sonnet (I haven't tested 4.0 enough yet to have an opinion).

      I feel that we need better parallel evaluation support, where you could evaluate all the top models and decide which one provided the best solution.

    • varunneala day ago
      Have you tried o3 on those problems? I've found o3 to be much more impressive than Opus 4 for all of my use cases.
      • johnfna day ago
        To be honest, I haven't, because the "This model is extremely expensive" popup on Cursor makes me a bit anxious - but given the accolades here I'll have to give it a shot.
    • zamadatixa day ago
      I think the only way to be particularly impressed with new leading models lately is to hold the opinion all of the benchmarks are inaccurate and/or irrelevant and it's vibes/anecdotes where the model is really light years ahead. Otherwise you look at the numbers on e.g. lmarena and see it's claiming a ~16% preference win rate for gpt-3.5-turbo from November of 2023 over this new world-leading model from Google.
      • johnfna day ago
        Not sure I follow - Gemini has an Elo of 1470, GPT-3.5-turbo is at 1206, which works out to roughly an 82% win rate. https://chatgpt.com/share/6841f69d-b2ec-800c-9f8c-3e802ebbc0...
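
        For reference, assuming lmarena's ratings use the standard Elo expectation formula (base-10 logistic with a 400-point scale), the win rate falls straight out of the two ratings:

            def expected_win_rate(r_a: float, r_b: float) -> float:
                # Standard Elo expected score for player A against player B.
                return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

            print(round(expected_win_rate(1470, 1206), 2))  # ~0.82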
      • Workaccount2a day ago
        People can ask whatever they want on LMarena, so a question like "List some good snacks to bring to work" might elicit a win for a old/tiny/deprecated model simply because it lists the snack the user liked more.
        • AstroBena day ago
          are you saying that's a bad way to judge a model? Not sure why we'd want ones that choose bad snacks
    • tempusalariaa day ago
    I agree. I find Claude easily the best model, at least for programming, which is the only thing I use LLMs for.
    • lispisoka day ago
      >That being said, I'm starting to doubt the leaderboards as an accurate representation of model ability

      Goodhart's law applies here just like everywhere else. Much more so given how much money these companies are dumping into making these models.

    • cwbriscoea day ago
      I haven't tried all of the favorites, just what is available with Jetbrains AI, but I can say that Gemini 2.5 is very good with Go. I guess that makes sense in a way.
    • Alifatiska day ago
      > after a bit Gemini would spin in circles or actually (I've never seen this before!) give up and say it can't do it

      No way, is there any way to see the dialog or recreate this scenario!?

      • johnfna day ago
        The chat was in Cursor, so I don't know a way to provide a public link, but here is the last paragraph that it output before I (and it) gave up. I honestly could have re-prompted it from scratch and maybe it would have gotten it, but at this point I was pretty sure that even if it did, it was going to make a total mess of things. Note that it was iterating on a test failure and had spun through multiple attempts at this point:

        > Given the persistence of the error despite multiple attempts to refine the type definitions, I'm unable to fix this specific TypeScript error without a more profound change to the type structure or potentially a workaround that might compromise type safety or accuracy elsewhere. The current type definitions are already quite complex.

        The two prior paragraphs, in case you're curious:

        > I suspect the issue might be a fundamental limitation or bug in how TypeScript is resolving these highly recursive and conditional types when they are deeply nested. The type system might be "giving up" or defaulting to a less specific type ({ __raw: T }) prematurely.

        > Since the runtime logic seems to be correctly hydrating the nested objects (as the builder.build method recursively calls hydrateHelper), the problem is confined to the type system's ability to represent this.

        I found, as you can see in the first of the prior two paragraphs, that Gemini often wanted to claim that the issue was on TypeScript's side for some of these more complex issues. As proven by Opus, this simply wasn't the case.

    • AmazingTurtlea day ago
      For bulk data extraction on personal real-life data, I found that even gpt-4o-mini outperforms the latest Gemini models in both quality and cost. I would use reasoning models, but their JSON schema support is different from the non-reasoning models', as in: they cannot deal with union types for optional fields when using strict schemas (see the sketch below)... anyway.

      idk what's the hype about Gemini, it's really not that good imho
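
      To make the strict-schema problem concrete, here's a minimal sketch (hypothetical field names): strict structured-output modes typically require every property to be listed as required, so an optional field has to be expressed as a union with null, which is exactly where some reasoning-model endpoints choke:

          extraction_schema = {
              "type": "object",
              "properties": {
                  "name": {"type": "string"},
                  # "optional" field expressed as a union with null rather than being omitted
                  "middle_name": {"type": ["string", "null"]},
              },
              "required": ["name", "middle_name"],
              "additionalProperties": False,
          }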

    • I just realized that Opus 4 is the first model that produced "beautiful" code for me. Code that is simple, easy to read, not polluted with comments, no unnecessary crap, just pretty, clean and functional. I had my first "wow" moment with it in a while. That being said it occasionally does something absolutely stupid. Like completely dumb. And when I ask it "why did you do this stupid thing", it replies "oh yeah, you're right, this is super wrong, here is an actual working, smart solution" (proceeds to create brilliant code)

      I do not understand how those machines work.

      • diggana day ago
        > Code that is simple, easy to read, not polluted with comments, no unnecessary crap, just pretty, clean and functional

        I get that with most of the better models I've tried, although I'd probably personally favor OpenAI's models overall. I think a good system prompt is probably the best way there, rather than relying on some "innate" "clean code" behavior of specific models. This is a snippet of what I use today for coding guidelines: https://gist.github.com/victorb/1fe62fe7b80a64fc5b446f82d313...

        > That being said it occasionally does something absolutely stupid. Like completely dumb

        That's a bit tougher, but you have to carefully read through exactly what you said, and try to figure out what might have led it down the wrong path, or what you could have said in the first place for it to avoid that. Try to work it into your system prompt, then slowly build up your system prompt so every one-shot gets closer and closer to being perfect on the first try.

      • simon1ltda day ago
        I've also experienced the same, except it produced the same stupid code all over again. I usually use one model (doesn't matter which) until it starts chasing its tail, then I feed it to a different model to have it fix the mistakes made by the first model.
      • Tostinoa day ago
        My issue is that every time I've attempted to use Opus 4 to solve any problem, I would burn through my usage cap within a few minutes without having solved it, because it misunderstood things about the context and I hadn't gotten the prompt quite right yet.

        With Sonnet, at least I don't run out of usage before I actually get it to understand my problem scope.

    • tomr75a day ago
      How does it have access to the DOM? Are you using it with Cursor/browser MCP?
  • chollida1a day ago
    I'd start to worry about OpenAI, from a valuation standpoint. The company has some serious competition now and is arguably no longer the leader.

    It's going to be interesting to see how easily they can raise more money. Their valuation is already in the $300B range. How much larger can it get, given their relatively paltry revenue at the moment and increasingly rising costs for hardware and electricity?

    If the next generation of LLMs needs new data sources, then Facebook and Google seem well positioned there; OpenAI, on the other hand, seems like it's going to lose the race for proprietary data sets, since unlike those other two, they don't have another business that generates such data.

    When they were the leader in both research and in user facing applications they certainly deserved their lofty valuation.

    What is new money coming into OpenAI getting now?

    At even a $300B valuation, a typical Wall Street analyst would want to value them at 2x sales, which would mean they'd expect OpenAI to have $600B in annual sales to account for this valuation when they go public.

    Or at an extremely lofty P/E ratio of, say, 100, that would be $3B in annual earnings, which analysts would have to expect to double each year for the next 10-ish years, a la AMZN in the 2000s, to justify this valuation.

    They seem to have boxed themselves into a corner where it will be painful to go public, assuming they can ever figure out the nonprofit/profit issue their company has.

    Congrats to Google here, they have done great work and look like they'll be one of the biggest winners of the AI race.

    • jstummbilliga day ago
      There is some serious confusion about the strength of OpenAI's position.

      "chatgpt" is a verb. People have no idea what claude or gemini are, and they will not be interested in it, unless something absolutely fantastic happens. Being a little better will do absolutely nothing to convince normal people to change product (the little moat that ChatGPT has simply by virtue of chat history is probably enough from a convenience standpoint, add memories and no super obvious path to export/import either and you are done here).

      All that OpenAI would have to do, to easily be worth their valuation eventually, is to optimize and not become offensively bad to their, what, 500 million active users. And, if we assume the current paradigm that everyone is working with is here to stay, why would they? Instead of leading (as they have done so far, for the most part), they can at any point simply do what others have resorted to successfully and copy with a slight delay. People won't care.

      • aeyesa day ago
        Google has a text input box on google.com, as soon as this gives similar responses there is no need for the average user to use ChatGPT anymore.

        I already see lots of normal people share screenshots of the AI Overview responses.

        • jstummbilliga day ago
          You are skipping over the part where you need to bring normal people, especially young normal people, back to google.com for them to see anything at all on google.com. Hundreds of millions of them don't go there anymore.
          • HDThoreaun19 hours ago
            Is there evidence of this? Google's earnings are as strong as ever.
        • paxys19 hours ago
          > as soon as this gives similar responses

          And when is that going to be? Google clearly has the ability to convert google.com into a ChatGPT clone today if they wanted to. They already have a state of the art model. They have a dozen different AI assistants that no one uses. They have a pointless AI summary on top of search results that returns garbage data 99% of the time. It's been 3+ years and it is clear now that the company is simply too scared to rock the boat and disrupt its search revenue. There is zero appetite for risk, and soon it'll be too late to act.

        • askafrienda day ago
          As the other poster mentioned, young people are not going there. What happens when they grow up?
      • candiddevmikea day ago
        ChatGPT is going to be Kleenex'd. They wasted their first mover advantage. Replace ChatGPT's interface with any other LLM and most users won't be able to tell the difference.
      • ComplexSystems20 hours ago
        "People have no idea what claude or gemini are"

        One well-placed ad campaign could easily change all that. Doesn't hurt that Google can bundle Gemini into Android.

        • jstummbillig8 hours ago
          If it were that simple to sway markets through marketing, we would see Pepsi/Coca-Cola or McDonalds/BurgerKing swing like crazy all the time from "one well-placed ad campaign" to the next. We do not.
      • chollida119 hours ago
        Chatgpt has no moat of any kind though.

        I can switch tomorrow to use gemini or grok or any other llm, and I have, with zero switching cost.

        That means one stumble on the next foundational model and their market share drops in half in like 2 months.

        Now the same is true for the other llms as well.

      • potatolicious21 hours ago
        I think this pretty substantially overstates ChatGPT's stickiness. Just because something is widely (if not universally) known doesn't mean it's universally used, or that such usage is sticky.

        For example, I had occasion to chat with a relative who's still in high school recently, and was curious what the situation was in their classrooms re: AI.

        tl;dr: LLM use is basically universal, but ChatGPT is not the favored tool. The favored tools are LLMs/apps specifically marketed as study/homework aids.

        It seems like the market is fine with seeking specific LLMs for specific kinds of tasks, as opposed to some omni-LLM one-stop shop that does everything. The market has already, and rapidly, moved beyond ChatGPT.

        Not to mention I am willing to bet that Gemini has radically more usage than OpenAI's models simply by virtue of being plugged into Google Search. There are distribution effects, I just don't think OpenAI has the strongest position!

        I think OpenAI has some first-mover advantage, I just don't think it's anywhere near as durable (nor as large) as you're making it out to be.

    • PantaloonFlames21 hours ago
      > At even a $300B valuation a typical wall street analysts would want to value them at 2x sales which would mean they'd expect OpenAI to have $600B in annual sales to account for this valuation when they go public.

      Oops, I think you may have flipped the numerator and the denominator there, if I'm understanding you. A valuation of 300B, if 2x sales, would imply 150B in sales.

      Probably your point still stands.

    • jadboxa day ago
      Currently I only find OpenAI to be clearly better for image generation: like illustrations, comics, or photo editing for home project ideation.
      • bufferoverflow18 hours ago
        And open-source Flux.1 Kontext is already better than it.
    • energy123a day ago
      Even if they're winning the AI race, their search business is still going to be cannibalized, and it's unclear if they'll be able to extract any economic rents from AI thanks to market competition. Of course they have no choice but to compete, but they probably would have preferred the pre-AI status quo of unquestioned monopoly and eyeballs on ads.
      • xmprta day ago
        Historically, every company has failed by not adapting to new technologies and trying to protect their core business (eg. Kodak, Blockbuster, Blackberry, Intel, etc). I applaud Google for going against their instincts and actively trying to disrupt their cash cow in order to gain an advantage in the AI race.
    • orionsbelta day ago
      I think it’s too early to say they are not the leader given they have o3 pro and GPT 5 coming out within the next month or two. Only if those are not impressive would I start to consider that they have lost their edge.

      Although it does feel likely that at minimum, they are neck and neck with Google and others.

    • sebzim4500a day ago
      >At even a $300B valuation a typical wall street analysts would want to value them at 2x sales which would mean they'd expect OpenAI to have $600B in annual sales to account for this valuation when they go public.

      What? Apple has a revenue of 400B and a market cap of 3T

    • Rudybegaa day ago
      I think OpenAI has projected 12.7B in revenue this year and 29.4B in 2026.

      Edit: I am dumb, ignore the second half of my post.

      • eamaga day ago
        isn't P/E about earnings, not revenue?
        • Rudybegaa day ago
          You are correct. I need some coffee.
    • raincolea day ago
      > At even a $300B valuation a typical wall street analysts would want to value them at 2x sales which would mean they'd expect OpenAI to have $600B in annual sales to account for this valuation when they go public.

      Even Google doesn't have $600B revenue. Sorry, it sounds like numbers pulled from someone's rear.

    • ketzoa day ago
      OpenAI has already forecast $12B in revenue by the end of this year.

      I agree that Google is well-positioned, but the mindshare/product advantage OpenAI has gives them a stupendous amount of leeway

      • Workaccount2a day ago
        The hurdle for OpenAI is going to be on the profit side. Google has its own hardware acceleration and its own data centers. OpenAI has to pay a monopolist for hardware acceleration and is beholden to another tech giant for data centers. Never mind that Google can customize its hardware specifically for its models.

        The only way for OpenAI to really get ahead on solid ground is to discover some sort of absolute game changer (new architecture, new algorithm) and manage to keep it bottled away.

        • geodela day ago
          OpenAI has now partnered with Jony Ive, and they are going to have the thinnest data centers with the thinnest servers mounted on the thinnest racks. And since everything is so thin, servers can just whisper to each other instead of communicating via fat cables.

          I think that will be the game changer OpenAI will show us soon.

          • gotoeleven3 hours ago
            Yep and I heard the servers will only have two USB-C ports for all I/O, but of course dongles will be available.
          • falloona day ago
            All servers will have a single thunderbolt port.
        • diggana day ago
          > OpenAI has to pay a monopolist for hardware acceleration and beholden to another tech giant for data centers.

          Don't they have a data center in progress as we speak? Seems by now they're planning on building not just one huge data center in Texas, but more in other countries too.

          • geodel16 hours ago
            Well, that data center is just going to be full of Nvidia GPUs, hence the "pay a monopolist" part.
            • diggan9 hours ago
              Guess the part I put in quotes should have had an "or" instead of an "and" there.
      • chollida1a day ago
        Agreed, it's the doubling of that each year for the next 4-5 years that I see as being difficult.
      • VeejayRampaya day ago
        the leeway comes from the grotesque fanboyism the company benefits from

        they haven't been number one for quite some time and still people can't stop presenting them as the leaders

        • ketzoa day ago
          People said much the same thing about Apple for decades, and they’re a $3T company; not a bad thing to have fans.

          Plus, it’s a consumer product; it doesn’t matter if people are “presenting them as leaders”, it matters if hundreds of millions of totally average people will open their computers and use the product. OpenAI has that.

          • aryehof14 hours ago
            Actually, their speculative value is about 3 trillion. Their book value is around 68 billion. Their speculative value might be halved (or more) overnight based on the whim of the economy, markets and opinion. A company isn't actually worth its speculative value.
    • qeternitya day ago
      > At even a $300B valuation a typical wall street analysts would want to value them at 2x sales which would mean they'd expect OpenAI to have $600B in annual sales to account for this valuation when they go public.

      Lmfao, where did you get this from? Microsoft has less than half that revenue and is valued at more than 10x OpenAI.

      Revenue is not the metric by which these companies are valued...

      • Yizahi7 hours ago
        The difference between Microsoft and OAI is that Microsoft can spend a lump sum of money on Excel, and a fraction of that on its support, and then sell it infinitely with almost no additional costs. MS can add a million new Excel users tomorrow and that would be almost pure profit. (I'm simplifying a lot.)

        OAI, on the other hand, must spend a lot of additional money for every single new user, both free and paid. Adding a million new OAI users tomorrow would mean a gigantic red hole in the profits (adding to the existing losses). OAI has little or no benefit of scale, unlike other industries.

        I have no knowledge of corporate valuations, but I strongly suspect that OAI's valuation needs to account for this issue.

    • Oleksa_dra day ago
      I was tempted by the ratings and immediately paid for a subscription to Gemini 2.5. Half an hour later, I canceled the subscription and got a refund. This is the laziest and stupidest LLM. What it was supposed to do, it told me to do on my own. And when analyzing simple short documents, it pulled up completely unrelated documents from the Internet. Even local LLMs (3B) were not so stupid and lazy.
      • sigmoid1021 hours ago
        Exactly my experience as well. I don't get why people here now seem to blindly take every new gamed benchmark as some harbinger of OpenAI's imminent downfall. Google is still way behind in day-to day personal and professional use for me.
  • vthallama day ago
    As if 3 different preview versions of the same model are not confusing enough, the last two dates are 05-06 and 06-05. They could have held off for a day :)
    • tomComba day ago
      Since those days are ambiguous anyway, they would have had to hold off until the 13th.

      In Canada, a third of the dates we see are British, and another third are American, so it’s really confusing. Thankfully y-m-d is now a legal format and seems to be gaining ground.

      • layer8a day ago
        > they would have had to hold off until the 13th.

        06-06 is unambiguously after 05-06 regardless of date format.

        • Sammi10 hours ago
          The problem is that I mentally just panic and abort without even trying when I see 06-06 and 05-06. The ambiguity just flips my brain off.
    • dist-epocha day ago
      > the last two dates are 05-06 and 06-05

      they are clearly trolling OpenAI's 4o and o4 models.

      • oezia day ago
        Don't repeat the same mistake if you want to troll somebody.

        It makes you look even more stupid.

      • fragmedea day ago
        ChatGPT itself suggests better names than that!
    • UncleOxidanta day ago
      At what point will they move from Gemini 2.5 pro to Gemini 2.6 pro? I'd guess Gemini 3 will be a larger model.
    • a day ago
      undefined
    • Engineers are surprisingly bad at naming things!
      • jacob019a day ago
        I rather like date codes as versions.
  • wiradikusumaa day ago
    I have two issues with Gemini that I don't experience with Claude: 1. it RENAMES VARIABLES even in places where I don't tell it to make changes (I pass them just as context), and 2. sometimes it's missing closing square brackets.

    Sure I'm a lazy bum, I call the variable "json" instead of "jsonStringForX", but it's contextual (within a closure or function), and I appreciate the feedback, but it makes reviewing the changes difficult (too much noise).

    • xtractoa day ago
      I have a very clear example of Gemini getting it wrong:

      For code like this, it keeps changing processing_class=tokenizer to tokenizer=tokenizer, even though the parameter was renamed, and even after adding the all-caps comment.

          # Set up the SFTTrainer
          print("Setting up SFTTrainer...")
          trainer = SFTTrainer(
              model=model,
              train_dataset=train_dataset,
              args=sft_config,
              processing_class=tokenizer,  # DO NOT CHANGE. THIS IS NOW THE CORRECT PROPERTY NAME
          )
          print("SFTTrainer ready.")
      
      I haven't tried with this latest version, but the 05-06 pro still did it wrong.
      • diggana day ago
        Do you have an instruction in the system prompt to not edit lines that have comments saying not to edit them? I've had that happen to me too, where code comments were ignored, and adding instructions about actually following code comments helped with that. But different models, so YMMV.
    • AaronAPU21 hours ago
      I find o1-pro, which nobody ever mentions, to be in the top spot along with Gemini. But Gemini is an absolute mess to work with because it constantly adds tons of comments and changes unrelated code.

      It is worth it sometimes, but usually I use it to explore ideas and then have o1-pro spit out a perfect solution ready to diff, test, and merge.

    • danielblna day ago
      Gemini loves to add idiotic non-functional inline comments.

      "# Added this function" "# Changed this to fix the issue"

      No, I know, I was there! This is what commit messages are for, not comments that are only relevant in one PR.

      • macNchza day ago
        I love when I ask it to remove things and it doesn't want to truly let go, so it leaves a comment instead:

           # Removed iterMod variable here because it is no longer needed.
        
        It's like it spent too much time hanging out with an engineer who doesn't trust version control and prefers to just comment everything out.

        Still enjoying Gemini 2.5 Pro more than Claude Sonnet these days, though, purely on vibes.

      • oezia day ago
        And it sure loves removing your carefully inserted comments for human readers.
        • sweetjuly21 hours ago
          It feels like I'm negotiating with a toddler. If I say nothing, it adds useless comments everywhere. If I tell it not to add comments, it deletes all of my comments. Tell it to put the comments back, and it still throws away half of my comments and rewrites the rest in a less precise way.
      • Workaccount2a day ago
        I think it is likely that the comments are more for the model than for the user. I would not be even slightly surprised if verbose coding versions outperformed light commenting versions.
        • xmprta day ago
          On the other hand, I'm skeptical that it has any impact, because these models have thinking tokens where they can put all those comments, and attention shouldn't care about how close the tokens are as long as they're within the context window.
          • vikramkr6 hours ago
            The excessive comments might help the model when it's called again to re-edit the code in the future - I wouldn't be surprised if it was optimized for vibe coding, with the redundant comments reinforcing the function/intent of a line when it's being modified down the line.
      • PantaloonFlames21 hours ago
        Have you tried modifying the system instructions to get it to stop doing that?
    • 93poa day ago
      I've noticed that ChatGPT will 100% ignore certain instructions, and I wonder if it's just an LLM thing. For example, I can scream and yell in caps at ChatGPT to not use em or en dashes and if anything it makes it use them even more. I've literally never once made it successfully not use them, even when it ignored it the first time and my follow-up is "output the same thing again but NO EM or EN DASHES!"

      I've not tested this thoroughly; it's just my anecdotal experience over like a dozen attempts.

      • creescha day ago
        There are some things so ubiquitous in the training data that it is really difficult to tell models not to do them, simply because they are so ingrained in their core training. Em dashes are apparently one of those things.

        It's something I read a little while ago in a larger article, but I can't remember which article it was.

      • tacotimea day ago
        I wonder if using the character itself in the directions, instead of the name for the character, might help with this.

        Something like, "Forbidden character list: [—, –]" or "Do NOT use the characters '—' or '–' in any of your output"
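
        If prompting alone keeps failing, a blunt deterministic fallback is to just post-process the output (a minimal sketch, not something from this thread):

            def strip_dashes(text: str) -> str:
                # Replace em dashes (U+2014) and en dashes (U+2013) with plain hyphens.
                return text.replace("\u2014", "-").replace("\u2013", "-")

            print(strip_dashes("A thought\u2014interrupted \u2013 twice."))  # "A thought-interrupted - twice."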

      • EnPissant12 hours ago
        I have had a 95% success rate telling it not to use em dashes or semicolons.
  • hu3a day ago
    I pay for both ChatGPT Plus and Gemini Pro.

    I'm thinking of cancelling my ChatGPT subscription because I keep hitting rate limits.

    Meanwhile I have yet to hit any rate limit with Gemini/AI Studio.

    • HenriNexta day ago
      AI Studio uses your API account behind the scenes, and it is subject to normal API limits. When you sign up for AI Studio, it creates a Google Cloud free-tier project with a "gen-lang-client-" prefix behind the scenes. You can link a billing account at the bottom of the "get an API key" page.

      Also note that AI Studio via default free-tier API access doesn't seem to fall within "commercial use" in Google's terms of service, which would mean that your prompts can be reviewed by humans and used for training. All info AFAIK.

      • sysoleg11 hours ago
        > AI Studio uses your API account behind the scenes

        This is not true for the Gemini 2.5 Pro Preview model, at least. Although this model API is not available on the Free Tier [1], you can still use it on AI Studio.

        [1] https://ai.google.dev/gemini-api/docs/pricing

      • PantaloonFlames21 hours ago
        > AI studio via default free tier API access doesn't seem to fall within "commercial use" in Google's terms of service, which would mean that your prompts can be reviewed by humans and used for training. All info AFAIK.

        Seconded.

    • oofbaroomfa day ago
      I think AI Studio uses the API, so rate limits are extremely high and almost impossible for a normal human to reach if using the paid preview model.
      • staticman2a day ago
        As far as I know, AI Studio is always free, even on paid accounts, and you can definitely hit the rate limit.
    • Squarexa day ago
      I much prefer Gemini over ChatGPT, but they recently introduced a limit of 100 messages a day on the Pro plan :( AI Studio is probably still fine.
      • MisterPeaa day ago
        I've heard it's only on mobile? I was using Gemini on desktop for work for at least 6 hours yesterday (definitely over 100 back-and-forths) and did not get hit with any rate limits.

        Either way, Google's transparency with this is very poor - I saw the limits from a VP's tweet

    • fermentationa day ago
      Is there a reason not to just use the API through openrouter or something?
  • abraxasa day ago
    I found all the previous Gemini models somewhat inferior even compared to Claude 3.7 Sonnet (and much worse than 4) as my coding assistants. I'm keeping an open mind but also not rushing to try this one until some evaluations roll in. I'm actually baffled that the internet at large seems to be very pumped about Gemini but it's not reflective of my personal experience. Not to be that tinfoil hat guy but I smell at least a bit of astroturf activity around Gemini.
    • veralla day ago
      I think it's just very dependent on what you're doing. Claude 3.5/3.7 Sonnet (thinking or not) were just absolutely terrible at almost anything I asked of it (C/C++/Make/CMake). Like constantly giving wrong facts, generating code that could never work, hallucinating syntax and APIs, thinking about something then concluding the opposite, etc. Gemini 2.5-pro and o3 (even old o1-preview, o1-mini) were miles better. I haven't used Claude 4 yet.

      But everyone is using them for different things and it doesn't always generalize. Maybe Claude was great at typescript or ruby or something else I don't do. But for some of us, it definitely was not astroturf for Gemini. My whole team was talking about how much better it was.

    • bachmeiera day ago
      > I'm actually baffled that the internet at large seems to be very pumped about Gemini but it's not reflective of my personal experience. Not to be that tinfoil hat guy but I smell at least a bit of astroturf activity around Gemini.

      I haven't used Claude, but Gemini has always returned better answers to general questions relative to ChatGPT or Copilot. My impression, which could be wrong, is that Gemini is better in situations that are a substitute for search. How do I do this on the command line, tell me about this product, etc. all give better results, sometimes much better, on Gemini.

      • praveer13a day ago
        I've honestly had consistently the opposite experience for general questions. Also, for images, Gemini just hallucinates crazily. ChatGPT, even on the free tier, gives perfectly correct answers, and I'm on Gemini Pro. I canceled it yesterday because of this.
      • dist-epocha day ago
        You should try Grok then. It's by far the best when searching is required, especially if you enable DeepSearch.
        • Take8435a day ago
          I don't really want to use the X platform. What's the best alternative? Claude?
        • morgannewmana day ago
          [dead]
    • strobea day ago
      I'm switching a lot between Sonnet and Gemini in Aider - for some reason, only one of the models is capable of solving some of my coding problems, and I don't see any pattern that would tell me upfront which one I should use for a specific need.
    • 3abitona day ago
      > I found all the previous Gemini models somewhat inferior even compared to Claude 3.7 Sonnet (and much worse than 4) as my coding assistants.

      What are your use cases? That's really not my experience: Claude disappoints in data science and complex ETL requests in Python. o3, on the other hand, really is phenomenal.

      • abraxasa day ago
        Backend Python code, Postgres database. Front end: React/Next.js. A very common stack in 2025. Using LLMs in assist mode (not as agents) for enhancing an existing code base that weighs in at under 1MM LoC. So not a greenfield project anymore, but not a huge amount of legacy cruft either.
        • 3abiton6 hours ago
          I still have the Claude subscription, so I will take a look again and see.
    • Fergusonba day ago
      I think they are fairly interchangeable. In Roo Code, Claude uses the tools better, but I prefer Gemini's coding style and brevity (except for comments - it loves to write comments). Sometimes I mix and match if one fails or pursues a path I don't like.
    • vikramkr6 hours ago
      I mean, they're cheaper models, they aren't as much of a pain about rate limiting as Claude was, and they have a pretty solid deep research without restrictive usage limits. IDK how it is for long-running agentic stuff - I'd be surprised if it was anywhere near the other models - but for a general ChatGPT competitor it doesn't matter if it's not as good as Opus 4 if it's way cheaper and won't use up your usage limit.
    • nprateem15 hours ago
      Gemini sucks for its stupid comment verbosity like others have mentioned but wins on price to value.
    • tiahuraa day ago
      As a lawyer, Claude 4 is the best writer, and usually, but not always, the leader in legal reasoning. That said, o3 often grinds out the best response, and Gemini seems to be the most exhaustive researcher.
    • My experience has been that Gemini's code (and even its conversation) is a little bit uglier in general - but that the code tends to solve the issue you asked about with fewer hallucinations.

      I can't speak to it now - have mostly been using Claude Code w/ Opus 4 recently.

  • unpwna day ago
    I feel like instead of constantly releasing these preview versions with different dates attached they should just add a patch version and bump that.
    • impulser_a day ago
      They can't because if someone has built something around that version they don't want to replace that model with a new model that could provide different results.
      • jfostera day ago
        In what way are dates better than integers at preventing that kind of mistake?
      • dist-epocha day ago
        Except Google did exactly that with the previous release, where they silently redirected 03-25 requests to 05-06.
      • nsriva day ago
        Looking at you, Anthropic. 4.0 is markedly different from 3.7 in my experience.
        • Aeoluna day ago
          The model name is completely different? How do you accidentally switch from 3.7 to 4.0?
  • aienjoyeran hour ago
    The truth is that Gemini 2.5 06-05 is a fraud at coding; before, out of 10 pieces of code it wrote, 1 or 2 might not work, meaning they had errors. Now, out of 10, 9 or 10 are wrong. Why does it have so many errors???
    • aienjoyeran hour ago
      It has more skill at coding but makes a lot of errors; I can't code anything.
  • jcuenoda day ago
    82.2 on Aider

    Still actually falling behind the official scores for o3 high. https://aider.chat/docs/leaderboards/

    • sottola day ago
      Does 82.2 correspond to the "Percent correct" of the other models?

      Not sure if OpenAI has updated o3, but it looks like "pure" o3 (high) has a score of 79.6% in the linked table, while the "o3 (high) + gpt-4.1" combo has the highest score, at 82.7%.

      The previous Gemini 2.5 Pro Preview 05-06 (yea, not current 06-05!) was at 76.9%.

      That looks like a pretty nice bump!

      But either way, these Aider benchmarks seem to be the most useful/trustworthy benchmarks currently, and really the only ones I'm paying attention to.

    • vessenesa day ago
      But so.much.cheaper.and.faster. Pretty amazing.
    • hobofana day ago
      That's the older 05-06 preview, not the new one from today.
      • energy123a day ago
        They knew that. The 82.2 comes from the new benchmarks in the OP, not from the Aider URL; the Aider URL was supplied for comparison.
        • hobofana day ago
          Ah, thanks for clearing that up!
  • Workaccount2a day ago
    Apparently 06-05 bridges the gap that people were feeling between the 03-25 and 05-06 releases[1]

    [1]https://nitter.net/OfficialLoganK/status/1930657743251349854...

  • unsupp0rteda day ago
    Curious to see how this compares to Claude 4 Sonnet in code.

    This table seems to indicate it's markedly worse?

    https://blog.google/products/gemini/gemini-2-5-pro-latest-pr...

    • gundmca day ago
      Almost all of those benchmarks are coding-related. It looks like SWE-Bench is the only one where Claude is higher. Hard to say which benchmark is most representative of actual work. The community seems to like Aider Polyglot from what I've seen.
  • Alifatiska day ago
    Finally Google is advertising their AI Studio; it's a shame they didn't push that beautiful app before.
  • zone411a day ago
    Improves on the Extended NYT Connections benchmark compared to both Gemini 2.5 Pro Exp (03-25) and Gemini 2.5 Pro Preview (05-06), scoring 58.7. The decline observed between 03-25 and 05-06 has been reversed - https://github.com/lechmazur/nyt-connections/.
  • pu_pea day ago
    I just checked, and it looks like the limits for Jules have been bumped from 5 free daily tasks to 60. Not sure it uses the latest model, but I would assume it does.
  • jbellisa day ago
    Did it get upgraded in-place again or do you need to opt in to the new model?
  • pelorata day ago
    Why not call it Gemini 2.6?
    • laweijfmvoa day ago
      because the plethora of models and versions is getting ridiculous, and anyone who's not following LLM news daily has no clue what to use. There was never a "Google Search 2.6.4 04-13". You just went to google.com and searched.
      • johnfna day ago
        Well, Google Search never released an API that millions of people depended on.
        • ZeroTalent11 hours ago
          Yes, they did on Google Cloud:

          "Custom Search JSON API: The primary solution offered by Google is the Custom Search JSON API. This API allows you to create a customized search engine that can search a collection of specified websites. While it's not a direct equivalent to a full-fledged Google Search API, it can be configured to search the entire web."

          In my experience it's essentially the same as Google Search if configured properly.
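
          For what it's worth, a minimal sketch of calling it from Python (the key and cx values here are placeholders you'd create in Google Cloud and the Programmable Search Engine console; the query is made up):

              import requests

              # Placeholder credentials: the API key comes from Google Cloud, the cx
              # (search engine ID) from the Programmable Search Engine console, with
              # "search the entire web" enabled.
              params = {
                  "key": "YOUR_API_KEY",
                  "cx": "YOUR_SEARCH_ENGINE_ID",
                  "q": "gemini 2.5 pro aider benchmark",
              }

              resp = requests.get("https://www.googleapis.com/customsearch/v1", params=params)
              resp.raise_for_status()

              # Each result item carries title, link, snippet, etc.
              for item in resp.json().get("items", []):
                  print(item["title"], "-", item["link"])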

      • AISnakeOila day ago
        These API models are for developers. Gemini is for consumers.
    • Beta, beta, release candidate (this version)
    • Szpadela day ago
      Next year, maybe? They do not have the year in the version, so they will need to bump the number to make sure you can still just sort by name.
  • op00toa day ago
    I found Gemini 2.5 Pro highly useful for text summaries, and even reasoning in long conversations... UP TO the last 2 weeks or month. Recently, it seems to totally forget what I'm talking about after 4-5 messages of a paragraph of text each. We're not talking huge amounts of context, just plain conversational braindeadness. Between ChatGPT's sycophancy and Gemini's forgetfulness and poor attention, I'm just sticking with whatever local model du jour fits my needs and whatever crap my company is paying for today. It's super annoying; hopefully Gemini gets its memory back!
    • energy123a day ago
      I believe it's intentionally nerfed if you use it through the app. Once you use Gemini for a long time you realize they have a number of dark patterns to deter heavy users but maintain the experience for light users. These dark patterns are:

      - "Something went wrong error" after too many prompts in a day. This was an undocumented rate limit because it never occurs earlier in the day and will immediately disappear if you subscribe for and use a new paid account, but it won't disappear if you make a new free account, and the error going away is strictly tied to how long you wait. Users complained about this for over a year. Of course they lied about the real reasons for this error, and it was never fixed until a few days ago when they rug pulled paying users by introducing actual documented tight rate limits.

      - "You've been signed out" error if the model has exceeded its output token budget (or runtime duration) for a single inference, so you can't do things like what Anthropic recommends where you coax the model to think longer.

      - I have less definitive evidence for this but I would not be surprised if they programmatically nerf the reasoning effort parameter for multiturn conversations. I have no other explanation for why the chain of thought fails to generate for small context multiturn chats but will consistently generate for ultra long context singleturn chats.

      • op00toa day ago
        Right! I feel like it will sail through MBs of text data, but remembering what I said two turns ago is just too much.
    • harrisoneda day ago
      I noticed the same behavior across older Gemini models. I built a chatbot at work around 1.5 Flash, and one day it suddenly started behaving like that. It was perfect before, but afterwards it always greeted the user like it was their first chat, despite me sending the history. And I didn't find any changelog regarding that at the time.

      After that I moved to OpenAI; Gemini models just seem unreliable in that regard.

      • 85392_schoola day ago
        This might be because Gemini silently updates checkpoints (1.5 001 -> 1.5 002, 2.5 0325 -> 2.5 0506 -> 2.5 0605) while OpenAI doesn't update them without ensuring that they're uniformly better and typically emails customers when they are updated.
  • carbocationa day ago
    Is it possible to know which model version their chat app ( https://gemini.google.com/app ) is using?
  • lxea day ago
    Gemini is a good and fast model, but I think the style of code it writes is... amateur / inexperienced. It doesn't make a lot of mistakes typical of an LLM, but rather chooses approaches that are typical of someone who just learned programming. I have to always nudge it to avoid verbosity, keep structure less repetitive, optimize async code, etc. With claude, I rarely have this problem -- it feels more like working with a more experienced developer.
    • PantaloonFlames21 hours ago
      > I have to always nudge it to avoid verbosity, keep structure less repetitive, optimize async code, etc.

      Isn’t this what you can do with system instructions?

  • fallinditch21 hours ago
    As a Windsurf user I was happy with Claude 3.7 but then switched to Google Gemini 2.5 when Claude started glitching on a particularly large file. It's a bummer that 3.7 has gone from Windsurf - I considered cancelling my Windsurf subscription, but decided not to because it is still good value for money.
    • sumedh9 hours ago
      No models have gone from Windsurf.

      Are you talking about Sonnet 4, which never came to Windsurf because Anthropic does not want to support OpenAI?

  • consumer451a day ago
    Man, if the benchmarks are to be believed, this is a lifeline for Windsurf as Anthropic becomes less and less friendly.

    However, in my personal experience Sonnet 3.x has still been king so far. Will be interesting to watch this unfold. At this point, it's still looking grim for Windsurf.

    • lexandstuff21 hours ago
      Well, they just had a $3B exit, so not that grim, all things considered.
      • consumer45120 hours ago
        Yeah, true.. but I just meant for users/user growth. Even if not completely warranted, users in their subreddit are upset that they don't have access to Sonnet 4.

        With the Claude Max development, non-vibing users seem to be going to Claude Code. This makes me think that maybe Cursor should have taken an exit, cause Claude Code is gonna eat everyone's lunch?

  • excerionsforte17 hours ago
    Ok Google, I was deflated after you guys took away 03-25, but now I am happy again with 06-05. Hell yes, we are back baby!
  • jdmoreiraa day ago
    Is there a no brainer alternative to Claude Code where I can try other models?
    • ketzoa day ago
      People quite like aider! I’m not as much of a fan of the CLI workflow but it’s quite comparable, I think.
    • rubslopes20 hours ago
      Roo Code, or Cline. You can allow it to run everything by itself and just watch.

      I've been preferring to use Copilot agent mode with Sonnet 4, but it asks you to intervene a lot.

    • aiiizzz15 hours ago
      Openai codex cli
  • emehexa day ago
    Is this "kingfall"?
    • No, Kingfall is a separate model which is supposed to deliver slightly better performance, around 2.5% to 5% improvement over this.
    • Workaccount2a day ago
      Sundar tweeted a lion, so it's probably Goldmane. Kingfall is probably their Deep Think model, and they might wait for o3 pro to drop so they can swing back.
  • bli940505a day ago
    I'm confused by the naming. It advertises itself as "Thinking" so is this the release of the new "Deep Think" model or not?
  • tibbara day ago
    Interesting, I just learned about matharena.ai. Google cherry-picks one result where they're the best here, but in the overall results it's still o3 and o4-mini-high that are in the lead.
  • General first impressions are that it's not as capable as 05-06, although it's technically testing better on the leaderboards... interesting.
  • energy123a day ago
    So there's both a 05-06 model and a 06-05 model, and the launch page for 06-05 has some graphs with benchmarks for the 05-06 model but without the 06-05 model?
  • BDivyesh8 hours ago
    It depends on where and how you use it. I only use the Gemini Pro model in AI Studio and set the temperature to 0.05 or 0.1; in rare cases I bump it to 0.3 if I need some frontend creativity. It still isn't impressive; I see that Claude is still far better, and o4-mini-high too. When it comes to o3, I despise it: despite being ranked very high on benchmarks, the best version of it is only available through the API.
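
    For reference, setting that temperature outside AI Studio is just a generation-config knob; a minimal sketch with the google-generativeai Python SDK (model name and prompt are placeholders):

        import google.generativeai as genai

        genai.configure(api_key="YOUR_API_KEY")  # placeholder key

        # temperature=0.1 keeps the output close to deterministic; bump it
        # (e.g. to 0.3) when you want more creative frontend suggestions.
        model = genai.GenerativeModel(
            "gemini-2.5-pro-preview-06-05",
            generation_config={"temperature": 0.1},
        )

        print(model.generate_content("Write a debounced search input in React.").text)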
  • simianwordsa day ago
    I feel stupid for asking but how do I enable deepthink?
  • _pdp_a day ago
    Is it still rate limited though?
  • sergiotapiaa day ago
    In Cursor this is called "gemini-2.5-pro-preview-06-05"; you have to enable it manually.
  • InTheArena18 hours ago
    Right now, the Claude Code tooling and ChatGPT Codex are far better than anything else I have seen for massive code development. Is there a better option out there with Gemini at the heart of it? I noticed the command-line Codex might support it.
  • kisamoto15 hours ago
    Amateur question, how are people using this for coding?

    Direct chat and copy pasting code? Seems clunky.

    Or manually switching models in Cursor? Although that's extra cost and not required for a lot of tasks where Cursor tab is faster and good enough, so you'd need to opt in on demand.

    Cline + open router in VSCode?

    Something else?

    • 4d66ba068 hours ago
      Consider taking a look at Zed; it can use Gemini with an API key and has an agentic "write" mode if you don't want to copy and paste.