It becomes obsolete in literally weeks, and it also doesn't work 80% of the time. Like, why write an MCP server for custom tasks when I don't know if the LLM is going to reliably call it?
My rule for AI has been steadfast for months (years?) now. I write documentation for myself (templates, checklists, etc.) myself, not with AI, because then I spend more time guiding the AI instead of thinking about the problem. I give AI a chance to one-shot it in seconds; if it can't, I either review my documentation or I just do it manually.
Here is a simple example which took 4 iterations using Gemini to get a result requiring no manual changes:
# Role
You are an expert Unix shell programmer who comments their code and organizes their code using shell programming best practices.
Create a bash shell script which reads from standard input text in Markdown format and prints all embedded hyperlink URLs.
The script requirements are:
- MUST exclude all inline code elements
- MUST exclude all fenced code blocks
- MUST print all hyperlink URLs
- MUST NOT print hyperlink labels
- MUST NOT use Perl compatible regular expressions
- MUST NOT use double quotes within comments
- MUST NOT use single quotes within comments
EDIT: For reference, a hand-written script satisfying the above (excluding comments for brevity) could look like:
#!/usr/bin/env bash
perl -ne 'print unless /^```/ ... /^```/' |
sed -e 's/`[^`]*`//g' |
grep -Eo '\[[^]]*\]\([^)]*\)' |
sed -e 's/^.*(//' -e 's/)$//'
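A quick sanity check of the pipeline (the script name is hypothetical):

printf '[HN](https://news.ycombinator.com) and `[skipped](inline-code)`\n' | ./extract-links.sh
# prints: https://news.ycombinator.com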
0 - https://en.wikipedia.org/wiki/Constraint_programming
1. starting fresh, because of context poisoning / long-term attention issues
2. lots of tools make the job easier, especially if you give them a tool discovery tool (based on Anthropic's recent post)
We don't have reliable ways to evaluate all the prompts and related tweaking. I'm working towards this with my agentic setup. Added time travel for sessions based on Dagger yesterday, with forking; cloning and a registry probably today.
> I haven't tried complex coding tasks using Gemini 3.0 Pro Preview yet. I reckon it won't be materially different.
Gemini CLI is open source and being actively developed, which is cool (/extensions, /model switching, etc.). I think it has the potential to become a lot better and even get close to the top players.
The correct way of using Gemini CLI is: ABUSE IT! The 1M context window (soon to be 2M) and the generous daily (free) quota are huge advantages. It's a pity that people don't use it enough (ABUSE it!). I use it as a TUI/CLI tool to orchestrate tasks and workflows.
> Fun fact: I found Gemini CLI pretty good at judging/critiquing code generated by other tools LoL
Recently I even hooked it up with Homebrew via MCP (other Linux package managers should work as well), and a local-LLM-powered Knowledge/Context Manager (Nowledge Mem). You can get really creative abusing Gemini CLI; unleash the Gemini power.
I've also seen people use Gemini CLI in subagents for MCP processing (it did work and avoided polluting the main context). I couldn't help laughing when I first read this -> https://x.com/goon_nguyen/status/1987720058504982561
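That judge/critique trick is basically a one-liner. A minimal sketch, using the CLI's non-interactive -p prompt flag and whatever diff another coding tool left behind:

# let Gemini CLI review changes produced by another agent
git diff | gemini -p "Review this diff: flag bugs, risky changes, and missing tests"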
Gemini 3 Pro is -very- smart, but its tool use / direction following isn't great.
In my limited testing, I found that Gemini 3 Pro struggles with even simple coding tasks. Sure, I haven't tested complex scenarios yet and have only done so via Antigravity. But it is very difficult to do that with the limited quota it provides. Impressions here - https://dev.amitgawande.com/2025/antigravity-problem
Personally, I consider Antigravity a positive & ambitious launch. My initial impression is that there are many rough edges to be smoothed out. I hit many errors, like 1. communicating with Gemini (Model-as-a-Service), 2. agent execution terminated due to errors, etc., but somehow it completed the task (the verification/review UX is bad).
Pricing for paid plans with AI Pro or Workspace will be key for adoption, once Gemini 3.x and the Antigravity IDE are ready for serious work.
Currently Claude Code is the best, but I don't think Anthropic would pivot it into what I described. Maybe we still need to wait for the next groundbreaking open-source coding agent to come out.
Cursor?
It’s really quite good.
Ironically it has its own LLM now, https://cursor.com/blog/composer, so it’s sort of going the other way.
> Loaded cached credentials.
> Hello world! I am ready for your first command.
> gemini -p "hello world"  2.35s user 0.81s system 33% cpu 29.454 total
Seeing between 10 and 80 seconds for responses on hello world, 10-20s of which is for loading the god damn credentials. This thing needs a lot of work.
I think many devs are just in tune with the "nature" of Claude, and run aground more easily when trying to use Gemini or ChatGPT. This also explains why we get these perplexing mixed signals from different devs.
There certainly is some user preference, but the deal breakers are flat out shortcomings that other tools solved (in AI terms) long ago. I haven’t dealt with agent loops since March with any other tool.
Codex prompt editing sucks
BTW Gemini 3 via Copilot doesn't currently work in Opencode: https://github.com/sst/opencode/issues/4468
> A modern terminal emulator like:
> WezTerm, cross-platform
> Alacritty, cross-platform
> Ghostty, Linux and macOS
> Kitty, Linux and macOS
What's wrong with any terminal? Are those performance gains that important when handling a TUI? :-(
Edit:
Also, I don't see Gemini listed here:
https://opencode.ai/docs/providers/
Only Google Vertex AI (?): https://opencode.ai/docs/providers/#google-vertex-ai
Edit 2:
Ah, Gemini is the model and Google Vertex AI is like AWS Bedrock, it's the Google service actually serving Gemini. I wonder if Gemini can be used from OpenCode when made available through a Google Workspace subscription...
Gemini 3 via any provider except Copilot should work in Opencode.
You don't need Claude Code, gemini-cli, or Codex. I've been doing it raw as a (recent) LazyVim user with a proprietary agent with 3 tools: git, ask, and ripgrep, and currently Gemini 3 is by far the best for me, even without all these tricks.
Gemini 3 has a very high token density and a significantly larger context than any other model that is actually usable. Every 'agent' I start shoves 5 things into the context:
- most basic instructions such as: generate git format diff only when editing files and use the git tool to merge it (simplified, it's more structured and deeper than this)
- tree command that respects git ignore
- $(ask "summarize $(git diff)")
- $(ask "compact the readme $(cat README.MD"))
- (ripgrep tools, mcp details, etc)
When the context gets too bloated I just tell it to write the important new details to README.MD and then start a new agent.
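Roughly, the priming step looks like this (a simplified sketch; ask is my prompt-in, text-out wrapper around the model, and the file names are placeholders):

# assemble a fresh agent context
{
  cat base-instructions.md                    # emit git-format diffs, merge via the git tool
  tree --gitignore                            # repo layout, respecting .gitignore (tree >= 2.0)
  ask "summarize $(git diff)"                 # compact view of work in progress
  ask "compact the readme $(cat README.MD)"   # compacted long-term notes
} > context.txt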
With Gemini 3 release I decided to give it another go, and now the error changed to: "You've reached the daily limit with this model", even though I have an API key with billing set up. It wouldn't let me even try Gemini 3 and even after switching to Gemini 2.5 it would still throw this error after a few messages.
Google might have the best LLMs, but its agentic coding experience leaves a lot to be desired.
I have sympathy for any others who did not get so lucky
Sucks when the LLM goes on a rant only to stop because of hardcoded safeguards, or what I encounter often enough with Copilot: it generates some code, notices it's part of existing public code and cancels the entire response. But that still counts towards my usage.
it's really really terrible at agentic stuff
And the GPT-5 Codex has a very somber tone. Responses are very brief.
Considering that access is limited to the countries on the list [0], I wonder what motivated their choices, especially since many Balkan countries were left out.
[0]: https://developers.google.com/gemini-code-assist/resources/a...
There needs to be a lot more focus on observability and on showing users what is happening under the hood (especially wrt costs and context management for non-power users).
A useful feature Cursor has that Antigravity doesn't is the context wheel that fills up as you approach the context window limit (but don't get me started on the black box that is Cursor pricing).
Still, I had high hopes for Gemini 3.0 but was let down by the benchmarks. I can barely use it in the CLI; in AI Studio, however, it's been pretty valuable, though not without quirks and bugs.
Lately it seems like all the agentic coders (Claude, Codex) are starting to converge, differentiated only by latency and overall CLI UX.
I would like to use Gemini CLI more, and even Grok, if it were possible to use them like Codex.
Integration with Google Docs/Spreadsheets/Drive seems interesting, but it appears to be via MCP, so nothing exclusive/native to Gemini CLI, I presume?
I’m noticing more workflows stressing the need for lightweight governance signals between agents.
It fucked up the entire repo. It hardcoded tenant IDs and user IDs, it completely destroyed my UI, and it broke my entire GraphQL integration. Set me back 2 weeks of work.
I do admit the browser version of Gemini chat does a much better job at providing architecture and design guidance from time to time.
How did this happen?
Did you let the agent loose without first creating its own git worktree?
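A minimal sketch of that setup (branch and path names are just examples):

# give the agent its own checkout on a throwaway branch
git worktree add ../myrepo-agent -b agent/scratch
# point the agent at ../myrepo-agent; the main checkout stays untouched
# when the session goes off the rails, discard it all:
git worktree remove --force ../myrepo-agent
git branch -D agent/scratch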
It's just not really good yet.
I recently tried IntelliJ's Junie and I have to say it works rather well.
I mean, at the end of the day all of them need a human in the loop, and the result is only as good as your prompt. Though with Junie I at least got something of a result most of the time, while with Gemini 50% would have been a good rate.
Finally: I still don't see agentic coding at production stages - it's just not there yet in terms of quality. For research and fun? Why not.
Of Addy Osmani fame.
I seriously doubt he went to Gemini and told it "Give me a list of 30 identifiable issues when agentic coding, and tips to solve them".