Of course I am aware that the caveat here is that all my interaction is part of training, but I’m fine with that. Even Qwen CLI discontinued the free plan.
I'm using the paid plan on TypeScript and it's genuinely terrific. Subjectively I think it has the edge over Opus.
I'd be surprised if OpenAI is hamstringing the free version. That would seem crazy from a GTM PoV. If anything the labs seem to throttle the heavy paid users.
I also put together this ridiculous thing[0] because I missed the font and color scheme of Claude.
[0] https://gist.githubusercontent.com/dmd/91e9ca98b2c252a185e8e...
I can go through a 5-hour limit with a $20/mo Plus subscription in a few minutes with 5.5 Extra High. This causes me to reserve the latest/best rev for the harder problems.
5.5 really does seem far superior to 5.4, but it's also very expensive to run: the gas gauge moves fast. It's not at all clear whether 5.5 will solve a problem more cheaply in one quick shot, or whether a bunch of automatic iterations of 5.4 will get there for less. Both options are often frustrating on the $20 plan.
(Also: Are you sure you're seeing it right? 5.5 has been in the wild for less than a month, so far. https://openai.com/index/introducing-gpt-5-5/ )
I was initially quite excited, but I’ve found the results are less than great compared to being at a keyboard.
Something about the smaller screen size and/or lack of keyboard causes me to direct the agent less, which in turn creates more tech debt/code churn/etc.
Maybe I’m just showing my age, and I should practice voice dictation or something more, but my thoughts flow faster and more clearly on a keyboard (less ums).
Edit: Running into issues setting it up on Windows. There's no "/remote-control" command in the CLI, so I installed the Windows Codex app. Then I updated the iOS app which now has the "Codex" feature in the sidebar, which should allow remote access to the Windows machine's instance - except it doesn't connect. The iOS app shows my desktop's hostname, so it knows there's an instance there, but refuses to connect. Issues like this would persuade a lot of folks to switch back to Claude.
My experience today with the new Codex remote control has been that it doesn't connect at all.
They might just not have cut a new build yet today. It "works" on master, but the mobile app thinks your build is outdated (v0.0.0) if you build from master without overriding the version, so it's probably easiest to wait until they cut a build if they haven't.
Woah, hadn't seen this before!
Off-topic: what compile times do people see for codex-rs in openai/codex? Even my very beefy computer takes something like 30 minutes to compile in release mode, which makes me wonder why it's so slow and how this TUI got so large. But then I remember: agents like to write a lot of code, and compilers get slower when they have to compile a lot of code :)
In my experience, although the build is a little slow, it's that LTO step that takes a million years.
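If the LTO step is what takes forever, one common mitigation is a custom Cargo profile that trades a little runtime performance for much faster builds. A sketch only — this is a generic Cargo tweak, not codex-rs's actual build configuration:

```toml
# Hypothetical Cargo.toml additions for faster local builds.
# "thin" LTO typically links much faster than full "fat" LTO
# while recovering most of the optimization benefit.
[profile.release]
lto = "thin"
codegen-units = 16   # more parallel codegen, at some optimization cost

# Or keep the real release profile untouched and build day-to-day
# with a separate profile that skips LTO entirely:
[profile.release-fast]
inherits = "release"
lto = "off"
```

With the second profile you'd build via `cargo build --profile release-fast` and reserve the full release profile for actual releases.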
I can do some tasks on mobile, especially if they're follow-ups or steering only. That greatly increases productivity, since you can keep working while in transit, etc.
Both of the Codex apps are very good.
I tried this out and it works significantly better than Claude's remote control. In fact, the first few times I tried Claude's remote control it didn't even work, and to this day it's very buggy.
Or ask Codex to create an image that explains xyz.
But a person can use subagents, if they want, to filter that down. This burns tokens in a big hurry, but I believe subagents can be arbitrary local commands (e.g., a local LLM).
Or, you know: Just slow down. :) It doesn't always have to be a race, does it?
They added some new stuff, like remote control of wherever the desktop Codex app is running, but these companies need to work much harder on their press releases.
> Stay connected to active work from anywhere
... (and anytime because it's on your phone). No thanks.
Feels like a testament to the value in taking time and doing it properly.
Now if only codex got its 1M token context window back.
---
Edit: Hmmm. Maybe I spoke too soon. Sigh. Definitely _more_ reliable by far overall, but still have queued messages with responses on my phone that don't show up on my computer, and responses that don't show up on my phone.
Edit 2: New threads created from my phone seem to have a little stall-out, but ones that are underway are behaving reasonably well.
Claude on the other hand has been jank all around from the UX to the UI to the AI itself that it's baffling how it's more popular here on HN: https://i.imgur.com/jYawPDY.png
Sadly this remote control feature doesn't seem to be for Mac to Mac yet? I love the MacBook Neo as a "thin client" for AI and keep the MacBook Pro at home/hotel, and it would be nice to share Codex desktop sessions (without SSH → resume link)
You can run your local LLM and just connect the Docker containers. I'm paranoid about being disconnected from the LLM, so I never run any of this on the same machine, which makes orchestrating a docker-compose file that provides the necessary services important.
I'm still trying to find a good remote file system to loop into the setup for improved switching between cli and these web containers.
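The setup described above can be sketched as a compose file. Everything here is an assumption for illustration — the service names, the agent image, and the environment variable are hypothetical, not a known-good configuration:

```yaml
# Sketch of a local-LLM + agent-container setup.
# "my-agent-sandbox" and OPENAI_BASE_URL wiring are assumptions.
services:
  llm:
    image: ollama/ollama          # serves an OpenAI-compatible API on 11434
    volumes:
      - models:/root/.ollama      # persist downloaded model weights
    ports:
      - "11434:11434"

  agent:
    image: my-agent-sandbox       # hypothetical image containing the CLI agent
    environment:
      # Point the agent at the llm container instead of a cloud endpoint.
      OPENAI_BASE_URL: http://llm:11434/v1
    depends_on:
      - llm

volumes:
  models:
```

Running the LLM as its own service is what makes the "never on the same machine" constraint easy: the agent container only needs the base URL, so the `llm` service can just as well live on a different host.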
In those scenarios, the goal is not "work at any time" but to "be anywhere at any time", or, rather, to "be able to work from anywhere, doing anything".
Sort of....I guess.
It integrates with your issue tracker and makes the tracker the UI for the LLM. It also clones the repo for every ticket, and can set up fixtures/etc. I can work on multiple items at a time, which is fantastic because otherwise you have to wait for the LLMs a lot.
You need a model server: Ollama, llama.cpp, or LM Studio.
Do you mean supporting oai-compatible api URLs in copilot? If so then you need either VS Code Insiders, or a VS Code extension I believe?