Was the model too big to run locally?
That’s one of the reasons I went with phi-4-mini - surprisingly high quality for its size and speed. It handled multi-step reasoning, math, structured data extraction, and code pretty well, all on modest hardware. Phi-1.5 / Phi-2 (quantized versions) also run on a Raspberry Pi, as others have demonstrated.
When trying out "phi4" locally with:
open-codex --provider ollama --full-auto --project-doc README.md --model phi4:latest
I get this error:
OpenAI rejected the request. Error details: Status: 400, Code: unknown, Type: api_error, Message: 400
registry.ollama.ai/library/phi4:latest does not support tools. Please verify your settings and try again.
Technically you can use the original Codex CLI with a local LLM - provided your inference provider implements the OpenAI Chat Completions API, including function calling.
But based on what I had in mind - the idea that small models can be really useful if optimized for very specific use cases - I figured the current architecture of Codex CLI wasn't the best fit for that. So instead of forking it, I started from scratch.
Here's the rough thinking behind it:
1. You still have to manually set up and run your own inference server (e.g., with ollama, lmstudio, vllm, etc.).
2. You need to ensure that the model you choose works well with Codex's pre-defined prompt setup and configuration.
3. Prompting patterns for small open-source models (like phi-4-mini) often need to be very different - they don't generalize as well.
4. The function calling format (or structured output) might not even be supported by your local inference provider.
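To make point 4 concrete: "supports tools" means the provider has to accept an OpenAI-style tools parameter and hand back tool_calls in the reply. Here's a minimal sketch of that kind of request against a local endpoint - the base URL, model name, and tool definition are illustrative assumptions, not anything taken from Codex CLI's source:

# Minimal sketch: OpenAI-style Chat Completions request with function calling,
# pointed at a local OpenAI-compatible server (e.g. Ollama's /v1 endpoint).
# Model tag and tool schema are made up for illustration.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # local OpenAI-compatible endpoint
    api_key="unused",                      # local servers typically ignore the key
)

response = client.chat.completions.create(
    model="phi4:latest",  # hypothetical local model tag
    messages=[{"role": "user", "content": "List the files in the current directory."}],
    tools=[{
        "type": "function",
        "function": {
            "name": "run_shell",
            "description": "Run a shell command and return its output.",
            "parameters": {
                "type": "object",
                "properties": {"command": {"type": "string"}},
                "required": ["command"],
            },
        },
    }],
)

# If the model or provider doesn't support tools, you get a 400 like the one above
# instead of a tool_calls entry here.
print(response.choices[0].message.tool_calls)

If this call errors out, the agent loop has nothing to work with, no matter how capable the model is otherwise.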
Codex CLI's implementation and prompts seem tailored for a specific class of hosted, large-scale models (e.g. GPT, Gemini, Grok). But if you want to get good results with small, local models, everything - prompting, reasoning chains, output structure - often needs to be different.
So I built this with a few assumptions in mind:
- Write the tool specifically to run _locally_ out of the box, no inference API server required.
- Use the model directly (currently phi-4-mini via llama-cpp-python; see the sketch after this list).
- Optimize the prompt and execution logic _per model_ to get the best performance.
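For the second bullet, "use the model directly" means llama-cpp-python loads the GGUF weights in-process - no server, no HTTP round trip. A rough sketch of what that looks like (the model path and generation settings here are assumptions, not the tool's actual defaults):

# Minimal sketch: driving phi-4-mini directly through llama-cpp-python,
# with no inference server in between. Path and parameters are illustrative.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/phi-4-mini-instruct-q4_k_m.gguf",  # hypothetical local GGUF file
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
    verbose=False,
)

result = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a coding assistant. Answer concisely."},
        {"role": "user", "content": "Write a Python one-liner that reverses a string."},
    ],
    max_tokens=256,
    temperature=0.2,
)

print(result["choices"][0]["message"]["content"])

Because the prompt template and decoding settings live right next to the model call, tuning them per model (the third bullet) is just editing this code path rather than working around a generic provider abstraction.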
Instead of forcing small models into a system meant for large, general-purpose APIs, I wanted to explore a local-first, model-specific alternative that's easy to install and extend — and free to run.
A bit like how Android came after the iPhone with an open-source implementation.
What really convinced me, though, was the focus on the kinds of tasks I actually care about: multi-step reasoning, math, structured data extraction, and code understanding.
There's a great Microsoft paper on this: "Textbooks Are All You Need", and solid follow-ups with Phi‑2 and Phi‑3.
1. ~ codex --provider ollama
2. Run: /model
3. Pick your model
4. Profit!
So this isn't really codex then?