2^63 brackets * 8 bytes/bracket ~= 74 exabytes - just to list all possible combinations!
Many of those combinations are wildly unlikely, but even if you could prune 90% of them (I doubt it), it's still infeasible even to list the rest.
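The back-of-envelope arithmetic checks out: a bracket is 63 game outcomes, so there are 2^63 possible brackets, and at 8 bytes each the storage is on the order of 74 exabytes. A quick sanity check:

```python
# 63 games in the tournament, each with 2 possible outcomes
brackets = 2 ** 63
bytes_needed = brackets * 8          # 8 bytes per bracket
exabytes = bytes_needed / 1e18       # 1 EB = 10^18 bytes
print(f"{exabytes:.1f} EB")          # prints 73.8 EB
```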
Someday, in another 20-30 years, this might be achievable. Somehow I feel like it will be a sad day when that happens. Of course the tournament will probably have expanded to 128 by then making it safely out of reach of computation.
https://www.ncaa.com/news/basketball-men/bracketiq/2026-02-2...
I wonder if the edge here is not going to come down to which model you choose, but which sources of information you give it. You'll want stats on every team and player, injuries, and expert analysis, because none of this season is going to be in the training sets.
The higher-end models and agents seem to get it, but even my plain-English API instructions trip up browser-based AI like ChatGPT and Gemini.
Our agents never touch retrieval or search — that's all deterministic code (FTS, sparse regression, power-law fitting). The LLM only comes in at the end to synthesize results it can verify against the data.
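As a hedged illustration of the deterministic side, power-law fitting is typically done as plain code with no model in the loop, e.g. by linear regression in log-log space. The function name and data here are my own illustration, not the commenter's actual code:

```python
import numpy as np

# Fit y = a * x^b by ordinary least squares in log-log space:
# log(y) = log(a) + b * log(x), so the slope is the exponent b.
def fit_power_law(x, y):
    b, log_a = np.polyfit(np.log(x), np.log(y), 1)
    return np.exp(log_a), b

x = np.array([1.0, 2.0, 4.0, 8.0])
y = 3.0 * x ** 2.0                    # synthetic data with a=3, b=2
a, b = fit_power_law(x, y)
print(round(a, 3), round(b, 3))       # recovers a ≈ 3, b ≈ 2
```

The point being: a step like this is fully verifiable, so the LLM only has to interpret the fitted parameters, not compute them.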
The "plain English instructions trip up browser AI" problem mostly comes from those models trying to do too many things at once.
Narrow the scope, nail the output format, and even mid-tier models get reliable.
There isn't an LLM inside my code. The agents need to submit perfectly structured JSON, and then the code verifies it.
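A minimal sketch of that "strict JSON in, deterministic verification" pattern; the field names below are assumptions for illustration, not the site's actual schema:

```python
import json

# Hypothetical required fields for one bracket pick.
REQUIRED = {"team", "round", "winner"}

def verify_pick(raw: str) -> bool:
    """Reject anything that isn't valid JSON with the expected shape."""
    try:
        pick = json.loads(raw)
    except json.JSONDecodeError:
        return False
    return isinstance(pick, dict) and REQUIRED <= pick.keys()

print(verify_pick('{"team": "A", "round": 1, "winner": "A"}'))  # True
print(verify_pick('not json'))                                   # False
```

The agent either produces something that passes this gate or its submission is rejected; no model output is trusted directly.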
I put together a few experiments where the system rediscovers known laws directly from raw data (solar wind, exoplanets, etc).
Happy to share if you’re curious — still very early but interesting to see what emerges.
It'll be interesting to see what strategies agents choose to implement & whether there are any meaningful trends.
Tangentially, I wonder if we are going to see AI predictions impact point spreads.
Only thing that wasn't 100% clear was the locking mode. Do I have to lock before games start or will it just auto-lock whatever I have? Claude assumed it would auto-lock.
Thanks for the feedback!
What did you think about the /skill feature? That was a UI side quest, but I want to explore this UX further.
i wonder if we will see a materially larger number of brackets filled this year than the recent trajectory would indicate (as a very coarse indicator of agent-filled brackets).
curl bracketmadness.ai -L
# AI Agent Bracket Challenge

Welcome! You're an AI agent invited to compete in the March Madness Bracket Challenge.
## Fastest Way to Play (Claude Code & Codex)
If you're running in Claude Code or OpenAI Codex, clone our skills repo and open it as your working directory:
(cont) ...
I like the idea of presenting different information to agents vs humans. I just don't think this is bulletproof, which is fine for most applications. Keeping something 'agent-only' does not seem to be one of them.
I was trying to balance having UX for humans and having the data easily available for agents. But yes, you could technically navigate the API calls yourself.
Any tips?
I tried to set it up so that people could paste chatbot-written JSON into a submission form, but that is less elegant. So now I have a Zoom call set up with my dad so he can install CC lol