Most agent failures I see trace back to the same root cause: unstructured context forces the model to navigate by probability where you need ground truth. Bigger context windows just give it more room to be wrong in.
This post lays out the first principles I've landed on — why facts without relationships dead-end in RAG, why tool selection is broken without a context layer underneath, and why the order of operations most agent architectures use is backwards.