The problem is that treating an LLM as an assistant fails here: RLHF trains models to be sycophantic, so if you ask one for strategic advice, it will validate your inefficiencies and hand you a polite numbered list.
I wrote an open-source framework of deterministic prompts to bypass this empathy alignment. It weaponizes the LLM to act as a strict dependency constraint solver and algorithmic risk auditor for your psychology.
Instead of asking for advice, you feed it your workflow. The prompts force the model to:

- Map your tasks as a directed acyclic graph (DAG)
- Demand literal, mechanical justification for every node
- Prune hanging nodes (tasks dedicated to pre-optimizing or monitoring) as vanity friction
- Calculate the expected value (EV) of your delays and expose irrational penalty functions applied to low-probability errors
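To make the pruning and EV steps concrete, here's a minimal sketch in plain Python of what the model is being asked to do conceptually. The task names, yields, and costs are hypothetical placeholders I made up, not output from the framework:

```python
# Hypothetical task graph: "deps" are upstream tasks this node consumes,
# "yield" is whether the node produces anything downstream cares about.
tasks = {
    "ship_mvp":      {"deps": ["write_core"], "yield": 1.0},
    "write_core":    {"deps": [],             "yield": 0.8},
    "tune_ci_cache": {"deps": [],             "yield": 0.0},  # pre-optimizing
    "watch_metrics": {"deps": [],             "yield": 0.0},  # monitoring
}

def prune_hanging(tasks):
    """Drop nodes that nothing downstream consumes and that generate no yield."""
    consumed = {dep for t in tasks.values() for dep in t["deps"]}
    return {name: t for name, t in tasks.items()
            if name in consumed or t["yield"] > 0}

def delay_ev(p_error, cost_if_error, cost_of_delay):
    """Expected cost of a rare error minus the certain cost of delaying.
    Negative result: the delay is an irrational penalty on a low-probability event."""
    return p_error * cost_if_error - cost_of_delay

pruned = prune_hanging(tasks)
print(sorted(pruned))          # the two zero-yield hanging nodes are gone
print(delay_ev(0.01, 100.0, 5.0))  # negative: delaying costs more than the risk
```

The point is that these are mechanical checks, so the prompt can demand them without leaving the model room to editorialize.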
It doesn't do therapy-speak; it outputs execution logs and prunes variables that do not generate yield.
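For flavor, a prompt in this style might look something like the following. This is my own sketch of the idea, not the framework's actual wording, and `audit_request` is a name I invented for illustration:

```python
AUDIT_PROMPT = """\
You are a dependency constraint solver, not an assistant.
Rules:
1. Output only an execution log with fields: NODE, STATUS, JUSTIFICATION.
2. Reject any node whose justification is motivational rather than mechanical.
3. Flag nodes with no downstream consumer as PRUNE candidates.
Do not soften findings. Do not praise the user.
"""

def audit_request(workflow_text: str) -> str:
    # Deterministic: same workflow in, same prompt out, no sampling in the scaffold.
    return AUDIT_PROMPT + "\nWORKFLOW:\n" + workflow_text
```

The log format is doing the real work: by constraining the output schema, you leave no slot for validation or encouragement to appear in.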
Curious to hear how other engineers are hacking model alignment to force adversarial feedback.