If you force the output of an LLM to begin with an error, the LLM tends to continue down that erroneous path.
In practice, we didn't see much of this kind of error propagation. One solution would be to give some agent the task of occasionally reviewing the NERDs for contradictions, along with the ability to search through the source material as needed. That of course creates the possibility of catastrophic forgetting, where the agent rewrites a NERD in an effort to remove a contradiction and ends up deleting something important.
We didn't see a lot of error propagation, but here's one example where we did: in Harry Potter, Professor Dumbledore is introduced as a mysterious hooded character. So the NERD-writer would create a NERD for "mysterious hooded man." There's no tool for the agent to change the title of a NERD, so the system is stuck with that title. Sometimes the system would build the entire Dumbledore entry under "mysterious hooded man"; sometimes it would make a new Dumbledore entity and link it back to the "mysterious hooded man" entity; and sometimes it wouldn't link them at all. None of those outcomes are great.
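One fix would be a merge tool the agent could call once it realizes two entries describe the same entity. Here's a minimal sketch of that idea; everything in it (`EntityStore`, `merge_into`, the redirect-via-alias design) is hypothetical, not part of the actual system:

```python
# Hypothetical sketch of a rename/merge tool for NERD-style entity records.
# All names here are illustrative, not from the real implementation.

class EntityStore:
    def __init__(self):
        self.records = {}   # canonical title -> list of accumulated notes
        self.aliases = {}   # outdated title -> canonical title

    def write(self, title, note):
        # Writes to an outdated title are silently redirected to the
        # canonical entity, so the agent doesn't have to remember merges.
        title = self.aliases.get(title, title)
        self.records.setdefault(title, []).append(note)

    def merge_into(self, old_title, new_title):
        # Fold an outdated entry (e.g. "mysterious hooded man") into its
        # canonical entity (e.g. "Albus Dumbledore") and leave an alias
        # so future writes land in the right place.
        notes = self.records.pop(old_title, [])
        self.records.setdefault(new_title, []).extend(notes)
        self.aliases[old_title] = new_title

store = EntityStore()
store.write("mysterious hooded man", "appears at Privet Drive")
store.merge_into("mysterious hooded man", "Albus Dumbledore")
store.write("mysterious hooded man", "headmaster of Hogwarts")  # redirected
print(store.records["Albus Dumbledore"])
```

The alias table is the key design choice: it avoids the unlinked-duplicate outcome above without requiring the agent to rewrite old notes (which is where the catastrophic-forgetting risk comes from).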
Only later did we adapt the technique to work on long books. The existing long-book benchmarks seemed like the most appropriate way to show the core idea to a wider audience.
So yeah, I'm confident that this central idea can be applied in many different domains.