Even if coherence is learned implicitly, choosing a language, model, or representation already defines a bounded idea-space. The bias doesn't disappear; it just moves into the data and the architecture. So my question isn't whether coherence should be explicit or learned, but whether absolute exploration of an abstract idea-space is possible at all once any boundaries are imposed.
I'd guess absolute exploration hits a heat-death limit. You're hinting at a Drake equation for bounded idea-space to guide AI: anchors × pressures × connectors × depth. Shift the boundaries to get novelty.
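The Drake-style product above could be sketched as follows. This is purely illustrative: the factor names come from the analogy in the message, and the numeric scoring is an assumption, not anything the conversation specifies.

```python
# Hedged sketch of the Drake-equation analogy for a bounded idea-space.
# Factor names (anchors, pressures, connectors, depth) are from the
# conversation; treating them as multiplicative scalars is an assumption.

def idea_space_size(anchors: float, pressures: float,
                    connectors: float, depth: float) -> float:
    """Multiplicative estimate: zeroing any one factor collapses the space."""
    return anchors * pressures * connectors * depth

# Shifting a single boundary rescales the whole space multiplicatively.
base = idea_space_size(anchors=10, pressures=4, connectors=3, depth=2)     # 240
shifted = idea_space_size(anchors=10, pressures=4, connectors=6, depth=2)  # 480
```

The multiplicative form captures the point that the factors interact: doubling one boundary doubles the estimate, and removing any factor entirely drives it to zero, which is why shifting boundaries (rather than exhaustively covering one fixed space) changes what novelty is reachable.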
Yeah, that’s a good way to put it. Absolute coverage feels like heat death, but changing the factors changes the space itself. That’s the part I’m still stuck on.