1 point by tysonjeffreys 6 hours ago | 1 comment
  • tysonjeffreys 6 hours ago
    I wrote this as a conceptual architecture piece, not an empirical paper.

    The core claim is that many failures in embodied AI and agent systems come from the lack of a baseline regulation layer beneath planning/learning: when correction after the fact is the only mechanism, errors cascade under load.
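
    To make the layering concrete, here is a minimal sketch (Python, not from the article; names like BaselineRegulator and plan_correction are illustrative): an always-on regulation layer holds the system near a safe setpoint, and planner corrections are added on top of it rather than being the only thing standing between the system and drift.

      class BaselineRegulator:
          """Always-on layer: pulls state toward a safe setpoint every tick."""
          def __init__(self, setpoint, gain=0.2):
              self.setpoint = setpoint
              self.gain = gain

          def output(self, state):
              return self.gain * (self.setpoint - state)


      def plan_correction(state, goal, budget):
          """Planning/learning layer: its correction capacity shrinks under load."""
          raw = 0.5 * (goal - state)
          return max(-budget, min(budget, raw))


      def step(state, goal, regulator, budget, disturbance):
          # Regulation runs unconditionally; planning is additive, not load-bearing.
          u = regulator.output(state) + plan_correction(state, goal, budget)
          return state + u + disturbance


      if __name__ == "__main__":
          reg = BaselineRegulator(setpoint=0.0)
          state = 0.0
          for t in range(50):
              # A shrinking budget stands in for "under load"; without the
              # baseline layer, the disturbance alone would accumulate unbounded.
              budget = max(0.0, 0.5 - 0.02 * t)
              state = step(state, goal=1.0, regulator=reg,
                           budget=budget, disturbance=0.05)
          print(f"state after load ramp: {state:.3f}")

    The point isn't the numbers, just the shape: the regulator bounds drift even when the planner's correction budget collapses.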

    I tried to keep it concrete with design patterns rather than metaphors. Curious how people here think about this relative to whole-body control, passivity, or active inference.