41 points by aanet 6 hours ago | 8 comments
  • zhangchen 2 hours ago
    Has anyone tried implementing something like System M's meta-control switching in practice? Curious how you'd handle the reward signal for deciding when to switch between observation and active exploration without it collapsing into one mode.
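A hedged sketch of one way to implement that gating: track recent prediction error per mode and switch only when the current mode plateaus, which keeps the controller from collapsing into either mode. The class name, window size, and threshold here are all invented for illustration, not from the paper:

```python
class MetaController:
    """Toy 'System M' stand-in (all names hypothetical): stay in a
    learning mode while it keeps reducing prediction error, and
    switch when it plateaus, so neither mode captures the agent."""

    def __init__(self, window=3, min_gain=1e-3):
        self.window = window        # recent errors per comparison half
        self.min_gain = min_gain    # required improvement to stay put
        self.errors = {"observe": [], "explore": []}
        self.mode = "observe"

    def record(self, error):
        self.errors[self.mode].append(error)

    def improvement(self):
        errs = self.errors[self.mode]
        if len(errs) < 2 * self.window:
            return float("inf")     # too little data: keep current mode
        old = sum(errs[-2 * self.window:-self.window]) / self.window
        new = sum(errs[-self.window:]) / self.window
        return old - new            # positive means still learning

    def step(self):
        # Switch only on plateau; reset the new mode's history so a
        # stale plateau there doesn't bounce us straight back.
        if self.improvement() < self.min_gain:
            self.mode = "explore" if self.mode == "observe" else "observe"
            self.errors[self.mode] = []
        return self.mode
```

The plateau test is the crude part; a real reward signal would presumably weigh expected information gain against the cost of acting, but even this simple hysteresis avoids a one-way collapse into either mode.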
    • robot-wrangler an hour ago
      > Curious how you'd handle the reward signal for deciding when to switch between observation and active exploration without it collapsing into one mode.

      If you like biomimetic approaches to computer science, there's evidence that we want something besides neural networks. Whether we call such secondary systems emotions, hormones, or whatnot doesn't really matter much if the dynamics are useful. It seems at least possible that studying alignment-related topics is going to get us closer than any perspective that's focused on learning. Coincidentally, Quanta is covering some related topics today: https://www.quantamagazine.org/once-thought-to-support-neuro...

      • t-writescode 11 minutes ago
        Or possibly “in addition to”, yeah. I think this is where it needs to go. We can’t keep training HUGE neural networks every 3 months, throwing out all the work we did and the billions of dollars in gear and training just to use another model a few months later.

        That loop is unsustainable. Active learning needs to be discovered / created.

  • Frannky 8 minutes ago
    Can I run it?
  • tranchms 32 minutes ago
    We are rediscovering Cybernetics
  • aanet 6 hours ago
    by Emmanuel Dupoux, Yann LeCun, Jitendra Malik

    "The proposed framework integrates learning from observation (System A) and learning from active behavior (System B) while flexibly switching between these learning modes as a function of internally generated meta-control signals (System M). We discuss how this could be built by taking inspiration from how organisms adapt to real-world, dynamic environments across evolutionary and developmental timescales."

    • iFire 2 hours ago
      https://github.com/plastic-labs/honcho has the idea of one sided observations for RAG.
    • dasil003 5 hours ago
      If this were done well in a way that was productive for corporate work, I suspect the AI would engage in Machiavellian maneuvering and deception that would make typical sociopathic CEOs look like Mister Rogers in comparison. And I'm not sure our legal and social structures have the capacity to absorb that without very, very bad things happening.
      • gotwaz 38 minutes ago
        Not just CEOs: legal and social structures will also be run by AI. Chimps with 3-inch brains can't handle the level of complexity global systems are currently producing.
      • marsten 3 hours ago
        Agents playing the iterated prisoner's dilemma learn to cooperate. It's usually not a dominant strategy to be entirely sociopathic when other players are involved.
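That claim is easy to demonstrate with a toy simulation. A minimal sketch (standard Axelrod payoff values; the function names are made up) pitting tit-for-tat against an unconditional defector:

```python
# Toy iterated prisoner's dilemma with the classic payoff matrix:
# mutual cooperation 3/3, mutual defection 1/1,
# lone defector 5, lone cooperator 0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(history):
    # Cooperate first, then copy the opponent's last move.
    return "C" if not history else history[-1]

def always_defect(history):
    return "D"

def play(a, b, rounds=10):
    hist_a, hist_b = [], []  # each player sees the *opponent's* moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = a(hist_a), b(hist_b)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b
```

Over 10 rounds, two tit-for-tat players score 30 each, while exploiting a retaliator only nets the defector 14: the one-shot gain from defecting is swamped by the lost cooperation payoff, which is the point of the comment above.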
        • ehnto 3 hours ago
          You don't get that many iterations in the real world, though, and if one of your first iterations is particularly bad, you don't get any more.
          • cortesoft an hour ago
            But AI will train in the artificial world.
            • ehnto 32 minutes ago
              They still fail in the real world, where a single failure can be highly consequential. AI coding is lucky that its failure modes surface early and carry pretty low consequences. But I don't see what that looks like for an autonomous management agent with arbitrary metrics as goals.

              Anyone doing AI coding can tell you once an agent gets on the wrong path, it can get very confused and is usually irrecoverable. What does that look like in other contexts? Is restarting the process from scratch even possible in other types of work, or is that unique to only some kinds of work?

  • beernet 6 hours ago
    The paper's critique of the 'data wall' and language-centrism is spot on. We’ve been treating AI training like an assembly line where the machine is passive, and then we wonder why it fails in non-stationary environments. It’s the ultimate 'padded room' architecture: the model is isolated from reality and relies on human-curated data to even function.

    The proposed System M (meta-control) is a nice theoretical fix, but the implementation is where the wheels usually come off. Integrating observation (A) and action (B) sounds great until the agent starts hallucinating its own feedback loops. Unless we can move away from this 'outsourced learning', where humans have to fix every domain mismatch, we're just building increasingly expensive parrots. I’m skeptical that 'bilevel optimization' is enough to bridge that gap; we may just be adding another layer of complexity to a fundamentally limited transformer architecture.
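For context on the 'bilevel optimization' point: it means nesting one optimization inside another, where an outer loop chooses meta-parameters and an inner loop learns given them. A toy sketch (1-D ridge regression with invented data and names, and grid search standing in for proper hypergradients):

```python
def inner_fit(xs, ys, lam, lr=0.1, steps=200):
    """Inner level: fit weight w to minimize
    sum((w*x - y)^2) + lam * w^2 via gradient descent."""
    w = 0.0
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) + 2 * lam * w
        w -= lr * grad / len(xs)
    return w

def outer_search(train, val, lams):
    """Outer level: pick the lam whose inner solution has the
    lowest validation loss (the outer objective)."""
    xs = [x for x, _ in train]
    ys = [y for _, y in train]

    def val_loss(w):
        return sum((w * x - y) ** 2 for x, y in val)

    return min(lams, key=lambda lam: val_loss(inner_fit(xs, ys, lam)))
```

The skepticism in the comment maps onto a known pain point: the outer problem only sees the inner one through its converged solution, so the whole scheme is expensive and sensitive to how well the inner loop actually converges.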

  • jdkee 3 hours ago
    LeCun has been talking about his JEPA models for a while.

    https://ai.meta.com/blog/yann-lecun-ai-model-i-jepa/
