3 points by jatinkk 3 days ago | 1 comment
  • o1inventor 3 days ago
    From what I gather it boils down to this: just as new capabilities emerged when parameter counts were scaled up, so too, given a sufficient number of specialized skills, new and more general skills may emerge or be engineered.

    There are already examples of this in the wild: language and vision models not just performing scientific experiments, but coming up with new hypotheses on their own, designing experiments from scratch, laying out plans for how to carry those experiments out, instructing human helpers to run them, gathering data, validating or invalidating hypotheses, and so on.

    The open question is whether we can derive a process, come up with data, and train models such that they can 1. detect when a task or question falls outside the training distribution, and 2. come up with a process for exploring that new task or question distribution such that they (eventually) arrive at an acceptable answer, if not a good one. A rough sketch of what step 1 could look like is below.
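
    To make step 1 concrete, here is a minimal sketch of one common out-of-distribution baseline (maximum softmax probability thresholding, written in PyTorch). The threshold value and the "trigger exploration" hand-off are illustrative assumptions on my part, not a description of how any deployed system actually works:

      import torch
      import torch.nn.functional as F

      def is_out_of_distribution(logits: torch.Tensor, threshold: float = 0.5) -> bool:
          # Flag an input as out-of-distribution when the model's top softmax
          # probability falls below a confidence threshold
          # (the maximum-softmax-probability baseline).
          confidence = F.softmax(logits, dim=-1).max().item()
          return confidence < threshold

      # Nearly uniform logits -> low confidence -> treat the query as outside
      # the training distribution and hand it off to an exploration process
      # (tool use, hypothesis generation, asking for more data, etc.).
      logits = torch.tensor([1.2, 1.1, 0.9, 1.0])
      if is_out_of_distribution(logits):
          print("outside training distribution: trigger exploration process")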

    • jatinkk 3 days ago
      That is definitely the industry's hope—that quantity eventually becomes quality (emergence). But my concern comes from the history of the model itself. In psychology, Guilford’s "cube" of 150 specialized factors never emerged into a unified intelligence. It just remained a complex list of separate abilities. The "open question" you mention (how to handle tasks outside the training distribution) is exactly where I think the Guilford architecture hits a wall. If we build by adding specific modules, the system might never learn how to reason through the "unknown"—it just waits for a new module to be added.