3 points by tsurg_dot_com 5 hours ago | 1 comment
  • tsurg_dot_com 5 hours ago
    This recent paper from Fudan University is a highly relevant read given the current industry focus on RL for LLMs (e.g., GRPO). The authors investigate a very practical question: do the improvements from reinforcement fine-tuning (RFT) actually generalize beyond the training distribution when applied to multi-turn agents?