> We initially trained a 32B model using 3–4K math problems from the Numina dataset (provided by STILL-2), achieving a significant improvement in AIME24 accuracy from 16.7% to 43.3%. However, when we incorporated coding data generated from the APPs dataset into the training process, AIME24 accuracy dropped to 36.7%. We hypothesize that this decline is due to the distinct reasoning approaches required for math and coding tasks.
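To make the setup in the quote concrete, here is a minimal sketch of how one might assemble the two SFT mixtures being compared (math-only vs. math plus coding data). The file names, record fields, and mixing ratio are assumptions for illustration only; they are not taken from the quoted report.

```python
# Hypothetical sketch: building a math-only vs. math+coding SFT mixture.
# Paths, field names, and code_fraction are illustrative assumptions.
import json
import random

def load_examples(path):
    """Load a JSONL file of {"prompt": ..., "response": ...} records."""
    with open(path) as f:
        return [json.loads(line) for line in f]

def build_sft_mixture(math_path, code_path=None, code_fraction=0.5, seed=0):
    """Return a shuffled SFT training list, optionally mixing in coding data."""
    rng = random.Random(seed)
    examples = load_examples(math_path)          # e.g. ~3-4K Numina math problems
    if code_path is not None:
        code = load_examples(code_path)          # e.g. traces generated from APPs
        k = int(len(examples) * code_fraction)   # cap coding data relative to math
        examples += rng.sample(code, min(k, len(code)))
    rng.shuffle(examples)
    return examples

# Usage (hypothetical file names):
# train_math_only = build_sft_mixture("numina_math.jsonl")
# train_mixed     = build_sft_mixture("numina_math.jsonl", "apps_code.jsonl")
```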
This is interesting, especially for large models that were trained on much more data. I wonder if o1 is trained in a different way than GPT-4o. Do they rely only on synthetic data (plus some hand-crafted datasets)? But then how would o1 know as many facts as GPT-4o, which suggests those facts were in its training data?
Can someone with more understanding and knowledge weigh in on this?