Rethinking Generalization in Reasoning SFT: A Conditional Analysis on Optimization, Data, and Model Capability
Abstract
Supervised finetuning and reinforcement learning exhibit conditional cross-domain generalization in reasoning tasks, influenced by optimization dynamics, data quality, and model capability, with asymmetric outcomes between reasoning improvement and safety degradation.
A prevailing narrative in LLM post-training holds that supervised finetuning (SFT) memorizes while reinforcement learning (RL) generalizes. We revisit this claim for reasoning SFT with long chain-of-thought (CoT) supervision and find that cross-domain generalization is not absent but conditional, jointly shaped by optimization dynamics, training data, and base-model capability. Some reported failures are under-optimization artifacts: cross-domain performance first degrades before recovering and improving with extended training (a dip-and-recovery pattern), so short-training checkpoints can underestimate generalization. Data quality and structure both matter: low-quality solutions broadly hurt generalization, while verified long-CoT traces yield consistent cross-domain gains. Model capability is essential: stronger models internalize transferable procedural patterns (e.g., backtracking) even from a toy arithmetic game, while weaker ones imitate surface verbosity. This generalization is asymmetric, however: reasoning improves while safety degrades, reframing the question from whether reasoning SFT generalizes to under what conditions and at what cost.
Community
We have open-sourced all our models and datasets: https://huggingface.co/collections/jasonrqh/rethink-sft-generalization
Also find them on modelscope: https://modelscope.cn/collections/nebularaid/Rethink_SFT_generalization
The dip-and-recovery pattern in cross-domain generalization when training reasoning SFT with long CoT traces is the kind of result that upends the idea that SFT merely memorizes. It makes clear you can't judge transfer from a single checkpoint—the outcome comes from the trio of optimization sufficiency, data quality/structure, and base-model capability. Low-quality solutions hurt generalization, while verified long-CoT traces and stronger models tend to drive cross-domain gains, even if safety takes a hit. The arxivlens breakdown helped me parse the method and results, e.g. how data design and long-CoT traces interact with optimization: https://arxivlens.com/PaperView/Details/rethinking-generalization-in-reasoning-sft-a-conditional-analysis-on-optimization-data-and-model-capability-7534-48d2cb7c. Do you think this dip would appear under RL-based finetuning as well, or is it unique to reasoning SFT with long CoT?
Thanks for your interest in our paper. We did not conduct experiments on RL checkpoints, but I would say that things are different for SFT and RL. We hypothesize that the dip-and-recovery dynamics are likely caused by a distribution shift (from the pretraining distribution to the long-CoT data distribution). For RL, such a distribution shift is minimal since the data is on-policy.
Also, we note that the response-length dynamics are totally different for SFT and RL. SFT: length first increases sharply, then gradually decreases and stabilizes. RL: length continuously increases (sometimes after an initial decrease).
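The checkpoint-wise monitoring implied above (never judging transfer or length dynamics from a single checkpoint) can be sketched as a small loop over checkpoints. This is a hedged illustration, not code from the paper: the generation function, checkpoint indices, and length values below are hypothetical placeholders for whatever inference stack and training run you actually use.

```python
# Hedged sketch: track mean response length across training checkpoints,
# to observe trends like the SFT dynamics described above
# (sharp increase, then gradual decrease and stabilization).

def mean_response_length(responses):
    """Mean number of tokens over a batch of tokenized responses."""
    return sum(len(r) for r in responses) / len(responses)

def track_length_dynamics(checkpoints, generate_fn, prompts):
    """For each checkpoint, generate responses and record the mean length.

    `generate_fn(ckpt, prompts)` is a hypothetical stand-in for your
    inference code; it should return a list of token-id lists.
    """
    return [mean_response_length(generate_fn(ckpt, prompts))
            for ckpt in checkpoints]

# Toy usage with a fake generator that mimics the described SFT trend
# (values are illustrative, not real measurements):
fake_lengths = [50, 400, 300, 280, 280]
fake_gen = lambda ckpt, prompts: [[0] * fake_lengths[ckpt]] * len(prompts)
print(track_length_dynamics(range(5), fake_gen, ["p1", "p2"]))
# [50.0, 400.0, 300.0, 280.0, 280.0]
```

The same loop applies to cross-domain accuracy: swap the length metric for an accuracy metric and the dip-and-recovery pattern becomes visible only when multiple checkpoints are plotted.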