Breaking Training Bottlenecks: Effective and Stable Reinforcement Learning for Coding Models
Abstract
MicroCoder-GRPO enhances code generation through improved policy optimization with innovations in truncation masking, temperature selection, and loss function adjustments, achieving superior performance on LiveCodeBench v6.
Modern code generation models exhibit longer outputs, accelerated capability growth, and changed training dynamics, rendering traditional training methodologies, algorithms, and datasets ineffective at further improving their performance. To address these training bottlenecks, we propose MicroCoder-GRPO, an improved Group Relative Policy Optimization approach with three innovations: conditional truncation masking to preserve the learning potential of long outputs while maintaining training stability, diversity-determined temperature selection to maintain and encourage output diversity, and removal of the KL loss combined with high clipping ratios to facilitate solution diversity. MicroCoder-GRPO achieves up to a 17.6% relative improvement over strong baselines on LiveCodeBench v6, with more pronounced gains under extended-context evaluation. Additionally, we release MicroCoder-Dataset, a more challenging training corpus that achieves 3x larger performance gains than mainstream datasets on LiveCodeBench v6 within 300 training steps, and MicroCoder-Evaluator, a robust framework with approximately 25% improved evaluation accuracy and around 40% faster execution. Through comprehensive analysis across more than thirty controlled experiments, we distill 34 training insights spanning seven main aspects, demonstrating that properly trained models can achieve performance competitive with larger counterparts.
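The abstract's three modifications to GRPO can be illustrated with a minimal sketch. The function below is an assumption-laden illustration, not the paper's implementation: the clipping bounds, the masking rule (excluding tokens of truncated rollouts from the loss), and all names are hypothetical; the paper's "conditional" masking and temperature-selection logic are not specified here, so only the loss shape is shown.

```python
import numpy as np

def grpo_loss_sketch(logp_new, logp_old, advantages, trunc_mask,
                     clip_low=0.2, clip_high=0.28):
    """Illustrative GRPO-style token loss with the abstract's three tweaks:
    - truncation masking: tokens from truncated rollouts are zeroed out,
    - no KL penalty term (removed entirely),
    - an asymmetric, widened clip range (values here are assumed).
    Inputs are per-token arrays; advantages are group-normalized rewards."""
    ratio = np.exp(logp_new - logp_old)           # importance ratio pi_new/pi_old
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - clip_low, 1.0 + clip_high) * advantages
    per_token = -np.minimum(unclipped, clipped)   # pessimistic PPO-style objective
    per_token = per_token * trunc_mask            # drop masked (truncated) tokens
    denom = max(trunc_mask.sum(), 1.0)            # average over unmasked tokens only
    return per_token.sum() / denom
```

Without a KL term, the clip range is the only regularizer keeping the policy near its rollout distribution, which is why the high clipping ratio and the masking rule have to be tuned jointly for stability.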
Community
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Scaling Data Difficulty: Improving Coding Models via Reinforcement Learning on Fresh and Challenging Problems (2026)
- Learning to Generate Secure Code via Token-Level Rewards (2026)
- TAROT: Test-driven and Capability-adaptive Curriculum Reinforcement Fine-tuning for Code Generation with Large Language Models (2026)
- Training Large Reasoning Models Efficiently via Progressive Thought Encoding (2026)
- MMR-GRPO: Accelerating GRPO-Style Training through Diversity-Aware Reward Reweighting (2026)
- The Art of Efficient Reasoning: Data, Reward, and Optimization (2026)
- Clipping-Free Policy Optimization for Large Language Models (2026)
