ROCKET: Rapid Optimization via Calibration-guided Knapsack Enhanced Truncation for Efficient Model Compression
Abstract
ROCKET is a training-free model compression method that formulates layer-wise compression as a multi-choice knapsack problem and uses sparse matrix factorization for efficient weight sparsification without iterative optimization.
We present ROCKET, a training-free model compression method that achieves state-of-the-art performance compared with factorization, structured-sparsification, and dynamic compression baselines. Operating under a global compression budget, ROCKET comprises two key innovations. First, it formulates layer-wise compression allocation as a multi-choice knapsack problem, selecting the optimal compression level for each layer to minimize total reconstruction error while adhering to a target model size. Second, it introduces a single-step sparse matrix factorization inspired by dictionary learning: using only a small calibration set, it sparsifies weight coefficients based on activation-weight sensitivity and then updates the dictionary in closed form via least squares, bypassing iterative optimization, sparse coding, and backpropagation entirely. ROCKET consistently outperforms existing compression approaches across different model architectures at 20-50% compression rates. Notably, it retains over 90% of the original model's performance at 30% compression without any fine-tuning. Moreover, a light fine-tuning phase substantially enhances recovery: for instance, compressing Qwen3-14B to an 8B-parameter model and healing it with just 30 million tokens yields performance nearly on par with the original Qwen3-8B. The code for ROCKET is at github.com/mts-ai/ROCKET/tree/main.
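The abstract describes the allocation step as a multi-choice knapsack: each layer offers several (size, reconstruction error) options, and exactly one option must be picked per layer while the total size stays within a global budget. The sketch below is a toy dynamic program over integer sizes, assuming precomputed per-layer error estimates; the function name and the size granularity are illustrative assumptions, not the paper's implementation.

```python
def allocate_compression(layer_options, budget):
    """Toy multi-choice knapsack sketch (illustrative, not the paper's exact solver).

    layer_options : list over layers; each entry is a list of
                    (size, reconstruction_error) choices for that layer.
    budget        : total size budget (e.g. parameter count) for the compressed model.

    Picks exactly one choice per layer, minimizing summed reconstruction error
    subject to the total size fitting the budget.  Sizes are assumed to be
    integers rounded to a common granularity.
    """
    # dp maps total size used -> (best total error at that size, per-layer picks)
    dp = {0: (0.0, [])}
    for options in layer_options:
        new_dp = {}
        for size_used, (err, picks) in dp.items():
            for idx, (sz, e) in enumerate(options):
                s = size_used + sz
                if s > budget:
                    continue
                cand = (err + e, picks + [idx])
                if s not in new_dp or cand[0] < new_dp[s][0]:
                    new_dp[s] = cand
        dp = new_dp
    if not dp:
        raise ValueError("budget too small for any feasible allocation")
    best_err, best_picks = min(dp.values(), key=lambda t: t[0])
    return best_picks, best_err  # per-layer choice indices, total error
```

The single-step factorization can similarly be read as: factor each weight matrix, zero out the least activation-sensitive coefficients using a small calibration batch, and refit the dictionary in closed form by least squares. The SVD-style initialization and the RMS-activation sensitivity proxy below are assumptions for illustration only, as is the function name.

```python
import numpy as np

def rocket_sparse_factorize(W, X, keep_ratio=0.7):
    """Hypothetical single-step sparse factorization sketch, W ~= D @ C.

    W : (d_out, d_in) weight matrix of one linear layer.
    X : (n_samples, d_in) calibration activations feeding that layer.
    keep_ratio : fraction of coefficient entries to keep (1 - sparsity).
    """
    # 1) Initialize a factorization W ~= D @ C.  A truncated SVD is used here
    #    purely as an illustrative starting point.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    D = U * s              # dictionary, shape (d_out, r)
    C = Vt                 # coefficients, shape (r, d_in)

    # 2) Activation-weight sensitivity proxy: scale each coefficient by the
    #    RMS magnitude of its input channel on the calibration set.
    act_rms = np.sqrt((X ** 2).mean(axis=0))          # (d_in,)
    sensitivity = np.abs(C) * act_rms[None, :]        # (r, d_in)

    # 3) One-shot sparsification: keep only the most sensitive entries of C.
    k = max(1, int(keep_ratio * C.size))
    thresh = np.partition(sensitivity.ravel(), -k)[-k]
    C_sparse = np.where(sensitivity >= thresh, C, 0.0)

    # 4) Closed-form dictionary update via least squares on calibration outputs:
    #    minimize ||X W^T - X C_sparse^T D^T||_F over D.
    A = X @ C_sparse.T                                # (n, r)
    B = X @ W.T                                       # (n, d_out)
    D_new = np.linalg.lstsq(A, B, rcond=None)[0].T    # (d_out, r)

    return D_new, C_sparse
```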
Community
ROCKET isn't just another compression method. It is one of the first methods to shrink massive AI models down to compact sizes without sacrificing performance, often matching or even outperforming vanilla models of the same size trained from scratch.
This is an interesting approach: recombination of learned vectors, as simple as LoRA, yet surprisingly effective.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API:
- Zero Sum SVD: Balancing Loss Sensitivity for Low Rank LLM Compression (2026)
- SAES-SVD: Self-Adaptive Suppression of Accumulated and Local Errors for SVD-based LLM Compression (2026)
- SkipCat: Rank-Maximized Low-Rank Compression of Large Language Models via Shared Projection and Block Skipping (2025)
- Don't be so Stief! Learning KV Cache low-rank approximation over the Stiefel manifold (2026)
- OPTIMA: Optimal One-shot Pruning for LLMs via Quadratic Programming Reconstruction (2025)
- SALAAD: Sparse And Low-Rank Adaptation via ADMM for Large Language Model Inference (2026)
- Preserve-Then-Quantize: Balancing Rank Budgets for Quantization Error Reconstruction in LLMs (2026)