Please consider abliterating ServiceNow-AI/Apriel-1.6-15b-Thinker
Summary
Apriel-1.6-15B-Thinker is an updated multimodal reasoning model in ServiceNow’s Apriel SLM series, building on Apriel-1.5-15B-Thinker. With significantly improved text and image reasoning capabilities, Apriel-1.6 achieves competitive performance against models up to 10x its size. Like its predecessor, it benefits from extensive continual pre-training across both text and image domains. We additionally perform post-training that focuses on Supervised Finetuning (SFT) and Reinforcement Learning (RL). Apriel-1.6 obtains frontier performance without sacrificing reasoning token efficiency. The model improves or maintains task performance when compared with Apriel-1.5-15B-Thinker, while reducing reasoning token usage by more than 30%.
Highlights
Achieves a score of 57 on the Artificial Analysis index, outperforming models like Gemini 2.5 Flash, Claude Haiku 4.5 and GPT OSS 20b. It scores on par with Qwen3 235B A22B while being significantly more efficient.
Reduces reasoning token usage by more than 30%, delivering significantly better efficiency than Apriel-1.5-15B-Thinker.
Scores 69 on Tau2 Bench Telecom and 69 on IFBench, which are key benchmarks for the enterprise domain.
At 15B parameters, the model fits on a single GPU, making it highly memory-efficient.
Based on community feedback on Apriel-1.5-15B-Thinker, we simplified the chat template by removing redundant tags and introduced four special tokens to the tokenizer (among them [BEGIN FINAL RESPONSE] and <|end|>) for easier output parsing; a loading and parsing sketch follows this list.
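For illustration, here is a minimal, hypothetical sketch of loading the checkpoint on a single GPU and extracting the final answer from between the [BEGIN FINAL RESPONSE] and <|end|> markers. It assumes the standard transformers text-generation path works for this checkpoint; the prompt and helper function are made up for the example, not taken from the model card.

```python
# Hypothetical sketch: load the 15B checkpoint on one GPU and parse the output.
# Assumes the standard transformers text-generation path applies to this model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "ServiceNow-AI/Apriel-1.6-15b-Thinker"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # ~30 GB in bf16, so it fits on a single large GPU
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain KV caching in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=2048)
generated = tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=False)

def extract_final_response(text: str) -> str:
    """Everything before [BEGIN FINAL RESPONSE] is reasoning; the answer follows it."""
    answer = text.split("[BEGIN FINAL RESPONSE]", 1)[-1]
    return answer.split("<|end|>", 1)[0].strip()

print(extract_final_response(generated))
```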
Please see our blog post for more details.
We will look into whether abliteration is feasible for this model.
We tried to perform ablation, which was quite challenging. We were able to remove refusals for most instructions, but some are still not successfully ablated.
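For context, abliteration is usually done by estimating a "refusal direction" (the difference of mean residual-stream activations on refusal-triggering vs. harmless prompts) and projecting that direction out of the weight matrices that write into the residual stream. Below is a minimal sketch of that recipe with dummy tensors standing in for activations captured from the model; every name in it is illustrative rather than taken from a specific library.

```python
# Minimal sketch of the common abliteration recipe, using dummy tensors in place
# of hidden states captured from the model on two prompt sets.
import torch

hidden_size = 4096

# In a real run these would be mean residual-stream activations at a chosen layer,
# gathered with forward hooks over refusal-triggering and harmless prompts.
mean_harmful = torch.randn(hidden_size)
mean_harmless = torch.randn(hidden_size)

# The "refusal direction": the normalized difference of the two means.
refusal_dir = mean_harmful - mean_harmless
refusal_dir = refusal_dir / refusal_dir.norm()

def orthogonalize(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Remove the component of the layer's output that lies along `direction`.

    For a weight W whose output is added to the residual stream, the ablated
    weight is W - d d^T W, so the layer can no longer write along d.
    """
    d = direction / direction.norm()
    return weight - torch.outer(d, d) @ weight

# Example: ablate one dummy projection matrix. In practice this is applied to the
# embedding, attention-output and MLP-output matrices of every layer.
W_out = torch.randn(hidden_size, hidden_size)
W_ablated = orthogonalize(W_out, refusal_dir)

# Sanity check: the ablated layer's output has (numerically) no component along
# the refusal direction.
x = torch.randn(hidden_size)
print((W_ablated @ x).dot(refusal_dir).abs())  # ~0
```

Whether this is enough for a given model depends on how concentrated its refusal behavior is in a single direction, which may explain why some instructions resist ablation.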
I am looking forward to it.