
πŸš€ DMax: Aggressive Parallel Decoding for dLLMs
Zigeng Chen, Gongfan Fang, Xinyin Ma, Ruonan Yu, Xinchao Wang
xML Lab, National University of Singapore

πŸ’ͺ Highlights

  • Aggressive Decoding Parallelism: achieves 6.0 tokens per forward pass (TPF) on math and reasoning tasks and 6.6 TPF on code tasks while preserving accuracy.
  • Self-Revising dLLM: extends a pretrained masked diffusion language model (MDLM) into a UDLM with an intrinsic ability to revise its own erroneous predictions during decoding.
  • Soft Parallel Decoding: interpolates between mask and token embeddings to propagate confidence priors from previous decoding steps.

Superior parallelism-accuracy trade-off: increased TPF with maintained accuracy.

πŸ’» Model and Datasets

| Model | Description | Source Model | Link |
|---|---|---|---|
| πŸ€– DMax-Math-16B | Highly parallel dLLM for math and reasoning. | LLaDA-2.0-mini | HF |
| πŸ€– DMax-Coder-16B | Highly parallel dLLM for code generation. | LLaDA-2.0-mini | HF |

| Dataset | Description | Link |
|---|---|---|
| πŸ“Š DMax-Math-Training-Data | Math trajectories generated by LLaDA-2.0-mini | HF |
| πŸ“Š DMax-Code-Training-Data | Code trajectories generated by LLaDA-2.0-mini | HF |

πŸš€ Quick Start

````python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer (custom code is required for the dLLM decoder).
model = AutoModelForCausalLM.from_pretrained(
    "Zigeng/DMax-Coder-16B", trust_remote_code=True, device_map="cuda:0"
)
model = model.to(torch.bfloat16)
model.eval()
tokenizer = AutoTokenizer.from_pretrained("Zigeng/DMax-Coder-16B", trust_remote_code=True)

prompt = (
    "Write a python function to find the first repeated character in a given string."
    "\n\nPlease enclose your code within delimiters as follows:\n"
    "```python\n# YOUR CODE HERE\n```\n\n"
)

input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}],
    add_generation_prompt=True,
    tokenize=True,
    return_tensors="pt",
)

# Soft parallel decoding: generate up to 2048 tokens in blocks of 32,
# committing positions whose confidence clears the threshold.
nfe, generated_tokens = model.generate_spd(
    inputs=input_ids,
    gen_length=2048,
    block_length=32,
    threshold=0.65,
)

generated_answer = tokenizer.decode(
    generated_tokens[0],
    skip_special_tokens=True,
)

print(generated_answer)
print("nfe:", nfe, "token length:", len(generated_tokens[0]))
````
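The `threshold` argument above controls how aggressively positions are committed in each forward pass, and the achieved TPF is simply the generated token count divided by `nfe`. A minimal, hypothetical illustration of confidence-thresholded acceptance follows; the function name, the greedy fallback, and the selection rule are assumptions for clarity, not the actual `generate_spd` internals.

```python
def accept_by_threshold(confidences, threshold=0.65):
    """Return indices of positions confident enough to commit in one
    forward pass. At least the single most confident position is always
    accepted so that decoding makes progress every step."""
    accepted = [i for i, c in enumerate(confidences) if c >= threshold]
    if not accepted:
        accepted = [max(range(len(confidences)), key=lambda i: confidences[i])]
    return accepted

print(accept_by_threshold([0.9, 0.4, 0.7, 0.2]))  # [0, 2]
print(accept_by_threshold([0.3, 0.5, 0.1]))       # [1]
```

Lowering the threshold commits more tokens per step (higher TPF, fewer forward passes) at the cost of accepting less confident predictions; raising it approaches one-token-per-step decoding.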

πŸ“– Experimental Results

(Figure: parallelism-accuracy trade-off.)
