dTRPO: Trajectory Reduction in Policy Optimization
of Diffusion Large Language Models

Meta AI    KAUST    * Work done at Meta    † Joint last author

Figure: ARM vs dLLM Generation

Left: ARMs generate tokens sequentially via causal conditioning, while dLLMs unmask multiple tokens in parallel through a multi-step diffusion process. Right: dTRPO estimates trajectory probability ratios by sampling one timestep per block, collecting only the newly unmasked token probabilities. All schedule-dependent coefficients cancel in the ratio, enabling efficient offline policy optimization with a single forward pass.
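
To make the left panel concrete, here is a toy Python sketch (our own illustration, not the paper's code) contrasting token-by-token autoregressive decoding with block-wise parallel unmasking. `toy_forward`, the vocabulary, and all sizes are made-up placeholders; a real ARM or dLLM would use a neural network in place of the random predictor.

```python
import random

VOCAB = ["the", "cat", "sat", "on", "mat"]
MASK = "<mask>"

def toy_forward(seq):
    """Hypothetical stand-in for one model forward pass: predicts every position."""
    return [random.choice(VOCAB) for _ in seq]

def arm_generate(length):
    # ARM: `length` forward passes, one new token per pass, each conditioned
    # on the growing prefix (causal conditioning).
    seq = []
    for _ in range(length):
        preds = toy_forward(seq + [MASK])
        seq.append(preds[-1])
    return seq

def dllm_generate(length, block_size=4, steps_per_block=2):
    # dLLM: a few diffusion steps per block; each step is one forward pass
    # that unmasks several positions of the block in parallel.
    seq = [MASK] * length
    for start in range(0, length, block_size):
        block = list(range(start, min(start + block_size, length)))
        for step in range(steps_per_block):
            preds = toy_forward(seq)                         # one forward pass
            masked = [i for i in block if seq[i] == MASK]
            k = -(-len(masked) // (steps_per_block - step))  # ceil: spread over remaining steps
            for i in random.sample(masked, k):
                seq[i] = preds[i]                            # reveal several tokens at once
    return seq

print("ARM :", " ".join(arm_generate(8)))
print("dLLM:", " ".join(dllm_generate(8)))
```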

Abstract

Diffusion Large Language Models (dLLMs) introduce a new paradigm for language generation, which in turn presents new challenges for aligning them with human preferences. In this work, we improve policy optimization for dLLMs by reducing the cost of computing trajectory probabilities, thereby enabling scaled-up offline policy training. We prove that (i) under reference-policy regularization, the probability ratio of the newly unmasked tokens is an unbiased estimate of that of the intermediate diffusion states, and (ii) the probability of the full trajectory can be estimated effectively with a single forward pass over a re-masked final state. By integrating these two trajectory reduction strategies into a policy optimization objective, we propose dTRPO (dLLM Trajectory Reduction Policy Optimization). We evaluate dTRPO on 7B dLLMs across instruction-following and reasoning benchmarks. Results show that it substantially improves the core performance of state-of-the-art dLLMs, with gains of up to 9.6% on STEM tasks, up to 4.3% on coding tasks, and up to 3.0% on instruction-following tasks. Moreover, dTRPO trains efficiently thanks to its offline, single-forward-pass nature, and its higher-quality outputs also improve generation efficiency.

Method

State Reduction (Theorem 1)

The trajectory probability can be estimated without bias using one sampled timestep per block:

$$\log \pi_\theta(\pmb{\tau}) = \sum_{s=1}^{N_B} \mathbb{E}_{t \sim U(1, T_B)} \; T_B \log \pi_\theta(\pmb{\tau}_{s,t} \mid \pmb{\tau}_{s,t-1}, t)$$

This reduces computation from $T$ timesteps to $N_B$ blocks, enabling a single forward pass via block attention.
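
A minimal numerical sketch of this estimator (synthetic per-step log-probabilities stand in for $\pi_\theta$; this is not the released implementation): averaging the one-timestep-per-block estimate over many draws recovers the exact trajectory log-probability, illustrating the unbiasedness claimed in Theorem 1.

```python
import numpy as np

rng = np.random.default_rng(0)
N_B, T_B = 8, 16                      # number of blocks, timesteps per block
# Stand-in for log pi_theta(tau_{s,t} | tau_{s,t-1}, t): any fixed table works.
step_logprobs = rng.uniform(-2.0, 0.0, size=(N_B, T_B))

exact = step_logprobs.sum()           # full-trajectory log-probability

def one_sample_estimate():
    # One uniformly sampled timestep per block, scaled by T_B.
    t = rng.integers(0, T_B, size=N_B)
    return (T_B * step_logprobs[np.arange(N_B), t]).sum()

estimates = np.array([one_sample_estimate() for _ in range(100_000)])
print(f"exact sum      : {exact:.4f}")
print(f"estimator mean : {estimates.mean():.4f}")   # ~ exact (unbiased)
```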

Ratio Reduction (Theorem 2)

Since policy optimization only requires ratios between the current policy and a reference policy, all schedule-dependent coefficients cancel in the ratio:

$$\frac{\pi_\theta(\pmb{\tau}_{t-1} \mid \pmb{\tau}_t, t)}{\pi_{\text{ref}}(\pmb{\tau}_{t-1} \mid \pmb{\tau}_t, t)} = \prod_{i \in \mathcal{I}_t} \frac{\mu_\theta(\tau_{t-1}^{(i)} \mid \pmb{\tau}_t)}{\mu_{\text{ref}}(\tau_{t-1}^{(i)} \mid \pmb{\tau}_t)}$$

where $\mathcal{I}_t$ is the set of newly unmasked coordinates at step $t$. The ratio depends only on categorical probabilities at newly unmasked positions, independent of the masking schedule.
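
A small PyTorch-style sketch of this reduction, under assumed tensor shapes and names (not the paper's code): the per-step log-ratio is just the difference of token log-probabilities, gathered at the newly unmasked positions and summed.

```python
import torch
import torch.nn.functional as F

def step_log_ratio(logits_theta, logits_ref, tokens, newly_unmasked):
    """
    logits_theta, logits_ref : [seq_len, vocab] logits from the two policies
                               for the same intermediate state tau_t.
    tokens                   : [seq_len] token ids of tau_{t-1}.
    newly_unmasked           : [seq_len] bool mask of positions revealed at step t.
    """
    logp_theta = F.log_softmax(logits_theta, dim=-1)
    logp_ref = F.log_softmax(logits_ref, dim=-1)
    idx = tokens.unsqueeze(-1)
    per_tok = (logp_theta.gather(-1, idx) - logp_ref.gather(-1, idx)).squeeze(-1)
    # Schedule-dependent coefficients cancel, so only these positions contribute.
    return per_tok[newly_unmasked].sum()

# Tiny usage example with random tensors.
L, V = 6, 10
lt, lr = torch.randn(L, V), torch.randn(L, V)
toks = torch.randint(0, V, (L,))
mask = torch.tensor([True, False, True, False, False, False])
print(step_log_ratio(lt, lr, toks, mask))
```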

dTRPO Objective

Combining both reductions with DPO, the final objective is:

$$\mathcal{L}_{\text{dTRPO}}(\theta) = -\mathbb{E}_{(\pmb{y}^+, \pmb{y}^-) \sim \mathcal{D}} \log \sigma\!\left(\lambda T_B \left[S(\pmb{y}^+; \theta, \text{ref}) - S(\pmb{y}^-; \theta, \text{ref})\right]\right)$$

where $S(\pmb{y}; \theta, \text{ref})$ sums log probability ratios over newly unmasked tokens across all blocks. This enables offline, single-forward-pass training comparable to DPO for ARMs.
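
Given per-response scores $S$ computed as above, the objective itself is a standard DPO-style logistic loss with the $T_B$ scaling from Theorem 1. A hedged sketch (the values of $\lambda$, $T_B$, and the toy $S$ inputs below are placeholders):

```python
import torch
import torch.nn.functional as F

def dtrpo_loss(S_pos, S_neg, lam=0.1, T_B=16):
    """
    S_pos, S_neg : [batch] summed log-ratios (over newly unmasked tokens across
                   all blocks) for the preferred / rejected responses.
    Returns the DPO-style preference loss with the T_B scaling from Theorem 1.
    """
    margin = lam * T_B * (S_pos - S_neg)
    return -F.logsigmoid(margin).mean()

# Usage with placeholder statistics.
S_pos = torch.tensor([0.8, 1.2, 0.3])
S_neg = torch.tensor([0.1, 0.9, -0.2])
print(dtrpo_loss(S_pos, S_neg))
```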

Results

Performance of dLLMs under zero-shot evaluation. dTRPO achieves the best overall performance among dLLMs and closes the gap to strong ARMs.

| Model | GPQA (CoT, diamond) | GSM8K (CoT) | MATH | LCBv6 | MBPP+ (extra) | HEval+ (extra) | IFEval (prompt) | ArenaHard (V2.0) | MTBench |
|---|---|---|---|---|---|---|---|---|---|
| dLLM from Scratch | | | | | | | | | |
| LLaDA Instruct | 19.19 | 78.47 | 42.48 | 6.07 | 38.1 | 34.1 | 53.23 | - | - |
| LLaDA 1.5 | 19.19 | 79.45 | 43.64 | 6.54 | 37.0 | 39.0 | 59.52 | - | - |
| Qwen2.5 7B Instruct vs. dLLMs from Qwen2.5 7B Instruct | | | | | | | | | |
| Qwen2.5 Instruct | 36.36 | 87.87 | 73.06 | 24.42 | 67.5 | 74.4 | 71.38 | 10.43 | 8.08 |
| Dream Instruct | 28.79 | 75.36 | 50.22 | 12.61 | 54.5 | 53.0 | 50.65 | 6.79 | 3.88 |
| Fast-dLLM-v2 | 20.71 | 82.34 | 60.26 | 11.56 | 51.6 | 59.1 | 62.11 | 1.26 | 3.17 |
| Fast-dLLM-v2 + ELBO | 12.63 | 79.98 | 58.48 | 11.56 | 52.4 | 59.1 | 51.02 | 0.17 | 1.01 |
| Fast-dLLM-v2 + VRPO | 24.24 | 83.17 | 63.32 | 12.89 | 50.5 | 57.3 | 65.06 | 7.32 | 6.37 |
| Fast-dLLM-v2 + DPO w/ MF | 23.74 | 85.37 | 63.20 | 11.00 | 46.3 | 51.8 | 65.62 | 6.02 | 6.48 |
| Fast-dLLM-v2 + dTRPO (Ours) | 30.30 | 85.97 | 64.30 | 15.17 | 51.6 | 63.4 | 65.06 | 7.41 | 6.53 |

Qualitative Comparison

Side-by-side comparison of generation outputs on Arena-Hard V2.0 from Qwen 2.5, Fast-dLLM-v2, and our dTRPO model.

Prompt (Arena-Hard V2.0): A ball is dropped from a height of 100 meters. It bounces back to 50% of its previous height after each bounce. How far does the ball travel before it comes to rest?
Qwen 2.5 7B Instruct
Initial Drop: 100 meters (down). First Bounce: up 50m + down 50m = 100m. Second Bounce: up 25m + down 25m = 50m. ... The bounce distances form a geometric series: S = 2 × 50 / (1 - 0.5) = 200 meters Total distance: D = 100 + 200 = 300 meters
Fast-dLLM-v2
Initial Drop: 100 meters. First Bounce: 50 + 50 = 100 meters. Second Bounce: 25 + 25 = 50 meters. Third Bounce: 12.5 + 12.5 = 25 meters. Fourth Bounce: 6.25 + 6.25 = 12.5 meters. Geometric series with a = 100, r = 0.5: S = 100 / (1 - 0.5) = 200 meters Total distance: 200 meters (forgot to add initial drop separately)
Fast-dLLM-v2 + dTRPO (Ours)
Initial Drop: 100 meters. First Bounce: 50 + 50 = 100 meters. Second Bounce: 25 + 25 = 50 meters. Bounce distances form geometric series: a = 100, r = 0.5 S = 100 / (1 - 0.5) = 200 meters Total distance: = Initial drop + bounce series = 100 + 200 = 300 meters