ArcFlow: Unleashing 2-Step Text-to-Image Generation via High-Precision Non-Linear Flow Distillation
Abstract
ArcFlow is a few-step distillation framework that uses non-linear flow trajectories to approximate teacher diffusion models, achieving fast inference with minimal quality loss through lightweight adapter training.
Diffusion models achieve remarkable generation quality, but they incur significant inference cost because they rely on many sequential denoising steps, motivating recent efforts to distill this inference process into a few-step regime. However, existing distillation methods typically approximate the teacher trajectory with linear shortcuts, which struggle to match its constantly changing tangent directions as velocities evolve across timesteps, leading to quality degradation. To address this limitation, we propose ArcFlow, a few-step distillation framework that explicitly employs non-linear flow trajectories to approximate pre-trained teacher trajectories. Concretely, ArcFlow parameterizes the velocity field underlying the inference trajectory as a mixture of continuous momentum processes. This enables ArcFlow to capture velocity evolution and extrapolate coherent velocities, forming a continuous non-linear trajectory within each denoising step. Importantly, this parameterization admits an analytical integration of the non-linear trajectory, which circumvents numerical discretization errors and yields a high-precision approximation of the teacher trajectory. To train this parameterization as a few-step generator, we implement ArcFlow via trajectory distillation on pre-trained teacher models using lightweight adapters, ensuring fast, stable convergence while preserving generative diversity and quality. Built on large-scale models (Qwen-Image-20B and FLUX.1-dev), ArcFlow fine-tunes fewer than 5% of the original parameters and achieves a 40x speedup at 2 NFEs over the multi-step teachers without significant quality degradation. Benchmark experiments demonstrate the effectiveness of ArcFlow both qualitatively and quantitatively.
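The abstract describes the key mechanism only in prose, so a small numerical sketch may help. The snippet below illustrates one way a mixture-of-momentum velocity field can be integrated in closed form within a single denoising step; the exponential-decay form, the weights `w_k`, and the rates `lambda_k` are illustrative assumptions for this sketch, not ArcFlow's released parameterization.

```python
import numpy as np

def analytic_momentum_step(x0, v, weights, decays, dt):
    """One denoising step under an assumed mixture-of-momentum velocity field.

    Hypothetical model: within the step, the velocity evolves as
        v(t) = sum_k w_k * exp(-lambda_k * (t - t0)) * v_k,
    where the v_k are per-component velocities predicted at the step's start.
    The displacement then integrates exactly:
        x(t0 + dt) = x0 + sum_k w_k * v_k * (1 - exp(-lambda_k * dt)) / lambda_k,
    so no numerical ODE solver (and hence no discretization error) is needed.
    """
    # x0: (D,) current state; v: (K, D) component velocities;
    # weights: (K,) mixture weights w_k; decays: (K,) positive rates lambda_k.
    gains = weights * (1.0 - np.exp(-decays * dt)) / decays  # exact integral of each momentum term
    return x0 + gains @ v

# Toy usage: three momentum components acting on a 4-dimensional state.
x1 = analytic_momentum_step(
    x0=np.zeros(4),
    v=np.random.randn(3, 4),
    weights=np.array([0.5, 0.3, 0.2]),
    decays=np.array([1.0, 2.0, 4.0]),
    dt=0.5,
)
```

Because the update is a closed-form function of the predicted components, the per-step cost stays at one network evaluation regardless of how curved the resulting trajectory is, which is presumably what lets a non-linear path fit the teacher better than a straight shortcut at the same NFE budget.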
Community
In this work, we revisit few-step distillation from a geometric perspective. Based on the observation that teacher trajectories exhibit inherently non-linear dynamics, ArcFlow introduces a momentum-based velocity parameterization with an analytic solver, enabling more faithful alignment with the teacher trajectory under very few NFEs.
Across Qwen-Image-20B and FLUX.1-dev, we find that this formulation yields stable 2-step generation with lightweight LoRA tuning while maintaining strong alignment with the teacher distribution. We hope this perspective offers a useful direction for improving efficiency without sacrificing trajectory fidelity.
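To make the 2-NFE claim concrete, here is a minimal sketch of how such a sampler could be wired up, consistent with the closed-form step above. The `model` call signature (returning per-component velocities `v`, weights `w`, and decay rates `lam`), the noise-at-t=0 convention, and the two equal segments are all assumptions for illustration; the actual ArcFlow setup wraps the Qwen-Image-20B or FLUX.1-dev backbone with LoRA adapters.

```python
import torch

@torch.no_grad()
def sample_two_step(model, cond, shape, device="cpu"):
    """Hypothetical 2-NFE sampler: one network call per segment, each step
    integrated in closed form under the assumed mixture-of-momentum field."""
    x = torch.randn(shape, device=device)       # start from pure noise (t = 0 in this sketch)
    for t0, t1 in [(0.0, 0.5), (0.5, 1.0)]:     # two segments -> exactly 2 NFEs
        v, w, lam = model(x, t0, cond)          # assumed outputs: (K, *shape), (K,), (K,)
        dt = t1 - t0
        gains = w * (1.0 - torch.exp(-lam * dt)) / lam  # closed-form integral weights
        x = x + torch.einsum("k,k...->...", gains, v)   # exact non-linear update per segment
    return x
```

The point of the design is that all trajectory curvature lives inside each closed-form step, so expressiveness is added without adding network evaluations.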
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Transition Matching Distillation for Fast Video Generation (2026)
- Few-Step Distillation for Text-to-Image Generation: A Practical Guide (2025)
- GPD: Guided Progressive Distillation for Fast and High-Quality Video Generation (2026)
- FlowCast: Trajectory Forecasting for Scalable Zero-Cost Speculative Flow Matching (2026)
- Self-Evaluation Unlocks Any-Step Text-to-Image Generation (2025)
- Parallel Diffusion Solver via Residual Dirichlet Policy Optimization (2025)
- Look-Ahead and Look-Back Flows: Training-Free Image Generation with Trajectory Smoothing (2026)