Training-free image editing has attracted increasing attention for its efficiency and independence from training data. However, existing approaches predominantly rely on inversion-reconstruction trajectories, which impose an inherent trade-off: longer trajectories accumulate errors and compromise fidelity, while shorter ones fail to ensure sufficient alignment with the edit prompt. Previous attempts to address this issue typically employ backbone-specific feature manipulations, limiting their general applicability. To address these challenges, we propose FlowBypass, a novel analytical framework grounded in Rectified Flow that constructs a bypass directly connecting the inversion and reconstruction trajectories, thereby mitigating error accumulation without relying on feature manipulations. We provide a formal derivation of the two trajectories, from which we obtain an approximate bypass formulation and its numerical solution, enabling seamless trajectory transitions. Extensive experiments demonstrate that FlowBypass consistently outperforms state-of-the-art image editing methods, achieving stronger prompt alignment while preserving high-fidelity details in irrelevant regions.
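To make the trajectory picture concrete, the following is a minimal numerical sketch of Euler-discretized rectified-flow inversion (image to noise) and reconstruction (noise to image), together with a hypothetical `bypass_transfer` that switches from the inversion trajectory onto an edit-conditioned reconstruction trajectory partway through, rather than inverting all the way to noise. The toy velocity fields, step counts, and the bypass step itself are illustrative assumptions for intuition only, not the paper's actual formulation.

```python
import numpy as np

def euler_invert(x0, v, n_steps=50):
    """Integrate the rectified-flow ODE forward (image -> noise),
    returning the full inversion trajectory [x_0, ..., x_1]."""
    dt = 1.0 / n_steps
    x = x0.copy()
    traj = [x.copy()]
    for i in range(n_steps):
        t = i * dt
        x = x + dt * v(x, t)       # Euler step along the velocity field
        traj.append(x.copy())
    return traj

def euler_reconstruct(x1, v, n_steps=50):
    """Integrate the ODE backward (noise -> image)."""
    dt = 1.0 / n_steps
    x = x1.copy()
    for i in range(n_steps, 0, -1):
        t = i * dt
        x = x - dt * v(x, t)       # reverse Euler step
    return x

def bypass_transfer(inv_traj, v_edit, t_switch, n_steps=50):
    """Hypothetical bypass: leave the inversion trajectory at t_switch
    and integrate backward under an edit-conditioned velocity v_edit.
    Shortening the traversed trajectory like this is what reduces
    accumulated discretization error; the paper's actual bypass adds an
    analytically derived correction, which is omitted here."""
    k = int(t_switch * n_steps)
    x = inv_traj[k].copy()
    dt = 1.0 / n_steps
    for i in range(k, 0, -1):
        t = i * dt
        x = x - dt * v_edit(x, t)
    return x

# Toy demo: with a constant velocity field, inversion followed by
# reconstruction recovers the input exactly under Euler discretization,
# and the mid-trajectory bypass lands on the same reconstruction.
v_const = lambda x, t: np.ones_like(x)
x0 = np.zeros(4)
traj = euler_invert(x0, v_const)
x_rec = euler_reconstruct(traj[-1], v_const)
x_byp = bypass_transfer(traj, v_const, t_switch=0.5)
```

With a nontrivial, prompt-dependent `v_edit`, the bypassed result differs from the plain reconstruction in edited regions while the shortened trajectory limits error accumulation elsewhere.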