Personalized text-to-image generation aims to seamlessly integrate specific identities into textual descriptions. However, existing training-free methods often rely on rigid visual feature injection, creating a conflict between identity fidelity and textual adaptability. To address this, we propose FlexID, a novel training-free framework built on intent-aware modulation. FlexID orthogonally decouples identity along two dimensions, handled by complementary modules: a Semantic Identity Projector (SIP) injects high-level identity priors into the language space, while a Visual Feature Anchor (VFA) ensures structural fidelity in the latent space. Crucially, we introduce a Context-Aware Adaptive Gating (CAG) mechanism that dynamically modulates the weights of these two streams according to the detected editing intent and the diffusion timestep. By automatically relaxing rigid visual constraints when strong editing intent is detected, CAG reconciles identity preservation with semantic variation. Extensive experiments on IBench demonstrate that FlexID achieves a state-of-the-art balance between identity consistency and text adherence, offering an efficient solution for complex narrative generation.
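To make the CAG idea concrete, the sketch below illustrates one plausible gating rule that weights the semantic (SIP) and visual (VFA) streams by editing intent and denoising timestep. It is a minimal illustration under stated assumptions, not the paper's implementation: the function name `context_aware_gate`, the `intent_score` input, and the linear schedules are all hypothetical.

```python
# Minimal sketch of a context-aware adaptive gate (illustrative only).
# Assumptions (not from the paper): `intent_score` in [0, 1] quantifies
# editing intent (e.g., divergence between the edit prompt and the
# identity description), and weights blend linearly with the
# normalized timestep.

def context_aware_gate(intent_score: float,
                       timestep: int,
                       total_steps: int = 50) -> tuple[float, float]:
    """Return (w_sip, w_vfa): weights for the semantic (SIP) and
    visual (VFA) streams at a given denoising step."""
    t = timestep / total_steps  # 1.0 = noisiest step, 0.0 = final step
    # Visual anchoring is relaxed when editing intent is strong, and is
    # applied mostly in late (low-noise) steps, where structure forms.
    w_vfa = (1.0 - intent_score) * (1.0 - t)
    # Semantic injection stays active throughout and grows with intent.
    w_sip = 0.5 + 0.5 * intent_score
    return w_sip, w_vfa

# Example: a strong edit early in denoising keeps the visual anchor
# nearly off, letting the text prompt dominate layout and semantics.
print(context_aware_gate(intent_score=0.9, timestep=45, total_steps=50))
```

The key design choice this sketch captures is the one stated in the abstract: as detected editing intent grows, the rigid visual constraint is relaxed rather than held fixed, trading some structural anchoring for textual adaptability.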