Existing video personalization methods preserve visual likeness but treat video and audio separately. Without access to the visual scene, audio models cannot synchronize sounds with on-screen actions; and because classical voice-cloning models condition only on a reference recording, a text prompt cannot redirect speaking style or acoustic environment. We propose ID-LoRA (Identity-Driven In-Context LoRA), which jointly generates a subject's appearance and voice in a single model, letting a text prompt, a reference image, and a short audio clip govern both modalities together. ID-LoRA adapts the LTX-2 joint audio-video diffusion backbone via parameter-efficient In-Context LoRA and, to our knowledge, is the first method to personalize visual appearance and voice in a single generative pass. Two challenges arise. First, reference and generation tokens share the same positional-encoding space, making them hard to distinguish; we address this with negative temporal positions, placing reference tokens in a disjoint RoPE region while preserving their internal temporal structure. Second, speaker characteristics tend to be diluted during denoising; we introduce identity guidance, a classifier-free guidance variant that amplifies speaker-specific features by contrasting predictions with and without the reference signal. In human preference studies, ID-LoRA is preferred over Kling 2.6 Pro by 73% of annotators for voice similarity and 65% for speaking style. In cross-environment settings, speaker similarity improves by 24% over Kling, with the gap widening as conditions diverge. A preliminary user study further suggests that joint generation provides a useful inductive bias for physically grounded sound synthesis. ID-LoRA achieves these results with only ~3K training pairs on a single GPU. Code, models, and data will be released.
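The two mechanisms named above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names, the exact guidance formulation, and the guidance scales (`w_text`, `w_id`) are assumptions chosen for clarity; the abstract only states that reference tokens occupy a disjoint negative RoPE region and that identity guidance contrasts predictions with and without the reference signal.

```python
import numpy as np

def temporal_positions(num_ref_frames: int, num_gen_frames: int):
    # Hypothetical sketch of "negative temporal positions": generation
    # frames occupy indices 0..T-1, while the reference clip is shifted
    # into a disjoint negative RoPE region, preserving its internal
    # temporal ordering (e.g. [-3, -2, -1] for a 3-frame reference).
    ref = np.arange(-num_ref_frames, 0)
    gen = np.arange(num_gen_frames)
    return ref, gen

def identity_guidance(eps_uncond, eps_text, eps_text_ref,
                      w_text=7.5, w_id=2.0):
    # Hypothetical CFG variant: the usual text-guidance term, plus an
    # identity term contrasting the denoiser's prediction with and
    # without the reference (image + audio) conditioning, which
    # amplifies speaker-specific features during sampling.
    return (eps_uncond
            + w_text * (eps_text - eps_uncond)
            + w_id * (eps_text_ref - eps_text))
```

With `w_id = 0` this reduces to standard classifier-free guidance; increasing `w_id` pushes samples toward the reference identity at each denoising step.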