Personal photo albums are not merely collections of static images but living, ecological archives defined by temporal continuity, social entanglement, and rich metadata, which makes personalized photo retrieval non-trivial. However, existing retrieval benchmarks rely heavily on context-isolated web snapshots and fail to capture the multi-source reasoning required to resolve authentic, intent-driven user queries. To bridge this gap, we introduce PhotoBench, the first benchmark constructed from authentic personal albums. It is designed to shift the paradigm from visual matching to personalized, multi-source, intent-driven reasoning. Building on a rigorous multi-source profiling framework, which integrates visual semantics, spatiotemporal metadata, social identity, and temporal events for each image, we synthesize complex intent-driven queries rooted in users' life trajectories. Extensive evaluation on PhotoBench exposes two critical limitations: the modality gap, where unified embedding models collapse on non-visual constraints, and the source fusion paradox, where agentic systems orchestrate tools poorly. These findings indicate that the next frontier in personal multimodal retrieval lies beyond unified embeddings and calls for robust agentic reasoning systems capable of precise constraint satisfaction and multi-source fusion. PhotoBench is publicly available.
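As a rough illustration of the kind of per-image profile the abstract describes, a minimal sketch of a multi-source record and an intent-driven query is shown below. All field names here are hypothetical and are not taken from the PhotoBench schema; they merely mirror the four sources listed above (visual semantics, spatiotemporal metadata, social identity, temporal events).

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional, Tuple

# Hypothetical sketch of a multi-source image profile; field names are
# illustrative only and do not reflect the benchmark's actual schema.
@dataclass
class ImageProfile:
    image_id: str
    visual_semantics: List[str]                       # e.g. ["beach", "sunset", "two people"]
    timestamp: Optional[datetime] = None              # spatiotemporal metadata (capture time)
    gps: Optional[Tuple[float, float]] = None         # (latitude, longitude), if available
    people: List[str] = field(default_factory=list)   # social identity tags
    event: Optional[str] = None                       # temporal event, e.g. "graduation trip"

# An intent-driven query of the kind synthesized from such profiles must be
# resolved by fusing several sources at once, not by visual matching alone:
query = "the photo of Dad at the lake house the summer before he retired"
```

Resolving such a query requires combining social identity ("Dad"), location ("the lake house"), and temporal reasoning over life events ("the summer before he retired"), which is precisely the multi-source fusion that the evaluated models struggle with.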