Understanding urban perception from street view imagery has become a central topic in urban analytics and human-centered urban design. However, most existing studies treat urban scenes as static and largely ignore dynamic elements such as pedestrians and vehicles, raising concerns about potential bias in perception-based urban analysis. To address this issue, we propose a controlled framework that isolates the perceptual effects of dynamic elements by constructing paired street view images, with and without pedestrians and vehicles, using semantic segmentation and MLLM-guided generative inpainting. Based on 720 paired images from Dongguan, China, we conducted a perception experiment in which participants evaluated original and edited scenes across six perceptual dimensions. The results indicate that removing dynamic elements leads to a consistent 30.97% decrease in perceived vibrancy, whereas changes in the other dimensions are more moderate and heterogeneous. To explore the underlying mechanisms, we trained 11 machine learning models on multimodal visual features and identified lighting conditions, human presence, and depth variation as the key factors driving perceptual change. At the individual level, 65% of participants exhibited significant vibrancy changes, compared with 35-50% for the other dimensions; gender also showed a marginal moderating effect on safety perception. Beyond the controlled experiment, we applied the trained model to a city-scale dataset to predict vibrancy changes after the removal of dynamic elements. The city-level results reveal that such perceptual changes are widespread and spatially structured, affecting 73.7% of locations and 32.1% of images, and suggest that urban perception assessments based solely on static imagery may substantially underestimate urban liveliness.
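To make the paired-image construction step concrete, the sketch below segments pedestrian and vehicle pixels and inpaints them away. This is a minimal illustration, not the authors' pipeline: it assumes a torchvision DeepLabV3 segmenter trained on Pascal VOC classes and substitutes OpenCV's classical Telea inpainting for the MLLM-guided generative inpainting described above; the file paths and function names are hypothetical.

```python
# Hedged sketch of constructing a "static" counterpart to a street view
# image: segment dynamic elements (pedestrians, vehicles) and inpaint
# them out. cv2.inpaint is a classical stand-in for the MLLM-guided
# generative inpainting used in the paper.
import cv2
import numpy as np
import torch
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50

# Pascal VOC class indices for dynamic elements:
# 2 = bicycle, 6 = bus, 7 = car, 14 = motorbike, 15 = person.
DYNAMIC_CLASSES = [2, 6, 7, 14, 15]

model = deeplabv3_resnet50(weights="DEFAULT").eval()
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def remove_dynamic_elements(bgr: np.ndarray) -> np.ndarray:
    """Return an edited copy of a street view image with pedestrians
    and vehicles masked out and inpainted."""
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        logits = model(preprocess(rgb).unsqueeze(0))["out"][0]
    labels = logits.argmax(0).cpu().numpy()
    mask = np.isin(labels, DYNAMIC_CLASSES).astype(np.uint8) * 255
    # Dilate the mask so inpainting also covers object boundaries.
    mask = cv2.dilate(mask, np.ones((15, 15), np.uint8))
    return cv2.inpaint(bgr, mask, inpaintRadius=7, flags=cv2.INPAINT_TELEA)

original = cv2.imread("street_view.jpg")       # hypothetical input path
edited = remove_dynamic_elements(original)
cv2.imwrite("street_view_static.jpg", edited)  # the paired "static" scene
```

Running the function on each of the 720 source images would yield the original/edited pairs used in the perception experiment; in practice a generative inpainter produces far more plausible fills than the Telea method shown here.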