Cross-domain panoramic semantic segmentation has attracted growing interest, as it enables comprehensive 360° scene understanding for real-world applications. However, it remains particularly challenging due to severe geometric Field-of-View (FoV) distortions and inconsistent open-set semantics across domains. In this work, we formulate an open-set domain adaptation setting and propose the Extrapolative Domain Adaptive Panoramic Segmentation (EDA-PSeg) framework, which trains on local perspective views and tests on full 360° panoramic images, explicitly tackling both geometric FoV shifts across domains and the semantic uncertainty arising from previously unseen classes. To this end, we propose Euler-Margin Attention (EMA), which introduces an angular margin to enhance viewpoint-invariant semantic representations while performing amplitude and phase modulation to improve generalization to unseen classes. Additionally, we design a Graph Matching Adapter (GMA), which builds high-order graph relations to align shared semantics across FoV shifts while effectively separating novel categories through structural adaptation. Extensive experiments on four benchmark datasets under camera-shift, weather-condition, and open-set scenarios demonstrate that EDA-PSeg achieves state-of-the-art performance, robust generalization to diverse viewing geometries, and resilience under varying environmental conditions. The code is available at https://github.com/zyfone/EDA-PSeg.