Multimodal learning aims to capture both shared and private information from multiple modalities. However, existing methods that project all modalities into a single latent space for fusion often overlook the asynchronous, multi-level semantic structure of multimodal data. This oversight induces semantic misalignment and error propagation, thereby degrading representation quality. To address this issue, we propose Cross-Level Co-Representation (CLCR), which explicitly organizes each modality's features into a three-level semantic hierarchy and specifies level-wise constraints for cross-modal interactions. First, a semantic hierarchy encoder aligns shallow, mid-level, and deep features across modalities, establishing a common basis for interaction. Then, at each level, an Intra-Level Co-Exchange Domain (IntraCED) factorizes features into shared and private subspaces and restricts cross-modal attention to the shared subspace via a learnable token budget. This design ensures that only shared semantics are exchanged and prevents leakage from private channels. To integrate information across levels, the Inter-Level Co-Aggregation Domain (InterCAD) synchronizes semantic scales using learned anchors, selectively fuses the shared representations, and gates private cues to form a compact task representation. We further introduce regularization terms to enforce separation of shared and private features and to minimize cross-level interference. Experiments on six benchmarks spanning emotion recognition, event localization, sentiment analysis, and action recognition show that CLCR achieves strong performance and generalizes well across tasks.
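The IntraCED mechanism described above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the shapes, the random projections standing in for learned shared/private factorizations, and the top-k scorer standing in for the learnable token budget are all illustrative assumptions. The sketch shows only the structural idea: cross-modal attention operates solely on a budgeted subset of shared-subspace tokens, while private features bypass the exchange.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Toy setup (hypothetical sizes): T tokens per modality, d dims,
# k = token budget for cross-modal exchange.
T, d, k = 6, 8, 3
x_a = rng.normal(size=(T, d))   # modality A features (e.g. video)
x_b = rng.normal(size=(T, d))   # modality B features (e.g. audio)

# Factorize each modality into shared / private subspaces.
# Fixed random projections stand in for learned factorizations.
W_sh = rng.normal(size=(d, d))
W_pr = rng.normal(size=(d, d))
sh_a, pr_a = x_a @ W_sh, x_a @ W_pr
sh_b, pr_b = x_b @ W_sh, x_b @ W_pr

# Token budget: keep only the k highest-scoring shared tokens of B.
# A random linear scorer stands in for the learnable budget.
scores = sh_b @ rng.normal(size=d)
keep = np.argsort(scores)[-k:]
kv = sh_b[keep]                              # (k, d) budgeted keys/values

# Cross-modal attention runs ONLY on the shared subspace.
attn = softmax(sh_a @ kv.T / np.sqrt(d))     # (T, k) attention weights
exchanged = attn @ kv                        # shared semantics of B flow into A

# Private features bypass the exchange entirely (no leakage).
out_a = np.concatenate([sh_a + exchanged, pr_a], axis=-1)
print(out_a.shape)  # (6, 16)
```

Note that the private subspace `pr_a` never enters the attention computation, which is the point of the shared-only restriction: only the budgeted shared tokens of the other modality are visible to the query stream.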