Independent Component Analysis (ICA) is a classical method for recovering latent variables with useful identifiability properties. For independent variables, cumulant tensors are diagonal; relaxing independence yields tensors whose zero structure generalizes diagonality. These models have been the subject of recent work in non-independent component analysis. We show that pairwise mean independence marks exactly how far independence can be relaxed: it is identifiable, any strictly weaker notion is non-identifiable, and it contains the previously studied models as special cases. Our results apply to distributions exhibiting the required zero pattern at any cumulant tensor. We propose an algebraic recovery algorithm based on least-squares optimization over the orthogonal group. Simulations highlight robustness: enforcing full independence can harm estimation, while pairwise mean independence enables more stable recovery. These findings extend the classical ICA framework and provide a rigorous basis for blind source separation beyond independence.
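As a minimal numerical illustration of the diagonality claim for independent components (a sketch only, not the paper's recovery algorithm), the following Python snippet estimates the third-order cumulant tensor of independent, centered, skewed sources and checks that its off-diagonal entries are near zero. The source distribution (centered exponential) and sample size are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Three independent, centered, skewed sources: exponential(1) shifted to mean 0.
s = rng.exponential(1.0, size=(3, n)) - 1.0

# For centered variables, the third-order cumulant tensor is
# k3[i, j, k] = E[s_i s_j s_k]; independence makes it diagonal.
k3 = np.einsum('in,jn,kn->ijk', s, s, s) / n

# Zero out the diagonal to inspect only the off-diagonal entries.
off = k3.copy()
idx = np.arange(3)
off[idx, idx, idx] = 0.0

print("max |off-diagonal|:", np.abs(off).max())   # close to 0
print("diagonal entries:", k3[idx, idx, idx])     # close to E[s_i^3] = 2
```

For non-independent but pairwise mean independent components, the abstract's point is that a weaker zero pattern of this tensor (rather than full diagonality) still suffices for identifiability.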