Probabilistic Coalition Structure Generation (PCSG) is NP-hard and can be recast as an $l_0$-type sparse recovery problem by representing coalition structures as sparse coefficient vectors over a coalition-incidence design. A natural question is whether standard sparse methods, such as $l_1$ relaxations and greedy pursuits, can reliably recover the optimal coalition structure in this setting. We show that the answer is negative in a PCSG-inspired regime where overlapping coalitions generate highly coherent, near-duplicate columns: the irrepresentable condition fails for the design, and $k$-step Orthogonal Matching Pursuit (OMP) exhibits a nonvanishing probability of irreversible mis-selection. In contrast, we prove that Sparse Bayesian Learning (SBL) with a Gaussian-Gamma hierarchy is support consistent under the same structural assumptions. The concave sparsity penalty induced by SBL suppresses spurious near-duplicates and recovers the true coalition support with probability tending to one. This establishes a rigorous separation between convex, greedy, and Bayesian sparse approaches for PCSG.
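The separation claimed above can be illustrated on a toy design with a coherent near-duplicate column: greedy OMP commits irreversibly to the decoy, while EM updates for SBL with a Gaussian-Gamma hierarchy (Tipping-style) shrink its hyperparameter and keep the true support. The dictionary, coefficients, noise level, and iteration count below are illustrative assumptions, not constructions from the paper.

```python
import numpy as np

# Toy PCSG-style design: 5 coalition-indicator columns in R^4.
# Columns 0 and 2 form the true support; column 1 is a coherent
# near-duplicate blending columns 0, 2 and a third direction.
# (All numbers are illustrative assumptions, not from the paper.)
e = np.eye(4)
decoy = e[0] + 0.2 * e[1] + 0.1 * e[2]
A = np.column_stack([e[0], decoy / np.linalg.norm(decoy), e[1], e[2], e[3]])
y = A[:, 0] + A[:, 2]  # true support {0, 2}, coefficients (1, 1)

def omp(A, y, k):
    """Plain k-step Orthogonal Matching Pursuit."""
    support, r = [], y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ r)))  # greedy correlation step
        support.append(j)
        x, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ x            # orthogonalized residual
    return sorted(support)

def sbl(A, y, sigma2=1e-4, iters=200):
    """EM hyperparameter updates for SBL (Gaussian likelihood,
    per-coefficient variance hyperparameters gamma)."""
    gamma = np.ones(A.shape[1])
    for _ in range(iters):
        Sigma = np.linalg.inv(A.T @ A / sigma2
                              + np.diag(1.0 / np.maximum(gamma, 1e-10)))
        mu = Sigma @ A.T @ y / sigma2
        gamma = mu**2 + np.diag(Sigma)
    return gamma

# The decoy correlates with y more strongly (1.2/sqrt(1.05) > 1) than
# either true column, so 2-step OMP mis-selects it and cannot backtrack.
print(omp(A, y, k=2))
# SBL's largest two hyperparameters land on the true support instead.
print(sorted(np.argsort(sbl(A, y))[-2:]))
```

Note the mechanism: fitting y exactly through the decoy requires an extra column to cancel its third-direction component, and SBL's concave penalty drives that auxiliary hyperparameter toward zero, which in turn makes the decoy path increasingly expensive across EM iterations.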