We investigate the parameterized complexity of Bayesian Network Structure Learning (BNSL), a classical problem that has received significant attention in both empirical and purely theoretical studies. We follow up on previous works that have analyzed the complexity of BNSL w.r.t. the so-called superstructure of the input. While known results imply that BNSL is unlikely to be fixed-parameter tractable even when parameterized by the size of a vertex cover in the superstructure, here we show that a different kind of parameterization, notably by the size of a feedback edge set, yields fixed-parameter tractability. We proceed by showing that this result can be strengthened to a localized version of the feedback edge set, and we provide corresponding lower bounds that complement previous results to yield a complexity classification of BNSL w.r.t. virtually all well-studied graph parameters. We then analyze how the complexity of BNSL depends on the representation of the input. In particular, while the bulk of past theoretical work on the topic assumed the use of the so-called non-zero representation, here we prove that if an additive representation can be used instead, then BNSL becomes fixed-parameter tractable even under significantly milder restrictions on the superstructure, notably when parameterized by the treewidth alone. Last but not least, we show how our results can be extended to the closely related problem of Polytree Learning.