Within the model-theoretic framework for supervised learning introduced by Grohe and Turán (TOCS 2004), we study the parameterized complexity of learning concepts definable in monadic second-order logic (MSO). We show that the problem of learning an MSO-definable concept from a training sequence of labeled examples is fixed-parameter tractable on graphs of bounded clique-width, and that it is hard for the parameterized complexity class para-NP on general graphs. An important distinction turns out to be the one between 1-dimensional and higher-dimensional concepts, where the instances of a k-dimensional concept are k-tuples of vertices of a graph. For the higher-dimensional case, we give a learning algorithm that is fixed-parameter tractable in the size of the graph, but not in the size of the training sequence, and we give a hardness result showing that this is optimal. By comparison, in the 1-dimensional case, we obtain an algorithm that is fixed-parameter tractable in both.
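As a brief sketch of the underlying setting (the framework is due to Grohe and Turán; the notation here is ours), a k-dimensional hypothesis over a background graph G consists of an MSO formula \varphi(\bar{x}; \bar{y}) together with a tuple \bar{v} of parameter vertices of G, and it classifies k-tuples of vertices via

\[
  h_{\varphi, \bar{v}}(\bar{u}) = 1 \iff G \models \varphi(\bar{u}; \bar{v}),
  \qquad \bar{u} \in V(G)^k.
\]

Under these assumptions, the learning problem asks: given a training sequence of labeled examples (\bar{u}_1, \lambda_1), \ldots, (\bar{u}_m, \lambda_m) \in V(G)^k \times \{0, 1\}, find a hypothesis h_{\varphi, \bar{v}} consistent with all of them.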