Structured prediction involves learning to predict complex structures rather than simple scalar values. The main challenge arises from the non-Euclidean nature of the output space, which generally requires relaxing the problem formulation. Surrogate methods build on kernel-induced losses or, more generally, loss functions admitting an Implicit Loss Embedding, and convert the original problem into a regression task followed by a decoding step. However, designing effective losses for objects with complex structures presents significant challenges and often requires domain-specific expertise. In this work, we introduce a novel framework in which a structured loss function, parameterized by neural networks, is learned directly from output training data through contrastive learning, prior to addressing the supervised surrogate regression problem. As a result, the differentiable loss not only enables neural networks to be trained as surrogate regressors, thanks to the finite dimension of the surrogate space, but also allows new output structures to be predicted via a decoding strategy based on gradient descent. Numerical experiments on supervised graph prediction problems show that our approach achieves performance comparable to, or better than, methods based on a pre-defined kernel.
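To make the pipeline concrete, below is a minimal NumPy sketch of the final decoding step described above: given a learned differentiable embedding psi (here a toy tanh layer standing in for the contrastively trained neural loss embedding, with hypothetical names and shapes) and the surrogate regressor's prediction z_hat, the output structure is recovered by gradient descent on the squared embedding distance. This is an illustrative assumption-laden sketch, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy differentiable embedding psi(y) = tanh(W y).
# W stands in for the weights learned by contrastive training on output data;
# here it is a fixed random placeholder (hypothetical shapes d_out, d_emb).
d_out, d_emb = 8, 4
W = rng.normal(size=(d_emb, d_out)) * 0.5

def psi(y):
    """Differentiable loss embedding of an output y."""
    return np.tanh(W @ y)

def decode(z_hat, y0, lr=0.05, steps=300):
    """Gradient-descent decoding: minimize ||psi(y) - z_hat||^2 over y."""
    y = y0.copy()
    for _ in range(steps):
        h = np.tanh(W @ y)
        residual = h - z_hat
        # Chain rule through tanh: d/dy ||tanh(Wy) - z||^2
        grad = 2.0 * W.T @ (residual * (1.0 - h ** 2))
        y -= lr * grad
    return y

# Simulate a trained surrogate regressor: its prediction z_hat is the
# embedding of the true output plus a small regression error.
y_true = rng.normal(size=d_out)
z_hat = psi(y_true) + 0.01 * rng.normal(size=d_emb)

y0 = np.zeros(d_out)  # initialization of the decoded structure
y_dec = decode(z_hat, y0)

loss_before = float(np.sum((psi(y0) - z_hat) ** 2))
loss_after = float(np.sum((psi(y_dec) - z_hat) ** 2))
print(loss_before, loss_after)  # decoding should shrink the embedding loss
```

Because the embedding is differentiable end-to-end, the same descent would apply to a relaxed (continuous) parameterization of a graph, e.g. a soft adjacency matrix, with a final rounding step.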