When solving optimization problems under uncertainty with contextual data, using machine learning to predict the values of the uncertain parameters is a popular and effective approach. Decision-focused learning (DFL) aims to learn a predictive model such that decision quality, rather than prediction accuracy, is maximized. Common practice is to predict a single scenario representing the uncertain parameters, implicitly assuming that there exists a deterministic approximation of the problem (a proxy) that permits optimal decision-making. The opposite approach, estimating the underlying distribution with a parameterized distribution, has also been considered. However, little is known about when either choice is valid. This paper is the first to investigate the problem properties that justify using a particular decision proxy. Building on this analysis, we present alternative decision proxies for DFL, with little or no added complexity in the learning task. We demonstrate the effectiveness of the presented approaches in experiments on continuous and discrete problems, as well as on problems with uncertainty in the objective function and in the constraints.