Selecting an optimization algorithm requires comparing candidates across problem instances, but the computational budget for deployment is often unknown at benchmarking time. Current methods either collapse anytime performance into a scalar, require manual interpretation of plots, or produce conclusions that change when algorithms are added or removed. Moreover, methods based on raw objective values require normalization, which needs bounds or optima that are often unavailable and breaks coherent aggregation across instances. We propose a framework that formulates anytime algorithm comparison as Pareto optimization over time: an algorithm is non-dominated if no competitor beats it at every timepoint. By using rankings rather than objective values, our approach needs neither bounds nor normalization, and it aggregates coherently across arbitrary instance distributions without requiring known optima. We introduce PolarBear (Pareto-optimal anytime algorithms via Bayesian racing), a procedure that identifies the anytime Pareto set through adaptive sampling with calibrated uncertainty. Bayesian inference over a temporal Plackett-Luce ranking model provides posterior beliefs about pairwise dominance, enabling early elimination of confidently dominated algorithms. The output Pareto set, together with the posterior, supports downstream algorithm selection under arbitrary time preferences and risk profiles without additional experiments.
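The dominance criterion above can be made concrete with a minimal sketch. This is not the PolarBear procedure itself (which uses Bayesian racing and a Plackett-Luce posterior); it only illustrates the definition of the anytime Pareto set, assuming average ranks (lower is better) have already been computed for each algorithm at each timepoint:

```python
import numpy as np

def anytime_pareto_set(ranks):
    """Return indices of non-dominated algorithms.

    ranks: (n_algorithms, n_timepoints) array of average ranks,
    lower is better. Algorithm j dominates algorithm i if j is at
    least as good at every timepoint and strictly better at one.
    """
    n = ranks.shape[0]
    nondominated = []
    for i in range(n):
        dominated = any(
            np.all(ranks[j] <= ranks[i]) and np.any(ranks[j] < ranks[i])
            for j in range(n) if j != i
        )
        if not dominated:
            nondominated.append(i)
    return nondominated

# Hypothetical example: 3 algorithms over 4 timepoints.
ranks = np.array([
    [1.2, 1.5, 2.0, 2.5],   # strong early, fades late
    [2.8, 2.5, 1.5, 1.0],   # slow start, strong late
    [3.0, 3.0, 3.0, 3.0],   # worse everywhere -> dominated
])
print(anytime_pareto_set(ranks))  # → [0, 1]
```

The first two algorithms trade off early against late performance, so both survive; the third is beaten at every timepoint and is eliminated, mirroring how confidently dominated algorithms are pruned early.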