The combinatorial problem Max-Cut has become a benchmark in the evaluation of local search heuristics for both quantum and classical optimisers. In contrast to local search, which only provides average-case performance guarantees, the convex semidefinite relaxation of Max-Cut by Goemans and Williamson provides worst-case guarantees and is therefore suited both to the construction of benchmarks and to performance-critical applications. We show how extended floating-point precision can be incorporated into the algebraic subroutines of convex optimisation, namely the indirect matrix inversion methods, such as Conjugate Gradient, that Interior Point Methods employ for very large problem sizes. We also estimate the expected acceleration of the time to solution on a hardware architecture that runs natively in extended precision. Specifically, when using indirect matrix inversion methods such as Conjugate Gradient, which have lower complexity than direct methods and are therefore preferred for very large problems, we find that increasing the internal working precision reduces the time to solution by a factor that grows with the system size.
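The effect of the internal working precision on an indirect solver can be illustrated with a minimal sketch: a plain Conjugate Gradient iteration in NumPy whose arithmetic is carried out entirely in a chosen floating-point type. The matrix below is a hypothetical well-conditioned SPD test system, not one drawn from the paper's Interior Point Method; the point of the sketch is only that the attainable residual (and hence the tolerance a CG inner solve can reach) is limited by the working precision.

```python
import numpy as np

def conjugate_gradient(A, b, dtype, tol=1e-14, max_iter=200):
    """Plain CG for a symmetric positive-definite system A x = b,
    with every arithmetic operation carried out in `dtype`."""
    A = A.astype(dtype)
    b = b.astype(dtype)
    x = np.zeros_like(b)
    r = b - A @ x          # initial residual
    p = r.copy()           # initial search direction
    rs_old = r @ r
    bnorm = np.sqrt(b @ b)
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) / bnorm < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    # Measure the true relative residual in float64, independent of dtype,
    # so the stagnation level of low-precision runs is visible.
    res = np.linalg.norm(A.astype(np.float64) @ x.astype(np.float64)
                         - b.astype(np.float64)) / float(bnorm)
    return x, res

# Hypothetical SPD test matrix standing in for an IPM linear system.
rng = np.random.default_rng(0)
n = 200
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)   # shift keeps the condition number small
b = rng.standard_normal(n)

final_res = {}
for dt in (np.float32, np.float64, np.longdouble):
    _, final_res[dt.__name__] = conjugate_gradient(A, b, dtype=dt)
    print(f"{dt.__name__:<12} relative residual: {final_res[dt.__name__]:.2e}")
```

In this toy setting the float32 run stagnates several orders of magnitude above the float64 run, mirroring the abstract's observation that a higher internal working precision lets the inner solver reach tighter tolerances; `np.longdouble` stands in here for a natively supported extended-precision type, though on most CPUs it is emulated and therefore slow, which is precisely the gap a native extended-precision architecture would close.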