The "classical" (weak) greedy algorithm is widely used within model order reduction to compute a reduced basis in the offline training phase: an a posteriori error estimator is maximized, and the snapshot corresponding to the maximizer is added to the basis. Since these snapshots are determined by a sufficiently detailed discretization, the offline phase is often computationally extremely costly. We propose replacing the serial determination of one snapshot after another by a parallel approach. To this end, we introduce a batch size $b$ and add $b$ snapshots to the current basis in every greedy iteration; these snapshots are computed in parallel. We prove convergence rates for this new batch greedy algorithm and compare them to those of the classical (weak) greedy algorithm in the Hilbert and Banach space settings. We then present numerical results in which a (parallel) implementation of the proposed algorithm is applied to the linear elliptic thermal block problem. We analyze the convergence rate as well as the offline and online wall-clock times for different batch sizes, and show that the proposed variant can significantly speed up the offline phase while only moderately increasing the size of the reduced problem. The benefit of the parallel batch greedy algorithm increases for more complicated problems.
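The batch selection step described above can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the `snapshot` solver and the use of the exact projection error as a surrogate for the a posteriori error estimator are assumptions for the sake of a self-contained example, and the $b$ snapshot solves marked below would run in parallel in the actual algorithm.

```python
import numpy as np

def batch_greedy(snapshot, params, b, n_iter, dim):
    """Toy batch greedy: in each iteration, select the b training
    parameters with the largest estimated error and add their
    snapshots to the basis. Here the projection error onto the
    current basis stands in for the a posteriori error estimator
    (an assumption for this sketch)."""
    basis = np.zeros((dim, 0))
    for _ in range(n_iter):
        # surrogate estimator: norm of the component orthogonal to the basis
        errors = [np.linalg.norm(u - basis @ (basis.T @ u))
                  for u in (snapshot(mu) for mu in params)]
        # batch selection: indices of the b largest estimated errors
        picked = np.argsort(errors)[-b:]
        # these b snapshot solves are independent and could run in parallel
        new = np.column_stack([snapshot(params[i]) for i in picked])
        # extend and re-orthonormalize the reduced basis
        basis, _ = np.linalg.qr(np.column_stack([basis, new]))
    return basis

# usage with a hypothetical parametric linear problem (A + mu*I) u = 1
dim = 50
params = np.linspace(0.1, 1.0, 20)
A = np.random.default_rng(1).standard_normal((dim, dim))
snapshot = lambda mu: np.linalg.solve(A + mu * np.eye(dim), np.ones(dim))
V = batch_greedy(snapshot, params, b=2, n_iter=3, dim=dim)  # V has 3*2 = 6 columns
```

For $b = 1$ this reduces to the classical (weak) greedy loop; larger $b$ trades a moderately larger reduced basis for fewer sequential greedy iterations.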