Reliability assessment of engineering systems often requires repeated evaluations of limit-state functions that rely on computationally expensive high-fidelity models, rendering direct sampling-based reliability analysis impractical. An effective remedy is to approximate the limit-state function with a surrogate model that is iteratively refined through active learning, thereby reducing the number of expensive model evaluations. At each iteration, an acquisition strategy selects the next sample to evaluate by balancing two competing objectives: exploration, which reduces global predictive uncertainty, and exploitation, which improves accuracy near the failure boundary. Conventional strategies such as the U-function, EFF, ERF, REIF, and portfolio-based schemes collapse this balance into a single pointwise score, concealing the underlying trade-off. In this work, we formulate sample acquisition as a multi-objective optimization (MOO) problem in which exploration and exploitation are explicit competing objectives, yielding a compact Pareto set that provides an explicit, quantifiable representation of the trade-off. To select samples from the Pareto set, we investigate principled MOO criteria and propose adaptive trade-off rules, including a scheduled exploration-to-exploitation shift and a reliability-aware selection rule. Across diverse limit-state functions, we evaluate all tested strategies through relative failure-probability error trajectories, sample-efficiency comparisons, and global rankings, and show that the adaptive MOO-based strategies achieve robust overall performance while consistently meeting strict error targets.
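To make the formulation concrete, the minimal Python sketch below illustrates one acquisition iteration under stated assumptions: a Gaussian-process surrogate, a toy limit-state function, the GP predictive standard deviation as the exploration objective, the negative distance of the GP mean to the limit state as the exploitation objective, and a simple linear exploration-to-exploitation schedule. All names, the toy function, and the schedule are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy limit-state function (illustrative only): failure when g(x) <= 0.
def limit_state(x):
    return 10.0 - x[:, 0] ** 2 - x[:, 1]

rng = np.random.default_rng(0)
X_train = rng.uniform(-5.0, 5.0, size=(12, 2))
y_train = limit_state(X_train)

# Gaussian-process surrogate of the limit-state function.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=2.0), normalize_y=True)
gp.fit(X_train, y_train)

# Candidate pool; mu and sigma are the GP predictive mean and std.
X_cand = rng.uniform(-5.0, 5.0, size=(2000, 2))
mu, sigma = gp.predict(X_cand, return_std=True)

# Two competing acquisition objectives, both to be maximized:
#   exploration  -> predictive uncertainty sigma(x)
#   exploitation -> closeness to the limit state, -|mu(x)|
F = np.column_stack([sigma, -np.abs(mu)])

def pareto_mask(F):
    """Boolean mask of non-dominated rows of F (maximization)."""
    keep = np.ones(len(F), dtype=bool)
    for i in range(len(F)):
        if keep[i]:
            # Remove every candidate strictly dominated by candidate i.
            dominated = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
            keep[dominated] = False
    return keep

# Scheduled exploration-to-exploitation shift (linear schedule assumed
# for illustration): early iterations favor exploration, later ones
# favor exploitation.
t, T = 5, 50                               # current iteration / budget
w = max(0.0, 1.0 - t / T)                  # exploration weight
P = np.flatnonzero(pareto_mask(F))
Fp = F[P]
Fp = (Fp - Fp.min(axis=0)) / (np.ptp(Fp, axis=0) + 1e-12)  # normalize
score = w * Fp[:, 0] + (1.0 - w) * Fp[:, 1]
x_next = X_cand[P[np.argmax(score)]]       # next sample to evaluate
print("next sample:", x_next)
```

Scalarizing over the normalized Pareto set, rather than over all candidates, keeps the exploration-exploitation trade-off explicit; the schedule then only determines which end of the front is sampled at each iteration.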