Neural networks (NNs) have gained significant attention across various engineering disciplines, particularly in design optimization, where they are used to build surrogate models for high-dimensional regression problems. Despite their power as global approximators, NNs often fail to accurately capture local nonlinearities without relying on a large number of trainable parameters. To address this limitation, in this paper we propose domain decomposition methods (DDMs) that divide the input feature space into multiple local subdomains, each modeled by a simpler NN trained in parallel. To recover the accuracy of a global approximation, interface constraints are introduced into the local loss functions to enforce continuity between subdomains. These constraints are imposed with two different approaches: the Lagrange multiplier method and the augmented Lagrangian method. Both approaches are validated on synthetic data from 2D and 3D linear compression problems, solved numerically with the finite element method. The study investigates computational time and accuracy across varying numbers of subdomains to identify optimal partitioning strategies. Compared with unconstrained approximations, both methods significantly improve continuity across subdomain interfaces. Moreover, the use of DDMs improves approximation accuracy in nonlinear regions relative to standard global NN training. The augmented Lagrangian method outperforms the standard Lagrange formulation by converging faster, owing to its less stringent convergence requirements, albeit with slightly lower accuracy. Its scalability makes it the preferred choice for large-scale problems, as the faster convergence outweighs the minor loss in accuracy. Overall, these results highlight the augmented Lagrangian method as a promising DDM approach for training efficient and scalable NN surrogate models.
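The core mechanism described above can be illustrated with a minimal sketch (not the paper's implementation): two hypothetical local linear surrogates, one per subdomain of a 1D feature space, are trained jointly while an augmented Lagrangian term penalizes the jump between them at the interface. All names, the target function, and the hyperparameters here are illustrative assumptions.

```python
import numpy as np

# Hypothetical target and a split of the feature space into two subdomains.
f = lambda x: np.exp(2.0 * x)                 # assumed target function
x1 = np.linspace(0.0, 0.5, 50); y1 = f(x1)    # subdomain 1 samples
x2 = np.linspace(0.5, 1.0, 50); y2 = f(x2)    # subdomain 2 samples
xi = 0.5                                      # interface location

theta = np.zeros(4)                           # [a1, b1, a2, b2]: local models u_k(x) = a_k + b_k*x
mu, lam, lr = 10.0, 0.0, 0.05                 # penalty weight, multiplier, step size (assumed)

def residuals(theta):
    a1, b1, a2, b2 = theta
    r1 = a1 + b1 * x1 - y1                    # local fit residuals, subdomain 1
    r2 = a2 + b2 * x2 - y2                    # local fit residuals, subdomain 2
    g = (a1 + b1 * xi) - (a2 + b2 * xi)       # interface jump u1(xi) - u2(xi)
    return r1, r2, g

for outer in range(20):                       # augmented Lagrangian outer iterations
    for _ in range(500):                      # gradient descent on
        r1, r2, g = residuals(theta)          #   L = MSE1 + MSE2 + lam*g + (mu/2)*g^2
        c = lam + mu * g                      # shared constraint-force term
        grad = np.array([
            2 * r1.mean() + c,                # dL/da1
            2 * (r1 * x1).mean() + c * xi,    # dL/db1
            2 * r2.mean() - c,                # dL/da2
            2 * (r2 * x2).mean() - c * xi,    # dL/db2
        ])
        theta -= lr * grad
    _, _, g = residuals(theta)
    lam += mu * g                             # multiplier update drives the jump toward zero

_, _, g = residuals(theta)
print(f"interface jump |g| = {abs(g):.2e}")
```

In the plain Lagrange multiplier variant, the quadratic penalty term is dropped and the multiplier must be driven to its exact saddle-point value, which is why the abstract reports slower convergence for that formulation; the augmented version tolerates looser inner solves because the penalty already suppresses the jump.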