Cooper is an open-source package for solving constrained optimization problems involving deep learning models. Cooper implements several Lagrangian-based first-order update schemes, making it easy to combine constrained optimization algorithms with high-level features of PyTorch such as automatic differentiation, and specialized deep learning architectures and optimizers. Although Cooper is specifically designed for deep learning applications where gradients are estimated from mini-batches, it is suitable for general non-convex continuous constrained optimization. Cooper's source code is available at https://github.com/cooper-org/cooper.
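As a minimal illustration of the class of update schemes Cooper implements, the sketch below performs simultaneous gradient descent-ascent on a Lagrangian in plain PyTorch, without going through Cooper's own API; the objective f, constraint g, learning rates, and projection step are illustrative assumptions, not part of the package.

```python
import torch

# Illustrative sketch (not Cooper's API): minimize f(x) subject to g(x) <= 0
# via simultaneous gradient descent-ascent on L(x, lam) = f(x) + lam * g(x).

x = torch.randn(2, requires_grad=True)    # primal variables
lam = torch.zeros(1, requires_grad=True)  # Lagrange multiplier, kept >= 0

primal_opt = torch.optim.SGD([x], lr=1e-2)
# maximize=True makes SGD perform gradient *ascent* on the multiplier.
dual_opt = torch.optim.SGD([lam], lr=1e-2, maximize=True)

def f(x):  # objective (assumed for illustration)
    return (x ** 2).sum()

def g(x):  # inequality constraint g(x) <= 0 (assumed for illustration)
    return 1.0 - x.sum()

for step in range(1000):
    primal_opt.zero_grad()
    dual_opt.zero_grad()
    lagrangian = f(x) + lam * g(x)
    lagrangian.backward()  # one backward pass yields both primal and dual gradients
    primal_opt.step()      # descent step on x
    dual_opt.step()        # ascent step on lam
    with torch.no_grad():
        lam.clamp_(min=0.0)  # project the multiplier onto the nonnegative orthant
```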