As parameter-space dimensionality continues to grow, search and optimization algorithms should support distributed parameter evaluations to reduce cumulative runtime. Intel's neuromorphic optimization library, Lava-Optimization, was introduced as an abstract optimization system compatible with the neuromorphic systems developed in the broader Lava software framework. In this work, we introduce Lava Multi-Agent Optimization (LMAO), which provides native support for distributed parameter evaluations that communicate with a central Bayesian optimization system. LMAO offers an abstract framework for deploying distributed optimization and search algorithms within the Lava software framework. Moreover, LMAO adds support for random and grid search, along with process connections across multiple levels of mathematical precision. We evaluate the algorithmic performance of LMAO on a traditional non-convex optimization problem, a fixed-precision transductive spiking graph neural network for citation graph classification, and a neuromorphic satellite scheduling problem. Our results show that LMAO scales efficiently to multiple processes, reducing cumulative runtime and lowering the likelihood of converging to local optima.
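The abstract describes a central search process dispatching candidate parameters to distributed evaluators. The following is a minimal illustrative sketch of that pattern, not the LMAO or Lava API: it uses plain random search (one of the strategies the abstract mentions) over a standard non-convex benchmark (Rastrigin), with Python threads standing in for distributed evaluation processes. All function and parameter names here are hypothetical.

```python
import concurrent.futures
import math
import random

def rastrigin(x):
    # Classic non-convex benchmark; global minimum is 0 at x = 0.
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

def propose(rng, dim, low=-5.12, high=5.12):
    # Hypothetical stand-in for a search strategy: plain random search.
    # A Bayesian optimizer would instead condition proposals on past results.
    return [rng.uniform(low, high) for _ in range(dim)]

def optimize(dim=2, rounds=20, workers=4, seed=0):
    rng = random.Random(seed)
    best_x, best_y = None, float("inf")
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        for _ in range(rounds):
            # Central process proposes a batch of candidates...
            batch = [propose(rng, dim) for _ in range(workers)]
            # ...and collects objective values from parallel evaluators.
            for x, y in zip(batch, pool.map(rastrigin, batch)):
                if y < best_y:
                    best_x, best_y = x, y
    return best_x, best_y

if __name__ == "__main__":
    x, y = optimize()
    print(f"best value: {y:.4f}")
```

Batching evaluations this way is what allows cumulative runtime to drop as more evaluator processes are added, since each round's candidates are scored concurrently rather than sequentially.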