A fundamental step in the development of machine learning models is hyperparameter tuning, which often requires multiple training runs to identify the best-performing configuration. As machine learning tasks and models grow in complexity, there is an escalating need for solutions that not only improve performance but also address sustainability concerns. Existing strategies predominantly focus on maximizing model performance without considering energy efficiency. To bridge this gap, in this paper we introduce Spend More to Save More (SM2), an energy-aware hyperparameter optimization implementation based on the widely adopted successive halving algorithm. Unlike conventional approaches, which rely on energy-intensive testing of individual hyperparameter configurations, SM2 employs exploratory pretraining to identify inefficient configurations with minimal energy expenditure. By incorporating hardware characteristics and real-time energy consumption tracking, SM2 identifies an optimal configuration that not only maximizes model performance but also enables energy-efficient training. Experimental validation across various datasets, models, and hardware setups confirms the efficacy of SM2 in preventing wasted energy during the training of hyperparameter configurations.
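The abstract does not detail SM2's internals, but its successive-halving backbone with energy tracking can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `train_and_eval` callback, the `eta` elimination factor, and the returned energy values are all hypothetical placeholders.

```python
def successive_halving(configs, train_and_eval, initial_budget=1, eta=2):
    """Sketch of energy-tracked successive halving (not the SM2 algorithm itself).

    configs: list of hyperparameter configurations.
    train_and_eval(config, budget): hypothetical callback that trains a model
        with `config` for `budget` units and returns (score, energy_used).
    eta: elimination factor — only the top 1/eta configurations survive
        each round, while the per-configuration budget grows by eta.
    """
    total_energy = 0.0
    budget = initial_budget
    while len(configs) > 1:
        results = []
        for cfg in configs:
            score, energy = train_and_eval(cfg, budget)
            total_energy += energy  # track cumulative energy consumption
            results.append((score, cfg))
        # Keep the best-scoring 1/eta of configurations for the next round.
        results.sort(key=lambda r: r[0], reverse=True)
        keep = max(1, len(configs) // eta)
        configs = [cfg for _, cfg in results[:keep]]
        budget *= eta
    return configs[0], total_energy
```

Early rounds spend a small budget on many configurations; surviving configurations receive exponentially more budget, so most of the energy goes to the few candidates that remain promising.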