Pervasive AI increasingly depends on on-device learning systems that deliver low-latency, energy-efficient computation under strict resource constraints. Liquid State Machines (LSMs) offer a promising approach to low-power temporal processing in pervasive and neuromorphic systems, but their deployment remains challenging: they are highly sensitive to hyperparameters, and traditional optimization methods are computationally costly and ignore energy constraints. This work presents EARL, an energy-aware reinforcement learning framework that integrates Bayesian optimization with an adaptive, reinforcement-learning-based selection policy to jointly optimize accuracy and energy consumption. EARL employs surrogate modeling for global exploration, reinforcement learning for dynamic candidate prioritization, and an early-termination mechanism that eliminates redundant evaluations, substantially reducing computational overhead. Experiments on three benchmark datasets show that EARL achieves 6% to 15% higher accuracy, 60% to 80% lower energy consumption, and up to an order-of-magnitude reduction in optimization time compared with leading hyperparameter tuning frameworks. These results highlight the effectiveness of energy-aware adaptive search in improving the efficiency and scalability of LSMs for resource-constrained on-device AI applications.
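To make the search loop concrete, the sketch below illustrates the general pattern the abstract describes: a Gaussian-process surrogate for global exploration, an epsilon-greedy (reinforcement-learning-style) policy for candidate prioritization, and early termination of unpromising partial evaluations. This is a minimal sketch under stated assumptions, not EARL's actual implementation: the hyperparameter bounds, the synthetic `evaluate_lsm` stand-in, the scalarized accuracy-minus-energy objective, and the acquisition and exploration schedules are all hypothetical choices made for illustration.

```python
# Minimal energy-aware hyperparameter search sketch in the spirit of the
# abstract. All names and functions here are illustrative assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
BOUNDS = np.array([[100, 1000],   # reservoir size (assumed hyperparameter)
                   [0.1, 2.0]])   # spectral radius (assumed hyperparameter)

def evaluate_lsm(x, budget=1.0):
    """Placeholder for training/evaluating an LSM configuration at a given
    fraction of the full training budget. Returns (accuracy, energy).
    Here a synthetic stand-in, not a real LSM evaluation."""
    size, rho = x
    acc = 1.0 - abs(rho - 0.9) - 1e-4 * abs(size - 400)
    energy = 1e-3 * size * rho
    return budget * acc, budget * energy

def scalarize(acc, energy, lam=0.5):
    # Joint objective: reward accuracy, penalize energy (weight assumed).
    return acc - lam * energy

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
X, y = [], []
eps, best = 0.3, -np.inf

for step in range(30):
    # Sample a pool of random candidates within bounds (global exploration).
    pool = rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1], size=(64, 2))
    if len(X) >= 3:
        gp.fit(np.array(X), np.array(y))
        mu, sigma = gp.predict(pool, return_std=True)
        ucb = mu + 1.0 * sigma  # upper-confidence-bound acquisition
        # Epsilon-greedy prioritization: mostly exploit the surrogate's
        # ranking, occasionally explore a random candidate.
        idx = rng.integers(len(pool)) if rng.random() < eps else int(np.argmax(ucb))
    else:
        idx = rng.integers(len(pool))  # cold start: random choice
    x = pool[idx]

    # Early termination: run a cheap partial evaluation first; abandon
    # candidates whose extrapolated score already trails the incumbent.
    acc_p, en_p = evaluate_lsm(x, budget=0.25)
    if scalarize(acc_p / 0.25, en_p / 0.25) < best - 0.1:
        continue  # skip the full (expensive) evaluation

    acc, energy = evaluate_lsm(x, budget=1.0)
    score = scalarize(acc, energy)
    X.append(x)
    y.append(score)
    best = max(best, score)
    eps = max(0.05, eps * 0.95)  # decay exploration as evidence accumulates

print(f"best joint score after search: {best:.3f}")
```

Scalarizing accuracy and energy into a single reward, as done here, is only one way to couple the two objectives; a Pareto-based multi-objective formulation would also fit the same loop structure without changing the surrogate, the selection policy, or the early-termination step.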