We introduce a fast Self-adapting Forward-Forward Network (SaFF-Net) for medical image analysis, mitigating the power consumption and resource limitations that currently stem primarily from the prevalent reliance on back-propagation for model training and fine-tuning. Building on the recently proposed Forward-Forward Algorithm (FFA), we propose the Convolutional Forward-Forward Algorithm (CFFA), a parameter-efficient reformulation that is suitable for advanced image analysis and overcomes the speed and generalisation constraints of the original FFA. To address the hyper-parameter sensitivity of FFAs, we also introduce SaFF-Net, a self-adapting framework that fine-tunes parameters during warm-up and training in parallel. Our approach enables more effective model training and removes the need for an arbitrarily chosen goodness function, previously essential in FFA. We evaluate our approach on several benchmark datasets against standard Back-Propagation (BP) neural networks, showing that FFA-based networks with notably fewer parameters and function evaluations can compete with standard models, especially in one-shot scenarios and with large batch sizes. The code will be available at the time of the conference.
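To make the layer-local training that FFA-based networks rely on concrete, the following is a minimal NumPy sketch of one Forward-Forward layer. It is an illustration of the general FFA scheme, not the paper's SaFF-Net or CFFA: the mean-of-squared-activations goodness, the threshold `theta`, and the toy data are all assumptions chosen for clarity. Each layer is trained with its own local gradient, pushing the goodness of positive samples above the threshold and that of negative samples below it, with no back-propagation through the network.

```python
import numpy as np

rng = np.random.default_rng(0)


class FFLayer:
    """One Forward-Forward layer trained locally (no back-propagation).

    Goodness = mean of squared ReLU activations. Positive samples are
    pushed above a threshold ``theta``, negative samples below it.
    ``theta`` and the goodness choice are illustrative assumptions.
    """

    def __init__(self, d_in, d_out, theta=2.0, lr=0.05):
        self.W = rng.normal(0.0, 0.1, (d_out, d_in))
        self.b = np.zeros(d_out)
        self.theta, self.lr = theta, lr

    def forward(self, x):
        # Normalise each input so only its direction, not its magnitude
        # (i.e. the previous layer's goodness), carries information.
        xn = x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)
        return np.maximum(0.0, xn @ self.W.T + self.b), xn

    def goodness(self, x):
        h, _ = self.forward(x)
        return (h ** 2).mean(axis=1)

    def train_step(self, x_pos, x_neg):
        # Local softplus loss: softplus(theta - g) for positives,
        # softplus(g - theta) for negatives; gradients written out by hand.
        for x, sign in ((x_pos, +1.0), (x_neg, -1.0)):
            h, xn = self.forward(x)
            g = (h ** 2).mean(axis=1)
            dg = -sign / (1.0 + np.exp(sign * (g - self.theta)))  # dL/dg
            dh = dg[:, None] * 2.0 * h / h.shape[1]               # dL/dh
            dz = dh * (h > 0)                                     # through ReLU
            self.W -= self.lr * dz.T @ xn / len(x)
            self.b -= self.lr * dz.mean(axis=0)


# Toy data: positives carry a pattern in the first 10 dimensions,
# negatives are pure noise (hypothetical data for demonstration).
d = 20
x_pos = rng.normal(0.0, 1.0, (256, d))
x_pos[:, :10] += 3.0
x_neg = rng.normal(0.0, 1.0, (256, d))

layer = FFLayer(d, 32)
for _ in range(200):
    layer.train_step(x_pos, x_neg)
```

After training, the layer's goodness separates the two sample types, which is the only signal a subsequent layer (or a classifier over layer goodnesses) needs; stacking such layers and tuning `theta`-like hyper-parameters automatically is the kind of sensitivity the self-adapting framework targets.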