Self-supervised adaptation (SSA) improves foundation model transfer to medical domains but is computationally prohibitive. Although parameter-efficient fine-tuning (PEFT) methods such as LoRA have been explored for supervised adaptation, their effectiveness for SSA remains unknown. In this work, we introduce efficient self-supervised adaptation (ESSA), a framework that applies PEFT techniques to SSA with the aim of reducing computational cost and improving adaptation performance. Among the methods tested, Attention Projection Layer Adaptation (APLA) sets a new state of the art, consistently surpassing full-parameter SSA and supervised fine-tuning across diverse medical tasks while reducing GPU memory usage by up to 40.1%, increasing training throughput by 25.2%, and maintaining inference efficiency.